I'm trying to compile the tutorial-2-mixedprocessing application on Android; this application comes with the SDK. But my application crashes when it tries to load **mixed_sample**:
// Load native library after(!) OpenCV initialization
System.loadLibrary("opencv_java3");
Log.d(TAG, "Loaded the opencv"); // this log line does appear
System.loadLibrary("mixed_sample"); // crashes here
I looked inside the libs directory that came with opencv-3.2.0-android-sdk.zip, but libmixed_sample.so is nowhere to be found.
Where can I find this file?
Is there any relation between this library file and OpenCV Manager?
↧
Where is the lib file **mixed_sample.so**?
↧
object of class VideoCapture cannot be constructed
The VideoCapture is created in a worker thread, not the main thread.
I am not sure whether the capture was constructed correctly: capture.isOpened() returns true, but I get exactly one frame in total.

Thanks a lot if you can provide any help!
Thank you so much. ^_^
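For reference, a minimal sketch of the pattern that usually works: construct and read the VideoCapture entirely inside the same worker thread, so the capture object is never shared across threads. The camera index and the five-second run time are placeholders, not taken from the question.

#include <opencv2/opencv.hpp>
#include <atomic>
#include <chrono>
#include <thread>

int main()
{
    std::atomic<bool> running(true);
    std::thread worker([&]() {
        cv::VideoCapture capture(0); // construct it in the thread that reads it
        if (!capture.isOpened())
            return;
        cv::Mat frame;
        while (running && capture.read(frame))
        {
            // process the frame here; read() should keep yielding frames
        }
    });
    std::this_thread::sleep_for(std::chrono::seconds(5));
    running = false;
    worker.join();
    return 0;
}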
↧
↧
Optimized sliding window approach
Hi!
I have been searching around for a while for an optimized version of the sliding window approach based on OpenCV, but I have not yet found one. What I am looking for is a version of the approach that is multi-threaded and/or employs a variable stride, without basing the stride on classification confidences or the like. If anyone knows of any such existing approach, a heads-up would be appreciated :) Cheers!
PS: I am also using a pyramid implementation in my scheme. Would it be better to multi-thread the sliding window for each layer of the pyramid, or for each row/column in each image?
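In case it helps the discussion, here is a rough sketch of parallelising the window rows of one pyramid layer with cv::parallel_for_. This is not an existing OpenCV API; the window size, stride and the classify() hook are placeholders.

#include <opencv2/opencv.hpp>

class SlidingWindowBody : public cv::ParallelLoopBody
{
public:
    SlidingWindowBody(const cv::Mat& img, cv::Size win, int stride)
        : img_(img), win_(win), stride_(stride) {}

    void operator()(const cv::Range& range) const override
    {
        // each worker handles a stripe of window rows
        for (int ry = range.start; ry < range.end; ry++)
        {
            int y = ry * stride_;
            for (int x = 0; x + win_.width <= img_.cols; x += stride_)
            {
                cv::Rect roi(x, y, win_.width, win_.height);
                classify(img_(roi), roi); // user hook; must be thread-safe
            }
        }
    }

    static void classify(const cv::Mat& /*patch*/, const cv::Rect& /*where*/)
    {
        // placeholder: run the detector on the patch here
    }

private:
    const cv::Mat& img_;
    cv::Size win_;
    int stride_;
};

// usage: one stripe of window rows per worker
//   int nRows = (img.rows - win.height) / stride + 1;
//   cv::parallel_for_(cv::Range(0, nRows), SlidingWindowBody(img, win, stride));

Parallelising over the rows of each layer arguably balances better than one thread per layer, since coarse layers have far fewer windows, but profiling both on the target machine would settle it.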
↧
OpenCV on Mac and Raspberry Pi performance comparison
Hi, guys.
What I am working on is using OpenCV + Raspberry Pi 3 Model B + Raspberry Pi Camera V2 + GStreamer to capture video frames, process them, and save them into a video file.
My goal is to capture frames at a rate of at least 30 fps at 1280x720 resolution.
Here is my code:
cv::VideoCapture cap("v4l2src device=/dev/video0 ! video/x-raw, framerate=30/1, width=1280, height=720, format=RGB ! videoconvert ! appsink");
if (!cap.isOpened()) {
printf("=ERR= can't create video capture\n");
return -1;
}
cv::VideoWriter writer;
writer.open("appsrc ! videoconvert ! omxh264enc ! h264parse ! mpegtsmux ! filesink location=test.mp4", 0, (double)30, cv::Size(640, 480), true); // note: frames written must match this size (640x480), while the capture above is 1280x720
if (!writer.isOpened()) {
printf("=ERR= can't create video writer\n");
return -1;
}
On the Raspberry Pi, the writer is not fast enough: a single frame write takes 50-60 ms, which is slower than I need (30 fps allows only ~33 ms per frame).
A strange thing I noticed: `cap >> frame` takes 20-60 ms on my Mac but only about 10 ms on the RPi,
whereas `writer << frame` takes about 60 ms on the RPi but only 10 ms on my Mac.
This is my code to measure the time:
auto start = std::chrono::high_resolution_clock::now();
cap >> frame;
auto end = std::chrono::high_resolution_clock::now();
std::chrono::duration<double> diff = end - start;
std::cout << diff.count() << " s\n";
I am looking for any solution to reduce the `cap >> frame` time on the Mac and
the `writer << frame` time on the RPi.
Thank you in advance!
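Not a drop-in fix, but one common mitigation is to decouple capture from encoding with a small frame queue, so the slow `writer << frame` call no longer blocks the capture loop. Below is a sketch: the device index, output settings, frame count and the queue bound of 30 are placeholders (the GStreamer pipelines from the question would go in place of the plain device/filename), and frames must match the writer size.

#include <opencv2/opencv.hpp>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

int main()
{
    cv::VideoCapture cap(0);
    cv::VideoWriter writer("out.avi", cv::VideoWriter::fourcc('M','J','P','G'),
                           30.0, cv::Size(1280, 720), true);
    if (!cap.isOpened() || !writer.isOpened()) return -1;

    std::queue<cv::Mat> q;
    std::mutex m;
    std::condition_variable cond;
    bool done = false;

    std::thread consumer([&]() {
        for (;;) {
            std::unique_lock<std::mutex> lk(m);
            cond.wait(lk, [&] { return done || !q.empty(); });
            if (q.empty() && done) break;
            cv::Mat f = q.front(); q.pop();
            lk.unlock();
            writer << f;                 // the slow call now runs off the capture thread
        }
    });

    cv::Mat frame;
    for (int i = 0; i < 300 && cap.read(frame); i++) {
        {
            std::lock_guard<std::mutex> lk(m);
            if (q.size() < 30)           // drop frames rather than block capture
                q.push(frame.clone());   // clone: cap.read() reuses its buffer
        }
        cond.notify_one();
    }
    { std::lock_guard<std::mutex> lk(m); done = true; }
    cond.notify_one();
    consumer.join();
    return 0;
}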
↧
Extract boundingRect in extra images
I would like to extract **all** of my `boundingRect()`s into individual images. This code shows all my boundingRects in one image:
//...
for (size_t i = 1; i < lines.size(); i++) { // start at 1: lines[i - 1] is read below
Vec4i current = lines[i];
Point pt1 = Point(current[0], current[1]);
Point pt2 = Point(current[2], current[3]);
Vec4i previous = lines[i - 1];
Point ppt1 = Point(previous[0], previous[1]);
Point ppt2 = Point(previous[2], previous[3]);
double angle = atan2(pt2.y - pt1.y, pt2.x - pt1.x) * 180.0 / CV_PI;
if (angle) {
line(cdst, pt1, pt2, Scalar(0, 0, 255), 2, CV_AA);
}
vector<Point> pt;
vector<Mat> subregions;
pt.push_back(Point(current[0], current[1]));
pt.push_back(Point(current[2], current[3]));
pt.push_back(Point(previous[0], previous[1]));
pt.push_back(Point(previous[2], previous[3]));
Rect boundRect = boundingRect(pt);
rectangle(src, boundRect, Scalar(0, 255, 0), 1, 8, 0);
imshow("boundings", src);
}
Now I added this one, and it doesn't work:
for (size_t i = 0; i < pt.size(); i++) {
Rect roi = boundingRect(Mat(pt[i])); // pt[i] is a single Point, which is what triggers the assertion
Mat mask = Mat::zeros(src2.size(), CV_8UC1);
drawContours(mask, pt, i, Scalar(255), CV_FILLED);
Mat contourRegion;
Mat imageROI;
src2.copyTo(imageROI, mask);
contourRegion = imageROI(roi);
// Mat maskROI = mask(roi);
subregions.push_back(contourRegion);
}
}
The exception is: ` Assertion failed (npoints >= 0 && (depth == CV_32F || depth == CV_32S)) in cv::pointSetBoundingRect, file D:\opencv320\opencv\modules\imgproc\src\shapedescr.cpp, line 466`
I need to iterate through all of my `boundingRect()`s, and each of them should produce its own output image. I already looked at [a Python example and "translated" it][1] and also [tried this one][2].
Thank you in advance! Suggestions for improving my code are welcome. Here's an image showing all the boundingRects; they should now **all** be extracted into separate images.
[1]: https://stackoverflow.com/questions/21104664/extract-all-bounding-boxes-using-opencv-python
[2]: https://stackoverflow.com/questions/22875397/create-a-mask-from-a-boundingrect-in-opencv
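For comparison, here is a minimal sketch of the per-rect cropping idea (`regions` and `src` are placeholder names), assuming the points are grouped as one `vector<Point>` per region. `boundingRect()` wants a whole point set; the failing line above hands it a single `Point`, which is exactly what the assertion complains about.

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;
using namespace std;

vector<Mat> extractRegions(const Mat& src, const vector<vector<Point> >& regions)
{
    vector<Mat> crops;
    for (size_t i = 0; i < regions.size(); i++)
    {
        // clamp the rect to the image so the crop is always valid
        Rect r = boundingRect(regions[i]) & Rect(0, 0, src.cols, src.rows);
        if (r.area() > 0)
            crops.push_back(src(r).clone()); // each rect becomes its own image
    }
    return crops;
}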
↧
↧
OpenCV Undefined symbols for architecture x86_64: lineDescriptor
I built OpenCV from source, along with *opencv_contrib*.
For some reason all my attempts to access the classes in [`lineDescriptor`][1] lead to a linker error.
All of these declarations throw a linker error
BinaryDescriptor bsd = BinaryDescriptor();
Ptr<BinaryDescriptor> bsd1 = BinaryDescriptor::createBinaryDescriptor();
Ptr<LSDDetector> lsd1 = LSDDetector::createLSDDetector();
I fully understand what the error means but I don't know why it is thrown in the first place.
I've looked around and tried different solutions: [changing the compiler][2], [verifying linker flags][3], and [linking my libraries][4], but the error is still thrown.
#include <iostream>
#include "opencv2/opencv.hpp"
#include "opencv2/line_descriptor.hpp"
using namespace cv;
using namespace std;
using namespace line_descriptor;
void detectLines(Mat& original, Mat grey)
{
Ptr<LineSegmentDetector> lsd = createLineSegmentDetector(2);
vector<Vec4f> lines;
lsd->detect(grey, lines);
cout << "Detected " << lines.size() << endl;
lsd->drawSegments(original, lines);
// Linker problems galore
// BinaryDescriptor bsd = BinaryDescriptor();
// Ptr<BinaryDescriptor> bsd1 = BinaryDescriptor::createBinaryDescriptor();
// Ptr<LSDDetector> lsd1 = LSDDetector::createLSDDetector();
}
These are my current linker flags:
-lopencv_calib3d -lopencv_core -lopencv_features2d -lopencv_flann -lopencv_highgui -lopencv_imgcodecs -lopencv_imgproc -lopencv_ml -lopencv_objdetect -lopencv_photo -lopencv_shape -lopencv_stitching -lopencv_superres -lopencv_ts -lopencv_video -lopencv_videoio -lopencv_videostab
I personally feel like it has something to do with my flags, but I am not sure which flag corresponds to `line_descriptor`. Any help will be greatly appreciated!
[1]: http://docs.opencv.org/master/d2/d51/namespacecv_1_1line__descriptor.html
[2]: https://stackoverflow.com/a/27027103/4962554
[3]: https://stackoverflow.com/a/32595769/4962554
[4]: https://stackoverflow.com/a/36113293/4962554
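For what it's worth, each opencv_contrib module builds as its own library, so assuming `line_descriptor` was actually built and installed, the flag that should correspond to it is:

-lopencv_line_descriptor

If that library is missing from the install, re-running CMake with `OPENCV_EXTRA_MODULES_PATH` pointing at the opencv_contrib modules directory and rebuilding would produce it.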
↧
OpenCV: how is the frame grabbed from the webcam?
I am wondering about a single frame grab:
cap >> frame
If I am using a Logitech C920, which has its own hardware encoder, and I am grabbing a 720p image:
Am I getting 1280 * 720 * 3 = 921600 * 3 = 2764800 bytes from the USB port directly?
Or am I getting a compressed image out first, which is then decoded into those 2764800 bytes?
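One way to check is to ask the capture for the negotiated FOURCC; a minimal sketch (property support varies by backend, and the C920 typically offers YUYV, MJPG and H264 modes):

#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    // read the negotiated pixel format; not every backend reports it
    int fourcc = static_cast<int>(cap.get(cv::CAP_PROP_FOURCC));
    char code[5] = { (char)(fourcc & 0xFF), (char)((fourcc >> 8) & 0xFF),
                     (char)((fourcc >> 16) & 0xFF), (char)((fourcc >> 24) & 0xFF), 0 };
    std::printf("FOURCC: %s\n", code);

    // request MJPEG explicitly; the backend may ignore the request
    cap.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M', 'J', 'P', 'G'));
    return 0;
}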
↧
How to save an image with JPEG output in opencv 3.2.0?
Hello,
I am using OpenCV 3.2.0 and I simply want to read an image and save it back as JPEG with a quality of 75. There is an example for the PNG format: http://docs.opencv.org/trunk/d4/da8/group__imgcodecs.html#gabbc7ef1aa2edfaa87772f1202d67e0ce . I use the following code:
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>
#pragma comment(lib, "opencv_world320.lib")
// NOTE: linking the debug library together with the release one is a common
// cause of access violations like the one below; link only the library that
// matches the active build configuration.
//#pragma comment(lib, "opencv_world320d.lib")
using namespace std;
using namespace cv;
int main()
{
/************************************************************
Declaring Variables
************************************************************/
int x, y;
Mat my_img;
vector<int> compression_params;
/************************************************************
load and display an image
************************************************************/
my_img = imread("E:/Standard_Images/goldhill.jpg", CV_LOAD_IMAGE_COLOR);
cout << "The image dimensions:" << my_img.rows << "*" << my_img.cols << endl;
cout << "The number of the channels:" << my_img.channels() << endl;
cout << "The image depth: " << my_img.depth() << endl;
namedWindow("Lena", WINDOW_AUTOSIZE);
imshow("Lena", my_img);
waitKey(0);
destroyWindow("Lena");
/************************************************************
Compress the image with JPEG and write into a new file
************************************************************/
compression_params.push_back(CV_IMWRITE_JPEG_QUALITY);
compression_params.push_back(75);
bool bSuccess = imwrite("E:/Standard_Images/My_Image.jpg", my_img, compression_params);
if (!bSuccess)
{
cout << "Couldn't save the file" << endl;
}
else
{
namedWindow("The Saved file", CV_WINDOW_AUTOSIZE);
Mat openImage = imread("E:/Standard_Images/My_Image.jpg", CV_LOAD_IMAGE_UNCHANGED); // read back the file just written ("example.jpg" did not exist, leaving openImage empty)
imshow("The Saved file", openImage);
}
getchar();
return 1;
}
But it fails to save the output image, and the following error is issued: "Exception thrown at 0x00007FFF08DC86C2 (opencv_world320.dll) in ConsoleApplication1.exe: 0xC0000005: Access violation reading location 0x0000020F4BADF000."
Do you have any idea how to save an image in JPEG format? Thanks a lot for your help!
↧
Speeding up my code for object recognition
I am working on a multi-object recognition program and have succeeded in recognising two objects. However, the program is really slow and laggy. Can somebody tell me a way to speed it up?
So far I have seen that ORB is quite fast at matching features, yet my program is still too slow. I have also noticed that there are three for loops, which possibly slow things down. Interestingly, on YouTube I have seen videos where the output is really smooth.
Is there a way to fix this? Thanks.
This is my code:
#include <vector>
#include <string>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/nonfree/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/calib3d/calib3d.hpp"
using namespace std;
using namespace cv;
FlannBasedMatcher matcher(new flann::LshIndexParams(20, 10, 2));
void ttea(vector<KeyPoint> bwImgKP, Mat objectDS, Mat bwImgDS, vector<KeyPoint> objectKP, vector<Point2f> objectCRN, Mat colorImg, string text)
{
vector<vector<DMatch> > matches;
vector<DMatch> matchesGD;
vector<Point2f> obj;
vector<Point2f> scene;
vector<Point2f> sceneCRN(4);
Mat H;
if (bwImgKP.empty() || objectDS.empty() || bwImgDS.empty())
{
return;
}
matcher.knnMatch(objectDS, bwImgDS, matches, 2);
for (int i = 0; i < min(bwImgDS.rows - 1, (int)matches.size()); i++)
{
if ((matches[i][0].distance < 0.6*(matches[i][1].distance)) && ((int)matches[i].size() <= 2 && (int)matches[i].size() > 0))
{
matchesGD.push_back(matches[i][0]);
}
}
if (matchesGD.size() >= 4)
{
for (int i = 0; i < matchesGD.size(); i++)
{
obj.push_back(objectKP[matchesGD[i].queryIdx].pt);
scene.push_back(bwImgKP[matchesGD[i].trainIdx].pt);
}
H = findHomography(obj, scene, CV_RANSAC);
perspectiveTransform(objectCRN, sceneCRN, H);
line(colorImg, sceneCRN[0], sceneCRN[1], Scalar(255, 0, 0), 4);
line(colorImg, sceneCRN[1], sceneCRN[2], Scalar(255, 0, 0), 4);
line(colorImg, sceneCRN[2], sceneCRN[3], Scalar(255, 0, 0), 4);
line(colorImg, sceneCRN[3], sceneCRN[0], Scalar(255, 0, 0), 4);
putText(colorImg, text, sceneCRN[1], FONT_HERSHEY_DUPLEX, 1, Scalar(0, 0, 255), 1, 8);
}
}
int main()
{
OrbFeatureDetector detector;
OrbDescriptorExtractor extractor;
VideoCapture capture(0);
Mat object0 = imread("Much Ado About Nothing.jpg", CV_LOAD_IMAGE_GRAYSCALE);
vector<KeyPoint> object0KP;
detector.detect(object0, object0KP);
Mat object0DS;
extractor.compute(object0, object0KP, object0DS);
vector<Point2f> object0CRN(4);
object0CRN[0] = (cvPoint(0, 0));
object0CRN[1] = (cvPoint(object0.cols, 0));
object0CRN[2] = (cvPoint(object0.cols, object0.rows));
object0CRN[3] = (cvPoint(0, object0.rows));
Mat object1 = imread("Popular Science.jpg", CV_LOAD_IMAGE_GRAYSCALE);
vector<KeyPoint> object1KP;
detector.detect(object1, object1KP);
Mat object1DS;
extractor.compute(object1, object1KP, object1DS);
vector<Point2f> object1CRN(4);
object1CRN[0] = (cvPoint(0, 0));
object1CRN[1] = (cvPoint(object1.cols, 0));
object1CRN[2] = (cvPoint(object1.cols, object1.rows));
object1CRN[3] = (cvPoint(0, object1.rows));
while (true)
{
Mat bwImg;
Mat bwImgDS;
vector<KeyPoint> bwImgKP;
Mat colorImg;
capture.read(colorImg);
cvtColor(colorImg, bwImg, CV_BGR2GRAY);
detector.detect(bwImg, bwImgKP);
extractor.compute(bwImg, bwImgKP, bwImgDS);
ttea(bwImgKP, object0DS, bwImgDS, object0KP, object0CRN, colorImg, "Play");
ttea(bwImgKP, object1DS, bwImgDS, object1KP, object1CRN, colorImg, "Magazine");
imshow("Fish Smart", colorImg);
if (waitKey(1) == 27)
{
return 0;
}
}
}
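One cheap experiment (a sketch against the loop above, not a guaranteed fix): run detection on a downscaled frame, since ORB cost grows with image area, then scale the keypoints back so the homography and the drawn box still land on the full-size frame. The 0.5 factor is arbitrary; matching across the scale gap relies on ORB's own pyramid.

// inside the while loop, replacing the detect/compute pair
Mat small;
resize(bwImg, small, Size(), 0.5, 0.5, INTER_AREA);
detector.detect(small, bwImgKP);
extractor.compute(small, bwImgKP, bwImgDS);
for (size_t i = 0; i < bwImgKP.size(); i++)
    bwImgKP[i].pt *= 2.0f; // back to full-resolution coordinates for findHomography

Requesting a lower capture resolution from the camera in the first place would achieve much the same thing with less code.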
↧
↧
estimateRigidTransform()
After reading several posts about getting the 2D transformation of 2D points from one image to another, `estimateRigidTransform()` seems to be the recommendation, so I'm trying to use it. I modified the source code to change the `RANSAC` parameters, because they are hardcoded and the hardcoded values are not very good (the source for this function is in `lkpyramid.cpp`). I have read up on how `RANSAC` works, and am trying to understand the steps in `estimateRigidTransform()`.
// choose random 3 non-complanar points from A & B
...
// additional check for non-complanar vectors
a[0] = pA[idx[0]];
a[1] = pA[idx[1]];
a[2] = pA[idx[2]];
b[0] = pB[idx[0]];
b[1] = pB[idx[1]];
b[2] = pB[idx[2]];
double dax1 = a[1].x - a[0].x, day1 = a[1].y - a[0].y;
double dax2 = a[2].x - a[0].x, day2 = a[2].y - a[0].y;
double dbx1 = b[1].x - b[0].x, dby1 = b[1].y - b[0].y;
double dbx2 = b[2].x - b[0].x, dby2 = b[2].y - b[0].y;
const double eps = 0.01;
if( fabs(dax1*day2 - day1*dax2) < eps*std::sqrt(dax1*dax1+day1*day1)*std::sqrt(dax2*dax2+day2*day2) ||
    fabs(dbx1*dby2 - dby1*dbx2) < eps*std::sqrt(dbx1*dbx1+dby1*dby1)*std::sqrt(dbx2*dbx2+dby2*dby2) )
    continue;
Is it a typo that it says non-coplanar vectors? I mean, the 2D points are all on the same plane, right?
My second question is: what is that `if` condition doing? I know that the left-hand side (twice the area of the triangle) is zero or near zero when the points are collinear, and the right-hand side is the product of the lengths of two sides of the triangle.
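Working the condition out a little, in case it helps: write v1 = (dax1, day1) and v2 = (dax2, day2). Then

|dax1*day2 - day1*dax2| = |v1 x v2| = |v1| * |v2| * |sin(theta)|

so the test reduces to |sin(theta)| < eps: the random triple is rejected when its three points are nearly collinear, in either image. And for 2D points "non-complanar" does read like a leftover from a 3D variant; the meaningful check here is non-collinearity.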
↧
opencv 3.2.0 cannot create window
Hi all,
I installed OpenCV 3.2.0 on my new laptop, and it seems that it cannot create a window. The same code works fine on my desktop:
https://github.com/opencv/opencv/blob/master/samples/cpp/tutorial_code/introduction/display_image/display_image.cpp
Can anyone help?
Thanks in advance!
[C:\fakepath\Screen Shot 2017-06-19 at 7.12.02 PM.png](/upfiles/14978887554955819.png)
MacBook Pro, macOS Sierra 10.12.5
↧
Calculate corner locations from known dimensions and the 1st corner's location
Hello,
I have a board with known dimensions (50 cm along the x-axis and 30 cm along the y-axis), and an ArUco marker taped to the top-left corner of the board. I can detect the marker and get its location in the image through `aruco::detectMarkers`, its location in world coordinates through `triangulatePoints`, and its orientation through `estimatePoseSingleMarker`.
Now I have the location of the 1st corner, and I want to calculate the locations of the other 3 corners of the board w.r.t. my world origin. The problem is that the board can be in any orientation, so I cannot simply add 50 cm to the x of the 1st corner and 30 cm to the y. The known dimensions are w.r.t. my world coordinate system.
My question is: how can I transform these dimensions according to the board's orientation and then calculate the 2D/3D locations of the other corners?
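A sketch of one way to do it, assuming the marker's axes are aligned with the board edges and the marker sits at the top-left corner: express the corners in the board's own frame, then push them through the marker pose (`rvec`, `tvec` from the pose estimation). The result is in the camera frame; from there it can be moved into the world frame the same way the first corner was. The function name is a placeholder.

#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Point3d> boardCorners(const cv::Vec3d& rvec, const cv::Vec3d& tvec)
{
    // corner offsets in metres in the board/marker frame:
    // top-left, top-right, bottom-right, bottom-left of a 0.50 m x 0.30 m board
    std::vector<cv::Point3d> local;
    local.push_back(cv::Point3d(0.0,  0.0,  0.0));
    local.push_back(cv::Point3d(0.50, 0.0,  0.0));
    local.push_back(cv::Point3d(0.50, 0.30, 0.0));
    local.push_back(cv::Point3d(0.0,  0.30, 0.0));

    cv::Matx33d R;
    cv::Rodrigues(rvec, R); // rotation of the marker w.r.t. the camera

    std::vector<cv::Point3d> corners; // camera-frame coordinates
    for (size_t i = 0; i < local.size(); i++)
    {
        cv::Vec3d p(local[i].x, local[i].y, local[i].z);
        corners.push_back(cv::Point3d(R * p) + cv::Point3d(tvec));
    }
    return corners;
}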
↧
Trying to determine value of pixels found at certain coordinates
So I found the coordinates of pixels via blob detection, and I need to determine whether these pixels are within a certain HSV range. I have done this before with nested for loops, but I can't figure out how to look up a pixel by both of its coordinates instead of just going through every single pixel in the picture.
vector<Point2f> coor;
//populate vector
Vec3b* hsv = output.ptr<Vec3b>(img.rows, img.cols);
for(int I = 0; I < coor.size(); I++) {
Point2f j = coor[I];
uchar h = hsv[j][2];
uchar s = hsv[j][1];
uchar v = hsv[j][0];
//check ranges
}
My errors basically just say there's no match for these operands (related to the j). I have had no luck with google or my own ideas.
Update: I have fixed it. The proper code is:
int h = img.at<Vec3b>(j.y, j.x).val[2];
int s = img.at<Vec3b>(j.y, j.x).val[1];
int v = img.at<Vec3b>(j.y, j.x).val[0];
↧
↧
Image acquisition problem in Visual Studio
Hi
I have the problem shown in the attached picture: on the right, the capture from the camera's default program (it's a Point Grey Grasshopper); on the left, the image I get with cv::VideoCapture in Visual Studio 2017. You can see it's roughly the same image, but shown 3 times at poor resolution.
I have already tried different OpenCV libraries, but nothing changes.
With a regular USB webcam and the same software, everything works fine.
Can anybody help me?
[C:\fakepath\Immagine.png](/upfiles/14967561683441972.png)
Thanks
Giacomo
↧
warpPerspective with two cams
Hello all, I'm here to ask for advice.
I have two cams filming the same scene. One cam has a filter that passes only IR light; the other is a normal cam that blocks IR. I use this last cam (the normal one) to detect four balls projected onto a wall; the four balls are the vertices of a rectangle.
When I find these four points, I compute the matrix H with findHomography() and use it to fix the perspective.
I would like to compute this matrix H for the IR cam as well, but it cannot see these 4 balls. How can I calculate the matrix H for this cam?
I know the distance between the two cams.
Can anybody give me some advice?
↧
stitch function is not stitching all the input images
I am using the stitch function to stitch all the overlapping images into a panorama view, but only some of the images are getting stitched, not all.
(six input images attached)
The code is:
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/stitching.hpp>
using namespace std;
using namespace cv;

bool try_use_gpu = false;
vector<Mat> imgs;
string result_name = "field.jpg";
int parseCmdArgs(int argc, char** argv)
{
if (argc == 1)
{
return -1;
}
for (int i = 1; i < argc; ++i)
{
{
Mat img = imread(argv[i]);
if (img.empty())
{
cout << "Can't read image '" << argv[i] << "'\n";
return -1;
}
imgs.push_back(img);
}
}
return 0;
}
int main(int argc, char* argv[])
{
int retval = parseCmdArgs(argc, argv);
if (retval) return -1;
Mat pano;
Stitcher stitcher = Stitcher::createDefault(try_use_gpu);
stitcher.setRegistrationResol(-1); /// 0.6
stitcher.setSeamEstimationResol(-1); /// 0.1
stitcher.setCompositingResol(-1); //1
stitcher.setPanoConfidenceThresh(-1); //1
stitcher.setWaveCorrection(true);
stitcher.setWaveCorrectKind(detail::WAVE_CORRECT_HORIZ);
Stitcher::Status status = stitcher.stitch(imgs, pano);
if (status != Stitcher::OK)
{
cout << "Can't stitch images, error code = " << status << endl;
return -1;
}
imwrite(result_name, pano);
return 0;
}
The output I am getting is:
(partial panorama attached)
Please help me fix this problem. Thanks in advance.
↧
C++ Mat.at to Java
Hi, I am trying to convert some C++ OpenCV code to Java OpenCV code. I am trying to convert the following lines to Java:
for (int i = 0; i < image.rows; i++) {      // walk every row
    for (int j = 0; j < image.cols; j++) {  // walk every column
        uchar p = image.at<uchar>(i, j);    // read the 8-bit pixel at row i, column j
    }
}
Can anyone help me out by explaining what the C++ code is doing and how to convert it? Thanks in advance.
↧
↧
OpenCV: Judge .bmp image
My boss told me to create a program to judge an image. The image type is to be .bmp, not .jpg or .png, and the program should output a number.
The images are like this (six attachments): OK, OK-1, OK-2, OK-3, NO GOOD 1, NO GOOD 2.
There are 6 images: 4 are good and 2 are bad.
The difference between OK, OK-1, OK-2, and OK-3 is the angle at which the image was taken.
The purpose is to get a number or percentage from which we can tell the images apart (what kind of number, or where it should come from, I don't know). They asked me to use OpenCV, but I don't know how to get from the input to the output number.
Can someone please point me in the right direction: what function should I use, where should the number come from, and what should the program flow be?
I don't know where to start.
Thank you. Your answer really means a lot to me.
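Just one possible starting point, not a full solution: if the good/bad difference shows up as pixel differences against a reference, a single score can come from comparing the test image with a known-good one. A sketch, assuming equal-sized and roughly aligned images (file names are placeholders); since the OK images differ by viewing angle, some alignment step would likely be needed first.

#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>

int main()
{
    cv::Mat ref  = cv::imread("OK.bmp",   cv::IMREAD_GRAYSCALE);
    cv::Mat test = cv::imread("test.bmp", cv::IMREAD_GRAYSCALE);
    if (ref.empty() || test.empty() || ref.size() != test.size())
        return -1;

    // L2 distance normalised to 0..1; higher means further from the OK image
    double score = cv::norm(ref, test, cv::NORM_L2) /
                   std::sqrt((double)ref.total() * 255.0 * 255.0);
    std::cout << "difference score: " << score << std::endl;
    return 0;
}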
↧
accessing row and col pixel array
for (int j = 0; j < image.rows; j++)
{
    for (int i = 0; i < image.cols; i++)
    {
        if (image.at<uchar>(j, i) < 100) //change color less than 100
        {
            image.at<uchar>(j, i) = 255; // make it white
        }
    }
}
I want to change every pixel with a value below 100 to white, but only about half of the image actually changes. Is something wrong with the row and column calculation?
And how can I select a specific color in the image and change it to red?
Thank you.
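Incidentally, changing only part of the image is often a sign of indexing with `at<uchar>` while the Mat is actually 3-channel; worth checking `image.channels()`. For the second part, one way is a mask from `cv::inRange` plus `setTo`; a sketch, assuming `image` is a 3-channel BGR `Mat` (the bounds are placeholders to tune for the target color):

// select pixels inside a BGR range and paint them red
cv::Mat mask;
cv::inRange(image, cv::Scalar(90, 90, 90), cv::Scalar(130, 130, 130), mask);
image.setTo(cv::Scalar(0, 0, 255), mask); // BGR red wherever mask is non-zero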
↧
ORB feature matching in Java
So I am currently working on converting an OpenCV program from C++ to Java to do 2D feature matching. I've been having trouble understanding what some of the lines are doing and how I might find their Java equivalents; any help would be appreciated, thanks!
// Calculate the ORB descriptors for the given keypoints
Ptr<ORB> orb_feat = ORB::create(); // construct an ORB extractor with default parameters
Mat descriptors;                   // one row per keypoint after compute()
orb_feat->compute(input, keypoints, descriptors);
↧