Hello, OpenCV community! I need your help.
I'm trying to get the Kinect depth image using Visual Studio 2017 (C++) and OpenCV 3.2.0. It should be something simple, but I have tried a lot of approaches and code and nothing works. I've also tried the Kinect Developer Toolkit C++ sample for getting the depth image, and it doesn't work either (if necessary, I'll post the error message).
I have already tested my Kinect (it works) and imported the necessary libraries.
I did plenty of research on the internet and found nothing. If someone could post a simple piece of code that gets Kinect depth frames, I would be very grateful.
**PS: I'm using Kinect V1.0 (Xbox 360)**
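A minimal sketch of one way to do this is via OpenCV's OpenNI backend, which supports the Kinect v1. This assumes OpenCV was built WITH_OPENNI and the OpenNI + SensorKinect drivers are installed, which is not confirmed for the poster's setup:

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main()
    {
        // CAP_OPENNI opens the first OpenNI-compatible device (e.g. Kinect v1)
        cv::VideoCapture capture(cv::CAP_OPENNI);
        if (!capture.isOpened())
        {
            std::cerr << "OpenNI device not found (was OpenCV built WITH_OPENNI?)" << std::endl;
            return 1;
        }
        cv::Mat depthMap;
        for (;;)
        {
            if (!capture.grab())
                break;
            // the depth map arrives as CV_16UC1, in millimetres
            capture.retrieve(depthMap, cv::CAP_OPENNI_DEPTH_MAP);
            cv::Mat show;
            depthMap.convertTo(show, CV_8U, 255.0 / 4096.0); // map ~0-4 m to 0-255 for display
            cv::imshow("depth", show);
            if (cv::waitKey(30) >= 0)
                break;
        }
        return 0;
    }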
↧
How to get Kinect Depth Image with OpenCV?
↧
Stitching Video Stream
Hi guys,
I'm attempting to stitch the following video stream (youtube.com/watch?v=VDTEyQhZzKA).
I've used SURF and homographies to carry out the stitching, and that part is going well. However, I'm having trouble knowing when to push the stitched image back into the loop so the stitching carries on over the whole video. The variables are:
frame2: current video frame
stitched: stitched image (also frame1 at the start)
the loop: stitched + frame2 = stitched
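A sketch of that feedback loop, using the high-level cv::Stitcher as a stand-in for the custom SURF/homography pipeline (the structure is the same either way; the input filename and the every-10th-frame rate are placeholders):

    cv::VideoCapture cap("input.mp4");                   // hypothetical source
    cv::Stitcher stitcher = cv::Stitcher::createDefault();
    cv::Mat stitched, frame2, result;
    cap >> stitched;                                     // frame1 seeds the panorama
    int n = 0;
    while (cap.read(frame2))
    {
        if (++n % 10 != 0)                               // stitching every frame is costly and adds drift
            continue;
        std::vector<cv::Mat> pair = { stitched, frame2 };
        if (stitcher.stitch(pair, result) == cv::Stitcher::OK)
            stitched = result.clone();                   // push the grown panorama back into the loop
    }

Stitching against the accumulated panorama only every Nth frame keeps enough overlap between inputs while bounding the cost per iteration.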
Thanks in advance guys!
↧
↧
cpp-tutorial-pnp_registration throws an error
Hi everyone,
I am trying to use cpp-tutorial-pnp_registration from the OpenCV sample code, located at /home/***/opencv-3.2.0/samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/src/main_registration.cpp
I am just trying to use the original OpenCV data, resized_IMG_3875.JPG and box.ply, to get the textured 3D model (the .yml file); however, OpenCV throws the following error:
--------------------------------------------------------------------------
This program shows how to create your 3D textured model.
Usage:
./cpp-tutorial-pnp_registration
--------------------------------------------------------------------------
init done
Click the box corners ...
Waiting ...
COMPUTING POSE ...
OpenCV Error: Assertion failed (npoints >= 0 && npoints == std::max(ipoints.checkVector(2, CV_32F), ipoints.checkVector(2, CV_64F))) in solvePnP, file /tmp/binarydeb/ros-kinetic-opencv3-3.2.0/modules/calib3d/src/solvepnp.cpp, line 63
terminate called after throwing an instance of 'cv::Exception'
what(): /tmp/binarydeb/ros-kinetic-opencv3-3.2.0/modules/calib3d/src/solvepnp.cpp:63: error: (-215) npoints >= 0 && npoints == std::max(ipoints.checkVector(2, CV_32F), ipoints.checkVector(2, CV_64F)) in function solvePnP
Aborted (core dumped)
I am not sure what the problem is; could you give me some idea? Any ideas will be appreciated.
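For reference, that assertion fires when the 2D image-point list passed to solvePnP is empty or its length differs from the 3D object-point list, e.g. when fewer corners were clicked than the registration step expects. A sanity check of this shape just before the solvePnP call localizes the problem (the variable names here are placeholders, not necessarily the sample's actual ones):

    // solvePnP requires non-empty, equally sized 2D/3D point lists
    CV_Assert(!list_points2d.empty());
    CV_Assert(list_points2d.size() == list_points3d.size());
    cv::solvePnP(list_points3d, list_points2d, cameraMatrix, distCoeffs, rvec, tvec);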
Thanks in advance.
My screenshot is shown below:

↧
Copy vector single matrix into 2d matrix
Hello,
I have the following code:
    using namespace std;
    using namespace cv;

    int main()
    {
        CvCapture *capture = cvCaptureFromFile("C:\\test.mp4");
        static const int length = (int) cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_COUNT); // 250
        static const int width  = (int) cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH); // 480
        static const int height = (int) cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT); // 360
        static const int fps    = (int) cvGetCaptureProperty(capture, CV_CAP_PROP_FPS);
        Mat image;
        Mat image_split[3];
        Mat image_new;
        Mat image_vec = Mat::zeros(1, width * height, CV_8U);
        //Mat A = Mat::zeros(length, width * height, CV_8U);
        Mat A[length]; // a C array of Mats, i.e. a stack of separate matrices
        IplImage* frame;
        for (int i = 0; i < length; i++)
        {
            frame = cvQueryFrame(capture);
            if (!frame)
                break;
            image = cvarrToMat(frame);
            split(image, image_split);
            image_new = image_split[2];
            image_vec = image_new.reshape(1, height * width);
            image_vec.copyTo(A[i]);
        }
        cvReleaseCapture(&capture);
        return 0;
    }
My problem is that I need to put image_vec into a single 2-D matrix A, not an array of separate matrices.
What I tried was image_vec.col(0).copyTo(A.col(i)); and for some reason it didn't work.
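`A.col(i)` fails here because `A` was declared as a C array of `Mat`s, which has no `col()` member. One way to get a genuine 2-D matrix, assuming the extracted channel is 8-bit, is to preallocate `A` with one row per frame and copy each flattened frame into its row:

    // one row per frame, one column per pixel
    cv::Mat A(length, width * height, CV_8UC1);
    for (int i = 0; i < length; i++)
    {
        // ... grab the frame, split it, and pick the channel as before ...
        cv::Mat row = image_new.reshape(1, 1); // 1 x (width*height) header, no copy
        row.copyTo(A.row(i));                  // sizes and types must match exactly
    }

Note that `reshape(1, 1)` produces a single row, which matches `A.row(i)`; the original `reshape(1, height*width)` produced a single column instead.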
thanks
↧
Kalman filter: add samples between measurements.
I am trying to set up Kalman filtering to add samples between the measured samples. I have this test data:
    std::vector<cv::Point3d> data; // the x value is the timestamp
    cv::Point3d one(10, 11, 100);
    cv::Point3d two(20, 12, 200);
    cv::Point3d three(30, 13, 300);
    cv::Point3d four(40, 14, 500);
    cv::Point3d five(50, 15, 600);
    data.push_back(one);
    data.push_back(two);
    data.push_back(three);
    data.push_back(four);
    data.push_back(five);
This is my Kalman filter code, based on:
https://stackoverflow.com/questions/25702120/how-to-use-a-kalman-filter-to-predict-gps-positions-in-between-meassurements?rq=1
    int main()
    {
        KalmanFilter KF(4, 2, 0);
        Mat_<float> state(4, 1);        // unused below
        Mat processNoise(4, 1, CV_64F); // unused below
        // initialization of KF...
        float dt = 1; // time step
        KF.transitionMatrix = (Mat_<float>(4, 4) << 1, 0, dt, 0,
                                                    0, 1, 0, dt,
                                                    0, 0, 1, 0,
                                                    0, 0, 0, 1);
        // note: overwritten by the setIdentity() call a few lines down
        KF.processNoiseCov = (cv::Mat_<float>(4, 4) << 0.2, 0, 0.2, 0,
                                                       0, 0.2, 0, 0.2,
                                                       0, 0, 0.3, 0,
                                                       0, 0, 0, 0.3);
        Mat_<float> measurement(2, 1);
        measurement.setTo(Scalar(0));
        KF.statePre.at<float>(0) = 0;
        KF.statePre.at<float>(1) = 0;
        KF.statePre.at<float>(2) = 0;
        KF.statePre.at<float>(3) = 0;
        setIdentity(KF.measurementMatrix);
        setIdentity(KF.processNoiseCov, Scalar::all(1e-4));
        setIdentity(KF.measurementNoiseCov, Scalar::all(1e-1));
        setIdentity(KF.errorCovPost, Scalar::all(.1));
        std::cout << data.size() << std::endl;
        for (size_t i = 1; i < data.size(); ++i)
        {
            const cv::Point3d& last = data[i - 1];
            const cv::Point3d& current = data[i];
            double steps = current.x - last.x;
            std::cout << "Time between Points:" << current.x - last.x << endl;
            std::cout << "Measurement:" << current << endl;
            Mat prediction = KF.predict();
            measurement(0) = last.y;
            measurement(1) = last.z;
            Mat estimated = KF.correct(measurement);
            std::cout << "Estimated: " << estimated.t() << endl;
            for (int j = 0; j < steps; j++) // predict the samples in between
            {
                prediction = KF.predict();
                // BUG: the filter state is CV_32F; at<double> reinterprets the
                // bytes and prints garbage (see the note below the output)
                std::cout << (long)((last.x - data[0].x) + j) << " "
                          << prediction.at<double>(0) << " "
                          << prediction.at<double>(1) << endl;
            }
            std::cout << "Prediction: " << prediction.t() << endl << endl;
        }
        return 0;
    }
What I get is:
Time between Points:10
Measurement:[20, 12, 200]
Estimated: [7.3345551, 66.677773, 3.6654449, 33.322227]
0 5.27766e+13 1.14294e+10
1 7.5021e+14 1.14294e+10
2 3.18693e+15 1.14294e+10
3 1.34921e+16 1.14294e+10
4 5.69456e+16 1.14294e+10
5 1.91904e+17 1.14294e+10
6 3.95717e+17 1.14294e+10
7 8.15253e+17 1.14294e+10
8 1.67815e+18 1.14294e+10
9 3.45157e+18 1.14294e+10
Prediction: [43.989002, 399.90012, 3.6654449, 33.322227]
Time between Points:10
Measurement:[30, 13, 300]
Estimated: [12.395308, 202.58578, 0.60824382, 13.324508]
10 2.69206e+16 5.55539e+06
11 4.78165e+16 5.55539e+06
12 8.35837e+16 5.55539e+06
13 1.43592e+17 5.55539e+06
14 2.036e+17 5.55539e+06
15 2.63608e+17 5.55539e+06
16 3.59003e+17 5.55539e+06
17 4.79019e+17 5.55539e+06
18 6.2161e+17 5.55539e+06
19 8.61643e+17 5.55539e+06
Prediction: [18.477747, 335.83078, 0.60824382, 13.324508]
Time between Points:10
Measurement:[40, 14, 500]
Estimated: [13.987329, 307.97446, 0.29939148, 10.829972]
20 5.65692e+17 959433
21 7.50019e+17 959433
22 9.45115e+17 959433
23 1.14021e+18 959433
24 1.51769e+18 959433
25 1.90788e+18 959433
26 2.29807e+18 959433
27 3.07069e+18 959433
28 3.85107e+18 959433
29 4.65122e+18 959433
Prediction: [16.981243, 416.27432, 0.29939148, 10.829972]
Time between Points:10
Measurement:[50, 15, 600]
Estimated: [14.705034, 484.33414, 0.1413258, 14.342192]
30 2.92129e+19 1.12591e+07
31 3.74806e+19 1.12591e+07
32 4.57483e+19 1.12591e+07
33 5.4016e+19 1.12591e+07
34 6.22837e+19 1.12591e+07
35 7.05514e+19 1.12591e+07
36 8.38512e+19 1.12591e+07
37 1.00387e+20 1.12591e+07
38 1.16922e+20 1.12591e+07
39 1.33457e+20 1.12591e+07
Prediction: [16.118294, 627.7558, 0.1413258, 14.342192]
These results make no sense to me: the steps should go up by one, and the prediction is way off. Where am I going wrong here? And what exactly are the values in the prediction mat?
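One likely culprit, assuming the angle brackets in the post were eaten by the forum: `cv::KalmanFilter` allocates its matrices as CV_32F by default, so reading the state back with `at<double>()` reinterprets the float bytes and prints exactly the kind of astronomical numbers shown above. The `Estimated:`/`Prediction:` lines look sane because `operator<<` respects the real element type. A corrected inner loop might look like:

    for (int j = 0; j < steps; j++)
    {
        prediction = KF.predict(); // predict() chains: statePre is copied into statePost
        std::cout << (long)((last.x - data[0].x) + j) << " "
                  << prediction.at<float>(0) << " "       // predicted y
                  << prediction.at<float>(1) << std::endl; // predicted z
    }

As for what the prediction mat contains: with this transition matrix the state vector is [y, z, vy, vz], so entries 0 and 1 are the interpolated position and entries 2 and 3 the estimated velocity.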
Thank you!
↧
↧
Image acquisition problem in VisualStudio
Hi
I have the problem shown in the attached picture: on the right is the capture from the camera's default program (it's a Point Grey Grasshopper); on the left is the image I get with cv::VideoCapture in Visual Studio 2017. You can see it's roughly the same image, but shown 3 times with bad resolution.
I have already tried different OpenCV libraries, but nothing changes.
Using a regular USB webcam with the same software, everything works fine.
Can anybody help me?
[C:\fakepath\Immagine.png](/upfiles/14967561683441972.png)
Thanks
Giacomo
↧
Adding speed as a feature in an image based ANN
Good day everyone, thanks for taking the time to look into my question.
Based on this example: http://answers.opencv.org/question/119300/how-to-start-with-neural-network-implementation-with-opencv-and-c/
I've started implementing an ANN classifier to drive a car in Unity.
I read the camera feed while I control the car, to record my training data.
Then I train a neural network using this code:

    #include <opencv2/opencv.hpp>
    using namespace std;
    using namespace cv;

    int main()
    {
        int nclasses = 5;
        String att = FOLDER; // base folder with one subfolder per class
        vector<String> fn;
        Mat train_data, train_labels, test_data, test_labels;
        for (int p = 0; p < nclasses; p++)
        {
            cerr << "p " << p << "\r";
            glob(att + std::to_string(p), fn, false);
            for (size_t i = 0; i < fn.size(); i++)
            {
                cv::Mat image = cv::imread(fn[i], 0);
                if (image.empty())
                {
                    cerr << "no !" << fn[i] << endl;
                    continue;
                }
                image.convertTo(image, CV_32F); // optionally scale by 1.0/255
                resize(image, image, Size(80, 80));
                Mat feature = image;
                train_data.push_back(feature.reshape(1, 1));
                train_labels.push_back(p);
            }
        }
        // set up the ANN:
        int nfeatures = train_data.cols;
        Ptr<ml::ANN_MLP> ann = ml::ANN_MLP::create();
        Mat_<int> layers(4, 1);
        layers(0) = nfeatures;    // input
        layers(1) = nclasses * 8; // hidden
        layers(2) = nclasses * 4; // hidden
        layers(3) = nclasses;     // output, 1 pin per class
        ann->setLayerSizes(layers);
        ann->setActivationFunction(ml::ANN_MLP::SIGMOID_SYM, 0, 0);
        ann->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS, 300, 0.0001));
        ann->setTrainMethod(ml::ANN_MLP::BACKPROP, 0.0001);
        // the ANN requires "one-hot" encoding of the class labels:
        Mat train_classes = Mat::zeros(train_data.rows, nclasses, CV_32FC1);
        for (int i = 0; i < train_classes.rows; i++)
        {
            train_classes.at<float>(i, train_labels.at<int>(i)) = 1.f;
        }
        cerr << train_data.size() << " " << train_classes.size() << endl;
        ann->train(train_data, ml::ROW_SAMPLE, train_classes);
        ann->save("output.ann");
        return 0;
    }
At runtime I then read the camera feed and predict if I should go forward, left, right...
https://youtu.be/PB5NiIGFTNo
Using only 3500 images (2 laps), I get the result shown in the video.
**My question is,**
How can I add speed as a feature, so that the same image doesn't always produce the same classification regardless of speed? I'm using 80x80 images, i.e. 6400 floats, and I'm afraid that adding the speed as a single extra float won't carry any weight in the calculation.
Speed is just one example; friction, weather, the mass of the car and similar values could also be added during the training phase.
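One common trick, rather than adding the raw value as a single extra column, is to normalise the scalar and repeat it k times before concatenating, so it is not drowned out by the 6400 pixel features; `nfeatures = train_data.cols` then picks up the new width automatically. A sketch, where `speed` and `maxSpeed` are assumed to come from the simulator and `k` is a tuning knob:

    // append a normalised scalar feature, repeated k times for extra weight
    cv::Mat addScalarFeature(const cv::Mat& imgRow, float value, float maxValue, int k = 64)
    {
        cv::Mat scalar(1, k, CV_32F, cv::Scalar(value / maxValue)); // normalise to [0, 1]
        cv::Mat out;
        cv::hconcat(imgRow, scalar, out); // new row: 6400 + k columns
        return out;
    }

    // in the training loop, instead of pushing the bare image row:
    train_data.push_back(addScalarFeature(feature.reshape(1, 1), speed, maxSpeed));

The same pattern extends to friction, mass and so on, one block of repeated columns per scalar; the prediction path must of course append the same features in the same order.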
Thank you for your help.
↧
OpenCV Camera Calibration Parsing Error. How to solve it?
I'm using the camera calibration source code from OpenCV at this link (http://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html)
In the configuration file you may choose a camera, a video file or an image list as input; I chose the camera, so I tried to create a configuration file for it. Here's my XML configuration file:
9 6 50 "CHESSBOARD" 30 100 25 1 1 1 "out_camera_data.xml" 1 1 1
The camera ID is 3. When I run the code I get this error: `OpenCV Error: Parsing error: Preliminary end of the stream in icvXMLParseTag`
I don't understand what the problem is. Is it the chessboard size? How should I choose BoardSize_Width and BoardSize_Height? Or is there a bug in the code? In the camera calibration code I'm using this to read the settings:
    Settings s;
    const string inputSettingsFile = argc > 1 ? argv[1] : "C:/Calibration/Clibration/CameraCalibration1/config1.xml";
    FileStorage fs(inputSettingsFile, FileStorage::READ); // read the settings
I have been trying to solve this problem for two days and am still stuck, as I'm new to camera calibration and have never done it before. I would really appreciate the help. Thanks.
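The "Preliminary end of the stream" error means FileStorage could not parse the file as XML at all, independently of the board values: the settings must be wrapped in an <?xml?> declaration plus an <opencv_storage> root, with one tag per field that the tutorial's Settings class reads. A sketch using the values from the flattened config above (tag names as in the tutorial's in_VID5.xml; the flags not shown follow that template):

    <?xml version="1.0"?>
    <opencv_storage>
    <Settings>
      <BoardSize_Width>9</BoardSize_Width>
      <BoardSize_Height>6</BoardSize_Height>
      <Square_Size>50</Square_Size>
      <Calibrate_Pattern>"CHESSBOARD"</Calibrate_Pattern>
      <Input>"3"</Input> <!-- camera id -->
      <Calibrate_NrOfFrameToUse>25</Calibrate_NrOfFrameToUse>
      <Write_outputFileName>"out_camera_data.xml"</Write_outputFileName>
      <!-- remaining flags as in the tutorial's in_VID5.xml -->
    </Settings>
    </opencv_storage>

As for the board size: BoardSize_Width/Height count inner corners, not squares, so a standard chessboard of 10x7 squares is 9x6 here.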
↧
Can't seem to use OpenCV
I have followed the instructions included in the documentation for installing OpenCV (several times, just to make sure), and yet I am still not able to link my code, which is written in C++. Are there better outlined steps I can follow?
For context, I am using g++ as my compiler (with Cygwin), and I have two C++ sources I am trying to link. I am running on a Windows 10 x64 machine. The actual .cpp sources compile fine, but the linker is having a heart attack.
Cheers!
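For reference, "compiles but does not link" almost always means the OpenCV library path and names never reached the final link step. A sketch of the full command, assuming OpenCV 3.x headers and libraries installed under /usr/local inside the Cygwin tree (adjust the paths and the -l list to your install and to what the code actually uses; the libraries must come after the object files):

    g++ main.cpp other.cpp -o app \
        -I/usr/local/include \
        -L/usr/local/lib \
        -lopencv_core -lopencv_imgproc -lopencv_imgcodecs -lopencv_highgui -lopencv_videoio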
↧
↧
OpenCV keypoint copy does not work properly
I am trying to copy keypoints from one keypoint vector to another. I want to do this so that I can split the keypoints across threads. The example below uses all the keypoints found (the copying is the problem, not the splitting, and all of this runs in the main thread; no threading is involved here).
The values of **keypoint1** and **keypoint2** after copying are the same, but when I put them through descriptor extraction and matching, keypoint1 and keypoint2 produce different results.
**keypoint1** produces accurate results, whereas **keypoint2** produces a lot of wrong ones. I am using the `ORB` algorithm for keypoint detection and descriptor extraction, and `FlannBasedMatcher` for matching. I have tried a few methods to copy the keypoints, including `push_back()`, but it's always the same.
Method 1:

    keypoint2.clear();
    keypoint2.insert(keypoint2.begin(), keypoint1.begin(), keypoint1.end());

Method 2:

    keypoint2.clear();
    keypoint2.resize(keypoint1.size());
    for (size_t i = 0; i < keypoint1.size(); ++i) {
        keypoint2[i].pt.x = keypoint1[i].pt.x;
        keypoint2[i].pt.y = keypoint1[i].pt.y;
        keypoint2[i].size = keypoint1[i].size;
        keypoint2[i].angle = keypoint1[i].angle;
        keypoint2[i].response = keypoint1[i].response;
        keypoint2[i].octave = keypoint1[i].octave;
        keypoint2[i].class_id = keypoint1[i].class_id;
    }
After copying the keypoints:

    extractor->compute(GrayImage2, keypoint2, descriptor_img2);
    matcher.match(descriptor_img1, descriptor_img2, matches);
I can tell that it's wrong because both then go through the same filtering step to get better results, and the difference in the amount of correct matches between keypoint1 and keypoint2 is very large.
I also tried using pointers to split the keypoints, but I couldn't get a pointer to address only part of the keypoint vector.
↧
OpenCV: display multiple videos together in a single frame
I am writing a simple OpenCV program to display multiple videos in a single frame.
I have it working for two videos; now I want four. Could somebody guide me on how to display 4 videos in a single frame (4 videos in a square layout)? Below is my code; a sketch of the 2x2 case follows after it.
    int main(int argc, char** argv)
    {
        string filename = "/home/user/testaviravi.avi";
        VideoCapture capture(filename);
        VideoCapture capture1(filename);
        Mat frame;
        Mat frame1;
        if (!capture.isOpened())
            throw "Error when reading steam_avi0";
        if (!capture1.isOpened())
            throw "Error when reading steam_avi1";
        namedWindow("w", 1);
        for (;;)
        {
            capture >> frame;
            capture1 >> frame1;
            if (frame.empty())
                break;
            if (frame1.empty())
                break;
            // note: an ROI passed to copyTo must match the source size exactly,
            // so the canvas height here should be frame.rows for a 1x2 layout
            Mat canvas = Mat::zeros(frame.rows * 2 + 1, frame.cols * 2 + 1, frame.type());
            frame.copyTo(canvas(Range::all(), Range(0, frame.cols)));
            frame1.copyTo(canvas(Range::all(), Range(frame1.cols + 1, frame1.cols * 2 + 1)));
            // if it is too big to fit on the screen, scale it down by 2, hopefully it'll fit :-)
            imshow("w", canvas);
        }
        waitKey(0); // key press to close window
        // releases and window destroy are automatic in the C++ interface
    }
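A sketch of the 2x2 layout: open four captures, read one frame from each per iteration, and copy each frame into its quadrant of a canvas via Rect ROIs. All four frames are assumed to have the same size w x h and the same type:

    // frames[0..3] come from four VideoCapture objects, read once per iteration
    int w = frames[0].cols, h = frames[0].rows;
    cv::Mat canvas = cv::Mat::zeros(h * 2, w * 2, frames[0].type());
    frames[0].copyTo(canvas(cv::Rect(0, 0, w, h))); // top-left
    frames[1].copyTo(canvas(cv::Rect(w, 0, w, h))); // top-right
    frames[2].copyTo(canvas(cv::Rect(0, h, w, h))); // bottom-left
    frames[3].copyTo(canvas(cv::Rect(w, h, w, h))); // bottom-right
    cv::imshow("w", canvas);

The ROI sizes must match the frame sizes exactly, otherwise copyTo reallocates the temporary ROI header and the canvas is left untouched.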
↧
Eclipse Neon c++ openCV 3.0.0+ install and usage
I am trying to work on a personal project this summer; at its core I plan to use the NVIDIA Jetson platform, and I need the benefits of CUDA acceleration. I am primarily a Java/C# programmer, but due to the support situation this is the first time I am really working on the C++ side of things. Setting up my environment in Eclipse Neon.3 seemed to go fine: I used the install package and then added the libraries with the setx command from the tutorial. But when I attempt to build any C++ application using OpenCV, I get an error on the namespace and on every other OpenCV reference.
Any help would be appreciated.
Here is the output from the compiler:
19:46:43 **** Incremental Build of configuration Debug for project testproj2 ****
Info: Internal Builder is used for build
g++ "-IC:\\Users\\cole\\Documents\\opencv\\build\\include\\opencv2" -O0 -g3 -Wall -c -fmessage-length=0 -o "src\\test.o" "..\\src\\test.cpp"
g++ "-LC:\\Users\\cole\\Documents\\opencv\\build\\include\\opencv2" -o testproj2.exe "src\\test.o" -lopencv_core -lopencv_imgproc -lopencv_highgui
c:/mingw/bin/../lib/gcc/mingw32/5.3.0/../../../../mingw32/bin/ld.exe: cannot find -lopencv_core
c:/mingw/bin/../lib/gcc/mingw32/5.3.0/../../../../mingw32/bin/ld.exe: cannot find -lopencv_imgproc
c:/mingw/bin/../lib/gcc/mingw32/5.3.0/../../../../mingw32/bin/ld.exe: cannot find -lopencv_highgui
collect2.exe: error: ld returned 1 exit status
19:46:43 Build Finished (took 255ms)
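The failing link line shows the problem: -L points at the header tree (build\include\opencv2) instead of a directory containing the compiled libraries. Note also that the official Windows OpenCV packages ship MSVC binaries only, so for MinGW g++ OpenCV generally has to be built from source first. Assuming such a build installed under C:\opencv-mingw (a hypothetical path), the link step would look like this; MinGW library names typically carry a version suffix:

    g++ "-LC:\opencv-mingw\lib" -o testproj2.exe "src\test.o" -lopencv_core320 -lopencv_imgproc320 -lopencv_highgui320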
↧
Unable to run programs of SFM (Structure From Motion) module
I am getting an error when I try to build a sample from the SFM module. Initially SFM was not present in my contrib directory, so I downloaded the latest opencv_contrib and copied the sfm folder into my contrib directory. I used the CMakeLists.txt that was present in the sfm module.
Then I tried to build scene_reconstruction.cpp in my own directory, together with the CMakeLists.txt I took from the sfm module. The following is the error I get:
-- The C compiler identification is GNU 5.4.0
-- The CXX compiler identification is GNU 5.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found installed version of Eigen: /usr/lib/cmake/eigen3
-- Found required Ceres dependency: Eigen version 3.2.92 in /usr/include/eigen3
-- Found required Ceres dependency: glog
-- Performing Test GFLAGS_IN_GOOGLE_NAMESPACE
-- Performing Test GFLAGS_IN_GOOGLE_NAMESPACE - Success
-- Found required Ceres dependency: gflags
-- Found Ceres version: 1.13.0 installed in: /usr/local with components: [LAPACK, SuiteSparse, SparseLinearAlgebraLibrary, CXSparse, SchurSpecializations, OpenMP]
-- Checking SFM deps... TRUE
-- Module opencv_sfm disabled because the following dependencies are not found: Eigen
CMake Error at CMakeLists.txt:35 (ocv_module_disable):
Unknown CMake command "ocv_module_disable".
CMake Warning (dev) in CMakeLists.txt:
No cmake_minimum_required command is present. A line of code such as
cmake_minimum_required(VERSION 3.5)
should be added at the top of the file. The version specified may be lower
if you wish to support older CMake versions for this project. For more
information run "cmake --help-policy CMP0000".
This warning is for project developers. Use -Wno-dev to suppress it.
-- Configuring incomplete, errors occurred!
PS: I went through all the required downloads of the libraries mentioned on the SFM documentation page before starting.
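The "Unknown CMake command ocv_module_disable" line gives the cause away: the module's CMakeLists.txt uses OpenCV's internal build macros and only works inside the OpenCV build tree, not standalone. To build a single sample against an already-installed OpenCV (built with contrib and the sfm module enabled), a minimal standalone CMakeLists.txt of this shape is enough:

    cmake_minimum_required(VERSION 3.5)
    project(scene_reconstruction)
    find_package(OpenCV REQUIRED)
    add_executable(scene_reconstruction scene_reconstruction.cpp)
    target_link_libraries(scene_reconstruction ${OpenCV_LIBS})

If opencv_sfm itself was disabled during the OpenCV build (as the log's "dependencies are not found: Eigen" line suggests), the module has to be rebuilt with Eigen visible to CMake before this can link.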
↧
↧
Sending Mat over the web
I need to send a cv::Mat over the web (to an ASP.NET C# backend).
Any ideas or ready-made examples?
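The usual approach is to compress the Mat to an in-memory JPEG/PNG with cv::imencode and send the resulting byte buffer in the HTTP body (or base64-encoded inside JSON); the C# side can then rebuild the image from the bytes. A sketch of the sending half:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // compress a BGR image to an in-memory JPEG byte buffer
    std::vector<uchar> matToJpeg(const cv::Mat& img, int quality = 90)
    {
        std::vector<uchar> buf;
        std::vector<int> params = { cv::IMWRITE_JPEG_QUALITY, quality };
        cv::imencode(".jpg", img, buf, params);
        return buf;
    }

The actual HTTP transport (libcurl, WinHTTP, ...) is independent of OpenCV and omitted here.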
↧
Key colors from artificial image
Hi everyone!
I have a little issue while trying to separate the elements of an image by colour in order to extract the curves.
These are non-natural images, so colour thresholding, DBSCAN or other means of separating them by colour should work nicely to create the masks (background, grid, curve[n], ...), but I can't figure out a good way to do it; there is always noise or bad clustering.
So I was wondering what your approach to this problem would be? Maybe I'm missing something trivial.
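For synthetic charts like these, one workable route is k-means colour quantisation: cluster the pixels into K dominant colours, then each cluster index becomes a mask. A sketch, assuming `img` is the loaded BGR chart and K is tuned per image:

    // cluster pixels in Lab space (more perceptually uniform than BGR)
    cv::Mat lab, samples;
    cv::cvtColor(img, lab, cv::COLOR_BGR2Lab);
    lab.convertTo(samples, CV_32F);
    samples = samples.reshape(1, img.rows * img.cols); // N x 3 float samples
    int K = 6; // background + grid + a few curves
    cv::Mat labels, centers;
    cv::kmeans(samples, K, labels,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.0),
               3, cv::KMEANS_PP_CENTERS, centers);
    // per-pixel cluster index; compare against k to get one mask per colour
    cv::Mat idx = labels.reshape(1, img.rows);
    cv::Mat curveMask = (idx == 2); // inspect `centers` to pick the right cluster

Running a small morphological open/close on each mask afterwards usually removes the residual noise around anti-aliased edges.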
Thank you for reading!
↧
stream a webcam image from a c++ opencv dll, through to a c# windows form?
Hi, as the title says: I have an OpenCV DLL application that I need to drive from a Windows Forms application. The code below compiles, but the resulting image is just black. Where am I going wrong?
//c++ dll code:

    #include <opencv2/opencv.hpp>
    #include <iostream>

    extern "C" {
        cv::Mat frame;

        __declspec(dllexport) void camera()
        {
            cv::VideoCapture cap;
            if (!cap.open(0))
                std::cout << "cam not up" << std::endl;
            while (true)
            {
                cap >> frame;
                cv::imshow(":", frame);
                cv::waitKey(1);
            }
        }

        __declspec(dllexport) uchar* img(void)
        {
            return frame.data;
        }
    };

//c# form code:

    [DllImport("Cranium.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern void camera();

    [DllImport("Cranium.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern IntPtr img();

    private void stream()
    {
        Bitmap a = new Bitmap(640, 480, 3 * 640, PixelFormat.Format24bppRgb, img());
        pictureImg.Image = a;
    }

    private void buttonConnect_Click(object sender, EventArgs e)
    {
        Thread connectThread = new Thread(camera);
        connectThread.Start();
        Thread displayThread = new Thread(stream);
        displayThread.Start();
    }
When I click `buttonConnect`, the camera starts and the C++ window runs fine, but the C# image in the picturebox stays black.
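One likely cause: the Bitmap is constructed exactly once, from whatever `frame.data` pointed to before the first frame was grabbed, and `cap >> frame` may reallocate that buffer afterwards, so the picturebox never sees live pixels. A safer pattern is to let the C# side own a buffer and copy the current frame into it on every refresh. A sketch of the C++ side (`getFrame` is a hypothetical name; locking around `frame` is advisable but omitted for brevity):

    #include <cstring> // for std::memcpy

    // copy the latest frame into a caller-owned buffer; returns bytes written
    __declspec(dllexport) int getFrame(uchar* dst, int dstSize)
    {
        if (frame.empty() || !frame.isContinuous())
            return 0;
        const int bytes = (int)(frame.total() * frame.elemSize()); // 640*480*3 expected
        if (bytes > dstSize)
            return 0;
        std::memcpy(dst, frame.data, bytes);
        return bytes;
    }

On the C# side, allocate a byte[] (or a locked Bitmap) once, call getFrame from a timer or loop, and invalidate the picturebox after each copy, instead of building the Bitmap a single time.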
↧
Detect color segments between parallel HoughLines
I hope someone can help me. I have an image of a floor and I already detect all vertical lines in it with HoughLinesP. Here is the code:
    Mat dst, cdst, dx, dy;
    int cnt;
    vector<Vec4i> lines;

    void HoughTransf(Mat src)
    {
        Canny(src, dst, 150, 100, 3, false);
        cvtColor(dst, cdst, CV_GRAY2BGR);
        Sobel(cdst, dx, CV_32F, 1, 0);
        Sobel(cdst, dy, CV_32F, 0, 1); // y-derivative (was (1, 0), i.e. dx computed twice)
        HoughLinesP(dst, lines, 1, CV_PI / 2, 50, 50, 10);
        for (size_t i = 0; i < lines.size(); i++)
        {
            Vec4i l = lines[i];
            Point pt1 = Point(l[0], l[1]);
            Point pt2 = Point(l[2], l[3]);
            double angle = atan2(pt2.y - pt1.y, pt2.x - pt1.x) * 180.0 / CV_PI;
            if (angle)
            {
                line(cdst, pt1, pt2, Scalar(0, 0, 255), 2, CV_AA);
            }
        }
        imshow("detected lines", cdst);
        // report and reset the line count
        cnt = (int)lines.size();
        cout << cnt << "\n";
        lines.clear();
    }
Here's the output:

Now I would like to detect the different colours between those lines and save the detected segments as ROIs. To be honest, I have no idea how to resolve this issue. Here's another picture of the original, so you can hopefully see what I mean (the marked areas are the ones I would like to detect between the Hough lines).
I've done some research but couldn't find anything. I know I should iterate between the lines, but this is exactly where I need help. Thank you in advance!
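Since the lines are near-vertical and roughly parallel, one sketch of an approach (run right after HoughLinesP, before `lines` is cleared; `src` is the original colour image; needs `<algorithm>` for std::sort): sort the lines by x, then treat each gap between consecutive lines as a strip and sample its mean colour:

    // sort the near-vertical lines left to right by their x coordinate
    std::sort(lines.begin(), lines.end(),
              [](const cv::Vec4i& a, const cv::Vec4i& b) { return a[0] < b[0]; });
    for (size_t i = 0; i + 1 < lines.size(); i++)
    {
        int x0 = lines[i][0], x1 = lines[i + 1][0];
        if (x1 - x0 < 5)
            continue; // skip near-duplicate detections of the same edge
        cv::Rect roi(x0, 0, x1 - x0, src.rows);    // strip between two lines
        cv::Scalar meanColor = cv::mean(src(roi)); // average BGR of the strip
        // store roi as the segment's ROI, or classify the strip by meanColor
    }

Clustering the ROIs by meanColor (or thresholding in HSV) then separates the differently coloured floor segments.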
↧
↧
Speeding up a sliding window approach!
Hi!
I'm currently implementing a system for multi-class detection in an underwater environment. I have chosen the LBP-HF features found [here](https://github.com/nourani/LBP) to deal with rotation and illumination invariance. For the actual detection I have trained classifiers using libSVM, and I have successfully detected objects in my input images (1920 x 1080) by using a sliding window for location invariance and an image scale pyramid for scale invariance. The problem with this approach is that it is very slow on my large input images, so I am looking for ways to speed it up. I have read a bit about efficient subwindow search (ESS), but it seems to me that this technique requires a confidence score for each prediction. Is this true? Are there any other ways I could speed up my detection scheme? Any help would be appreciated!
PS: The reason why this is posted here is because the code is written for a framework using OpenCV.
↧
Opening a video stream takes ages
Hi everyone!
I am creating a program with Visual Studio 2015 that streams video from an IP camera. The program is written in C++ and runs correctly except for a latency problem. I am using the code below:
    // inside the capture loop:
    if (!vcap.open(videoStreamAddress))
    {
        printf("Error opening video stream or file \n");
    }
    else
    {
        if (!vcap.read(image)) // capture an image
        {
            cv::waitKey(); // if no image was captured, wait
        }
        else
        {
            cv::imshow("Output Window", image); // show the image in the output window
            if (cv::waitKey(1) >= 0) break;     // break if a key is pressed within 1 ms
        }
    }
The latency is at start-up: the call to **vcap.open(videoStreamAddress)** takes about 2-5 minutes. After that, I get a nice video stream without any lag. The videoStreamAddress is something like http://192.168.1.7:78080. I am using OpenCV v3.1.0. The IP camera is actually a Raspberry Pi 3 with a USB webcam attached, running Motion. Streaming from the Raspberry Pi through a web browser works nicely.
Thanks in advance
↧
Video window not loading frame
I know this question has been asked multiple times, but none of the solutions has worked in my context; besides, my window actually does get created.
When I run this program, the webcam lights up and the window appears, but without the footage. The same thing happens when I try to load an image. I tried with and without the `namedWindow()` call, but still no luck. My `frame` matrix contains data, and everything seems to run smoothly except for actually displaying the image/video.
I compiled OpenCV 3.2.0 from source and am running it on macOS Sierra.
    #include <iostream>
    #include "opencv2/opencv.hpp"

    using namespace cv;
    using namespace std;

    VideoCapture cap(0);

    void grabVideoFeed(Mat& frame)
    {
        if (!cap.isOpened()) cerr << "Issue grabbing camera";
        cap.read(frame);
    }

    int main(int argc, const char * argv[])
    {
        Mat frame;
        for (;;)
        {
            grabVideoFeed(frame);
            if (frame.empty()) break;
            namedWindow("Main", CV_WINDOW_AUTOSIZE);
            imshow("Main", frame);
            if (waitKey(30) >= 0) break;
        }
        cap.release();
        destroyAllWindows();
        return 0;
    }
When I execute this program, the created window looks like this.
↧