I have 3 formulas (P and S are the same picture, taken with different methods):
1. P/S
2. P/(P+S)
3. |P-S|/|P+S|
Image P

Image S

With one of these formulas the result will look like this:

How do I implement these formulas in C++?
If I'm using
subtract(P, S, min);
add(P, S, plus);
is that the same as P-S and P+S? And how do I divide P by S?
↧
How to use a formula in C++ OpenCV
↧
Basic add and subtract on different-size images
First I want to apologize: I'm new to OpenCV.
Currently I'm practicing with basic stuff; I'm trying to add and subtract 2 images.
First Image :

Second Image :

Third Image :

When I try to add and subtract the First and Second Image I get these results:
Add

Subtract

My questions are:
1. Why does First Image + Second Image become brighter?
2. Why does First Image - Second Image become darker?
3. When I try to add and subtract First Image + Third Image I get an error. Why? (I searched and found CV_32F and CV_32FC1; what are those?)
4. When I try to add and subtract same-size images in greyscale, I get an error too. Why?
Thank you.
↧
↧
How to detect an infant cry using OpenCV?
Hi, I am working on building a baby monitoring system, and one parameter I need to monitor is the baby's cry. I am trying to build a cry detection device using computer vision. The device should detect crying accurately without being affected by outside noise.
↧
Is it possible to turn this C++ code into Python code?
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
cv::Mat color = cv::imread("../houghCircles.png");
cv::namedWindow("input"); cv::imshow("input", color);
cv::Mat canny;
cv::Mat gray;
/// Convert it to gray
cv::cvtColor( color, gray, CV_BGR2GRAY );
// compute canny (don't blur with that image quality!!)
cv::Canny(gray, canny, 200,20);
cv::namedWindow("canny2"); cv::imshow("canny2", canny>0);
std::vector<cv::Vec3f> circles;
/// Apply the Hough Transform to find the circles
cv::HoughCircles( gray, circles, CV_HOUGH_GRADIENT, 1, 60, 200, 20, 0, 0 );
/// Draw the circles detected
for( size_t i = 0; i < circles.size(); i++ )
{
cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
int radius = cvRound(circles[i][2]);
cv::circle( color, center, 3, cv::Scalar(0,255,255), -1);
cv::circle( color, center, radius, cv::Scalar(0,0,255), 1 );
}
//compute distance transform:
cv::Mat dt;
cv::distanceTransform(255-(canny>0), dt, CV_DIST_L2 ,3);
cv::namedWindow("distance transform"); cv::imshow("distance transform", dt/255.0f);
// test for semi-circles:
float minInlierDist = 2.0f;
for( size_t i = 0; i < circles.size(); i++ )
{
// test inlier percentage:
// sample the circle and check for distance to the next edge
unsigned int counter = 0;
unsigned int inlier = 0;
cv::Point2f center((circles[i][0]), (circles[i][1]));
float radius = (circles[i][2]);
// maximal distance of inlier might depend on the size of the circle
float maxInlierDist = radius/25.0f;
if (maxInlierDist < minInlierDist) maxInlierDist = minInlierDist;
// sample points on the circle outline
for (float t = 0; t < 2.0f * 3.14159265359f; t += 0.1f)
{
counter++;
float cX = radius * cos(t) + circles[i][0];
float cY = radius * sin(t) + circles[i][1];
if (dt.at<float>(cY, cX) < maxInlierDist)
{
inlier++;
cv::circle(color, cv::Point2i(cX,cY), 3, cv::Scalar(0,255,0));
}
else
cv::circle(color, cv::Point2i(cX,cY), 3, cv::Scalar(255,0,0));
}
std::cout << 100.0f*(float)inlier/(float)counter << " % of a circle with radius " << radius << " detected" << std::endl;
}
cv::namedWindow("output"); cv::imshow("output", color);
cv::imwrite("houghLinesComputed.png", color);
cv::waitKey(-1);
return 0;
}
↧
Making Positive Samples Quickly
Hello everyone,
I am currently trying to make my own haar cascade for detecting fishes. So far, I have collected 65 positive samples. However, this took me a very long time.
So I was wondering if there is a fast way to get over 1000 positive samples (as I have read that this is the recommended number of samples)?
Thanks.
↧
↧
How to Print Pixel Color Value C++
I have an image where every value is 127 (grey), and I want to pick one coordinate and print its pixel value; it must return 127.
printf("%d ", nom1.at<uchar>(Point(0, 0)));
but it gives me an error.
My purpose is to get each pixel color value and print it.
Thank you.
↧
Receive video from network with 100*100 then resize the video into 640*368
I resize a 640*368 video to 100*100 using resize() in OpenCV 3.2 and send the 100*100 data to the client over the network. After receiving it from the server I resize it back to 640*368 using the resize function, but the video quality is reduced. How should I solve this?
**client.cpp**
#include "opencv2/opencv.hpp"
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <iostream>
#include <vector>
using namespace std;
using namespace cv;
int main(int argc,char **argv)
{
int clientsocket;
char* serverIP;
int serverPortid;
struct sockaddr_in clientaddr;
serverIP = argv[1];
serverPortid = atoi(argv[2]);
clientsocket=socket(AF_INET,SOCK_STREAM,0);
if(clientsocket==0)
{
perror("socket failed");
exit(0);
}
clientaddr.sin_family=AF_INET;
clientaddr.sin_addr.s_addr=inet_addr(serverIP);
clientaddr.sin_port=htons(serverPortid);
if(connect(clientsocket,(struct sockaddr *)&clientaddr,sizeof(clientaddr))<0)
{
perror("connection failed");
exit(0);
}
VideoWriter video("output.avi",CV_FOURCC('M','J','P','G'),25,Size(640,368),true);
Mat frame_mata(368,640,CV_8UC3, Scalar(0,255,255));
while(1)
{
vector<uchar> vectData(30000); // 100 * 100 * 3 bytes per CV_8UC3 frame
int framereceive = recv(clientsocket,vectData.data(),vectData.size(),MSG_WAITALL);
Mat data_mata(100, 100, CV_8UC3, vectData.data());
cout<<"frame receive :" << framereceive << endl;
imshow("client view before resize", data_mata);
resize(data_mata,frame_mata,frame_mata.size(),1,1,INTER_LINEAR);
imshow("client view after resize", frame_mata);
video.write(frame_mata);
if (framereceive == 0)
{
break;
}
waitKey(1);
}
video.release();
close(clientsocket);
return 0;}
**server.cpp**
#include "opencv2/opencv.hpp"
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
using namespace cv;
using namespace std;
int main(int argc, char **argv)
{
int portid=8080;
int serversocket, clientsocket;
struct sockaddr_in serveraddr, clientaddr;
int addrLen = sizeof(struct sockaddr_in);
serversocket = socket(AF_INET,SOCK_STREAM,0);
if(serversocket == 0)
{
perror("socket failed");
exit(0);
}
serveraddr.sin_family=AF_INET;
serveraddr.sin_addr.s_addr=htonl(INADDR_ANY);
serveraddr.sin_port=htons(portid);
if(bind(serversocket,(struct sockaddr *)&serveraddr,sizeof(serveraddr))<0)
{
perror("bind failed");
exit(0);
}
if(listen(serversocket,4)<0)
{
perror("listen failed");
exit(0);
}
cout << "Waiting for connections...\n"<< "Server Port:" << portid << std::endl;
clientsocket = accept(serversocket,(struct sockaddr *)&clientaddr, (socklen_t*)&addrLen);
if(clientsocket<0)
{
perror("server cannot accept");
exit(0);
}
cout<<"server has connected" << endl;
VideoCapture serverCapture("/home/srmri/Downloads/videostream/SampleVideo_640x360_1mb.mp4");
int width = serverCapture.get(CV_CAP_PROP_FRAME_WIDTH);
int height = serverCapture.get(CV_CAP_PROP_FRAME_HEIGHT);
int totalframe = serverCapture.get(CAP_PROP_FRAME_COUNT);
cout << "Width : " << width << "Height : " << height <<"total Frame : " << totalframe<< endl;
//VideoWriter writer("serveroutput.avi",CV_FOURCC('M','J','P','G'),25,Size(width,height),true);
Mat frame;
if (!serverCapture.isOpened())
{
perror("camera is already is in use");
exit(0);
}
int imgSize =0 ;
Mat frame_send(100,100,CV_8UC3);
while(1)
{
frame = Mat::zeros( height,width, CV_8UC3);
serverCapture >> frame;
resize(frame,frame_send,frame_send.size(),1, 1,INTER_LINEAR);
waitKey(1);
imgSize = frame_send.total() * frame_send.elemSize();
if(waitKey(30)==27)
{
perror("esc key has pressed");
exit(0);
}
if(imgSize == 0 || imgSize < 0)
{
break;
}
else
{
imshow("server side", frame);
imshow("Resize server side ", frame_send );
//writer.write(frame);
}
cout << "image size :" << imgSize << endl;
send(clientsocket, frame_send.data, imgSize, 0);
}
serverCapture.release();
close(serversocket);
return 0;
}
↧
Algorithm for detect similar photos
Hello everyone, I wanted to ask your advice. My goal is to create an algorithm that determines, with a good success rate, whether two pictures are similar.
I have a master photo and a set of photos I want to compare against it to see if they are "the same".
I want to divide the algorithm into 3 steps:
1) Load the photo to compare and adjust its brightness, contrast and histogram
2) Adjust the perspective by computing the homography matrix H against the original photo
3) Take the difference between the master picture and this one (absdiff)
My problem is: how do I make a picture match the master in brightness and contrast?
↧
Calibrate 360 degree camera
Hello,
I bought a 360 degree camera and I want to calibrate it. I know that videos and images taken by a 360 degree camera have metadata stored in the files.
Does this metadata store the camera parameters, such as the camera matrix?
Thanks
↧
↧
what is normalize in opencv
I'm trying to understand what normalize does. I have 2 images:
Image 1 :

Image 2 :

and then I try to divide them with
divide(image1, image2, div);
and I try to print a 10x10 block:

All values become 1 or 0, and when I show it, it looks like this:

But when I normalize it:
normalize(div, nom, 0, 255, NORM_MINMAX, CV_8U);
it wil show like this :

When I print the pixel values, the only value I see is 85, but in the image there are white dot areas.
Can someone please explain what is happening?
Thank you.
↧
How to verify subtract values in OpenCV
I need to verify that the pixel subtraction values are right. I'm using this code:
subtract(image1, image2, min); //subtract image1 - image2
cout << endl << endl << "image 1 pixel : " << endl << endl;
for (int y = 0; y < 10; y++)//To loop through all the pixels 10x10
{
for (int x = 0; x < 10; x++) //4 55
{
pix_val = image1.at<uchar>(x, y);
cout << pix_val << " , ";
}
cout << endl;
}
cout << endl << endl << "image 2 pixel : " << endl << endl;
for (int x = 0; x < 10; x++)//To loop through all the pixels 10x10
{
for (int y = 0; y < 10; y++)
{
pix_val = image2.at<uchar>(x, y);
cout << pix_val << " , ";
}
cout << endl;
}
cout << endl << endl << "subtract pixel : " << endl << endl;
for (int y = 0; y < 10; y++)
{
for (int x = 0; x < 10; x++)
{
pix_val = min.at<uchar>(x, y);
cout << pix_val << " , ";
}
cout << endl;
}
but the results are different:

Not all values match.
Please help me; I need to verify the subtraction values with numbers.
Thank you.
↧
How to use Tan-1 in OpenCV
I want to calculate this formula:
tan⁻¹(Image1 / Image2)
How do I implement it in OpenCV? Thank you.
↧
Different undistorting results (first rotate, then undistort OR first undistort, then rotate)
Hello,
I don't know why, but I get different results if I first rotate the image and then undistort it, versus first undistorting and then rotating.
There is a very small difference between the two results.
(In the middle of the image they seem to be the same, but at the upper and lower borders of the images the difference is larger.)
My minimal code is shown below:
Mat camera_matrix, distCoeffs;
char* out_file = "C:\\Users\\Bob\\Documents\\Visual Studio 2013\\Projects\\Samples\\camera.txt";
FileStorage fs(out_file, FileStorage::READ);
fs["K"] >> camera_matrix;
fs["D"] >> distCoeffs;
//Reading the images
Mat image1 = imread("C:\\Users\\Bob\\Documents\\Visual Studio 2013\\Projects\\Samples\\1.jpg");
Mat image2 = imread("C:\\Users\\Bob\\Documents\\Visual Studio 2013\\Projects\\Samples\\2.jpg");
Mat image1_undistorted, image2_undistorted;
Mat image1_rot, image2_rot, image1_flip, image2_flip;
//First workflow (first undistort image, then turn)
///////////////////////////////////////////////////////////////////////
undistort(image1, image1_undistorted, camera_matrix, distCoeffs);
undistort(image2, image2_undistorted, camera_matrix, distCoeffs);
transpose(image1_undistorted, image1_rot);
flip(image1_rot, image1_undistorted, 0);
transpose(image2_undistorted, image2_rot);
flip(image2_rot, image2_undistorted, 0);
imwrite("image1_undistorted_FIRSTWORKFLOW.jpg", image1_undistorted);
imwrite("image2_undistorted_FIRSTWORKFLOW.jpg", image2_undistorted);
//Second workflow (first turn image, then undistort)
///////////////////////////////////////////////////////////////////////
//Adjusting the camera matrix and distortion coefficients
double fx = camera_matrix.at<double>(0, 0);
double fy = camera_matrix.at<double>(1, 1);
double cx = camera_matrix.at<double>(0, 2);
double cy = camera_matrix.at<double>(1, 2);
camera_matrix.at<double>(0, 0) = fy;
camera_matrix.at<double>(1, 1) = fx;
camera_matrix.at<double>(0, 2) = cy;
camera_matrix.at<double>(1, 2) = image1.size().width - cx;
double p1 = distCoeffs.at<double>(0, 2);
double p2 = distCoeffs.at<double>(0, 3);
distCoeffs.at<double>(0, 2) = p2;
distCoeffs.at<double>(0, 3) = p1;
transpose(image1, image1_rot);
flip(image1_rot, image1_flip, 0);
transpose(image2, image2_rot);
flip(image2_rot, image2_flip, 0);
undistort(image1_flip, image1_undistorted, camera_matrix, distCoeffs, Mat());
undistort(image2_flip, image2_undistorted, camera_matrix, distCoeffs, Mat());
imwrite("image1_undistorted_SECONDWORKFLOW.jpg", image1_undistorted);
imwrite("image2_undistorted_SECONDWORKFLOW.jpg", image2_undistorted);
The results are shown below:
The first image (both workflow results)


The second image (both workflow results)


---------------------------------------------------------------------------------------------------------------------------------------
And if I calculate the differences between the images resulting from the two workflows, I get these:
**image1_undistorted_FIRSTWORKFLOW - image1_undistorted_SECONDWORKFLOW =**

**image2_undistorted_FIRSTWORKFLOW - image2_undistorted_SECONDWORKFLOW =**

You can see that in the middle of the "difference images" there seems to be no difference, but at the top and bottom there is a bigger difference.
Can someone tell me what I'm doing wrong, or why I get different results from the two workflows?
↧
↧
compression of big data and transit to other system then decompress
I want to compress a big video file (reduce its size), transmit it to another system, and after all the compressed data has been received, decompress it there. For example: I send a big video file from System A to System B. In System A I compress the video file (reduce its size), then transmit the compressed data to System B. In System B I decompress the video file. After decompression, the original video should come back without any loss of frames or pixels.
So, is there any solution to this kind of problem?
I am new to this technology. I need help.
Thanks.
↧
Matching lines between 2 images with known motion ?
So I've got 2 images, and I run a line/edge detector on both (Canny or Laplacian of Gaussian).
I want to match the lines of the first image with the lines of the second; is there a way to do this with OpenCV?
The purpose of my application is to estimate the distance between those 2 images (before and after the motion).
↧
Send cv::Mat from c++ to python
Hi, I have created C++ code that reads a GigE camera image using its SDK and converts it into a cv::Mat. Now I have wrapped this code to use it in Python, but I don't know how to return the Mat object to Python.
PyObject* some_function(PyObject* self, PyObject* args)
{
//GIGE SDK gets image to hcamera3 object
//**************OPENCV PART*****************************
// do image processing
std::cout << "Acquired image...\n";
CvSize size;
size.width = ImageWidth(hCamera3);
size.height = ImageHeight(hCamera3);
retval = cvCreateImageHeader(size, ocv_depth(ImageDatatype(hCamera3, 0)), ImageDimension(hCamera3));
ppixels = nullptr;
GetLinearAccess(hCamera3, 0, &ppixels, &xInc, &yInc);
cvSetData(retval, ppixels, yInc);
//matToReturn = cv::cvarrToMat(retval, true);
matToReturn = cv::cvarrToMat(retval, true);
std::cout << matToReturn.size << "SIZE" << std::endl;;
cv::imwrite("E:/PythonGenie.bmp", matToReturn);
//******************************************************
// stop the grab (kill = true: wait for ongoing frame acquisition to stop)
result = G2Freeze(hCamera3, true);
// free camera
ReleaseObject(hCamera3);
}
return ?????????????????????
}
I am not sure what to put in the return statement, and then how to assign it in Python. In Java it was just passing a Mat object to C++, but here I am lost.
Any help appreciated!
↧
Open CV Support for USB3.0 cameras
Hi all,
I am working on a project where a USB 3.0 device is used as the input camera and the stream is collected in a Qt application that uses the OpenCV backend for video acquisition. I wanted to know whether OpenCV supports USB 3.0 devices; any information on this would be really helpful. As far as I know, OpenCV relies on VFW/DirectShow (Windows) and V4L (Linux) to read the data from kernel mode. I want to know whether this Video I/O configuration supports the USB3 core driver sitting on the xHCI driver for USB3 devices (a video-class camera in this case).
↧
↧
I have the code for live video and for finding contours separately. How can the two be stitched together to find contours in a live video?
CODE FOR LIVE VIDEO
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main()
{
Mat CameraFrame;
Mat Grey;
VideoCapture cap;
char keypressed;
//Opens the first imaging device.
cap.open(0);
//Check whether user selected camera is opened successfully.
if( !cap.isOpened() )
{
cout << "***Could not initialize capturing...***\n";
return -1;
}
//Create a windows to display camera preview.
namedWindow("Camera Preview", CV_WINDOW_AUTOSIZE);
//Loop infinitely to fetch frame from camera and display it.
for(;;)
{
//Fetch frame from camera.
cap >> CameraFrame;
//Check whether received frame has valid pointer.
if( CameraFrame.empty() )
break;
//Display the received frame
imshow("Camera Preview", CameraFrame);
//Wait for Escape keyevent to exit from loop
keypressed = (char)waitKey(10);
if( keypressed == 27 )
break;
}
//Release the camera interface.
cap.release();
return 0;
}
CODE FOR FINDING CONTOURS
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace cv;
using namespace std;
Mat src; Mat src_gray;
int thresh = 100;
int max_thresh = 255;
RNG rng(12345);
/// Function header
void thresh_callback(int, void* );
/** @function main */
int main( int argc, char** argv )
{
/// Load source image and convert it to gray
src = imread( argv[1], 1 );
/// Convert image to gray and blur it
cvtColor( src, src_gray, CV_BGR2GRAY );
blur( src_gray, src_gray, Size(3,3) );
/// Create Window
const char* source_window = "Source";
namedWindow( source_window, CV_WINDOW_AUTOSIZE );
imshow( source_window, src );
createTrackbar( " Threshold:", "Source", &thresh, max_thresh, thresh_callback );
thresh_callback( 0, 0 );
waitKey(0);
return(0);
}
/** @function thresh_callback */
void thresh_callback(int, void* )
{
Mat threshold_output;
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
/// Detect edges using Threshold
threshold( src_gray, threshold_output, thresh, 255, THRESH_BINARY );
/// Find contours
findContours( threshold_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
/// Approximate contours to polygons + get bounding rects and circles
vector<vector<Point>> contours_poly( contours.size() );
vector<Rect> boundRect( contours.size() );
vector<Point2f> center( contours.size() );
vector<float> radius( contours.size() );
for( int i = 0; i < contours.size(); i++ )
{ approxPolyDP( Mat(contours[i]), contours_poly[i], 3, true );
boundRect[i] = boundingRect( Mat(contours_poly[i]) );
minEnclosingCircle( (Mat)contours_poly[i], center[i], radius[i] );
}
/// Draw polygonal contour + bonding rects + circles
Mat drawing = Mat::zeros( threshold_output.size(), CV_8UC3 );
for( int i = 0; i< contours.size(); i++ )
{
Scalar color = Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );
drawContours( drawing, contours_poly, i, color, 1, 8, vector<Vec4i>(), 0, Point() );
rectangle( drawing, boundRect[i].tl(), boundRect[i].br(), color, 2, 8, 0 );
circle( drawing, center[i], (int)radius[i], color, 2, 8, 0 );
}
/// Show in a window
namedWindow( "Contours", CV_WINDOW_AUTOSIZE );
imshow( "Contours", drawing );
}
↧
Homography Decomposition Results
Hello,
Please see the answer to the Stack Overflow question linked below.
https://stackoverflow.com/questions/35942095/opencv-strange-rotation-and-translation-matrices-from-decomposehomographymat
It's pretty straightforward with a homography decomposition:
https://hal.inria.fr/inria-00174036v3/document
> We already know that there exist 4 solutions, in the general case, for the homography decomposition problem, two of them being the "opposites" of the other two.
>
> Rtna = {Ra, ta, na} ; Rtna− = {Ra, −ta, −na} (131)
> Rtnb = {Rb, tb, nb} ; Rtnb− = {Rb, −tb, −nb} (132)
>
> These can be reduced to only two solutions applying the constraint that all the reference points must be visible from the camera (visibility constraint). We will assume along the development that the two solutions verifying this constraint are Rtna and Rtnb and that, among them, Rtna is the "true" solution. These solutions are related according to (102)-(104). In practice, in order to determine which one is the good solution, we can use an approximation of the normal n∗. Thus, having an approximated parameter vector μ we build a non-linear state observer:
There must be code somewhere for an OpenCV implementation for the refinement of the homographies.
// decompose using identity as internal parameters matrix
std::vector<cv::Mat> Rs, Ts;
cv::decomposeHomographyMat(H,
K,
Rs, Ts,
cv::noArray());
Rs and Ts contain multiple solutions. How do I determine which one to use?
1) Visibility test: project points to 3D space using R and t, and check that they are in front of the camera (positive z value?)
2) How do I reduce the final two solutions to one?
Regards,
Daniel
↧
QT5 opencv3.1 ubuntu 16.04 cv::destroyAllWindows
I have a Qt5 GUI app with some camera regulation & transforms. For code simplicity I use imshow("mywin", myMatimage) to show every transform used. If everything is as desired, I save the sequence of transformations, convert the cv::Mat into a QImage and show the image/video in the Qt5 GUI.
All is OK, but to close the imshow windows I must release their Mat images (that works fine), and afterwards I must manually close the opened OpenCV windows because I'm not able to detect and close them from code.
Sure, I can tie a bool variable to every imshow command, and when I want to close the windows use an if statement on every bool var:
if (myboolvarMy1imshowWindows) {
destroyWindow("windows_myboolvarMy1imshowWindows");
}
but is there a more elegant way to do this?
regards
gfx
↧