Hello,
I am making a program that lets the user put white square-shaped items (pieces of paper or blocks, whatever they have) on the table, and then detects the pattern formed by these items with the camera (for example one item, two items connected together, and so on, up to domino-like mazes). What I have at the moment:
* Getting data from camera and conversion to cv::Mat
* Conversion from RGB to HSV and thresholding
* Contour detection through findContours() (using CV_RETR_CCOMP and CV_CHAIN_APPROX_SIMPLE)
* Conversion from contours to rotated bounding boxes through minAreaRect()
My next idea is to get a list of points (coordinates on camera image) where the blocks are (to determine whether the block is in that particular point, and to possibly detect its color if the requirements change) based on the bounding box and the area it covers. Here's the idea and my issue:

I know that this is not a bug: the feature works correctly, because this is indeed the smallest rectangle that the shape fits in.
My question is: are there any alternatives to minAreaRect() that would take shape edges into consideration?
Thanks in advance :)
↧
Issue with detecting combinations of squares
↧
opencv exception in abs function
I want to use the abs function, but every time I use it some type of exception occurs!! Sometimes this exception:
Exception thrown: read access violation.
in the code below, in ocl.cpp, line 4:
bool useOpenCL()
{
CoreTLSData* data = getCoreTlsData().get();
if( data->useOpenCL < 0 )
{
try
{
data->useOpenCL = (int)haveOpenCL() && Device::getDefault().ptr() && Device::getDefault().available();
}
catch (...)
{
data->useOpenCL = 0;
}
}
return data->useOpenCL > 0;
}
Other times this:
A heap has been corrupted.
in the code below, in alloc.cpp, line 3:
void* fastMalloc( size_t size )
{
uchar* udata = (uchar*)malloc(size + sizeof(void*) + CV_MALLOC_ALIGN);
if(!udata)
return OutOfMemoryError(size);
uchar** adata = alignPtr((uchar**)udata + 1, CV_MALLOC_ALIGN);
adata[-1] = udata;
return adata;
}
And sometimes another exception:
A heap has been corrupted (ntdll.dll).
on opencl_core.cpp line 110 : handle = LoadLibraryA(path);
I am totally confused, please help.
↧
↧
extended and upright flags in SURF opencv c++ function
what are the equivalent flags of SURF in OpenCV C++ for the Python SURF flags extended and upright?
- in the Python version, the upright flag decides whether to calculate orientation or not
- and the extended flag gives the option of using 64-dim or 128-dim descriptors
Is there a way to do the same operation in the OpenCV C++ version of the SURF function?
FYI I am using opencv version 2.4.13
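For reference, in the 2.4.x C++ API these are constructor parameters (and public fields) of `cv::SURF` in the nonfree module; the hessian threshold below is an arbitrary example value:

```cpp
#include <opencv2/nonfree/features2d.hpp> // SURF lives in nonfree in 2.4.x

// SURF(hessianThreshold, nOctaves, nOctaveLayers, extended, upright)
// extended = true  -> 128-dim descriptors, false -> 64-dim
// upright  = true  -> skip orientation computation (U-SURF), false -> compute it
cv::SURF surf(400.0, 4, 2, true, false);
// The same options can also be set after construction:
// surf.extended = false; surf.upright = true;
```

The semantics match the Python flags: `upright` controls whether orientation is computed, `extended` controls the descriptor dimensionality.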
↧
New to OpenCV, can't load an image
Hi! I am new to OpenCV and trying to follow the tutorial to load an image. However, Mat image, namedWindow, imshow, WINDOW_AUTOSIZE, and waitKey are not found in the cv namespace.
I have #included opencv\core.hpp, opencv2\imgcodecs.hpp, opencv2\highgui.hpp, opencv2\opencv.hpp, and opencv2\cv.hpp.
So far I have tried linking $(OPENCV_DIR)\lib, using namespace cv, and adding " CV_ ", " cv:: " and " cv_ " before each command.
The tutorial says to use this code:
Mat image;
image = imread(imageName.c_str(), IMREAD_COLOR);
if( image.empty() )
{
cout << "Could not open or find the image" << std::endl ;
return -1;
}
namedWindow( "Display window", WINDOW_AUTOSIZE );
imshow( "Display window", image );
waitKey(0);
//I have found that this code will fix all but the " image " problem:
Mat image = imread(imageName.c_str(), IMREAD_COLOR);// " Mat " must be on the same line as " imread "
if( image.empty() ) // " image " is underlined here
{
cout << "Could not open or find the image" << std::endl ;
return -1;
}
const std::string& windowName("Display Window"); // Must be declared first to work
void namedWindow(int windowName, int WINDOW_AUTOSIZE );// Must have the "void" and "int" types defined
void imshow(int windowName, int image ); // Also must have types, and " image " is ok here
int waitKey(0); // Must have int type
I am using OpenCV 3.2 on a clean Windows 10 computer with VS 2017 Enterprise.
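For reference, a complete minimal program for OpenCV 3.2. If these names are not found in the `cv` namespace even with `using namespace cv`, the include directory is usually the problem (C/C++ → Additional Include Directories must point at OpenCV's `include` folder, after which the headers resolve as `opencv2\...`, not `opencv\...`). The image path below is a placeholder:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <iostream>

int main()
{
    // "example.jpg" is a placeholder; substitute your own image path
    cv::Mat image = cv::imread("example.jpg", cv::IMREAD_COLOR);
    if (image.empty())
    {
        std::cout << "Could not open or find the image" << std::endl;
        return -1;
    }
    cv::namedWindow("Display window", cv::WINDOW_AUTOSIZE);
    cv::imshow("Display window", image);
    cv::waitKey(0); // wait for a keypress before exiting
    return 0;
}
```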
Thanks for your help!!
↧
Inconsistent return values on pixel access.
Hello Community.
I've got that litte snipped here.
As you can see, I'm accessing the pixel values block-wise and using a column's pixel values as the bits of an unsigned int. The block number and the unsigned int serve as coordinates for a pixel in another image, which is set to 255.
cv::Mat1b devideAndConquer(const cv::Mat1b &raw, cv::Size2i &blockCount, cv::Size2i &blockSize){
int wordBlockSize = std::pow(2,blockSize.height);
cv::Mat1b word = cv::Mat1b::zeros(blockCount.width,wordBlockSize);
for(uint8_t b = 0; b < blockCount.width; ++b) {
for (uint8_t w = 0; w < blockSize.width; ++w) {
unsigned int n = 0;
for (uint8_t h = 0; h < (blockSize.height); ++h) {
std::cout << (int)(raw.at<uchar>(b*blockSize.width+w,h));
n = (n << 1) + (raw.at<uchar>(b*blockSize.width+w,h) > 0 ? 1 : 0);
}
std::cout <<":"<< n << "\t";
word.at<uchar>(b,n) = 0xFF;
}
}
std::cout << std::endl;
return word;
}
I've tested everything. After hours of debugging I've found the following: sometimes the return value of `raw.at<uchar>(b*blockSize.width+w,h)` is inconsistent between calls of `devideAndConquer`.
I used this Code to test my assumptions:
cv::Size2i blockCount(13,1);
blockSize.height = 10;
blockSize.width = 16;
cv::Mat1b dummy(blockCount.height*blockSize.height,blockCount.width*blockSize.width);
do{
cv::randu(dummy, 0, 2);
} while(cv::countNonZero(dummy) == 0);
do{
cv::Mat1b a = devideAndConquer(dummy,blockCount,blockSize);
cv::Mat1b b = devideAndConquer(dummy,blockCount,blockSize);
cv::Mat1b c;
cv::compare(a,b,c,CV_COMP_BHATTACHARYYA);
cv::imshow("Window",c);
cv::waitKey();
}while(cv::countNonZero(c) != 0);
In these inconsistent cases, the output of the first call for one block is `0000000000:0`; on the second call it is `255000000000:512`. So the return value of one `.at` call that should be 0 is 255.
`dummy` isn't touched: when I compare the dummy before the calls and after the calls, the result is 0 non-zero pixels.
On the 3rd and 4th calls, even more values are flipped, but they are flipped consistently.
Am I missing something? Have I made an error? Or is there some bogus magic happening that's flipping pixels?
The latest OpenCV 2.4 and 3.2 (both causing the same error) are used on a MacBook Pro (13", Mid 2011) with 4GB RAM and an i7.
↧
↧
How to calculate number of people counting inside/outside in shop?
Hello. How can I calculate how many people are currently in the shop?
For example: 7 people came in and 5 people left, so 2 people remain in the shop.
Could you tell me how it can be implemented?
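The bookkeeping itself is just a running balance: increment on an "in" crossing, decrement on an "out" crossing. A tiny sketch of the idea, independent of the detection code below:

```cpp
// Running occupancy: +1 for each person entering, -1 for each person leaving.
struct Occupancy {
    int entered = 0;
    int left = 0;
    void personEntered() { ++entered; }
    void personLeft()    { ++left; }
    int inside() const   { return entered - left; } // people currently in the shop
};
```

With 7 entries and 5 exits, `inside()` returns 2. In the line-crossing code below this would mean calling `personEntered()` where one direction's counter is incremented and `personLeft()` for the other, instead of keeping two unrelated counters.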
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <conio.h> // it may be necessary to change or remove this line if not using Windows
#include <fstream> // file utils
#include <ctime> // timestamp stuff
#include "Blob.h"
#define SHOW_STEPS // un-comment or comment this line to show steps or not
#define FRAME_SCALE 1 // divide frame dimentions by this number
// global variables ///////////////////////////////////////////////////////////////////////////////
const cv::Scalar SCALAR_BLACK = cv::Scalar(0.0, 0.0, 0.0);
const cv::Scalar SCALAR_WHITE = cv::Scalar(255.0, 255.0, 255.0);
const cv::Scalar SCALAR_YELLOW = cv::Scalar(0.0, 255.0, 255.0);
const cv::Scalar SCALAR_GREEN = cv::Scalar(0.0, 200.0, 0.0);
const cv::Scalar SCALAR_RED = cv::Scalar(0.0, 0.0, 255.0);
const cv::Scalar SCALAR_BLUE = cv::Scalar(255.0, 0.0, 0.0);
// function prototypes ////////////////////////////////////////////////////////////////////////////
void matchCurrentFrameBlobsToExistingBlobs(std::vector<Blob> &existingBlobs, std::vector<Blob> &currentFrameBlobs);
void addBlobToExistingBlobs(Blob &currentFrameBlob, std::vector<Blob> &existingBlobs, int &intIndex);
void addNewBlob(Blob &currentFrameBlob, std::vector<Blob> &existingBlobs);
double distanceBetweenPoints(cv::Point point1, cv::Point point2);
void drawAndShowContours(cv::Size imageSize, std::vector<std::vector<cv::Point> > contours, std::string strImageName);
void drawAndShowContours(cv::Size imageSize, std::vector<Blob> blobs, std::string strImageName);
bool checkIfBlobsCrossedTheLine(std::vector<Blob> &blobs, int &intVerticalLinePosition, int &ShopCountL, int &ShopCountR, std::ofstream &myfile);
void drawBlobInfoOnImage(std::vector<Blob> &blobs, cv::Mat &imgFrame2Copy);
void drawShopCountOnImage(int &ShopCountL, int &ShopCountR, cv::Mat &imgFrame2Copy);
///////////////////////////////////////////////////////////////////////////////////////////////////
int main(void) {
cv::VideoCapture capVideo;
std::ofstream myfile; // log file
cv::Mat imgFrame1;
cv::Mat imgFrame2;
cv::Mat imgFrame1L;
cv::Mat imgFrame2L;
std::vector<Blob> blobs;
cv::Point crossingLine[2];
int ShopCountL = 0;
int ShopCountR = 0;
capVideo.open("input1_2.MOV");
//capVideo.open("rtsp://192.168.1.254/sjcam.mov");
//capVideo.open(1);
// log file
myfile.open("/tmp/OpenCV-" + std::string() + "-" + std::to_string(time(0)) + ".txt");
std::cout << "Logging to: \"/tmp/OpenCV-" << "-" << std::to_string(time(0)) << ".txt\"" << std::endl;
myfile << "\"Timestamp\",\"Left\",\"Right\"" << std::endl;
if (!capVideo.isOpened()) { // if unable to open video file
std::cout << "error reading video file" << std::endl << std::endl; // show error message
_getch(); // it may be necessary to change or remove this line if not using Windows
return(0); // and exit program
}
if (capVideo.get(CV_CAP_PROP_FRAME_COUNT) < 2) {
std::cout << "error: video file must have at least two frames";
_getch(); // it may be necessary to change or remove this line if not using Windows
return(0);
}
capVideo.read(imgFrame1L);
capVideo.read(imgFrame2L);
resize(imgFrame1L, imgFrame1, cv::Size(imgFrame1L.size().width / FRAME_SCALE, imgFrame1L.size().height / FRAME_SCALE));
resize(imgFrame2L, imgFrame2, cv::Size(imgFrame2L.size().width / FRAME_SCALE, imgFrame2L.size().height / FRAME_SCALE));
//int intHorizontalLinePosition = (int)std::round((double)imgFrame1.rows * 0.35);
int intVerticalLinePosition = (int)std::round((double)imgFrame1.cols * 0.50);
crossingLine[0].y = 0;
crossingLine[0].x = intVerticalLinePosition;
crossingLine[1].y = imgFrame1.rows - 1;
crossingLine[1].x = intVerticalLinePosition;
char chCheckForEscKey = 0;
bool blnFirstFrame = true;
int frameCount = 2;
while (capVideo.isOpened() && chCheckForEscKey != 27) {
std::vector<Blob> currentFrameBlobs;
cv::Mat imgFrame1Copy = imgFrame1.clone();
cv::Mat imgFrame2Copy = imgFrame2.clone();
cv::Mat imgDifference;
cv::Mat imgThresh;
cv::cvtColor(imgFrame1Copy, imgFrame1Copy, CV_BGR2GRAY);
cv::cvtColor(imgFrame2Copy, imgFrame2Copy, CV_BGR2GRAY);
cv::GaussianBlur(imgFrame1Copy, imgFrame1Copy, cv::Size(5, 5), 0);
cv::GaussianBlur(imgFrame2Copy, imgFrame2Copy, cv::Size(5, 5), 0);
cv::absdiff(imgFrame1Copy, imgFrame2Copy, imgDifference);
cv::threshold(imgDifference, imgThresh, 30, 255.0, CV_THRESH_BINARY);
//cv::imshow("imgThresh", imgThresh);
cv::Mat structuringElement3x3 = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
cv::Mat structuringElement5x5 = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
cv::Mat structuringElement7x7 = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(7, 7));
cv::Mat structuringElement15x15 = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(15, 15));
for (unsigned int i = 0; i < 2; i++) {
cv::dilate(imgThresh, imgThresh, structuringElement5x5);
cv::dilate(imgThresh, imgThresh, structuringElement5x5);
cv::erode(imgThresh, imgThresh, structuringElement5x5);
}
cv::Mat imgThreshCopy = imgThresh.clone();
std::vector<std::vector<cv::Point> > contours;
cv::findContours(imgThreshCopy, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
drawAndShowContours(imgThresh.size(), contours, "imgContours");
std::vector<std::vector<cv::Point> > convexHulls(contours.size());
for (unsigned int i = 0; i < contours.size(); i++) {
cv::convexHull(contours[i], convexHulls[i]);
}
drawAndShowContours(imgThresh.size(), convexHulls, "imgConvexHulls");
for (auto &convexHull : convexHulls) {
Blob possibleBlob(convexHull);
if (possibleBlob.currentBoundingRect.area() > 800 &&
possibleBlob.dblCurrentAspectRatio > 0.2 &&
possibleBlob.dblCurrentAspectRatio < 4.0 &&
possibleBlob.currentBoundingRect.width > 40 &&
possibleBlob.currentBoundingRect.height > 40 &&
possibleBlob.dblCurrentDiagonalSize > 90.0 &&
(cv::contourArea(possibleBlob.currentContour) / (double)possibleBlob.currentBoundingRect.area()) > 0.50) {
currentFrameBlobs.push_back(possibleBlob);
}
}
drawAndShowContours(imgThresh.size(), currentFrameBlobs, "imgCurrentFrameBlobs");
if (blnFirstFrame == true) {
for (auto &currentFrameBlob : currentFrameBlobs) {
blobs.push_back(currentFrameBlob);
}
}
else {
matchCurrentFrameBlobsToExistingBlobs(blobs, currentFrameBlobs);
}
drawAndShowContours(imgThresh.size(), blobs, "imgBlobs");
imgFrame2Copy = imgFrame2.clone(); // get another copy of frame 2 since we changed the previous frame 2 copy in the processing above
drawBlobInfoOnImage(blobs, imgFrame2Copy);
int blnAtLeastOneBlobCrossedTheLine = checkIfBlobsCrossedTheLine(blobs, intVerticalLinePosition, ShopCountL, ShopCountR, myfile);
if (blnAtLeastOneBlobCrossedTheLine == 1) {
cv::line(imgFrame2Copy, crossingLine[0], crossingLine[1], SCALAR_GREEN, 2);
}
else if (blnAtLeastOneBlobCrossedTheLine == 2) {
cv::line(imgFrame2Copy, crossingLine[0], crossingLine[1], SCALAR_YELLOW, 2);
}
else {
cv::line(imgFrame2Copy, crossingLine[0], crossingLine[1], SCALAR_BLUE, 2);
}
drawShopCountOnImage(ShopCountL, ShopCountR, imgFrame2Copy);
cv::imshow("People_Counting_Cross_Line", imgFrame2Copy);
//cv::waitKey(0); // uncomment this line to go frame by frame for debugging
// now we prepare for the next iteration
currentFrameBlobs.clear();
imgFrame1 = imgFrame2.clone(); // move frame 1 up to where frame 2 is
capVideo.read(imgFrame2);
if ((capVideo.get(CV_CAP_PROP_POS_FRAMES) + 1) < capVideo.get(CV_CAP_PROP_FRAME_COUNT)) {
capVideo.read(imgFrame2L);
resize(imgFrame2L, imgFrame2, cv::Size(imgFrame2L.size().width / FRAME_SCALE, imgFrame2L.size().height / FRAME_SCALE));
}
else {
time_t now = time(0);
char* dt = strtok(ctime(&now), "\n");
std::cout << dt << ",EOF" << std::endl;
return(0); // end?
}
blnFirstFrame = false;
frameCount++;
chCheckForEscKey = cv::waitKey(1);
}
if (chCheckForEscKey != 27) { // if the user did not press esc (i.e. we reached the end of the video)
cv::waitKey(0); // hold the windows open to allow the "end of video" message to show
}
// note that if the user did press esc, we don't need to hold the windows open, we can simply let the program end which will close the windows
return(0);
}
///////////////////////////////////////////////////////////////////////////////////////////////////
void matchCurrentFrameBlobsToExistingBlobs(std::vector<Blob> &existingBlobs, std::vector<Blob> &currentFrameBlobs) {
for (auto &existingBlob : existingBlobs) {
existingBlob.blnCurrentMatchFoundOrNewBlob = false;
existingBlob.predictNextPosition();
}
for (auto &currentFrameBlob : currentFrameBlobs) {
int intIndexOfLeastDistance = 0;
double dblLeastDistance = 100000.;
for (unsigned int i = 0; i < existingBlobs.size(); i++) {
if (existingBlobs[i].blnStillBeingTracked == true) {
double dblDistance = distanceBetweenPoints(currentFrameBlob.centerPositions.back(), existingBlobs[i].predictedNextPosition);
if (dblDistance < dblLeastDistance) {
dblLeastDistance = dblDistance;
intIndexOfLeastDistance = i;
}
}
}
if (dblLeastDistance < currentFrameBlob.dblCurrentDiagonalSize * 0.5) {
addBlobToExistingBlobs(currentFrameBlob, existingBlobs, intIndexOfLeastDistance);
}
else {
addNewBlob(currentFrameBlob, existingBlobs);
}
}
for (auto &existingBlob : existingBlobs) {
if (existingBlob.blnCurrentMatchFoundOrNewBlob == false) {
existingBlob.intNumOfConsecutiveFramesWithoutAMatch++;
}
if (existingBlob.intNumOfConsecutiveFramesWithoutAMatch >= 5) {
existingBlob.blnStillBeingTracked = false;
}
}
}
///////////////////////////////////////////////////////////////////////////////////////////////////
void addBlobToExistingBlobs(Blob &currentFrameBlob, std::vector<Blob> &existingBlobs, int &intIndex) {
existingBlobs[intIndex].currentContour = currentFrameBlob.currentContour;
existingBlobs[intIndex].currentBoundingRect = currentFrameBlob.currentBoundingRect;
existingBlobs[intIndex].centerPositions.push_back(currentFrameBlob.centerPositions.back());
existingBlobs[intIndex].dblCurrentDiagonalSize = currentFrameBlob.dblCurrentDiagonalSize;
existingBlobs[intIndex].dblCurrentAspectRatio = currentFrameBlob.dblCurrentAspectRatio;
existingBlobs[intIndex].blnStillBeingTracked = true;
existingBlobs[intIndex].blnCurrentMatchFoundOrNewBlob = true;
}
///////////////////////////////////////////////////////////////////////////////////////////////////
void addNewBlob(Blob &currentFrameBlob, std::vector<Blob> &existingBlobs) {
currentFrameBlob.blnCurrentMatchFoundOrNewBlob = true;
existingBlobs.push_back(currentFrameBlob);
}
///////////////////////////////////////////////////////////////////////////////////////////////////
double distanceBetweenPoints(cv::Point point1, cv::Point point2) {
int intX = abs(point1.x - point2.x);
int intY = abs(point1.y - point2.y);
return(sqrt(pow(intX, 2) + pow(intY, 2)));
}
///////////////////////////////////////////////////////////////////////////////////////////////////
void drawAndShowContours(cv::Size imageSize, std::vector<std::vector<cv::Point> > contours, std::string strImageName) {
cv::Mat image(imageSize, CV_8UC3, SCALAR_BLACK);
cv::drawContours(image, contours, -1, SCALAR_WHITE, -1);
//cv::imshow(strImageName, image);
}
///////////////////////////////////////////////////////////////////////////////////////////////////
void drawAndShowContours(cv::Size imageSize, std::vector<Blob> blobs, std::string strImageName) {
cv::Mat image(imageSize, CV_8UC3, SCALAR_BLACK);
std::vector<std::vector<cv::Point> > contours;
for (auto &blob : blobs) {
if (blob.blnStillBeingTracked == true) {
contours.push_back(blob.currentContour);
}
}
cv::drawContours(image, contours, -1, SCALAR_WHITE, -1);
//cv::imshow(strImageName, image);
}
///////////////////////////////////////////////////////////////////////////////////////////////////
bool checkIfBlobsCrossedTheLine(std::vector<Blob> &blobs, int &intVerticalLinePosition, int &ShopCountL, int &ShopCountR, std::ofstream &myfile) {
bool blnAtLeastOneBlobCrossedTheLine = 0;
for (auto blob : blobs) {
if (blob.blnStillBeingTracked == true && blob.centerPositions.size() >= 2) {
int prevFrameIndex = (int)blob.centerPositions.size() - 2;
int currFrameIndex = (int)blob.centerPositions.size() - 1;
//going left
if (blob.centerPositions[prevFrameIndex].x > intVerticalLinePosition && blob.centerPositions[currFrameIndex].x <= intVerticalLinePosition) {
ShopCountL++;
time_t now = time(0);
char* dt = strtok(ctime(&now), "\n");
std::cout << dt << ",1,0 (Left)" << std::endl;
myfile << dt << ",1,0" << std::endl;
blnAtLeastOneBlobCrossedTheLine = 1;
}
// going right
if (blob.centerPositions[prevFrameIndex].x < intVerticalLinePosition && blob.centerPositions[currFrameIndex].x >= intVerticalLinePosition) {
ShopCountR++;
time_t now = time(0);
char* dt = strtok(ctime(&now), "\n");
std::cout << dt << ",0,1 (Right)" << std::endl;
myfile << dt << ",0,1" << std::endl;
blnAtLeastOneBlobCrossedTheLine = 2;
}
}
}
return blnAtLeastOneBlobCrossedTheLine;
}
///////////////////////////////////////////////////////////////////////////////////////////////////
void drawBlobInfoOnImage(std::vector<Blob> &blobs, cv::Mat &imgFrame2Copy) {
for (unsigned int i = 0; i < blobs.size(); i++) {
if (blobs[i].blnStillBeingTracked == true) {
cv::rectangle(imgFrame2Copy, blobs[i].currentBoundingRect, SCALAR_RED, 2);
int intFontFace = CV_FONT_HERSHEY_SIMPLEX;
double dblFontScale = blobs[i].dblCurrentDiagonalSize / 60.0;
int intFontThickness = (int)std::round(dblFontScale * 1.0);
cv::putText(imgFrame2Copy, std::to_string(i), blobs[i].centerPositions.back(), intFontFace, dblFontScale, SCALAR_GREEN, intFontThickness);
}
}
}
///////////////////////////////////////////////////////////////////////////////////////////////////
void drawShopCountOnImage(int &ShopCountL, int &ShopCountR, cv::Mat &imgFrame2Copy) {
int intFontFace = CV_FONT_HERSHEY_SIMPLEX;
double dblFontScale = (imgFrame2Copy.rows * imgFrame2Copy.cols) / 200000.0;
int intFontThickness = (int)std::round(dblFontScale * 1.5);
cv::Size textSizeL = cv::getTextSize("Inside: " + std::to_string(ShopCountL), intFontFace, dblFontScale, intFontThickness, 0);
cv::Size textSizeR = cv::getTextSize("Outside: " + std::to_string(ShopCountR), intFontFace, dblFontScale, intFontThickness, 0);
cv::Point ptTextBottomLeftPositionL, ptTextBottomLeftPositionR;
ptTextBottomLeftPositionL.x = imgFrame2Copy.cols - 1 - (int)((double)textSizeL.width * 1.25);
ptTextBottomLeftPositionL.y = (int)((double)textSizeL.height * 1.25);
ptTextBottomLeftPositionR.x = ptTextBottomLeftPositionL.x - 350;
ptTextBottomLeftPositionR.y = ptTextBottomLeftPositionL.y + (textSizeL.height) * 1.25;
cv::putText(imgFrame2Copy, "Inside: " + std::to_string(ShopCountL), ptTextBottomLeftPositionL, intFontFace, dblFontScale, SCALAR_GREEN, intFontThickness);
cv::putText(imgFrame2Copy, "Outside: " + std::to_string(ShopCountR), ptTextBottomLeftPositionR, intFontFace, dblFontScale, SCALAR_YELLOW, intFontThickness);
}
↧
OpenCV Perspective corrention and cropping
I'm currently trying to correct the perspective of a randomly taken image showing a rectangle.
The perspective correction is working fine, but I want to crop the image to the target, too. So I've tried to transform the given contour of my target by the perspective matrix (`cv::Mat`) and crop it using the results.
My method currently crashes at the marked line with the following error:
OpenCV Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0)) in create, file /Volumes/build-storage/build/master_iOS-mac/opencv/modules/core/src/matrix.cpp, line 2430
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Volumes/build-storage/build/master_iOS-mac/opencv/modules/core/src/matrix.cpp:2430: error: (-215) mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0) in function create
Code:
cv::Mat correctMat(cv::Mat mat, std::vector<cv::Point> contour) {
double minObjectSize = 100.0;
if (contour.size() == 4) {
cv::Rect rect = cv::boundingRect(contour);
if (rect.height < minObjectSize || rect.width < minObjectSize) {
NSLog(@"Objects size was too small: %d * %d", rect.width, rect.height);
}
else {
std::vector<cv::Point2f> quad_pts;
std::vector<cv::Point2f> squre_pts;
quad_pts.push_back(Point2f(contour[0].x, contour[0].y));
quad_pts.push_back(Point2f(contour[1].x, contour[1].y));
quad_pts.push_back(Point2f(contour[3].x, contour[3].y));
quad_pts.push_back(Point2f(contour[2].x, contour[2].y));
squre_pts.push_back(Point2f(rect.x, rect.y));
squre_pts.push_back(Point2f(rect.x, rect.y + rect.height));
squre_pts.push_back(Point2f(rect.x + rect.width, rect.y));
squre_pts.push_back(Point2f(rect.x + rect.width, rect.y + rect.height));
Mat transmtx = getPerspectiveTransform(quad_pts, squre_pts);
Mat transformed = Mat::zeros(mat.rows, mat.cols, CV_8UC3);
cv::line(mat, quad_pts[0], quad_pts[1], Scalar(0,0,255), 5, CV_AA, 0);
cv::line(mat, quad_pts[1], quad_pts[2], Scalar(0,0,255), 5, CV_AA, 0);
cv::line(mat, quad_pts[2], quad_pts[3], Scalar(0,0,255), 5, CV_AA, 0);
cv::line(mat, quad_pts[3], quad_pts[0], Scalar(0,0,255), 5, CV_AA, 0);
warpPerspective(mat, transformed, transmtx, mat.size());
std::vector<cv::Point2f> transformedPoints;
// Crash
cv::transform(quad_pts, transformedPoints, transmtx);
cv::Mat cropped = transformed(cv::boundingRect(transformedPoints));
fixColorOfMat(cropped);
return cropped;
}
}
return mat;
}
I do not really know what the error message is telling me, so I hope somebody here can help me solve this crash.
↧
How to write a sequence as images as a avi video ?
Hi,
I want to create a video from a sequence of images. I have grayscale images with resolution 640x480. I tried to make this simple application following these steps:
1) I read each image with imread
2) I used VideoWriter for writing the video, but the saved video is 0 seconds long :(
This is my code:
#include "opencv2/opencv.hpp"
#include
using namespace std;
using namespace cv;
int main(int argc, char* argv[])
{
for(int i = 0; i < 30; i++)
{
char left[1024] = "";
sprintf(left, "left640x480_%d.bmp", i);
Mat image(640, 480, CV_8UC1);
image = imread(left, 0);
VideoWriter video("out.avi", CV_FOURCC('M','J','P','G'), 30, Size(640, 480), false);
Mat frame;
image.copyTo(frame);
video.write(frame);
imshow( "Frame", frame );
char c = (char)waitKey(33);
if( c == 27 )
break;
}
return 0;
}
I use OpenCV 3.1 in C++.
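One likely culprit in the code above: the `VideoWriter` is constructed inside the loop, so `out.avi` is re-created (and truncated) on every iteration and ends up holding only the last frame. A sketch of the usual structure, opening the writer once before the loop (filenames as in the question):

```cpp
#include "opencv2/opencv.hpp"
#include <cstdio>
using namespace cv;

int main()
{
    // Open the writer ONCE, before the loop; the frame size must match the images.
    VideoWriter video("out.avi", CV_FOURCC('M', 'J', 'P', 'G'), 30,
                      Size(640, 480), false); // false = grayscale
    if (!video.isOpened())
        return -1;
    for (int i = 0; i < 30; i++)
    {
        char left[1024];
        std::sprintf(left, "left640x480_%d.bmp", i);
        Mat image = imread(left, 0); // load as grayscale, matching isColor=false
        if (image.empty())
            break;
        video.write(image);
    }
    return 0; // the VideoWriter destructor finalizes the file
}
```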
Thank you for your help!
↧
How can OpenMP multi-threading be utilized to parallelize an OpenCV-based sequential program? I could not find any example.
Suppose I have to compute the negative of a large image, say 1000x1000. If I simply subtract each pixel value from 255, it processes 1,000,000 pixels one by one, sequentially. I would like to use OpenMP multi-threading to perform the whole operation in several threads running on different cores simultaneously. This concept has to be implemented in OpenCV using OpenMP. I need a starter program structure for parallel image processing. Your advice will be valuable to me. Thanks.
↧
↧
capture and save with 2 webcams C++
What do I need to add to my code in order to take a picture with the second webcam as well?
Right now it saves only from one webcam, but I can see both on my screen.
Thanks !
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
//initialize and allocate memory to load the video stream from camera
cv::VideoCapture camera0(2);
cv::VideoCapture camera1(1);
if( !camera0.isOpened() ) return 1;
if( !camera1.isOpened() ) return 1;
while(true) {
//grab and retrieve each frames of the video sequentially
cv::Mat3b frame0;
camera0 >> frame0;
cv::Mat3b frame1;
camera1 >> frame1;
cv::imshow("Video0", frame0);
cv::imshow("Video1", frame1);
//wait for 40 milliseconds
int c = cvWaitKey(40);
//exit the loop if user press "Esc" key (ASCII value of "Esc" is 27)
if(27 == char(c)) break;
}
// Get the frame
Mat save_img; camera0 >> save_img;
if(save_img.empty())
{
std::cerr << "Something is wrong with the webcam, could not get frame." << std::endl;
}
// Save the frame into a file
imwrite("test1.jpg", save_img); // A JPG FILE IS BEING SAVED
return 0;
}
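A minimal change would be to grab and save one frame from each camera, mirroring the existing `test1.jpg` code; a sketch (the second filename is my own choice):

```cpp
#include <opencv2/opencv.hpp>

// Grab one frame from each already-opened camera and save both to disk.
// "test1.jpg" matches the question's code; "test2.jpg" is a hypothetical name.
bool saveBothFrames(cv::VideoCapture &camera0, cv::VideoCapture &camera1)
{
    cv::Mat frame0, frame1;
    camera0 >> frame0;
    camera1 >> frame1;
    if (frame0.empty() || frame1.empty())
        return false; // one of the webcams did not deliver a frame
    return cv::imwrite("test1.jpg", frame0) && cv::imwrite("test2.jpg", frame1);
}
```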
↧
Is there something missing in the chain code?
I am not able to read the image, and how can I retrieve the XML file and save it? Can someone help?
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc/imgproc_c.h"
#include <iostream>
using namespace std;
using namespace cv;
int main() {
Mat img = imread("Outline.jpg"); //outline of foot
imshow("Test", img);
vector<vector<Point> > contours;
findContours(img, contours, RETR_EXTERNAL, CV_CHAIN_CODE);
cout << Mat(contours[0]) << endl;
findContours(img, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
cout << "CHAIN_APPROX_SIMPLE" << endl;
cout << Mat(contours[0]) << endl;
CvChain* chain = 0;
CvMemStorage* storage = 0;
storage = cvCreateMemStorage(0);
cvFindContours(&IplImage(img), storage, (CvSeq**)(&chain), sizeof(*chain), CV_RETR_TREE,
CV_CHAIN_CODE);
for (; chain != NULL; chain = (CvChain*)chain->h_next)
{
CvSeqReader reader;
int i, total = chain->total;
cvStartReadSeq((CvSeq*)chain, &reader, 0);
cout<<"--------------------chain\n";
for (i = 0; i
↧
how to use opencv in android studio in native c++ code ??
How can I use OpenCV in an Android Studio project, but in native C++ files rather than in Java?
All the guides and tutorials I found explain how to use OpenCV in Java files (loadLibrary...).
Eventually I have all the .so files in the jniLibs folder, or OpenCV added as a module with a dependency. I am using Android Studio 1.4.1 and OpenCV 2.4.11.
↧
Unable to use edgePreservingFilter and stylization
Hello, I am trying to reuse a [C++ program](http://stackoverflow.com/questions/35479344/how-to-get-color-palette-from-image-using-opencv) and am facing the compilation errors below. Please help me figure out the issue.
Compile errors:
identifier "stylization" is undefined.
identifier "edgePreservingFilter" is undefined.
Working in Visual Studio; installed OpenCV 2.4.11 from NuGet and
included opencv2\opencv.hpp and opencv2\photo\photo.hpp.
↧
↧
OpenCV2.2 with Visual Studio 2008
I am very new to OpenCV. I downloaded and installed OpenCV according to the instructions in the link https://rangadabarera.wordpress.com/opencv-with-visual-studio/ . My requirement is to create a video library that can replace the existing video library that uses Video for Windows (VFW).
I am trying to write a sample program that opens and reads an AVI file. I added the following libraries: "opencv_core220.lib opencv_highgui220.lib opencv_video220.lib opencv_ffmpeg220.lib". The code I have written looks as follows:
#include
#include "opencv2/opencv.hpp"
#include "opencv2/highgui/highgui_c.h"
#include "opencv2/highgui/highgui.hpp"
using namespace std;
using namespace cv;
int main()
{
string filename ("C:\\ADAS_AHBC_201046_002.avi") ;
VideoCapture input;
bool status = input.open( filename );
if (!input.isOpened()) return 0;
int frameCount = input.get(CV_CAP_PROP_FRAME_COUNT);
}
But while running the code, I am getting an exception with the following text:
Unhandled exception at 0x00905a4d in aviread.exe: 0xC0000005: Access violation.
The values of the obj and reference count fields within the VideoCapture object seem to be 0.
I tried another version, as follows, and the output is the same:
int main()
{
string filename ("C:\\ADAS_AHBC_201046_002.avi") ;
VideoCapture input = VideoCapture( filename );
//bool status = input.open( filename );
if (!input.isOpened()) return 0;
int frameCount = input.get(CV_CAP_PROP_FRAME_COUNT);
}
But when I tried to read from a camera, it did not seem to throw any error. The code is as follows:
int main()
{
string filename ("C:\\ADAS_AHBC_201046_002.avi") ;
VideoCapture input = VideoCapture( 0 );
//bool status = input.open( filename );
if (!input.isOpened()) return 0;
int frameCount = input.get(CV_CAP_PROP_FRAME_COUNT);
}
While debugging this code, my laptop's camera gets invoked. Can you please help me understand what is wrong with what I am doing?
Thanks
AJAI
↧
std::vector corrupted when calling HOGDescriptor::compute
The lengths of both std::vectors passed to HOGDescriptor::compute change to a random value when passed into OpenCV. Of course the amount of memory actually allocated doesn't change, causing memory access violations.
I've worked around this by adding a wrapper function around HOGDescriptor::compute that copies the descriptors std::vector to a cv::Mat and removes the optional locations parameter.
Questions:
1) Any ideas why this is happening?
2) A quick Google search shows others have had this issue; should exposing std containers in an API be avoided? (I don't know too much about this but have heard it is bad.)
Running: Windows 10, opencv_world310.dll built by me with Visual Studio 2015. Both the DLL and my application are built using the Visual Studio 2015 toolchain on the same machine.
↧
translate example into Java
I'm trying to translate this demo into Java http://docs.opencv.org/3.2.0/d5/dc4/tutorial_video_input_psnr_ssim.html
but I'm stuck on the lines using overloading of `-=` on `Mat`. My understanding of C++ is very limited, but I expected to see an `operator-=()` here http://docs.opencv.org/3.2.0/d3/d63/classcv_1_1Mat.html but there isn't one. What is the Java equivalent of `-=` on a `Mat`?
Note that the Java docs for 3.2.0 are not available, so consult 3.1.0 instead http://docs.opencv.org/java/3.1.0/
↧
Problem compiling OpenCV 2.4.13 with Cmake and MinGW
I got this problem when trying to compile OpenCV using CMake and MinGW.
for Cmake:
the source code is located in "C:\CPP Libraries\OpenCV-2.4.13\opencv\sources"
the binaries are going to be built in "C:/CPP Libraries/OpenCV-2.4.13/opencv/build/x64/mingw"
I've already used CMake to generate the makefile.
When I run the makefile:
C:\CPP Libraries\OpenCV-2.4.13\opencv\build\x64\mingw>mingw32-make
this is the output that I get after 31%:
[ 31%] Building CXX object modules/highgui/CMakeFiles/opencv_highgui.dir/src/window_w32.cpp.obj
C:\CPP Libraries\OpenCV-2.4.13\opencv\sources\modules\highgui\src\window_w32.cpp: In function 'int icvCreateTrackbar(const char*, const char*, int*, int, CvTrackbarCallback, CvTrackbarCallback2, void*)':
C:\CPP Libraries\OpenCV-2.4.13\opencv\sources\modules\highgui\src\window_w32.cpp:1853:81: error: 'BTNS_AUTOSIZE' was not declared in this scope
WS_CHILD | CCS_TOP | TBSTYLE_WRAPABLE | BTNS_AUTOSIZE | BTNS_BUTTON,
^
C:\CPP Libraries\OpenCV-2.4.13\opencv\sources\modules\highgui\src\window_w32.cpp:1853:97: error: 'BTNS_BUTTON' was not declared in this scope
WS_CHILD | CCS_TOP | TBSTYLE_WRAPABLE | BTNS_AUTOSIZE | BTNS_BUTTON,
^
modules\highgui\CMakeFiles\opencv_highgui.dir\build.make:187: recipe for target 'modules/highgui/CMakeFiles/opencv_highgui.dir/src/window_w32.cpp.obj' failed
mingw32-make[2]: *** [modules/highgui/CMakeFiles/opencv_highgui.dir/src/window_w32.cpp.obj] Error 1
CMakeFiles\Makefile2:2203: recipe for target 'modules/highgui/CMakeFiles/opencv_highgui.dir/all' failed
mingw32-make[1]: *** [modules/highgui/CMakeFiles/opencv_highgui.dir/all] Error 2
Makefile:159: recipe for target 'all' failed
mingw32-make: *** [all] Error 2
Does anyone know a solution? I have no clue about this one.
↧
↧
Mat.get equivalent for C++?
I'm trying to duplicate an example for line detection from Java in native code.
My Java code is as follows
int[] linesArray = new int[lines.cols() * lines.rows() * lines.channels()];
lines.get(0,0,linesArray); //the function in question. Reads Mat data into the lines array
for (int i=0; i < linesArray.length; i = i + 4)
{
//loop through the data and do stuff
}
but in C++ native code, there is no `Mat::get()` function. How can I read the data from my `lines` Mat into my `linesArray`?
So far I have this for C++, mostly the same:
int linesArray[lines.cols * lines.rows * lines.channels()];
lines.get(0,0,linesArray); //get is not a member of cv::Mat
↧
Can I use OpenCV 3.2 installed from prebuilt binaries on Windows with Python 3.5 and C++?
I would like to program with OpenCV 3.2 with both C++ and Python 3.5, under Windows 10. Can I do it by just installing OpenCV 3.2 from the pre-built binaries, or must I build OpenCV from source?
Thanks!
↧
Chain code and xml
Hello, I am having a problem when saving the XML file. Only a single row shows up in the XML, but I want several rows with valid values, one for each image found in the folder. Can someone please help me with this?
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc/imgproc_c.h"
#include
#include
#include
#include "opencv2/imgcodecs.hpp"
#include
#include
#include
#include
using namespace std;
using namespace cv;
vector<String> files;
int main() {
Mat image;
double totalCount = 0;
//Mat image = imread("C:/Users/Desktop/Outline.jpg");
cv::glob("C:/Users/Desktop/outline/*.jpg", files);
for (size_t i = 0; i < files.size(); i++) {
image = imread(files[i]);
Canny(image, image, 100, 100 * 2, 3, false);
CvChain* chain;
CvMemStorage* storage = 0;
storage = cvCreateMemStorage();
cvFindContours(&IplImage(image), storage, (CvSeq**)(&chain), sizeof(*chain), CV_RETR_EXTERNAL,
CV_CHAIN_CODE);
int total = chain->total;
// 1 row, 8 cols, filled with zeros, (float type, because we want to normalize later):
cv::Mat hist(1, 8, CV_32F, Scalar(0));
for (; chain != NULL; chain = (CvChain*)chain->h_next)
{
CvSeqReader reader;
int i, total = chain->total;
cvStartReadSeq((CvSeq*)chain, &reader, 0);
for (i = 0; i < total; i++)
{
char code;
CV_READ_SEQ_ELEM(code, reader);
int Fchain = (int)code;
// increase the counter for the respective bin:
hist.at<float>(0, Fchain)++;
totalCount++;
}
}
// print the raw histogram:
cout << "Histo: " << hist << endl;
cout << "Total: " << totalCount << endl;
// normalize it:
Mat prob = hist / totalCount;
cout << "Proba: " << prob << endl;
FileStorage fs("freeman.xml", FileStorage::WRITE);
fs << "chain" << prob;
}
waitKey(0);
return 0;
}
This is the XML file that I got: 
This is a similar XML to the one I want as a result:

↧