For educational purposes I'm trying to understand the following relation: applying a 32×32 blur kernel to a 500×667 grayscale image (8 bit, single channel) with cv::filter2D takes approx. 107 ms. Matching a 32×32 template on the same image with matchTemplate (CV_TM_SQDIFF), however, takes just 14 ms.
Why is there such a huge difference in processing time? The documentation states that, starting with a kernel size of about 11×11, filter2D applies the kernel in the frequency domain, which should speed things up. The documentation also states that filter2D computes correlation rather than convolution. So aren't both methods computing essentially the same thing?