Ain’t it easy to change into something suitable whenever convenient? Just like the big Transformers!! I especially liked Optimus Prime since his car mode has always been cool to me. What I found fascinating about these Transformers is that they are handy cars with AI most of the time but transform into giant robots when combat is inevitable.

Similarly, signals can be transformed in such a way that they become useful depending on the situation. This is done using the Fourier transform (FT), where a signal in the spatial domain is broken down into its frequency components. This is useful for manipulating and/or modulating signals. Since this blog will focus on the basic usage of the FFT through SciLab, you can read this article about the FT from Better Explained instead.

In optics, thin lenses are considered to be Fourier transform engines, as shown in the image below.

In this activity, we’ll explore how signals can be manipulated and how the FT can be used in signal processing!

## Part 1: FFT 101

Now that we have a vague understanding of how the Fourier transform works, let’s make some visual representations to get a peek at its greatness.

Open Paint (I prefer Paint.net), create a 128 × 128 pixel canvas, and draw a circle. Save it as a bitmap file (.BMP) so we can use imread() to import it into SciLab and manipulate it. Alternatively, we can create a circle directly in SciLab (I created one in this article).

Using SciLab’s FFT function, perform a fast Fourier transform on the circle. Note, however, that the resulting array will contain complex numbers instead of integers or floats, so to display the Fourier-transformed image, don’t forget to take the absolute value first using the abs() function. It should then show something like figure 1.

Here’s a snippet of the code I used to produce such images.
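If you’d rather follow along in Python, here’s a rough NumPy equivalent of the same circle-and-FFT workflow (the canvas size matches, though the 15-pixel radius is just my arbitrary pick):

```python
import numpy as np

# Draw a filled circle on a 128 x 128 canvas (same size as the Paint bitmap).
N = 128
y, x = np.mgrid[0:N, 0:N]
circle = ((x - N // 2) ** 2 + (y - N // 2) ** 2 <= 15 ** 2).astype(float)

# 2-D FFT: the result is complex, so take abs() before displaying.
# fftshift() moves the zero-frequency term to the center of the image.
ft = np.fft.fft2(circle)
spectrum = np.abs(np.fft.fftshift(ft))
```

Displaying `spectrum` should give the concentric rings of figure 1; the bright center is the DC term, equal to the sum of all pixel values of the circle.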

Following the same procedure as with the circle, we can come up with similar figures such as the letter A (fig 3), a sine wave function (fig 4), a double slit (fig 5), a square function (fig 6), and finally a 2D Gaussian curve (fig 7). Each original figure can be created within SciLab. Actually, I already made almost all of the original images in this blog too.

## Part 2: FFT and Imaging Devices

Now that we have an idea of how to implement the FFT in SciLab, we can move on to convolution. Going back to the optics interpretation of the FT: if the FT of a mask is the same as the image it forms under a thin lens, then the convolution of an image and a mask is the same as the image observed when viewing that image through the mask.

But how to convolve?

Convolution between two functions is given by

$h(x,y) = \int \int f(x',y')g(x-x',y-y') dx' dy'$

in short-hand notation,

$h = f \star g$

That’s if we’re not working in Fourier space. In Fourier space, however, convolution is as easy as

$H = FG$

where H, F, and G are the Fourier transforms of the corresponding functions h, f, and g. The end product of convolution carries a little bit of each original function f and g.
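Here’s a tiny NumPy sanity check of that claim (the sizes and pixel positions are arbitrary): multiplying the FFTs and inverse-transforming performs a circular convolution, and convolving with a shifted delta simply shifts the image.

```python
import numpy as np

# f: a single bright pixel; g: a delta shifted one row down.
f = np.zeros((8, 8)); f[2, 3] = 1.0
g = np.zeros((8, 8)); g[1, 0] = 1.0

# H = FG in Fourier space, then back to real space.
H = np.fft.fft2(f) * np.fft.fft2(g)
h = np.real(np.fft.ifft2(H))
# The bright pixel moves from (2, 3) to (3, 3): convolution with a
# shifted delta shifts the image (circularly).
```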

### END OF MATH

Take this image of “VIP” and imagine looking at it through a lens. Can you picture an image?

Now convolve the “VIP” image with a circle similar to the one in the first part, by multiplying their FFTs.

See, it looks like the word is being viewed through a lens, right? Of course, the larger the lens, the better the resolution of the seen image, since more light is able to focus onto our eyes. The aperture size, or the diameter of the circle used for the convolution, represents the size of the lens, and the effect of increasing the aperture size is shown in figure 10.
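In NumPy terms, the “lens” is just a circular low-pass filter in the Fourier plane. A minimal sketch of this (the test pattern and radii below are stand-ins, not the actual VIP bitmap):

```python
import numpy as np

N = 128
y, x = np.mgrid[0:N, 0:N]
r = np.hypot(x - N // 2, y - N // 2)

# Stand-in test image; any bitmap (like "VIP") works here.
img = np.zeros((N, N))
img[40:88, 40:88] = 1.0

def view_through_aperture(img, radius):
    """Keep only the frequencies that fit through a circular aperture."""
    aperture = (r <= radius).astype(float)
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * aperture)))

small = view_through_aperture(img, 5)    # small lens: blurry
large = view_through_aperture(img, 60)   # large lens: close to the original
```

The larger the aperture, the closer the filtered image gets to the original, which is exactly the resolution effect shown in figure 10.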

## Part 3: Matchmaking with FFT

Another use of FTs is in template matching via correlation. Correlation basically measures how related the 2nd image is to the whole of, or parts of, the 1st image, hence its use in template matching. Correlation works much like convolution. Only much like, though, since we get the correlation by

$P = FG^*$

where P, F, and G are the FTs of the functions p, f, and g. Note that this is not a plain multiplication of FTs; rather, F is multiplied by the conjugate of G. This gives us the correlation of the function g with f.
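A small NumPy sketch of template matching this way (the scene and template here are toy 3 × 3 blobs, not the actual text images):

```python
import numpy as np

N = 64
scene = np.zeros((N, N))
scene[10:13, 20:23] = 1.0        # the pattern we're looking for, placed at (10, 20)
template = np.zeros((N, N))
template[0:3, 0:3] = 1.0         # the same pattern, at the origin

# Correlation in Fourier space: multiply F by the conjugate of G.
P = np.fft.fft2(scene) * np.conj(np.fft.fft2(template))
corr = np.real(np.fft.ifft2(P))

# The correlation peaks exactly where the template sits in the scene.
peak = np.unravel_index(np.argmax(corr), corr.shape)
```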

To demonstrate this, the text “THE RAIN IN SPAIN STAYS MAINLY IN THE PLAIN.” was typed in a 128 × 128 pixel image on a white background. A letter “A” was also typed, but this time on a black background.

Using this code, the correlation of the phrase and the letter “A” was taken. Observe the result as we run this code with increasing font sizes of “A”.

As we can see, using correlation, the pattern of the letter “A” finds itself within the image, much like the cat in the bowl on the right.

Notice the third image formed. It was formed by correlating the phrase with an “A” of the same font size. The positions of the “A”s in the phrase have a visually clearer “A” pattern over them compared to their surroundings. That’s because an “A” actually sits there. The template simply matches in that position, that’s all.

Imagine if you can correlate yourself with someone and it just fits. How lovely to be with someone who has high correlation to you. Much wow. Very love. ❤

## Part 4: Living on the edge with FFT

I hope you’re seeing how the FT can be useful in image processing at this point. So for the last exercise, we’re gonna live on the edge – or we can just detect them. HEHEHE.

Let’s revisit the VIP image from part 2. How many kinds of edges can you see there? Can you see a vertical edge? Horizontal? How about a slanted one? Let’s see if we can use FFT to better see these edges.

To detect the edge, one must be one with everything. Or at least, we should know what an edge looks like.

So let’s make a 3×3 interpretation of what an edge looks like. It’s easy enough to make in SciLab. Just follow these three easy steps and let’s get it on.

1. Make a 3 x 3 matrix.
2. Using integers, make a pattern of numbers that looks like an edge. Like so.
3. Make sure that the sum of all the elements is zero (0).
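The steps above, sketched in NumPy (I’m using a vertical-edge pattern here, but any zero-sum pattern works):

```python
import numpy as np

# Steps 1 & 2: a 3 x 3 matrix whose numbers look like a (vertical) edge.
edge = np.array([[-1, 0, 1],
                 [-1, 0, 1],
                 [-1, 0, 1]], dtype=float)

# Step 3: the elements sum to zero, so flat (edge-free) regions
# convolve to zero and only intensity changes survive.
assert edge.sum() == 0
```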

To check whether we made a good edge interpretation, we can view it using imshow().

Looks good.

Right! Now all we have to do is to convolve the VIP image with the edge image and…

Voila! Here are the edges we want! With this convolution, we extract only the edges similar to the edge pattern, or at least those with a component similar to it.
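Here’s roughly what that convolution does, in NumPy (a single synthetic vertical edge stands in for the VIP image):

```python
import numpy as np

N = 64
img = np.zeros((N, N))
img[:, 32:] = 1.0                 # one vertical edge, at column 32

# Pad the 3 x 3 zero-sum pattern to the image size so the FFT shapes match.
kernel = np.zeros((N, N))
kernel[:3, :3] = [[-1, 0, 1],
                  [-1, 0, 1],
                  [-1, 0, 1]]

# Convolution via the FFT, exactly as in Part 2: H = FG.
edges = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))
# Flat regions vanish (the kernel sums to zero); only the edge responds.
```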

Yay! Now we’re done! But first, I’d like to thank the people who helped me with the activity and with this blog, namely Kuya Roland, Carlo, Louie, Patrick, and Trix. If not for them, I wouldn’t have finished this uber friendly and light article.

“10/10” – IGN

“10/10” – Rotten Tomatoes