HowTo:Fourier
The Fourier transform is such a useful fundamental tool, it deserves its own howto (as opposed to a simple Code). The Fourier transform is a transformation of a function into another function. The transformation can (in almost all cases) be reversed.
Analogies
Before getting started, a few analogies are helpful.
Sound comes from acoustic pressure waves -- so you hear middle C when your ear feels a time-varying pressure at precisely 261.6 Hz (1 Hz = 1 cycle per second). The pressure signal would be <math>P(t) \propto \cos(2 \pi (261.6\, t))</math> -- one complete cycle, from 0 to <math>2\pi</math> radians, takes 1/261.6 seconds. Fourier transforming that signal gives a function that is zero everywhere except at <math>k = 261.6</math> -- a delta function. The transform of a pure sine or cosine wave is always a delta function, peaked at one particular frequency. If you shift the phase of the wave (say by <math>\Delta \theta = 2 \pi (261.6\, \Delta t)</math>), this shows up in Fourier space as a phase factor, <math>\exp(i\Delta \theta)</math>, which does not change the magnitude.
Light comes from electromagnetic waves. In parking lots, you can usually see the grungy yellow/orange color of 589.29 nm light from cheap high-pressure sodium lights. This is because your eye is experiencing a time-varying electric field at a frequency of <math>(3\cdot 10^{17}\ \mathrm{nm/s}) / (589.29\ \mathrm{nm}) \simeq 510</math> THz (the speed of light divided by the wavelength).
Definition
The Fourier transform of <math>f(x)</math> is defined as
<math> F(k) = \int_{-\infty}^\infty f(x) e^{-2\pi i k x} \; dx </math>
You can immediately see why the Fourier transform of a pure wave, <math>f(x) = e^{2\pi i s x}</math>, is centered on <math>k = s</math>. All the other test frequencies, <math>k</math>, create oscillatory integrands,
<math> e^{2\pi i (s - k) x} </math>
so the integral always cancels, unless <math>s = k</math>, where it shoots to infinity (<math>F(k) = \delta(k-s)</math>).
The Fourier transform `picks up' all these pure frequencies. You can see this by noting the transform of a bunch of pure waves,
<math> g(x) = \sum_j a_j e^{2\pi i k_j x} </math>
which is a sum over all the pure frequencies,
<math>G(k) = \sum_j a_j \delta(k-k_j) </math>
The reverse of the Fourier transform is just another Fourier transform. This time, the direction of oscillation has to change, though:
<math> f(x) = \int_{-\infty}^\infty F(k) e^{2\pi i k x} \; dk </math>
For visually understanding the Fourier transform, it's helpful to plot both sides of a table of transforms. We already showed that the transform of a wave is a delta function, and vice-versa. This means the transform of a constant is a delta function centered at zero. As another example, the transform of a Gaussian function is another Gaussian function.
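To make that last pair concrete: with the convention above (the <math>2\pi</math> kept in the exponent), the unit-width Gaussian transforms into itself,

<math> \int_{-\infty}^\infty e^{-\pi x^2}\, e^{-2\pi i k x} \; dx = e^{-\pi k^2} </math>

and, more generally, a wide Gaussian transforms into a narrow one and vice-versa.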
Other Definitions
Note that there are other definitions of the Fourier transform. These are all related to the above, by scaling <math>k \to \nu/a</math>, so
<math> F_a(\nu) = \int_{-\infty}^\infty f(x) e^{-\frac{2\pi}{a} i\nu x} \; dx </math>
and
<math> f(x) = |a^{-1}| \int_{-\infty}^\infty F_a(\nu) e^{\frac{2\pi}{a} i\nu x} \; d\nu </math>
Common variations are <math>a = -1</math> and <math>a = \pm 2\pi</math>. Note that these formulas aren't applicable for imaginary <math>a</math>. If you do that, you'll end up translating the Fourier theory into its equivalent -- analytic continuations of Laplace transforms and their inverse via Bromwich contours.
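To spell out the most common variation: <math>a = 2\pi</math> gives the angular-frequency convention often seen in physics,

<math> F_{2\pi}(\nu) = \int_{-\infty}^\infty f(x) e^{-i\nu x} \; dx, \qquad f(x) = \frac{1}{2\pi}\int_{-\infty}^\infty F_{2\pi}(\nu) e^{i\nu x} \; d\nu </math>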
Discrete Version
The discrete Fourier transform is one limit of the continuous one, in which the integral is replaced by a sum. Before getting there, however, it's helpful to define the Fourier transform on a circle.
<math> F_L(k) = \int_0^L f(x) e^{-2\pi i k x / L} \; dx </math>
The integral runs over 0 to L, at which point the function is considered to be periodic -- i.e. <math>f(0) = f(L)</math>. This definition is consistent with the `other definitions' above, for <math>a = L</math>.
Now, because the function is periodic, the test functions should be also. This means we only need to collect the values of <math>F_L(k)</math> at integer values of <math>k = n = 0, \pm 1, \ldots</math> -- those for which the test functions are periodic. Conversely, given a finite set of frequency values, we can make good approximations of the function using the inverse Fourier transform,
<math> f(x) \simeq \sum_{n=-M/2}^{M/2} a_n e^{2\pi i n x / L} </math>
Now we have a computational way to represent the Fourier transform -- a set of coefficients, <math>\{a_n\}_{-M/2}^{M/2}</math>. Each coefficient belongs to a basis function -- called a `plane-wave' basis in physics because of its oscillation as <math>x</math> varies along the direction of travel of the wave.
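As a quick numerical illustration (a minimal sketch -- the test function, number of coefficients, and simple quadrature are my own choices, not part of the text), the coefficients can be estimated as <math>a_n \simeq F_L(n)/L</math>, and a handful of them already reproduces a smooth periodic function:

<source lang="python">
import numpy as np

# Approximate a smooth L-periodic function by a truncated Fourier series,
# estimating a_n = F_L(n)/L with a simple left-endpoint quadrature.
L = 1.0
x = np.linspace(0.0, L, 1000, endpoint=False)
dx = x[1] - x[0]
f = np.exp(np.cos(2*np.pi*x/L))            # an arbitrary smooth periodic test function

M = 8                                      # keep frequencies n = -M/2, ..., M/2
n = np.arange(-M//2, M//2 + 1)
a = np.array([np.sum(f*np.exp(-2j*np.pi*k*x/L))*dx/L for k in n])

# rebuild f from the coefficients and compare
f_approx = np.real(np.sum(a[:, None]*np.exp(2j*np.pi*n[:, None]*x[None, :]/L), axis=0))
print(np.max(np.abs(f - f_approx)))        # already small, and shrinks rapidly with M
</source>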
To complete the picture, let's introduce a set of <math>M</math> equally spaced sampling points, <math>x_m/L = m / M</math>. The inverse Fourier transform at these points is then a polynomial,
<math>f(x_m) \simeq \sum_{n=0}^{M-1} a_n \omega^{nm} </math>
where <math>\omega = e^{2\pi i/M}</math> (the negative frequencies have simply been relabelled as <math>n + M</math>, which changes nothing because <math>\omega^{(n+M)m} = \omega^{nm}</math>). Moreover, given the set of samples <math>f(x_m) = b_m</math>, the inverse of the inverse is
<math>a_n = \frac{1}{M}\sum_{m=0}^{M-1} b_m \omega^{-nm} </math>
These last two are the discrete Fourier transform. They are polynomials evaluated at the <math>M^{\mathrm{th}}</math> roots of unity, the powers of <math>\omega</math>. One last caveat: the scaling by <math>1/M</math> is not applied at all in the FFTW library, whereas numpy applies it on the reverse transform (fft.ifft).
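To make the caveat concrete, here is a minimal sketch (the variable names are mine) that evaluates the sum for <math>a_n</math> literally and compares it with numpy; with the conventions above it should agree with fft.fft divided by <math>M</math>:

<source lang="python">
import numpy as np

M = 8
b = np.random.rand(M)                      # samples b_m = f(x_m)
omega = np.exp(2j*np.pi/M)                 # the M-th root of unity from the text

# a_n = (1/M) sum_m b_m omega^{-n m}, written out term by term
n, m = np.meshgrid(np.arange(M), np.arange(M), indexing='ij')
a = (b * omega**(-n*m)).sum(axis=1) / M

print(np.allclose(a, np.fft.fft(b)/M))               # numpy's fft leaves out the 1/M
print(np.allclose(b, np.fft.ifft(np.fft.fft(b))))    # the 1/M lives in ifft instead
</source>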
Efficient DFT
There are efficient ways of computing the discrete Fourier transform. They derive from noting that <math>\omega^{n+M/2} = -\omega^{n}</math> (for even <math>M</math>), so half of the products appearing in the sum can be reused.
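As a rough sketch of how that works (a textbook radix-2 Cooley-Tukey split, written recursively for clarity rather than speed, and assuming the length is a power of two), the length-<math>M</math> transform is assembled from two length-<math>M/2</math> transforms, reusing each product once with a flipped sign:

<source lang="python">
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 FFT in numpy's sign convention (no 1/M factor)."""
    x = np.asarray(x, dtype=complex)
    M = len(x)
    if M == 1:
        return x
    E = fft_radix2(x[0::2])                         # transform of the even samples
    O = fft_radix2(x[1::2])                         # transform of the odd samples
    twiddle = np.exp(-2j*np.pi*np.arange(M//2)/M) * O
    # the second half reuses the same products, with the sign flipped
    return np.concatenate([E + twiddle, E - twiddle])

x = np.random.rand(16)
print(np.allclose(fft_radix2(x), np.fft.fft(x)))    # True
</source>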
Wiener–Khinchin Theorem
Remember that shifting a function <math>f(x) \to f(x-y)</math> in real space translates to multiplying by a phase factor in Fourier space? This makes it really easy to compute integrals of the form:
<math>(f*g)(x) = \int_0^L f(x-y) g(y) \; dy </math>
in Fourier space. Such integrals are called convolutions. They represent, for example, the correlation between <math>f</math> and <math>g</math> at two points separated by a distance <math>x</math>, the probability distribution of the sum of two numbers -- one drawn from <math>f</math> and the other from <math>g</math> -- or multiplication by a circulant matrix. The first interpretation leads to the Wiener–Khinchin theorem, which states that the power spectrum (in Fourier space) is the transform of the autocorrelation function (in real space).
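Here is a quick numerical check of the theorem (a sketch in numpy's conventions, with the circular autocorrelation built by explicit shifts):

<source lang="python">
import numpy as np

M = 64
f = np.random.rand(M)                                # a real signal on the circle

# circular autocorrelation  c[m] = sum_j f[j] f[(j+m) mod M]
c = np.array([np.sum(f * np.roll(f, -m)) for m in range(M)])

power = np.abs(np.fft.fft(f))**2                     # the power spectrum
print(np.allclose(np.fft.fft(c), power))             # Wiener-Khinchin: True
</source>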
Computing the convolution at the points <math>x_m</math> looks at first like it will take <math>O(M^2)</math> time, since the integral becomes a discrete dot product and we have <math>M</math> of them to compute. For 3D data, this is prohibitively expensive. In Fourier-land, the convolution is a pointwise product, and takes only <math>M</math> operations!
We'll derive it in the discrete case by Fourier transforming the result, <math>f*g</math>,
<math>FG(n) = \frac{1}{M}\sum_{m=0}^{M-1}\sum_{j=0}^{M-1} f(x_{m-j})g(x_j) \omega^{-nm} </math>
Since <math>f</math> is periodic, we can shift <math>m \to m+j</math>; the shifted sum runs over <math>m=-j,\ldots,M-j-1</math>, which by periodicity is the same as <math>0,\ldots,M-1</math>.
<math>FG(n) = \frac{1}{M}\sum_{m=0}^{M-1}\sum_{j=0}^{M-1} f(x_m)g(x_j) \omega^{-nm}\omega^{-nj}
= M F(n) G(n)
</math>
Because the pointwise product takes only <math>M</math> operations, the Fourier transforms of <math>f</math>, <math>g</math>, and <math>FG</math> are the time-limiting steps in this process. Even so, we've reduced <math>M^2</math> operations to roughly <math>M + 3 M \log M</math>. Notice how the starting formula looks a lot like matrix multiplication with <math>f_{mj} = f(x_m-x_j)</math>? Multiplication by such circulant matrices is one of the major computational uses of the Fourier transform.
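The same bookkeeping in code: a direct <math>O(M^2)</math> periodic convolution against the FFT route. This sketch uses numpy's conventions, where fft.fft carries no <math>1/M</math>, so the factor of <math>M</math> above is absorbed and the relation reads fft(f*g) = fft(f) fft(g):

<source lang="python">
import numpy as np

M = 64
f = np.random.rand(M)
g = np.random.rand(M)

# direct periodic convolution: M dot products of length M  ->  O(M^2)
conv_direct = np.array([np.sum(f[(m - np.arange(M)) % M] * g) for m in range(M)])

# Fourier route: two FFTs, a pointwise product, and one inverse FFT
conv_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

print(np.allclose(conv_direct, conv_fft))            # True
</source>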
FFT-ing with Numpy
Finally, some code.
You can verify the particular definition of the DFT used by numpy by, what else, Fourier transforming a delta function:

<source lang="python">
from numpy import *

x = zeros(16)
x[1] = 1.0                           # delta function at the second sample point
y = fft.fft(x)

print(y)
print(exp(-2j*pi/16.0))              # the root of unity in numpy's forward transform
print(exp(-2j*pi*arange(16)/16.0))   # y should match this, term by term
</source>
You should also double-check that the inverse, fft.ifft, works as advertised. Note that this means it must divide by <math>M=16</math> here.
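A minimal check along those lines:

<source lang="python">
import numpy as np

x = np.zeros(16)
x[1] = 1.0
y = np.fft.fft(x)

print(np.allclose(np.fft.ifft(y), x))     # the round trip recovers x
print(np.fft.fft(np.ones(16))[0])         # 16: the forward transform just sums
print(np.fft.ifft(np.ones(16))[0])        # 1.0: ifft carries the 1/M = 1/16
</source>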
Now for some thought-provoking questions:
- What is the special significance of the zero-frequency component, y[0]?
- What is the relationship between the transform at frequencies <math>k</math> and <math>M-k</math>?
  - How does this relationship depend on whether the function is purely real or has an imaginary part?
- What special property does the transform have if the function is even (<math>f(x) = f(L-x)</math>)?
  - Construct an even function numerically and test this out.
  - ... and if the function is odd (<math>f(x) = -f(L-x)</math>)?
- Why does it make sense to look at the magnitude squared (y.real**2+y.imag**2) and the phase (arctan2(y.imag,y.real)) of the Fourier transform separately?