All Waves Consist of a Continuous Series of
Fourier Series
The Fourier series defined in equation (2.135) becomes the Fourier transform:

(2.138) X(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt

From: Modal Analysis, 2001
Fourier series
Mary Attenborough , in Mathematics for Electrical Engineering and Computing, 2003
Half-wave symmetry
There is another sort of symmetry that has an important effect on the Fourier series representation. This is called half-wave symmetry. A function with half-wave symmetry obeys

f(t + τ/2) = −f(t)

that is, the graph of the function in the second half of the period is the same as the graph of the function in the first half turned upside down. A function with half-wave symmetry has no even harmonics. This can be shown by considering one of the even terms where n = 2m. Then

b_2m = (2/τ) ∫_0^{τ} f(t) sin(4πmt/τ) dt = (2/τ) ∫_0^{τ/2} f(t) sin(4πmt/τ) dt + (2/τ) ∫_{τ/2}^{τ} f(t) sin(4πmt/τ) dt

Substitute t′ = t − (1/2)τ in the second term, so that dt′ = dt, giving

(2/τ) ∫_0^{τ/2} f(t′ + τ/2) sin(4πm(t′ + τ/2)/τ) dt′

As sin(4πm(t′ + τ/2)/τ) = sin(4πmt′/τ + 2πm) = sin(4πmt′/τ) and f(t′ + τ/2) = −f(t′), the second term in b_2m becomes

−(2/τ) ∫_0^{τ/2} f(t′) sin(4πmt′/τ) dt′

which cancels the first term, giving b_2m = 0.

A similar argument shows that the coefficients of the cosine terms for even n are also zero. In this case also it is only necessary to consider the half-cycle, as for odd n the two halves of the period make equal contributions to the integral.

To summarize, for a function with half-wave symmetry

a_n = b_n = 0 for n even
a_n = (4/τ) ∫_0^{τ/2} f(t) cos(2πnt/τ) dt, b_n = (4/τ) ∫_0^{τ/2} f(t) sin(2πnt/τ) dt for n odd
An even function, an odd function, and a function with half-wave symmetry are shown in Figure 16.5.
Figure 16.5. (a) An even function satisfies f(–t) = f(t), that is reflecting the graph in the y-axis results in the same graph. This function has only cosine terms in its Fourier series. (b) An odd function satisfies f (–t) = –f (t), that is reflecting the graph in the y-axis results in an upside down version of the same graph. This function has only sine terms in its Fourier series. (c) A function with half-wave symmetry satisfies f (t + τ /2) = –f (t) that is the graph of the function in the second half of the period is the same as the graph of the function in the first half reflected in the x-axis. This function has no even harmonics.
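The vanishing of the even-harmonic coefficients is easy to verify numerically. The sketch below (assuming NumPy) uses a square wave, one convenient function with half-wave symmetry, and evaluates the sine coefficients by a Riemann sum:

```python
import numpy as np

tau = 2.0                                  # period
t = np.linspace(0.0, tau, 4096, endpoint=False)
dt = t[1] - t[0]

# A square wave has half-wave symmetry: f(t + tau/2) = -f(t)
f = np.sign(np.sin(2 * np.pi * t / tau))

# Fourier sine coefficients b_n = (2/tau) * integral of f(t) sin(2*pi*n*t/tau)
def b(n):
    return (2.0 / tau) * np.sum(f * np.sin(2 * np.pi * n * t / tau)) * dt

for n in range(1, 7):
    print(n, round(b(n), 4))
```

The even-n coefficients come out as zero, while the odd ones follow the square wave's 4/(nπ) pattern.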
Source: https://www.sciencedirect.com/science/article/pii/B9780750658553500425
Fourier Series
James S. Walker , in Encyclopedia of Physical Science and Technology (Third Edition), 2003
VII Discrete Fourier Series
The digital computer has revolutionized the practice of science in the latter half of the twentieth century. The methods of computerized Fourier series, based upon the fast Fourier transform algorithms for digital approximation of Fourier series, have completely transformed the application of Fourier series to scientific problems. In this section, we shall briefly outline the main facts in the theory of discrete Fourier series.
The Fourier series coefficients {c_n} can be discretely approximated via Riemann sums for the integrals in Eq. (9). For a (large) positive integer M, let x_k = −π + 2πk/M for k = 0, 1, 2, …, M − 1 and let Δx = 2π/M. Then the nth Fourier coefficient c_n for a function f is approximated as follows:

c_n ≈ (1/2π) Σ_{k=0}^{M−1} f(x_k) e^{−inx_k} Δx = (e^{−inπ}/M) Σ_{k=0}^{M−1} f(x_k) e^{−i2πnk/M}
The last sum above is called the Discrete Fourier Transform (DFT) of the finite sequence of numbers {f(x_k)}. That is, we define the DFT of a sequence {g_k}_{k=0}^{M−1} of numbers by

(38) G_n = Σ_{k=0}^{M−1} g_k e^{−i2πnk/M}, n = 0, 1, …, M − 1
The DFT is the set of numbers {G_n}, and we see from the discussion above that the Fourier coefficients of a function f can be approximated by a DFT (multiplied by the factors e^{−inπ}/M). For example, in Fig. 14 we show a graph of approximations of the Fourier coefficients {c_n}_{n=−50}^{50} of the square wave f1 obtained via a DFT (using M = 1024). For all values, these approximate Fourier coefficients differ from the exact coefficients by no more than 10⁻³. By taking M even larger, the error can be reduced still further.
FIGURE 14. Fourier coefficients for square wave, n = −50 to 50. Successive values are connected with line segments.
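The Riemann-sum approximation can be sketched with NumPy's FFT. Here we assume, for illustration, that the square wave is f1(x) = sgn(x) on [−π, π), whose exact coefficients are 0 for even n and 2/(iπn) for odd n; this choice is an assumption, not necessarily the exact f1 of the figure:

```python
import numpy as np

M = 1024
k = np.arange(M)
x = -np.pi + 2 * np.pi * k / M            # sample points x_k
f = np.sign(x)                             # a square wave on [-pi, pi)
G = np.fft.fft(f)                          # DFT of the samples

def c_approx(n):
    # c_n ~ (e^{-i n pi}/M) * G_n, indexing the DFT modulo M
    return np.exp(-1j * n * np.pi) / M * G[n % M]

def c_exact(n):
    # exact coefficients of sgn(x): 0 for even n, 2/(i*pi*n) for odd n
    return 0.0 if n % 2 == 0 else 2.0 / (1j * np.pi * n)

err = max(abs(c_approx(n) - c_exact(n)) for n in range(-50, 51) if n != 0)
print(err)   # small, and it shrinks as M grows
```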
The two principal properties of DFTs are that they can be inverted and they preserve energy (up to a scale factor). The inversion formula for the DFT is

(39) g_k = (1/M) Σ_{n=0}^{M−1} G_n e^{i2πnk/M}, k = 0, 1, …, M − 1

and the conservation of energy property is

(40) Σ_{n=0}^{M−1} |G_n|² = M Σ_{k=0}^{M−1} |g_k|²
Interpreting a sum of squares as energy, Eq. (40) says that, up to multiplication by the factor 1/M, the energy of the discrete signal {g_k} and its DFT {G_n} are the same. These facts are proved in Briggs and Henson (1995) and Walker (1996).
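Both properties are easy to confirm numerically (a sketch assuming NumPy, whose unnormalized forward transform and 1/M-normalized inverse match the conventions of Eqs. (38) and (39)):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 256
g = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # arbitrary sequence

G = np.fft.fft(g)                    # the DFT
g_back = np.fft.ifft(G)              # the inversion formula

energy_g = np.sum(np.abs(g) ** 2)
energy_G = np.sum(np.abs(G) ** 2)
print(np.allclose(g, g_back))        # inversion recovers the sequence
print(energy_G / energy_g)           # energies agree up to the factor M
```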
An application of inversion of DFTs is to the calculation of Fourier series partial sums. If we substitute x_k = −π + 2πk/M into the Fourier series partial sum S_N(x) we obtain (assuming that N < M/2 and after making a change of indices m = n + N):

S_N(x_k) = Σ_{n=−N}^{N} c_n e^{inx_k} = e^{−iNx_k} Σ_{m=0}^{2N} c_{m−N} e^{imx_k}

Thus, if we let g_m = c_{m−N} for m = 0, 1, …, 2N and g_m = 0 for m = 2N + 1, …, M − 1, we have

S_N(x_k) = e^{−iNx_k} Σ_{m=0}^{M−1} (g_m e^{−imπ}) e^{i2πmk/M}

This equation shows that S_N(x_k) can be computed using a DFT inversion (along with multiplications by exponential factors). By combining DFT approximations of Fourier coefficients with this last equation, it is also possible to approximate Fourier series partial sums, or arithmetic means, or other modified partial sums. See Briggs and Henson (1995) or Walker (1996) for further details.
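The equivalence between the direct partial sum and the inverse-DFT route can be checked numerically. The sketch below assumes NumPy and uses the exact coefficients of the square wave sgn(x) as illustrative test data:

```python
import numpy as np

M, N = 1024, 50
k = np.arange(M)
x = -np.pi + 2 * np.pi * k / M

def c(n):  # exact Fourier coefficients of the square wave sgn(x)
    return 0.0 if n % 2 == 0 else 2.0 / (1j * np.pi * n)

# direct evaluation of the partial sum S_N at the points x_k
S_direct = sum(c(n) * np.exp(1j * n * x) for n in range(-N, N + 1))

# the same values via one inverse DFT: g_m = c_{m-N}, zero-padded to length M
m = np.arange(M)
g = np.array([c(mm - N) if mm <= 2 * N else 0.0 for mm in range(M)],
             dtype=complex)
S_dft = np.exp(-1j * N * x) * M * np.fft.ifft(g * (-1.0) ** m)

print(np.max(np.abs(S_direct - S_dft)))   # agreement to machine precision
```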
These calculations with DFTs are facilitated on a computer using various algorithms which are all referred to as fast Fourier transforms (FFTs). Using FFTs, the process of computing DFTs, and hence Fourier coefficients and Fourier series, is now practically instantaneous. This allows for rapid, so-called real-time, calculation of the frequency content of signals. One of the most widely used applications is in calculating spectrograms. A spectrogram is calculated by dividing a signal (typically a recorded, digitally sampled, audio signal) into a successive series of short duration subsignals, and performing an FFT on each subsignal. This gives a portrait of the main frequencies present in the signal as time proceeds. For example, in Fig. 15a we analyze discrete samples of the function
(41)
where the frequencies ν1 and ν2 of the sinusoidal factors are 128 and 256, respectively. The signal is graphed at the bottom of Fig. 15a and the magnitudes of the values of its spectrogram are graphed at the top. The more intense spectrogram magnitudes are shaded more darkly, while white regions indicate magnitudes that are essentially zero. The dark blobs in the graph of the spectrogram magnitudes clearly correspond to the regions of highest energy in the signal and are centered on the frequencies 128 and 256, the two frequencies used in Eq. (41).
FIGURE 15. Spectrograms of test signals. (a) Bottom graph is the signal in Eq. (41). Top graph is the spectrogram magnitudes for this signal. (b) Signal and spectrogram magnitudes for the signal in (42). Horizontal axes are time values (in sec); vertical axes are frequency values (in Hz). Darker pixels denote larger magnitudes, white pixels are near zero in magnitude.
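A minimal spectrogram along these lines can be sketched in a few lines of NumPy, using non-overlapping rectangular frames for simplicity (real implementations use overlapping, tapered windows). The sampling rate, frame length, and two-tone test signal below are illustrative assumptions:

```python
import numpy as np

fs = 1024                          # sampling rate (Hz), illustrative
t = np.arange(fs) / fs             # one second of samples
# two tones at the frequencies used in the text's example
sig = np.sin(2 * np.pi * 128 * t) + np.sin(2 * np.pi * 256 * t)

seg = 256                          # samples per short subsignal
frames = sig.reshape(-1, seg)      # non-overlapping frames
spec = np.abs(np.fft.rfft(frames, axis=1))   # FFT magnitude per frame

freqs = np.fft.rfftfreq(seg, d=1 / fs)
top2 = freqs[np.argsort(spec[0])[-2:]]       # two strongest bins in frame 0
print(sorted(top2.tolist()))                 # the tone frequencies
```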
As a second example, we show in Fig. 15b the spectrogram magnitudes for the signal
(42)
This signal is a combination of two tones with sharply increasing frequency of oscillations. When run through a sound generator, it produces a sharply rising pitch. Signals like this bear some similarity to certain bird calls, and are also used in radar. The spectrogram magnitudes for this signal are shown in Fig. 15b. We can see two, somewhat blurred, line segments corresponding to the factors 400π x and 200π x multiplying x in the two sine factors in Eq. (42).
One important area of application of spectrograms is in speech coding. As an example, in Fig. 16 we show spectrogram magnitudes for two audio recordings. The spectrogram magnitudes in Fig. 16a come from a recording of a four-year-old girl singing the phrase "twinkle, twinkle, little star," and the spectrogram magnitudes in Fig. 16b come from a recording of the author of this article singing the same phrase. The main frequencies are seen to be in harmonic progression (integer multiples of a lowest, fundamental frequency) in both cases, but the young girl's main frequencies are higher (higher in pitch) than the adult male's. The slightly curved ribbons of frequency content are known as formants in linguistics. For more details on the use of spectrograms in signal analysis, see Mallat (1998).
FIGURE 16. Spectrograms of audio signals. (a) Bottom graph displays data from a recording of a young girl singing "twinkle, twinkle, little star." Top graph displays the spectrogram magnitudes for this recording. (b) Similar graphs for the author's rendition of "twinkle, twinkle, little star."
It is possible to invert spectrograms. In other words, we can recover the original signal by inverting the succession of DFTs that make up its spectrogram. One application of this inverse procedure is to the compression of audio signals. After discarding (setting to zero) all the values in the spectrogram with magnitudes below a threshold value, the inverse procedure creates an approximation to the signal which uses significantly less data than the original signal. For example, by discarding all of the spectrogram values having magnitudes less than 1/320 times the largest magnitude spectrogram value, the young girl's version of "twinkle, twinkle, little star" can be approximated, without noticeable degradation of quality, using about one-eighth the amount of data as the original recording. Some of the best results in audio compression are based on sophisticated generalizations of this spectrogram technique—referred to either as lapped transforms or as local cosine expansions, see Malvar 1992 and Mallat 1998.
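The threshold-and-invert idea can be sketched as follows. This assumes NumPy and a synthetic two-tone signal rather than a real recording; with pure tones that fall exactly on DFT bins the reconstruction is nearly perfect, while real audio incurs some leakage-related error:

```python
import numpy as np

fs, seg = 1024, 256
t = np.arange(4 * fs) / fs                          # four seconds of samples
sig = np.sin(2 * np.pi * 128 * t) + 0.5 * np.sin(2 * np.pi * 256 * t)

frames = sig.reshape(-1, seg)
spec = np.fft.rfft(frames, axis=1)                  # the spectrogram, frame by frame

thresh = np.abs(spec).max() / 320                   # discard below 1/320 of the peak
kept = np.where(np.abs(spec) >= thresh, spec, 0)

approx = np.fft.irfft(kept, n=seg, axis=1).ravel()  # invert each frame's DFT
rel_err = np.linalg.norm(approx - sig) / np.linalg.norm(sig)
frac_kept = np.count_nonzero(kept) / spec.size
print(rel_err, frac_kept)    # tiny error from a small fraction of the coefficients
```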
Source: https://www.sciencedirect.com/science/article/pii/B0122274105002581
Signals, Systems, and Spectral Analysis
Ali Grami , in Introduction to Digital Communications, 2016
3.6.2 Dirichlet's Conditions
The Fourier series is an expansion of a periodic signal as a sum of an infinite number of sinusoids or complex exponentials, since any periodic signal of practical interest can be represented by adding up sinusoids with properly chosen frequencies, amplitudes, and initial phases. To apply the Fourier series representation to an arbitrary periodic signal g(t) with the period T_0, it is sufficient, but not strictly necessary, that the following conditions, known as Dirichlet's conditions, are satisfied:
- i) The periodic signal is single-valued, with a finite number of maxima and minima and at most a finite number of discontinuities within the interval of one period.
- ii) The signal is absolutely integrable over the interval of one period, i.e.,

(3.33) ∫_{T_0} |g(t)| dt < ∞
Note that although a periodic impulse train does not satisfy these conditions, it nevertheless has a Fourier series representation. The signals usually encountered in communication systems meet Dirichlet's conditions, as the physical realizability of a periodic signal, like those generated in a communications lab, is a sufficient condition for the existence of a converging Fourier series.
When there is a discontinuity in a piecewise continuously differentiable periodic signal g(t), the series exhibits a behavior known as the Gibbs phenomenon, as shown in Figure 3.22. At the point of discontinuity, the Fourier series converges to the average of the left-hand and right-hand limits of g(t) (i.e., to the arithmetic mean of the signal values on either side of the discontinuity). On each side of the discontinuity, however, the Fourier series has an oscillatory overshoot with a period of T_0/2N, where T_0 is the signal period and N represents the number of terms included in the Fourier series, and a peak value of almost 9% of the amplitude of the discontinuity. The peak is independent of N, though the period of the oscillatory overshoot is a function of N. Since all real signals are continuous, the Gibbs phenomenon does not occur for them, and we can thus take the Fourier series representation to be identical to the periodic signal. However, some mathematically defined signals, such as a rectangular pulse train, have discontinuities, so the Gibbs phenomenon needs to be addressed.
Figure 3.22. Gibbs phenomenon.
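The size of the overshoot is easy to measure numerically. The NumPy sketch below evaluates partial sums of a unit square wave (levels −1 and +1, so a jump of 2) just to the right of the discontinuity; the peak stays near 1.18, about 9% of the jump above the signal value, no matter how many terms are taken:

```python
import numpy as np

def S(x, N):
    """Partial Fourier sum of a unit square wave (odd harmonics up to N)."""
    total = np.zeros_like(x)
    for n in range(1, N + 1, 2):
        total += np.sin(n * x) / n
    return (4 / np.pi) * total

x = np.linspace(0.0005, 0.5, 50000)   # fine grid just right of the jump at x = 0
for N in (9, 99, 999):
    print(N, S(x, N).max())           # overshoot persists as N grows
```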
Source: https://www.sciencedirect.com/science/article/pii/B978012407682200003X
Mathematics for modal analysis
Jimin He , Zhi-Fang Fu , in Modal Analysis, 2001
2.11 Fourier series and Fourier transform
The Fourier series is an ingenious representation of a periodic function. For a periodic time domain function x(t) with period T, we have:

(2.134) x(t) = x(t + T)

Mathematically, it can be shown that x(t) consists of a number of sinusoids with frequencies that are multiples of a fundamental frequency. This fundamental frequency f is dictated by the period, such that f = 1/T. The contribution to x(t) by a sinusoid with frequency f_k = k/T is X(f_k) e^{j2πf_k t}. The amplitude of the kth sinusoid can be determined by:

(2.135) X(f_k) = (1/T) ∫_0^T x(t) e^{−j2πf_k t} dt

This component is usually a complex quantity with its amplitude and phase. The term e^{−j2πf_k t} represents a unit vector rotating at a frequency of −f_k. This integral shows that the component in signal x(t) whose frequency is f_k will be 'frozen' by the rotating unit vector, thus posing a non-zero value after the integration. All other components will become zero after the integration over the whole period.

A periodic signal consists of the summation of the components at all frequencies:

(2.136) x(t) = Σ_{k=−∞}^{∞} X(f_k) e^{j2πf_k t}

Each frequency component X(f_k) is a complex quantity. However, x(t) is a real quantity. This is because X(f_k) and X(−f_k) are a pair of complex conjugates. The product of these two is the power the signal has at frequency f_k:

(2.137) X(f_k) X(−f_k) = |X(f_k)|²
Summation of the power at different frequencies will produce the total power of the signal. Each power component will have amplitude information only. The phase information vanishes.
When the period approaches infinity, x(t) becomes a non-periodic signal. The Fourier series defined in equation (2.135) becomes the Fourier transform:

(2.138) X(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt

If we separate the real and imaginary parts of X(f), we have:

(2.139, 2.140) Re X(f) = ∫_{−∞}^{∞} x(t) cos(2πft) dt,  Im X(f) = −∫_{−∞}^{∞} x(t) sin(2πft) dt

The inverse Fourier transform returns X(f) to the time domain:

(2.141) x(t) = ∫_{−∞}^{∞} X(f) e^{j2πft} df
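The conjugate-symmetry and power statements above can be checked numerically. The sketch below assumes NumPy; the test signal, its harmonic index, and its phase are illustrative choices:

```python
import numpy as np

T = 1.0                                        # period
t = np.linspace(0, T, 8192, endpoint=False)
dt = t[1] - t[0]
phi = 0.7                                      # illustrative phase
x = 2 * np.cos(2 * np.pi * 3 * t / T + phi)    # real signal with a k = 3 component

def X(fk):
    # equation (2.135) evaluated as a discrete sum over one period
    return (1 / T) * np.sum(x * np.exp(-2j * np.pi * fk * t)) * dt

X3, Xm3 = X(3 / T), X(-3 / T)
print(X3, Xm3)                 # a complex-conjugate pair
print((X3 * Xm3).real)         # the power at f_3; the phase information vanishes
```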
Source: https://www.sciencedirect.com/science/article/pii/B9780750650793500024
Response of structures to periodic dynamic loadings
S. Rajasekaran , in Structural Dynamics of Earthquake Engineering, 2009
5.11 Gibbs phenomenon
The Fourier series approximation of a square wave has been plotted in Fig. 5.34. The approximation is generally quite good, as shown in the figure. However, an inaccuracy exists at the corners of the wave. Sines and cosines are smooth, continuous functions, and are therefore best suited to approximating other smooth and continuous functions. Where jumps or discontinuities exist, however, the approximation is poor. For the square wave a discontinuity exists at t/T_0 = 0.5. At this location, the square wave has two values, +1 and −1. When the function has jumps or double values, a Fourier series passes through the mean of the two points, as shown in Fig. 5.34, which in our case is zero. It is also to be noted that at t/T_0 = 0.5 the square wave is vertical. The Fourier series tends to overshoot at the corners. This is called the Gibbs phenomenon. It does not disappear even if a large number of terms is used in the series. It is concluded that the Gibbs phenomenon is local and its contribution to the total energy is minimal.
Source: https://www.sciencedirect.com/science/article/pii/B9781845695187500051
Gravity Waves
Nikolaos D. Katopodes , in Free-Surface Flow:, 2019
3.5.2 Fourier Integral
The Fourier series is a representation of the transient function in terms of the discrete frequencies ω_n. This idea can be generalized to a continuous frequency, ω, in which case the infinite summation becomes an integral. To this end, let us rewrite Eq. (3.49) as follows
(3.51)
where Δω is the increment between successive discrete frequencies. If we define
then the Fourier series in Eq. (3.51) can be written as follows
(3.52)
In the limit, as Δω → 0, the summation becomes an operation of integration, yielding
(3.53)
Source: https://www.sciencedirect.com/science/article/pii/B9780128154878000032
Fourier Waveform Analysis
Revised by David C. MunsonJr., in Reference Data for Engineers (Ninth Edition), 2002
Complex Form of Fourier Series
The Fourier series can be written more concisely as

f(t) = Σ_{n=−∞}^{∞} D_n e^{jnωt}

where

D_n = (1/T) ∫_0^T f(t) e^{−jnωt} dt

and

- D_0 = (1/2)A_0 = (1/2)C_0
- D_n = (1/2)(A_n − jB_n)
- D_−n = (1/2)(A_n + jB_n)
- n = 1, 2, … .
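These relations can be verified numerically for a test signal with known coefficients. The NumPy sketch below assumes the trigonometric series in the form f(t) = A_0/2 + Σ(A_n cos nωt + B_n sin nωt), and the test signal is an illustrative choice:

```python
import numpy as np

T = 2 * np.pi                                 # period, with omega = 1
t = np.linspace(0, T, 4096, endpoint=False)
dt = t[1] - t[0]
f = 1 + 2 * np.cos(t) + 3 * np.sin(2 * t)     # illustrative test signal

def A(n): return (2 / T) * np.sum(f * np.cos(n * t)) * dt   # trig coefficients
def B(n): return (2 / T) * np.sum(f * np.sin(n * t)) * dt
def D(n): return (1 / T) * np.sum(f * np.exp(-1j * n * t)) * dt  # exponential

for n in range(1, 4):
    print(n, D(n), (A(n) - 1j * B(n)) / 2)    # D_n = (1/2)(A_n - jB_n)
print(D(0), A(0) / 2)                          # D_0 = (1/2)A_0
```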
Source: https://www.sciencedirect.com/science/article/pii/B9780750672917500091
Continuous Signal Processing
Steven W. Smith , in Digital Signal Processing: A Practical Guide for Engineers and Scientists, 2003
The Fourier Series
This brings us to the last member of the Fourier transform family: the Fourier series. The time domain signal used in the Fourier series is periodic and continuous. Figure 13-10 shows several examples of continuous waveforms that repeat themselves from negative to positive infinity. Chapter 11 showed that periodic signals have a frequency spectrum consisting of harmonics. For instance, if the time domain signal repeats at 1000 hertz (a period of 1 millisecond), the frequency spectrum will contain a first harmonic at 1000 hertz, a second harmonic at 2000 hertz, a third harmonic at 3000 hertz, and so forth. The first harmonic, i.e., the frequency at which the time domain signal repeats itself, is also called the fundamental frequency. This means that the frequency spectrum can be viewed in two ways: (1) the frequency spectrum is continuous, but zero at all frequencies except the harmonics, or (2) the frequency spectrum is discrete, and only defined at the harmonic frequencies. In other words, the frequencies between the harmonics can be thought of as having a value of zero, or simply not existing. The important point is that they do not contribute to forming the time domain signal.
FIGURE 13-10. Examples of the Fourier series. Six common time domain waveforms are shown, along with the equations to calculate their "a" and "b" coefficients.
The Fourier series synthesis equation creates a continuous periodic signal with a fundamental frequency, f, by adding scaled cosine and sine waves with frequencies: f, 2f, 3f, 4f, etc. The amplitudes of the cosine waves are held in the variables: a_1, a_2, a_3, a_4, etc., while the amplitudes of the sine waves are held in: b_1, b_2, b_3, b_4, and so on. In other words, the "a" and "b" coefficients are the real and imaginary parts of the frequency spectrum, respectively. In addition, the coefficient a_0 is used to hold the DC value of the time domain waveform. This can be viewed as the amplitude of a cosine wave with zero frequency (a constant value). Sometimes a_0 is grouped with the other "a" coefficients, but it is often handled separately because it requires special calculations. There is no b_0 coefficient, since a sine wave of zero frequency has a constant value of zero and would be quite useless. The synthesis equation is written:
EQUATION 13-4

x(t) = a_0 + Σ_{n=1}^{∞} a_n cos(2πnft) − Σ_{n=1}^{∞} b_n sin(2πnft)

The Fourier series synthesis equation. Any periodic signal, x(t), can be reconstructed from sine and cosine waves with frequencies that are multiples of the fundamental, f. The a_n and b_n coefficients hold the amplitudes of the cosine and sine waves, respectively.
The corresponding analysis equations for the Fourier series are usually written in terms of the period of the waveform, denoted by T, rather than the fundamental frequency, f (where f = 1/T). Since the time domain signal is periodic, the sine and cosine wave correlation only needs to be evaluated over a single period, i.e., −T/2 to T/2, 0 to T, −T to 0, etc. Selecting different limits makes the mathematics different, but the final answer is always the same. The Fourier series analysis equations are:
EQUATION 13-5

a_0 = (1/T) ∫_{−T/2}^{T/2} x(t) dt
a_n = (2/T) ∫_{−T/2}^{T/2} x(t) cos(2πnt/T) dt
b_n = −(2/T) ∫_{−T/2}^{T/2} x(t) sin(2πnt/T) dt

Fourier series analysis equations. In these equations, x(t) is the time domain signal being decomposed, a_0 is the DC component, a_n and b_n hold the amplitudes of the cosine and sine waves, respectively, and T is the period of the signal, i.e., the reciprocal of the fundamental frequency.
Figure 13-11 shows an example of calculating a Fourier series using these equations. The time domain signal being analyzed is a pulse train, a square wave with unequal high and low durations. Over a single period from −T/2 to T/2, the waveform is given by:

x(t) = A for |t| < k/2,  x(t) = 0 for k/2 < |t| < T/2

where A is the amplitude of the pulse and k is its width.
FIGURE 13-11. Example of calculating a Fourier series. This is a pulse train with a duty cycle of d = k/T. The Fourier series coefficients are calculated by correlating the waveform with cosine and sine waves over any full period. In this example, the period from −T/2 to T/2 is used.
The duty cycle of the waveform (the fraction of time that the pulse is "high") is thus given by d = k/T. The Fourier series coefficients can be found by evaluating Eq. 13-5. First, we will find the DC component, a_0:

a_0 = (1/T) ∫_{−k/2}^{k/2} A dt = Ak/T = Ad

This result should make intuitive sense; the DC component is simply the average value of the signal. A similar analysis provides the "a" coefficients:

a_n = (2/T) ∫_{−k/2}^{k/2} A cos(2πnt/T) dt = (2A/nπ) sin(nπd)
The "b" coefficients are calculated in this same way; however, they all turn out to be zero. In other words, this waveform can be constructed using only cosine waves, with no sine waves being needed.
The "a" and "b" coefficients will change if the time domain waveform is shifted left or right. For instance, the "b" coefficients in this example will be zero only if one of the pulses is centered on t = 0. Think about it this way. If the waveform is even (i.e., symmetrical around t = 0), it will be composed solely of even sinusoids, that is, cosine waves. This makes all of the "b" coefficients equal to zero. If the waveform is odd (i.e., symmetrical but opposite in sign around t = 0), it will be composed of odd sinusoids, i.e., sine waves. This results in the "a" coefficients being zero. If the coefficients are converted to polar notation (say, M_n and θ_n coefficients), a shift in the time domain leaves the magnitude unchanged, but adds a linear component to the phase.
To complete this example, imagine a pulse train existing in an electronic circuit, with a frequency of 1 kHz, an amplitude of one volt, and a duty cycle of 0.2. The table in Fig. 13-12 provides the amplitude of each harmonic contained in this waveform. Figure 13-12 also shows the synthesis of the waveform using only the first fourteen of these harmonics. Even with this number of harmonics, the reconstruction is not very good. In mathematical jargon, the Fourier series converges very slowly. This is just another way of saying that sharp edges in the time domain waveform result in very high frequencies in the spectrum. Lastly, be sure to notice the overshoot at the sharp edges, i.e., the Gibbs effect discussed in Chapter 11.
FIGURE 13-12. Example of Fourier series synthesis. The waveform being constructed is a pulse train at 1 kHz, an amplitude of one volt, and a duty cycle of 0.2 (as illustrated in Fig. 13-11). This table shows the amplitude of the harmonics, while the graph shows the reconstructed waveform using only the first fourteen harmonics.
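The closed-form coefficients for this example can be cross-checked against the analysis integral, and the fourteen-harmonic reconstruction reproduced. The NumPy sketch below normalizes the period to 1 and omits the b_n terms, which are zero for this centered pulse:

```python
import numpy as np

A, d, T = 1.0, 0.2, 1.0        # amplitude 1 V, duty cycle 0.2, normalized period
k = d * T                      # pulse width
t = np.linspace(-T / 2, T / 2, 200000, endpoint=False)
x = np.where(np.abs(t) < k / 2, A, 0.0)     # pulse centered on t = 0
dt = t[1] - t[0]

a0 = A * d                                  # DC component: the average value
def a(n):                                   # closed form for the "a" coefficients
    return 2 * A * np.sin(n * np.pi * d) / (n * np.pi)

# cross-check one coefficient against direct numerical integration
a4_num = (2 / T) * np.sum(x * np.cos(2 * np.pi * 4 * t / T)) * dt
print(a(4), a4_num)

# synthesis with the first fourteen harmonics (cosines only)
recon = a0 + sum(a(n) * np.cos(2 * np.pi * n * t / T) for n in range(1, 15))
print(np.abs(recon - x).max())              # still visibly imperfect at the edges
```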
An important application of the Fourier series is electronic frequency multiplication. Suppose you want to construct a very stable sine wave oscillator at 150 MHz. This might be needed, for example, in a radio transmitter operating at this frequency. High stability calls for the circuit to be crystal controlled. That is, the frequency of the oscillator is determined by a resonating quartz crystal that is a part of the circuit. The problem is, quartz crystals only work to about 10 MHz. The solution is to build a crystal controlled oscillator operating somewhere between 1 and 10 MHz, and then multiply the frequency to whatever you need. This is accomplished by distorting the sine wave, such as by clipping the peaks with a diode, or running the waveform through a squaring circuit. The harmonics in the distorted waveform are then isolated with band-pass filters. This allows the frequency to be doubled, tripled, or multiplied by even higher integers. The most common technique is to use sequential stages of doublers and triplers to generate the required frequency multiplication, rather than just a single stage. The Fourier series is important to this type of design because it describes the amplitude of the multiplied signal, depending on the type of distortion and harmonic selected.
Source: https://www.sciencedirect.com/science/article/pii/B9780750674447500509
Digital signal processing
A.C. Fischer-Cripps , in Newnes Interfacing Companion, 2002
3.5.2 Fourier series
The Fourier series gives the amplitudes and frequencies of the component sine waves for any periodic function f(t). For periodic functions of period T_0 with frequency ω_0, the Fourier series can be written:

f(t) = a_0 + Σ_{n=1}^{∞} (a_n cos nω_0t + b_n sin nω_0t)

Using Euler's formula, it can be shown that any cosine (or sine) function can be represented by a pair of exponential functions:

cos nω_0t = (1/2)(e^{jnω_0t} + e^{−jnω_0t}),  sin nω_0t = (1/2j)(e^{jnω_0t} − e^{−jnω_0t})

Substituting into the Fourier series, we obtain:

f(t) = Σ_{n=−∞}^{∞} C_n e^{jnω_0t},  C_n = (1/T_0) ∫_0^{T_0} f(t) e^{−jnω_0t} dt
Note: Cn is a complex number, the real part contains the amplitude of the cos terms, and the imaginary part the amplitude of the sin terms.
A plot of C_n vs frequency is a frequency spectrum of the signal. For example, if f(t) = A cos ω_0t, then the frequency spectrum is a pair of lines of height A/2 located at ±ω_0.
This plot is the magnitude of the exponential components of the signal. A frequency spectrum using trigonometric coefficients would be a single line of height A at ωo.
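This pair of spectral lines can be reproduced with an FFT. In the NumPy sketch below, the record holds an integer number of cycles, so the only nonzero exponential coefficients are at ±f_0, each of height A/2 (the amplitude and frequencies are illustrative):

```python
import numpy as np

A, f0, N = 2.0, 5.0, 64          # amplitude, tone frequency (Hz), record length
fs = float(N)                    # 1 Hz bin spacing: exactly one second of samples
t = np.arange(N) / fs
f = A * np.cos(2 * np.pi * f0 * t)

C = np.fft.fft(f) / N            # exponential coefficients C_n
freqs = np.fft.fftfreq(N, d=1 / fs)

for fr, c in zip(freqs, C):
    if abs(c) > 1e-6:
        print(fr, abs(c))        # two lines of height A/2 at +/- f0
```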
Source: https://www.sciencedirect.com/science/article/pii/B9780750657204501212
The Math of DSP
James D. Broesch , in Digital Signal Processing, 2009
The Fourier Series
The Fourier series plays an important theoretical role in many areas of DSP. However, it generally does not play much of a practical role in actual DSP system design. For this reason, we will spend most of this section discussing the insights to be gained from the Fourier series; we will not devote a great deal of time to the mathematical manipulations commonly found in more academic texts.
Insider Info
The Fourier series is named after the French mathematician Joseph Fourier. Fourier and a number of his contemporaries were interested in the study of vibrating strings. In the simple case of just one naturally vibrating string the analysis is straightforward: the vibration is described by a sine wave. However, musical instruments, such as a piano, are made of many strings all vibrating at once. The question that intrigued Fourier was: How do you evaluate the waveforms from a number of strings all vibrating at once? As a product of his research, Fourier realized that the sound heard by the ear is actually the arithmetic sum of each of the individual waveforms. This is called the principle of superposition. This is not such a dramatic observation and is, in fact, somewhat intuitive. The really interesting thing that Fourier contributed, however, was the realization that virtually any physical waveform can, in fact, be represented as the sum of a series of sine waves.
Figure 4.13 shows an example of how the Fourier series can be used to generate a square wave. The square wave can be approximated by the expression:
Figure 4.13. Creating a square wave from a series of sine waves
(4.53) x(t) = (4/π) Σ_{n=1,3,5,…} (1/n) sin(2πnft)
The first term on the right side of Equation 4.53 is called the fundamental frequency. Each value of n is a harmonic of the fundamental frequency.
Looking at Figure 4.13, we can see that after only two terms the waveform begins to take on the shape of a square wave. Adding in the third harmonic produces a closer approximation to a square wave. If we keep adding in harmonics, we continue to obtain a waveform that looks more and more like a square wave. Interestingly enough, even if we added infinitely many odd harmonics we would not get a perfect waveform. There would always be a small amount of "ringing" at the edges. This is called the Gibbs phenomenon.
There are some very interesting implications to all of this. The first is the fact that the bandwidth of a signal is a function of the shape of a waveform. For example, we could transmit a 1-kHz sine wave over a channel having a bandwidth of 1 kHz, but if we wanted to transmit a 1-kHz square wave we would have a problem.
Equation 4.53 tells us that we need infinite bandwidth to transmit a square wave! And, indeed, to transmit a perfect square wave would require infinite bandwidth. However, a perfect square wave is discontinuous; the change from the low state to the high state occurs in zero time. Any physical system will require some time to change state. Therefore, any attempt to transmit a square wave must involve a compromise.
In practice, 10 to 15 times the fundamental frequency provides enough bandwidth to transmit a high-quality square wave. Thus, to transmit our 1-kHz square wave would require something like a 10-kHz bandwidth channel. A wider channel would give a sharper signal, while a narrower channel would give a more rounded square wave.
These observations lead to some interesting correlations. The higher the frequency that a system can handle, the faster it can change value. Naturally, the converse is true: The faster a system can respond, the higher the frequency it can handle.
This information also gives us the tools to complete the development of the Nyquist theorem.
The Nyquist Theorem Completed
Earlier we demonstrated that we needed at least two nonzero points to reproduce a sine wave. This is a necessary but not sufficient condition. For any two (or more) nonzero points that lie on the curve of a sine wave, there are an infinite number of harmonics of the sine wave that will also fit the same points. We eliminated the harmonic problem by requiring that all of our samples be restricted to one cycle of the sine wave. We will revisit this limitation in a minute, but first let's look closer at our work on the Nyquist theorem up to this point.
The big limitation on our development of the Nyquist theorem so far has been the requirement that we only deal with sine waves.
By taking into account the Fourier series we can remove this limitation. The Fourier series tells us that, for any practical waveform, we can think of it as the sum of a number of sine waves. All we need to concern ourselves with is handling the highest frequency present in our signal. This allows us to state the Nyquist theorem in the form normally seen in the literature.
Key Concept
To accurately reproduce a signal, we must sample at a rate greater than twice the frequency of the highest frequency component present in the signal.
The bold emphasis is to highlight two areas that are often misinterpreted. It is often stated that it is necessary to sample at twice the highest frequency of interest. As we saw earlier, sampling at twice the frequency only guarantees that we will get two points over one cycle. If these two points occur at the zero crossing, it would be impossible to fit a curve to the two points.
Another common mistake is to assume that it is sufficient to sample a signal at twice the frequency of interest. It is not the frequency of interest, but rather the frequency present that is important. If there are signal components higher in frequency than the Nyquist frequency, they will be aliased into the frequency below the Nyquist frequency and cause distortion of the sampled signal.
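This folding of out-of-band components is easy to demonstrate directly. In the NumPy sketch below (the 25 kHz sampling rate and 20 kHz tone are illustrative), a component above the Nyquist frequency of 12.5 kHz shows up at a lower, aliased frequency:

```python
import numpy as np

fs = 25_000                       # sampling rate (Hz)
f_tone = 20_000                   # component above the 12.5 kHz Nyquist frequency
n = np.arange(fs)                 # one second of samples
x = np.sin(2 * np.pi * f_tone * n / fs)

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(fs, d=1 / fs)
print(freqs[np.argmax(spec)])     # the 20 kHz tone aliases to 25 - 20 = 5 kHz
```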
FAQs
How Do We Ensure That Aliasing Does Not Occur?
The solution to this problem brings us back to the anti-aliasing filter. In theory, we set the cutoff frequency of the anti-aliasing filter just below the Nyquist frequency. This ensures that no frequency components equal to or greater than the Nyquist frequency can be sampled by the rest of the system, and therefore no aliasing of signals can occur. This removes our earlier restriction that the two points be located on one cycle of the waveform. The anti-aliasing filter ensures that this case is met for the highest frequency. In practice, we seldom try to push the Nyquist frequency. Generally, instead of sampling at twice the frequency, we will sample at five to ten times the highest frequency we are trying to capture.
Let's demonstrate with an example. Let's say that we are interested in building a DSP system that can record voices at telephone-quality levels. Generally, telephone-quality speech can be assumed to have a bandwidth of 5 kHz. Even though the human hearing range is generally defined as 20 Hz to 20 kHz, most speech information is contained in the spectrum below 5 kHz.
The limiting factor on an analog voice input is generally the microphone. These typically handle frequencies up to 20 or 30 kHz, though the cheaper mikes will start rolling off in amplitude around 10 kHz or so. Thus, there will be frequency components present that are well above our upper frequency of interest. An anti-aliasing filter is needed to eliminate these components.
If we assume that we want to sample our signal at five times the highest frequency of interest, then our sampling rate would be 25 kHz. Strictly speaking, this would dictate a Nyquist frequency of 12.5 kHz. However, since we are not interested in frequencies this high, it makes sense to set the cutoff of the anti-aliasing filter at around 6 kHz or so. This gives us some headroom above our design requirement of 5 kHz, but is low enough that we will be oversampling the filtered signal by a factor of roughly 12.5 kHz/6 kHz, or about two, relative to its Nyquist rate. This oversampling allows us to relax the performance specifications on the analog parts of the system, thus making our system more robust and easier to build.
Setting the cutoff of the anti-aliasing filter well below the Nyquist frequency has another significant advantage: it allows us to specify a simpler filter with a slower roll-off. Such a filter is cheaper and introduces much less phase distortion.
Source: https://www.sciencedirect.com/science/article/pii/B9780750689762000043
Source: https://www.sciencedirect.com/topics/engineering/fourier-series