An Introduction To Linear Systems For Engineers
Engineering students often struggle with the operations behind linear systems, and even accomplished engineers can benefit from a guide to these relatively complex models. Signal processing and the telecommunications industry make heavy use of this theory, so it's important to have a strong grasp of it.
Defining Linear Systems
A mathematical model of a system based on a linear operator is called a linear system. Linear systems generally exhibit behavior that is much simpler to analyze than that of general nonlinear systems, and they serve as a mathematical abstraction in countless fields of engineering.
The Fourier Transform
The Fourier transform decomposes a function of time, known as a signal, into the frequencies that make it up. Applying it to a function of time returns a complex-valued function of frequency. The absolute value of this function represents the amount of that frequency present in the original signal, while its complex argument represents the phase offset of the sinusoid at that frequency.
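As a minimal sketch of this decomposition, the snippet below uses NumPy's FFT on a made-up two-tone test signal (all frequencies and amplitudes are illustrative) and reads the amount and phase of each component straight off the transform:

```python
import numpy as np

# Build a signal with two known components: 3 Hz at amplitude 1.0
# and 7 Hz at amplitude 0.5 with a 90-degree phase offset.
fs = 64                       # samples per second (illustrative)
t = np.arange(fs) / fs        # one second of sample instants
x = 1.0 * np.cos(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t + np.pi / 2)

X = np.fft.rfft(x)

# Scale the non-DC bins so |X[k]| reads directly as cosine amplitude.
amplitude = 2 * np.abs(X) / len(x)
phase = np.angle(X)

print(amplitude[3])   # ~1.0  -> "how much" 3 Hz is present
print(amplitude[7])   # ~0.5  -> "how much" 7 Hz is present
print(phase[7])       # ~pi/2 -> phase offset of the 7 Hz component
```

Because both tones sit exactly on FFT bins here, there is no spectral leakage and the magnitudes come out cleanly.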
The term Fourier transform refers both to the mathematical operation that associates a frequency domain representation with a function of time and to the frequency domain representation itself. The concept isn't limited to functions of time, but to keep the language unified, the domain of the original function is still referred to as the time domain even when it has nothing to do with time.
Engineering work often requires the reverse of this operation. The inverse Fourier transform of a frequency domain representation combines the contributions of all the different frequencies to recover the original function of time; many engineering books call this Fourier synthesis.
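A quick way to see analysis and synthesis acting as inverses, sketched here with NumPy's `fft`/`ifft` applied to an arbitrary random signal:

```python
import numpy as np

# Forward transform, then inverse transform: Fourier synthesis should
# recombine all frequency contributions into the original signal.
rng = np.random.default_rng(0)
x = rng.standard_normal(256)        # an arbitrary "time domain" signal

X = np.fft.fft(x)                   # analysis: time -> frequency
x_recovered = np.fft.ifft(X).real   # synthesis: frequency -> time

print(np.allclose(x, x_recovered))  # True, up to floating-point error
```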
The Laplace Transform
Pierre-Simon Laplace developed a widely used transform that converts a function of time into a function of a complex frequency variable. Like the Fourier transform, it moves from the time domain to a frequency domain, but the Laplace transform maps onto the full complex plane; the Fourier transform can be viewed as the special case evaluated along the imaginary axis.
An easy way to envision this is to remember that the Fourier transform expresses a signal as a superposition of sinusoids while the Laplace transform expresses a function as a superposition of moments. This gives engineering teams an alternative functional description that can simplify the process of analyzing the behavior of a system.
The Laplace transform can be used to move from the time domain to the complex frequency domain, and this ultimately helps to turn linear differential equations into algebraic ones. It also turns the relatively involved operation of convolution in the time domain into simple multiplication in the transform domain.
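The same convolution-to-multiplication property holds for the discrete Fourier transform, which makes it easy to verify numerically. The sketch below uses NumPy and two made-up sequences:

```python
import numpy as np

# Convolution theorem: convolving in the time domain equals
# multiplying in the frequency domain (shown here with the DFT).
x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([0.5, -1.0, 0.25])

# Direct linear convolution.
direct = np.convolve(x, h)

# Via the frequency domain: zero-pad both sequences to the full output
# length, multiply the transforms, and invert.
n = len(x) + len(h) - 1
via_fft = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)).real

print(np.allclose(direct, via_fft))  # True
```

The zero-padding matters: without it, the DFT product computes a circular convolution rather than the linear one.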
There are several inversion theorems that most engineers will have to familiarize themselves with. The Fourier inversion theorem is perhaps the best known. It states that a function can be recovered from its Fourier transform. This works for most functions encountered in practice, though it depends on suitable integrability conditions, and mathematicians can exhibit functions for which it fails.
The Lagrange inversion theorem, also known as the Lagrange-Bürmann formula, provides the Taylor series expansion of the inverse of an analytic function. Those who work with the Mellin transform or the two-sided Laplace transform will know that there is also a Mellin inversion theorem. Many of the transforms used in linear system theory can be inverted when the need arises.
Looking at Sampling Theory
While most engineers can follow these mathematical examples well enough, sampling theory tends to cause more difficulty. Signals present in the physical world are analog. Sound waves moving through the air are analog, and radio waves carrying AM, FM or SSB signals through a vacuum are analog as well.
Processing these signals with digital computer technology requires a process called sampling. Analog signals are continuous in both amplitude and time, but digital circuits work with signals that are discrete in both respects.
Sampling requires a circuit to measure the value of a signal at regular intervals in time; each of these measurements is called a sample. The amplitude of each sample is likewise quantized to one of a finite set of discrete levels.
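The two kinds of discreteness can be sketched in a few lines. The snippet below samples a sine wave in time and then quantizes its amplitude with a hypothetical 3-bit converter spanning ±1 (all parameters are illustrative):

```python
import numpy as np

# Sampling makes a signal discrete in time; quantization makes it
# discrete in amplitude. Here a 1 Hz sine is sampled at 16 Hz and
# quantized to a hypothetical 3-bit (8-level) range of +/-1.
fs = 16                               # sampling rate (illustrative)
t = np.arange(2 * fs) / fs            # two seconds of sample instants
samples = np.sin(2 * np.pi * 1 * t)   # discrete in time, continuous in amplitude

levels = 2 ** 3                       # 8 quantization levels
step = 2.0 / levels                   # step size over the +/-1 range
quantized = np.round(samples / step) * step  # now discrete in amplitude too

print(np.max(np.abs(quantized - samples)))   # error is at most step / 2
```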
The continuous analog signal is sampled at a frequency F. The discrete signal that comes out of the sampling circuit has more frequency components than the original one did: copies of the original spectrum repeat at the sampling rate, so each component appears at its original position and again centered on every integer multiple of F.
To preserve all of the information in the signal, it's generally necessary to sample at more than twice the maximum frequency present in the original analog signal; twice that maximum frequency is known as the Nyquist rate. According to the sampling theorem, a signal can be exactly reconstructed if it's sampled at a frequency F that is greater than twice the signal's maximum frequency.
Engineers know that when a signal is sampled below the Nyquist rate, it will exhibit a phenomenon known as aliasing when it's reconstructed. This generally takes the form of unwanted content in the reconstructed signal, and some of the frequencies from the original signal may be lost. Aliasing happens because the repeated copies of the spectrum overlap when the sampling frequency drops too low: frequencies fold around half the sampling frequency, which is why some engineering texts call F/2 the folding frequency.
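Aliasing is easy to demonstrate numerically. The sketch below (NumPy, with made-up frequencies) samples a tone at 1000 Hz: a 300 Hz tone is recovered correctly, while a 700 Hz tone, which violates the Nyquist criterion, folds around 500 Hz and masquerades as 300 Hz:

```python
import numpy as np

def apparent_frequency(f_signal, fs, n=1000):
    """Sample an f_signal-Hz sine at fs Hz and return the frequency
    where the sampled spectrum actually peaks."""
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f_signal * t)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    return freqs[np.argmax(spectrum)]

# 300 Hz sampled at 1000 Hz: above the Nyquist rate, seen correctly.
print(apparent_frequency(300, 1000))   # 300.0

# 700 Hz sampled at 1000 Hz: below the Nyquist rate, so it folds
# around fs/2 = 500 Hz and shows up at 1000 - 700 = 300 Hz.
print(apparent_frequency(700, 1000))   # 300.0
```

The two tones are indistinguishable after sampling, which is exactly why anti-aliasing filters are placed before the sampler in practical systems.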