The Basics Of Random Signal Analysis
Statistical signal processing appears in graduate curricula across many fields, including electrical engineering, biomedical engineering, physics, applied and pure mathematics, and statistics; nearly every scientific discipline draws on it in some form. In most real-world applications, signals contain significant stochastic components.

Signals and systems form a foundation for the study of electrical engineering, serving as the starting point for much of the field's other material. Knowing the deterministic theory, however, is not enough: many signals of practical interest are complex and random, particularly when they are corrupted by noise. The information carried by such a signal can be distorted and difficult to recover, so understanding random signals, and how to extract the information they carry, is an essential part of the subject.

Random signal analysis involves the following concepts:

  • Random (stochastic) processes
  • Random variables
  • Probability

When studying random signals, it is important to consider a collection of many signals rather than a single instance of one signal. This collection is referred to as a random process, and the ensemble corresponds to every potential outcome of a particular signal measurement. Each individual signal within the ensemble is called a realization, or sample function, of the process.
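As a brief illustration of an ensemble and its realizations (a sketch using NumPy; the particular signal and noise model here are assumptions for demonstration, not taken from the text), each row below is one sample function of the same underlying random process:

```python
import numpy as np

rng = np.random.default_rng(0)

n_realizations = 5   # size of the ensemble
n_samples = 100      # samples per realization
t = np.arange(n_samples)

# Each row is one realization (sample function) of the same random
# process: a sine wave corrupted by independent Gaussian noise.
ensemble = np.sin(2 * np.pi * 0.05 * t) \
    + rng.normal(0.0, 0.5, size=(n_realizations, n_samples))

print(ensemble.shape)  # (5, 100) -- five sample functions of one process
```

Averaging down the rows (across the ensemble) at a fixed time estimates the process mean at that instant, which is exactly the "collection of many signals" viewpoint described above.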

Random Variables

A random variable describes the outcome of a random experiment (e.g., a coin flip). Random variables can be viewed as the scalar case of random vectors and random processes: all three are variations on a single theme and can be treated, in general terms, as random objects.

A random variable takes on values randomly. Any single random experiment can be characterized by a random variable, for example, the amount of noise received during a single use of a communication link. In mathematics and engineering, however, a more precise and analytical definition is needed. By that definition, a random variable is neither random nor a variable: it is a function mapping one space into another. The domain is a probability space whose points correspond to the outcomes of the random experiment, and the range is a subset of the real line (hence the term real-valued random variable). The definition only makes sense with a constraint placed on the function; informally, however, a random variable can simply be regarded as a function.

For some, a random variable is best thought of as a measurement on a probability space: for each sample point ω, the random variable produces a value f(ω). The outcome of the random experiment is ω, and f(ω) is the measurement obtained on that outcome. The outcome may come from an abstract space, such as real numbers, waveforms, sequences, integers, ASCII characters, or Chinese characters. The value f(ω) of the measurement, however, must be concrete: a real number, such as a meter reading.
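The "function on a sample space" view can be made concrete in a few lines (a minimal sketch; the die-roll sample space and the mapping f are illustrative assumptions, not from the text):

```python
import numpy as np

# Sample space of a single die roll: six abstract outcomes (they could
# just as well be waveforms or characters -- anything non-numeric).
sample_space = ["one", "two", "three", "four", "five", "six"]

# A random variable is a function f mapping each abstract outcome
# in the sample space to a concrete real number.
def f(omega):
    return float(sample_space.index(omega) + 1)

rng = np.random.default_rng(1)
omega = rng.choice(sample_space)  # the experiment selects an outcome
x = f(omega)                      # the random variable measures it
print(omega, x)
```

Note that all the randomness lives in the selection of ω; the function f itself is fixed and deterministic, which is exactly the sense in which a random variable "is neither random nor a variable."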

Random Processes

A random process is a sequence of random variables; in its simplest form, a discrete random process is equivalent to a random vector. A random process can also be continuous-time, modeling a waveform signal, where at each continuous time t the process defines a random variable X(t). A discrete-time random process, also known as a random sequence, corresponds to a time series in statistics and can be denoted {X(n), n = 0, 1, . . .}, or {X[n]} in the digital signal processing literature. Note that the terms discrete and continuous are sometimes applied to the amplitude of the signal rather than to time, so a process or sequence may be qualified by the discrete or continuous nature of either variable.
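A discrete-time random process {X[n]} can be sketched as follows (the choice of a simple random walk here is an assumption for illustration, not a construction from the text):

```python
import numpy as np

rng = np.random.default_rng(2)

# A discrete-time random process {X[n]}: a random walk built from
# independent +/-1 steps. Each X[n] is itself a random variable.
steps = rng.choice([-1, 1], size=200)
X = np.cumsum(steps)

# Fixing a time index n picks out one random variable X[n]; the whole
# sequence is one realization (sample function) of the process.
print(X[:5])
```

Running the same code with a different seed produces a different realization of the same process, which is the ensemble viewpoint again: one process, many sample functions.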

Probability Mass Function

Probabilities can be computed using the inverse image formula, although the calculations can quickly become complicated. To truly understand the theory of random variables, one must understand the probability mass function, or pmf. In probability theory and statistics, a pmf is a function giving the probability that a discrete random variable takes each of its specific possible values.
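For a discrete random variable, a pmf can be estimated empirically from samples (a sketch using NumPy; the fair-die example is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Estimate the pmf of a fair six-sided die from repeated trials.
samples = rng.integers(1, 7, size=100_000)
values, counts = np.unique(samples, return_counts=True)
pmf = counts / counts.sum()

for v, p in zip(values, pmf):
    print(f"P(X = {v}) = {p:.3f}")  # each estimate close to 1/6
```

By construction the estimated probabilities are non-negative and sum to one, the two defining properties of any pmf.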

Random variables can be mutually independent, usually just called independent. If the random variables within a random process are independent and their marginal pmf's are identical, the process is called iid, for independent and identically distributed.
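An iid process is straightforward to simulate (a minimal sketch; the Bernoulli model and the parameter p are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

p = 0.3  # success probability shared by every sample (identical marginals)

# An iid Bernoulli(p) process: each X[n] is drawn independently from
# the same marginal pmf, P(X = 1) = p and P(X = 0) = 1 - p.
X = (rng.random(50_000) < p).astype(int)

print(X.mean())  # the sample mean estimates p, here about 0.3
```

Because the samples are independent with identical marginals, long-run averages of a single realization converge to the marginal mean, which is why iid models are so convenient to analyze.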

If a random process is given, then the probability distribution of any random vector formed by collecting outputs of the process can be found, at least in theory, from the inverse image formula. Constructing a random process more directly is harder: beyond a consistent assignment of probabilities and the identity mapping, more is needed. This is where the Kolmogorov extension theorem comes in. The theorem, named after A. N. Kolmogorov, the founder of modern probability theory, states that if one can specify a consistent family of joint pmf's for all n, then that family describes a random process.
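Stated loosely in symbols (a standard formulation of the consistency requirement, not drawn from the text), the joint pmf's in the family must agree under marginalization:

```latex
% Kolmogorov consistency: summing out the last variable of the
% (n+1)-dimensional joint pmf must recover the n-dimensional joint
% pmf of the same process.
p_{X_1,\dots,X_n}(x_1,\dots,x_n)
  = \sum_{x_{n+1}} p_{X_1,\dots,X_{n+1}}(x_1,\dots,x_n,x_{n+1})
```

When this holds for every n, the theorem guarantees that a random process with exactly these finite-dimensional distributions exists.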