A Brief DSP Overview

Basic signal processing concepts for beginners

An important area of technology today lies in the field of digital signal processing (DSP). The use of computers and inexpensive microprocessors to process real-world physical signals is prevalent in many aspects of daily life. The proliferation of these low-cost signal processing solutions has led to a host of opportunities for the development and application of DSP concepts. Given the availability of low-cost DSP solutions, digital signal processing has in many cases taken over the role of older analog real-time technology. DSP provides a flexibility and power that make it well suited to resolving a variety of real-world applications.

Regardless of the application, the use of digital signal processing techniques requires that the actual real-world signal in question be in a form that can be used by the device that will perform the processing. The form a real-world signal must take before it can be processed is the subject of sampled data acquisition systems. This brief DSP overview provides only a cursory discussion of some general concepts found in sampled data acquisition systems, and does not address the many technical aspects involved in sampling theory.

Detailed explanations and discussions of sampled data acquisition systems and digital signal processing concepts in general can be found in academic textbooks, technical journals, and online resources.

Sampled Signals

A sampled signal can be thought of as a representation of some physical quantity observed at different points in time. “Sampling” essentially means measuring the signal at an instant in time. For example, a real-world electrical voltage signal might be 1 Volt at one instant, and 1.1 Volts shortly thereafter. The electrical voltage signal is continuous in the real “analog” world; that is, it has a voltage value at every instant in time. The laws of physics dictate that the voltage transition continuously from 1 Volt to 1.1 Volts.

In a digital sampled data system the signal is defined only at those moments in time when a sample of the signal was measured. From the perspective of the DSP device, the signal was simply 1 Volt at one point in time and 1.1 Volts at a later point in time.

Conceptually, the digital sampled data system could measure the real-world signal at faster and faster time increments until its digital representation of the signal approached that of its analog counterpart.

If a digital device were somehow able to sample a real-world signal continuously (so that it was exactly equivalent to an analog system), it would have no processing time left available to actually perform any meaningful work on the samples it obtained.

Of course, this would not be an ideal solution. The better approach is to sample the real-world signal at a rate fast enough to obtain a reasonable representation of it, yet slow enough that there is time left over to perform some meaningful work – that is to say, signal processing.

When the sampled data system is able to perform work on a signal such that a desired result can be achieved for an infinitely long period, and there is no loss of data (i.e. skipped samples), then the system is often referred to as operating in “real-time”.

To maximize the amount of processing time available to the digital system, it is important to determine the slowest rate at which it can sample a real-world analog signal while still maintaining an accurate representation (thereby leaving time for performing the signal processing). The slowest rate at which a real-world signal can be sampled depends on the signal itself: the more suddenly the signal can change, the more often the digital system must obtain new samples to maintain a current representation.

Typically the rate at which a digital system samples a signal is constant, with samples taken at regular, defined intervals. This approach is referred to as “periodic sampling”. The use of periodic sampling greatly simplifies the analysis of a digital signal. The sampled signal itself is referred to as a “discrete time sequence”. Whereas the real-world analog signal has values at all points in time, the discrete time sequence has values that are defined only at the sampling intervals. Even though the sampled signal is not represented during the time between adjacent samples, it should not be thought of as having zero value at those moments – the discrete time sequence simply is not defined at those points in time.
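
As a small illustrative sketch (in Python, with an assumed 100 Hz sampling rate and a hypothetical 5 Hz sine wave standing in for the real-world voltage), periodic sampling amounts to evaluating the signal only at the instants t = n * T:

    import numpy as np

    fs = 100.0                    # assumed sampling rate, in samples per second
    T = 1.0 / fs                  # sampling interval, in seconds
    n = np.arange(100)            # sample indices covering one second of data

    def analog_signal(t):
        """Stand-in for a continuous real-world voltage: a 5 Hz, 1 Volt sine wave."""
        return np.sin(2.0 * np.pi * 5.0 * t)

    # The discrete time sequence: the signal observed only at the sampling instants n * T.
    x = analog_signal(n * T)
    print(x[:5])                  # the first few sampled voltage values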

Data Acquisition

The process of “sampling” a real-world analog signal, whereby a discrete time sequence is captured for subsequent signal processing, is referred to as “data acquisition”. Typically the captured signal will represent the physical quantity of an electrical voltage. This is often the case for two reasons:

  • A variety of real-world signals can be readily transformed into voltage through the use of transducers such as microphones, accelerometers, piezoelectric materials, thermistors, etc.

  • Many inexpensive devices that perform periodic time sampling of a voltage are available and are compatible with digital processor devices.

In data acquisition the process of transforming a continuous analog signal to a discrete time sequence is referred to as “analog-to-digital conversion” or “A/D (A-to-D) conversion”. A wide variety of A/D hardware systems exist today to address the demands of many different real-world signal processing applications.
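
The conversion itself is carried out by A/D hardware, but the underlying idea of mapping a continuous voltage onto a finite set of digital codes can be sketched in software. The bit depth and input range below are assumptions chosen purely for illustration:

    import numpy as np

    def quantize(voltage, n_bits=12, v_min=-5.0, v_max=5.0):
        """Map a voltage (or array of voltages) to an integer code,
        roughly as an idealized n_bits A/D converter would."""
        levels = 2 ** n_bits
        # Clip to the converter's input range, then scale to the code range [0, levels - 1].
        clipped = np.clip(voltage, v_min, v_max)
        codes = np.round((clipped - v_min) / (v_max - v_min) * (levels - 1))
        return codes.astype(int)

    print(quantize(1.0))    # code produced for a 1 Volt sample
    print(quantize(1.1))    # code produced for a 1.1 Volt sample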

When designing a data acquisition system it is critical to understand the real-world signal that is being subjected to the analog-to-digital conversion process. One important signal characteristic is the signal's “bandwidth” (BW): the difference between the highest and lowest rates at which the signal can change. The rate of signal change is typically described in a data acquisition system as “frequency”. In many cases the signal may not change at all for some length of time – a zero frequency rate of change, sometimes referred to as “DC” – so bandwidth often refers to the maximum frequency component of the signal. This maximum frequency component occurs when the signal changes from its lowest possible value to its highest possible value (or from highest to lowest) in the shortest possible duration of time.

Expressed in mathematical terms, the maximum signal frequency is one divided by the minimum time interval discussed above. Frequency is measured in units of Hertz (Hz), where 1 Hz corresponds to one cycle per second; a sampling rate expressed in Hz likewise corresponds to samples acquired per second.
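
As a quick worked example (the time interval is assumed purely for illustration), a signal whose fastest full swing takes 1 millisecond has a maximum frequency component of 1 kHz:

    t_min = 0.001               # assumed minimum time for the fastest signal swing, in seconds
    f_max = 1.0 / t_min         # maximum frequency component, per the definition above
    print(f_max, "Hz")          # prints 1000.0 Hz, i.e. 1 kHz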

If the bandwidth of a signal is known, it becomes possible to determine the minimum rate at which the signal can be sampled such that the acquired discrete time sequence is a valid representation of the continuous real-world analog signal. Knowing the minimum rate at which to sample a signal is of great importance in a data acquisition system. It maximizes the processing time available between samples so that actual signal processing work can take place, and it directly affects the amount of processor memory required for storing the acquired samples (a faster sampling rate results in more samples acquired).
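
For instance, the memory cost of a given sampling rate follows directly from the rate, the capture duration, and the size of each stored sample; the numbers below are assumptions for illustration:

    sample_rate = 48_000            # samples per second (assumed)
    duration_s = 10.0               # seconds of data to store (assumed)
    bytes_per_sample = 2            # e.g. 16-bit A/D codes (assumed)

    storage_bytes = int(sample_rate * duration_s * bytes_per_sample)
    print(storage_bytes, "bytes")   # 960000 bytes; doubling the rate doubles the storage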

The minimum sampling rate at which a signal can be acquired while maintaining a valid representation of the continuous real-world analog signal is referred to as the “Nyquist rate”. It is directly related to the bandwidth of the signal being sampled: the sampling rate (Fs) must exceed twice the signal's bandwidth in order to arrive at an equivalent digital sampled version of the signal. This relationship is fundamentally important to data acquisition systems.
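
Assuming the signal's bandwidth is known, choosing a sampling rate therefore reduces to a simple comparison against twice that bandwidth. The bandwidth and candidate rate below are illustrative only:

    def nyquist_rate(bandwidth_hz):
        """Twice the signal bandwidth: the rate a valid sampling rate must exceed."""
        return 2.0 * bandwidth_hz

    bw = 4_000.0                    # assumed signal bandwidth, in Hz
    fs = 10_000.0                   # candidate sampling rate, in Hz

    if fs > nyquist_rate(bw):
        print("fs =", fs, "Hz exceeds the Nyquist rate of", nyquist_rate(bw), "Hz")
    else:
        print("fs is too slow; the sampled sequence would not be a valid representation")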

Analog Signal Band Limiting

A point of confusion might lie in trying to determine where to place a limit on the bandwidth of a signal. It is often helpful to understand that most signals are actually “band limited” prior to the analog-to-digital conversion process itself. This can be because of the very nature of the input device from which the signal originates, or the configuration of analog circuitry through which the signal must travel. The input device itself may have an inherent limitation that induces band limiting. An example could be a microphone that is not designed to pick up frequencies beyond the range of human hearing (say 20 kHz), or a landline telephone circuit that rolls off sharply above 3.5 kHz because of the type of wire used to carry the signal and the length traveled.

In some cases the data acquisition system will employ active band limiting circuitry ahead of the A/D converter that allows for an adjustable band limit frequency. Often this is done with a lowpass anti-aliasing filter that passes signal frequencies below a set “cutoff” frequency and blocks signal frequencies above it. In theory the cutoff frequency can be very sharply defined, but in practice such sharp cutoffs can be very difficult to achieve. Given this, it is a good idea to select a sampling frequency that exceeds the Nyquist criterion (perhaps 20-30% higher) to allow for frequencies that lie near the anti-aliasing filter's cutoff.
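
As a software analogy of this arrangement (in a real system the anti-aliasing filter is analog circuitry placed ahead of the A/D converter), the sketch below uses SciPy's standard Butterworth design routine with an assumed 3.5 kHz cutoff, and picks a sampling rate roughly 25% above twice that cutoff:

    import numpy as np
    from scipy.signal import butter, sosfilt

    cutoff = 3_500.0                     # assumed anti-aliasing cutoff frequency, in Hz
    margin = 1.25                        # ~25% headroom above the bare Nyquist minimum
    fs = 2.0 * cutoff * margin           # chosen sampling rate: 8750 Hz here

    # A modest-order lowpass; real analog anti-aliasing filters also roll off gradually.
    sos = butter(4, cutoff, btype="low", fs=fs, output="sos")

    # Band limit one second of white noise standing in for an arbitrary input signal.
    samples = np.random.randn(int(fs))
    band_limited = sosfilt(sos, samples)
    print(band_limited[:5])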

When a data acquisition system is producing samples of a properly band limited signal at a sufficient sampling rate, the analog/digital boundary has been crossed and the world of digital signal processing has been entered. The information contained in the continuous real-world analog signal still exists, but now in its digital equivalent form – a form that digital machines and their programming can process.
