Somebody recently asked me this question: “Does Shannon Capacity increase by dividing a frequency band into narrow bins?” To be honest, I was momentarily confused and thought that this may be the case, since many modern digital communication systems do use narrow frequency bins, e.g. LTE. But on closer inspection I found that the Shannon Capacity does not change; in fact, it remains exactly the same. Following is the reasoning for that.
Shannon Capacity is calculated as:

C = B*log2(1 + P/(B*No))
Now if the bandwidth ‘B’ is divided into 10 equal blocks then the transmit power ‘P’ for each block would also be divided by 10 to keep the total transmit power for the entire band constant. This means that the signal to noise ratio P/(B*No) remains constant for each block. So the total capacity for the 10 blocks would be calculated as:

C = 10*(B/10)*log2(1 + (P/10)/((B/10)*No)) = B*log2(1 + P/(B*No))
So the Shannon Capacity for the entire band remains the same.
PS: The reason for using narrower channels is that a narrow channel appears relatively flat in the frequency domain, which greatly simplifies the process of equalization (a simple multiplication/division would do).
Note: ‘No’ is the Noise Power Spectral Density and ‘B*No’ is the Noise Power.
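To sanity-check this numerically, here is a quick MATLAB snippet; the values of B, P and No are arbitrary assumptions chosen only for illustration.

B  = 1e6;            % total bandwidth (Hz), an assumed value
P  = 1e-3;           % total transmit power (W), an assumed value
No = 1e-9;           % noise power spectral density (W/Hz), an assumed value
C_full  = B*log2(1 + P/(B*No));                       % capacity of the full band
C_split = 10*(B/10)*log2(1 + (P/10)/((B/10)*No));     % sum over 10 sub-bands
fprintf('C_full = %.4e bps, C_split = %.4e bps\n', C_full, C_split)

Both printouts are identical, confirming that splitting the band does not change the capacity.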
It is sometimes important to know the relationship between various distributions. This can be useful when a function is available for one distribution and can be used to derive the other distributions. In the context of Wireless Communications it is important to know the relationship between the Uniform, Gaussian and Rayleigh distributions.
According to the Central Limit Theorem, the sum of a large number of independent and identically distributed random variables has a Gaussian distribution. This is used to model the amplitudes of the in-phase and quadrature components of a wireless signal. Shown below is the model for the received signal, which has been modulated by the Gaussian channel coefficients g1 and g2.
The envelope of this signal (sqrt(g1^2+g2^2)) has a Rayleigh distribution. Now if you only had a function for the Uniform distribution, you could generate the Rayleigh distribution using the following routine.
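A minimal sketch of such a routine (not the original listing; it uses the Box-Muller transform, assuming unit-variance Gaussians) is:

n  = 1e5;                              % number of samples
u1 = rand(n,1);                        % uniform on (0,1)
u2 = rand(n,1);                        % second independent uniform
g1 = sqrt(-2*log(u1)).*cos(2*pi*u2);   % Gaussian, zero mean, unit variance
g2 = sqrt(-2*log(u1)).*sin(2*pi*u2);   % independent Gaussian
r  = sqrt(g1.^2 + g2.^2);              % Rayleigh-distributed envelope
histogram(r, 100)                      % histogram approximates the Rayleigh pdf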
Note: Here a1 and a2 can be considered constants (at least during the symbol duration) and it is really g1 and g2 that are varying.
1. The simplest channel model just scales the input signal by a real number between 0 and 1, e.g. if the signal at the transmitter is s(t) then at the receiver it becomes a*s(t). The effect of the channel is multiplicative (the receiver noise, on the other hand, is additive).
2. The above channel model ignores the phase shift introduced by the channel. A more realistic channel model is one that scales the input signal as well as rotates it by a certain angle, e.g. if s(t) is the transmitted signal then the received signal becomes a*exp(jθ)*s(t).
3. In a realistic channel the transmitter, receiver and/or the environment is in motion, therefore the scaling factor and phase shift are a function of time, e.g. if s(t) is the transmitted signal then the received signal is a(t)*exp(jθ(t))*s(t). Typically, in simulations of wireless communication systems, a(t) has a Rayleigh distribution and θ(t) has a uniform distribution (a minimal sketch of this model is given after this list).
4. Although the above model is quite popular, it can be further improved by introducing temporal correlation in the fading envelope. This can be achieved by Smith’s simulator, which uses a frequency domain approach to characterize the channel. The behavior of the channel is controlled by the Doppler frequency fd: the higher the Doppler frequency, the greater the variation in the channel, and vice versa.
5. Finally, the most advanced wireless channel model is one that considers the channel to be an FIR filter where each tap is defined by the process outlined in (4). The channel thus performs convolution on the signal that passes through it. In the context of LTE there are three channel models that are defined, namely Extended Pedestrian A (EPA), Extended Vehicular A (EVA) and Extended Typical Urban (ETU).
Note: As an afterthought, I have realized that this channel model becomes even more complicated with the introduction of spatial correlation between the antennas of a MIMO system.
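Here is the minimal sketch of the model in (3) promised above; the BPSK signal is an assumption for illustration, and no temporal correlation is imposed (that is what (4) adds).

N = 1e4;                                       % number of symbols
s = sign(randn(N,1));                          % an example BPSK signal (an assumption)
h = sqrt(0.5)*(randn(N,1) + 1j*randn(N,1));    % abs(h) is Rayleigh, angle(h) is uniform
r = h.*s;                                      % multiplicative channel effect (receiver noise not shown)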
We have previously discussed the bit error rate (BER) performance of M-QAM in AWGN. We now discuss the BER performance of M-QAM in Rayleigh fading. The one-tap Rayleigh fading channel is generated from two orthogonal Gaussian random variables with variance of 0.5 each. The complex random channel coefficient so generated has an amplitude which is Rayleigh distributed and a phase which is uniformly distributed. As usual the fading channel introduces a multiplicative effect whereas the AWGN is additive.
The function “QAM_fading” has three inputs (‘n_bits’, ‘M’ and ‘EbNodB’) and one output (‘ber’). The inputs are the number of bits to be passed through the channel, the alphabet size and the Energy per Bit to Noise Power Spectral Density ratio in dB, respectively, whereas the output is the bit error rate (BER).
function[ber]= QAM_fading(n_bits, M, EbNodB)
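% The original body is not listed here; the following is a sketch of one
% possible implementation (it assumes qammod/qamdemod from MATLAB's
% Communications Toolbox, which apply Gray mapping by default).
k      = log2(M);                        % bits per symbol
n_bits = k*floor(n_bits/k);              % use a whole number of symbols
EbNo   = 10^(EbNodB/10);
bits   = randi([0 1], n_bits, 1);        % random input bits
x      = qammod(bits, M, 'InputType', 'bit', 'UnitAveragePower', true);
h      = sqrt(0.5)*(randn(size(x)) + 1j*randn(size(x)));        % one-tap Rayleigh channel
noise  = sqrt(1/(2*k*EbNo))*(randn(size(x)) + 1j*randn(size(x)));
y      = h.*x + noise;                   % multiplicative fading plus AWGN
bits_r = qamdemod(y./h, M, 'OutputType', 'bit', 'UnitAveragePower', true);
ber    = mean(bits ~= bits_r);           % bit error rate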
The bit error rates of the four modulation schemes 4-QAM, 16-QAM, 64-QAM and 256-QAM are shown in the figure below. All modulation schemes use Gray coding, which gives a few dB of margin in the BER performance. As with the AWGN case, each additional bit per symbol requires about 1.5-2 dB in signal to noise ratio to achieve the same BER.
M-QAM Bit Error Rate in Rayleigh Fading
Although not shown here, similar behavior is observed for higher order modulation schemes such as 1024-QAM and 4096-QAM (the gap in the signal to noise ratio for the same BER increases to about 5 dB).
Quadrature Amplitude Modulation has been adopted by most wireless communication standards such as WiMAX and LTE. It provides higher bit rates and consequently higher spectral efficiencies. It is usually used in conjunction with Orthogonal Frequency Division Multiplexing (OFDM) which provides a simple technique to overcome the time varying frequency selective channel.
We have previously discussed the formula for calculating the bit error rate (BER) of QAM in AWGN. We now calculate the same using a simple Monte Carlo Simulation.
function[ber]= QAM_AWGN(n_bits, M, EbNodB)
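% A sketch of one possible body (again an assumption, not the original
% listing; qammod/qamdemod are from the Communications Toolbox and use
% Gray mapping by default).
k      = log2(M);                        % bits per symbol
n_bits = k*floor(n_bits/k);              % use a whole number of symbols
EbNo   = 10^(EbNodB/10);
bits   = randi([0 1], n_bits, 1);        % transmitter: random bits to QAM symbols
x      = qammod(bits, M, 'InputType', 'bit', 'UnitAveragePower', true);
noise  = sqrt(1/(2*k*EbNo))*(randn(size(x)) + 1j*randn(size(x)));
y      = x + noise;                      % channel: AWGN only, no fading
bits_r = qamdemod(y, M, 'OutputType', 'bit', 'UnitAveragePower', true);
ber    = mean(bits ~= bits_r);           % receiver: demodulate and count errors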
The above function has three inputs and one output. The inputs are the number of bits to be passed through the channel, the size of the constellation and the signal to noise ratio in dB. The output is the bit error rate (BER). The simulation can be divided into three sections, namely the transmitter, the channel and the receiver. In this simulation we have used Gray coding, which gives us about a dB of improvement at low to medium signal to noise ratios.
M-QAM Bit Error Rate in AWGN
As seen above, the BER obtained through our simulation matches quite well with the BER obtained through the theoretical formula. Each additional bit per symbol requires about 2 dB extra in signal to noise ratio to achieve the same bit error rate.
Most of us have used the FFT routine in MATLAB. This routine has become increasingly important in the simulation of communication systems as it is used in Orthogonal Frequency Division Multiplexing (OFDM), which is employed in 4G technologies like LTE and WiMAX. We will not go into the theoretical details of the FFT; rather, we will present the MATLAB code for it and leave the theoretical discussion for a later time.
The underlying technique of the FFT algorithm is to divide a big problem into several smaller problems which are much easier to solve and then combine the results in the end.
The routine is organized in three sections: initialization, bit reversal of the input sequence, and the Danielson-Lanczos recombination. The bit-reversal section reorders the data with an index update loop of the form while (m >= 2 && j > m), and the Danielson-Lanczos section combines ever larger sub-transforms inside a while (n > mmax) loop.
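A sketch of the complete routine, adapted from the four1 algorithm in Numerical Recipes in C (a reconstruction for illustration, not the original listing), is given below.

function data = four1(data, nn, isign)
% Radix-2 in-place FFT in the style of the four1 routine from Numerical
% Recipes. data is a real vector of length 2*nn holding the interleaved
% components of an 'nn'-point complex signal (arranged as described in
% the text below); nn must be a power of 2. The sign convention of the
% transform follows isign and the chosen interleaving.
n = 2*nn;
j = 1;
%%%%% BIT REVERSAL %%%%%
for i = 1:2:n-1
    if j > i
        data([j j+1 i i+1]) = data([i i+1 j j+1]);  % swap the two complex pairs
    end
    m = n/2;
    while (m >= 2 && j > m)
        j = j - m;
        m = m/2;
    end
    j = j + m;
end
%%%%% DANIELSON-LANCZOS ALGO %%%%%
mmax = 2;
while (n > mmax)
    istep = 2*mmax;
    theta = isign*2*pi/mmax;          % trigonometric recurrence setup
    wpr   = -2*sin(0.5*theta)^2;
    wpi   = sin(theta);
    wr = 1; wi = 0;
    for m = 1:2:mmax-1
        for i = m:istep:n
            j = i + mmax;             % butterfly between pairs i and j
            tempr = wr*data(j)   - wi*data(j+1);
            tempi = wr*data(j+1) + wi*data(j);
            data(j)   = data(i)   - tempr;
            data(j+1) = data(i+1) - tempi;
            data(i)   = data(i)   + tempr;
            data(i+1) = data(i+1) + tempi;
        end
        wtemp = wr;                   % advance the twiddle factor
        wr = wr*wpr - wi*wpi + wr;
        wi = wi*wpr + wtemp*wpi + wi;
    end
    mmax = istep;
end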
In the above example we have calculated an ‘nn’-point complex FFT of an ‘nn’-point complex time domain signal. The algorithm takes its input in a special arrangement: the ‘nn’-point complex input signal is converted into a ‘2*nn’-point real sequence, where the imaginary components are placed in the odd elements and the real components in the even elements of the input sequence. A similar arrangement applies to the output sequence.
Shown below is the FFT of a complex exponential with a frequency of 100 Hz. The plot is shown from 0 Hz to 1000 Hz, which is the sampling frequency. A signal with multiple frequency components would have to be passed through a Low Pass Filter (LPF) so that the components above 500 Hz (fs/2) are filtered out. When the FFT of a real signal is performed, an image frequency is produced between 500 Hz and 1000 Hz.
Fast Fourier Transform of a Complex Exponential
Here we have discussed the case of a complex input sequence. Simplifications can be made for a real sequence or for special signals such as pure sine and cosine waves. We will discuss these in later posts.
Numerical Recipes in C: The Art of Scientific Computing, Cambridge University Press.
We have previously discussed the problem of detecting two closely spaced sinusoids using the Discrete Fourier Transform (DFT). We assumed that the data set we got was pure, i.e. there was no noise. However, in reality this is seldom the case. There is always some noise corrupting the signal. Let us now see how it affects the detection problem.
We consider Additive White Gaussian Noise (AWGN) as the corrupting source. The noise power is set equal to the combined power of the two sinusoids, i.e. we have an SNR of 0 dB. This is quite a severe case; usually the noise power is a few dB below the signal power. We are also bounded by the number of samples, N=64, giving us a resolution of 15.87 Hz.
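A sketch of the code for this experiment (not the original listing; the tone frequencies of 100 Hz and 120 Hz and fs = 1000 Hz are carried over from the earlier two-tone example):

fs = 1000; N = 64;                               % sampling frequency and sample count
n  = 0:N-1;
x  = cos(2*pi*100*n/fs) + cos(2*pi*120*n/fs);    % two closely spaced tones, total power 1
w  = randn(1,N);                                 % AWGN with unit power, i.e. 0 dB SNR
y  = x + w;
Y  = zeros(1,N);
for k = 0:N-1                                    % direct DFT of the noisy signal
    Y(k+1) = sum(y.*exp(-1j*2*pi*k*n/N));
end
semilogy(n/N, abs(Y))                            % logarithmic scale
xlabel('Normalized Frequency')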
The data is plotted on a logarithmic scale so that we can compare the signal and noise power levels.
DFT of Two Tones in Noise
It is observed that we still have two peaks around the required frequency bins but there are also a number of false peaks. These peaks are around 10 dB lower than the signal peaks and should not cause a false detection. So the signal to noise ratio of 0 dB in the time domain is translated to a signal to noise ratio of about 10 dB in the frequency domain (this can be realized using an appropriate filter).
In the previous post we had introduced the Discrete Fourier Transform (DFT) as a method to perform the spectral analysis of a time domain signal. We now discuss an important property of the DFT, its spectral resolution i.e. its ability to resolve two signals with similar spectral content.
Initially one might think that increasing the sampling frequency would increase the spectral resolution, but this is totally incorrect. In fact, if the sampling frequency is increased while keeping the number of time domain samples the same, the resolution actually decreases. So how do we calculate the spectral resolution? One simple way is to calculate the difference between two frequency bins as fs/(N-1), or 1/((N-1)*Ts). Simply put, the resolution in the frequency domain is the inverse of the signal duration in the time domain.
So let us now calculate the DFT of two closely spaced sine waves, keeping the sampling frequency the same and changing the number of time domain samples (only the result for N=64 is shown here). We again list down the code used to calculate the DFT.
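A sketch along these lines (not the original listing; the two tones at 100 Hz and 120 Hz with fs = 1000 Hz and N = 64 are taken from the discussion below):

fs = 1000; N = 64;
n  = 0:N-1;
x  = cos(2*pi*100*n/fs) + cos(2*pi*120*n/fs);    % two closely spaced tones
X  = zeros(1,N);
for k = 0:N-1                                    % direct DFT
    X(k+1) = sum(x.*exp(-1j*2*pi*k*n/N));
end
plot(n/N, abs(X))
xlabel('Normalized Frequency')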
We first do some quick math to find the spectral resolution: fs/(N-1) = 1000/63 = 15.87 Hz.
Now the two tones are spaced 20 Hz apart (100 Hz and 120 Hz), which is greater than the resolution, so we can predict that the two tones would be detected successfully. The result of the DFT operation on the composite signal is shown below.
DFT of Two Tones
It is observed that although the two tones are detected, they are not exactly at the desired frequencies (0.10 and 0.12). Secondly, the amplitudes of the two tones are different although the time domain signals had equal amplitudes. Both these phenomena are due to the fact that we only have a limited number of frequency bins (N=64), due to which the resulting spectrum is only an estimate of the true spectrum.
There are better techniques than the DFT to separate two closely spaced sinusoids; these are known as super resolution spectral techniques and will be discussed some other time.
Discrete Fourier Transform or DFT is a mathematical operation that transforms a time domain signal to the frequency domain. It is usually implemented using the Fast Fourier Transform (FFT). The computational complexity of the DFT is N^2 whereas it is N*log2(N) for the FFT, where N is the number of samples of the time domain signal. Mathematically, the DFT is written as:

X(k) = sum(n=0:N-1) x(n)*exp(-j*2*pi*n*k/N),  k = 0, 1, ..., N-1
and this operation can be easily implemented in MATLAB as shown below.
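A sketch of such an implementation (for the 100 Hz cosine discussed below, sampled at fs = 1000 Hz, with N = 128 samples):

fs = 1000;                         % sampling frequency (Hz)
f  = 100;                          % tone frequency (Hz)
N  = 128;                          % number of time domain samples
n  = 0:N-1;
x  = cos(2*pi*f*n/fs);             % time domain signal
X  = zeros(1,N);
for k = 0:N-1                      % one frequency bin per loop iteration
    X(k+1) = sum(x.*exp(-1j*2*pi*k*n/N));
end
plot(n/N, abs(X))                  % frequency axis normalized by fs
xlabel('Normalized Frequency')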
The DFT of the cosine function with a frequency of 100 Hz is shown below. It must be noted that the frequency in the figure below is normalized by the sampling frequency. So the peaks in the graph occur at 0.1*1000 = 100 Hz and 0.9*1000 = 900 Hz (image frequency). Actually, if you zoom in you will see that the peaks are not exactly at 0.1 and 0.9 but are slightly offset, as the frequency bins are not located at exactly those frequencies.
DFT of a cosine wave
In the case of multiple sinusoids the resolution of the DFT becomes important: the higher the sample size (i.e. the longer the duration of the time domain signal), the higher the resolution of the DFT. As stated earlier, the DFT is a computationally complex operation and usually the Fast Fourier Transform (FFT) is used to compute the frequency domain behavior. We will discuss this in the following posts.
A simple way to expedite the process of DFT calculation is to use matrix manipulation instead of a “for loop”. This is shown below.
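A sketch of this approach (same signal and parameters as the “for loop” version above; the loop is replaced by one matrix-vector product):

fs = 1000; f = 100; N = 128;
n  = 0:N-1;
x  = cos(2*pi*f*n/fs).';               % time domain signal as a column vector
W  = exp(-1j*2*pi*(n.'*n)/N);          % N x N DFT matrix, W(k+1,m+1) = exp(-j*2*pi*k*m/N)
X  = W*x;                              % all N bins in a single multiplication
plot(n/N, abs(X))
xlabel('Normalized Frequency')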
Although matrix manipulation makes for nice and clean code, it was found that there was no improvement in the computation time: both the “for loop” code and the “matrix multiplication” code took about 0.35 seconds to execute. However, increasing the sample size from 128 to 1024 results in significantly better computation time for the latter scheme.
We have previously looked at the antennas inside a cell phone. Now we look at another important component of a cell phone: the mobile station modem (MSM). One of the most popular MSMs in cell phones today is the Qualcomm Snapdragon S4. The details of this MSM are given in the table below.
Qualcomm Snapdragon S4
As can be seen from the above table, this small chipset (it can easily fit on a fingertip) packs a punch as far as processing power is concerned. It supports a number of wireless standards, from GSM/GPRS to LTE and from CDMA 2000 to TD-SCDMA. One of its close competitors is the NVIDIA Tegra 3, which has four ARM Cortex A9 cores (compared to the Snapdragon’s two).
Qualcomm Snapdragon – S4