Tag Archives: OFDM

BER for BPSK-OFDM in Frequency Selective Channel

OFDM Tx-Rx Block Diagram

As the data rates supported by wireless networks continue to rise, the bandwidth requirements also continue to increase (although spectral efficiency has also improved). Remember GSM technology, which supported 125 channels of 200 kHz each, with each channel further divided among eight users using TDMA. Move on to LTE, where the channel bandwidth can be as high as 20 MHz (1.4 MHz, 3 MHz, 5 MHz, 10 MHz, 15 MHz and 20 MHz are standardized).

This advancement poses a unique challenge referred to as frequency selective fading: different parts of the signal spectrum see a different channel (different amplitude and different phase offset). Looked at in the time domain, the larger bandwidth means a shorter symbol period, causing intersymbol interference (as time-delayed copies of the signal overlap on arrival at the receiver).

The solution to this problem is OFDM, which divides the wideband signal into smaller components, each having a bandwidth of a few kHz. Each of these components experiences a flat channel. To keep equalization simple, a cyclic prefix (CP) is added in the time domain so that the effect of the fading channel appears as a circular convolution, reducing frequency domain equalization to a simple division on each subcarrier.

Shown below is Python code that calculates the bit error rate (BER) of BPSK-OFDM, which turns out to be the same as that of simple BPSK in a Rayleigh flat fading channel. There is a caveat, however: by inserting a CP we transmit more energy than simple BPSK, 1.25 (160/128) times more to be exact. If this excess energy is accounted for, the performance of BPSK-OFDM is about 1 dB (10*log10(1.25)) worse than that of simple BPSK in a Rayleigh flat fading channel.
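
The listing below is a minimal sketch of such a simulation. The 10-tap channel, the 10,000 OFDM symbols per point and the Eb/No range are assumed values; the 128 subcarriers and 32-sample CP follow from the 160/128 ratio mentioned above, and perfect channel knowledge is assumed at the receiver.

import numpy as np

# BPSK-OFDM BER sketch (assumed parameters): 128 subcarriers, CP of 32 samples,
# 10-tap quasi-static Rayleigh channel, frequency domain equalization by division
n_fft, n_cp, n_tap = 128, 32, 10
n_sym = 10000                        # OFDM symbols per Eb/No point (assumed)
EbNo_dB = np.arange(0, 35, 5)        # Eb/No range in dB (assumed)
ber = np.zeros(len(EbNo_dB))

for idx, ebno_db in enumerate(EbNo_dB):
    ebno = 10**(ebno_db/10)
    n_err = 0
    for _ in range(n_sym):
        # BPSK symbols on the subcarriers (frequency domain)
        bits = np.random.randint(0, 2, n_fft)
        X = 2*bits - 1

        # IFFT (scaled to unit average power per sample) and cyclic prefix
        x = np.fft.ifft(X)*np.sqrt(n_fft)
        x_cp = np.concatenate((x[-n_cp:], x))

        # Quasi-static multi-tap Rayleigh channel (unit total average power)
        h = (np.random.randn(n_tap) + 1j*np.random.randn(n_tap))/np.sqrt(2*n_tap)
        y = np.convolve(x_cp, h)

        # AWGN; Eb counts only the useful (non-CP) energy, as discussed above
        w = (np.random.randn(len(y)) + 1j*np.random.randn(len(y)))/np.sqrt(2)
        y = y + w/np.sqrt(ebno)

        # Remove CP, take FFT, and equalize each subcarrier by a simple division
        Y = np.fft.fft(y[n_cp:n_cp+n_fft])/np.sqrt(n_fft)
        H = np.fft.fft(h, n_fft)
        bits_hat = (np.real(Y/H) > 0).astype(int)
        n_err += np.sum(bits_hat != bits)

    ber[idx] = n_err/(n_sym*n_fft)

print(np.c_[EbNo_dB, ber])           # matches BPSK over flat Rayleigh fading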

Note:

  1. Although we have shown the channel as a multiplicative effect in the figure above, this is only true for a single-tap channel. For a multi-tap channel (such as the one used in the code above) the effect of the channel is that of a filter, which performs a convolution operation on the transmitted signal.
  2. We have used a baseband model in our simulation and the accompanying figure. In reality the transmitted signal is upconverted before transmission by the antennas.
  3. The above model can be easily modified for any modulation scheme such as QPSK or 16-QAM. The main difference would be that the signal would have both a real and an imaginary part; much of the simulation would remain the same. This will be the subject of a future post. For a MATLAB implementation of 64-QAM OFDM see the following post (64-QAM OFDM).
  4. The serial-to-parallel and parallel-to-serial conversion shown in the above figure was not required, as the simulation was done symbol by symbol (one OFDM symbol in the time domain represents 128 BPSK symbols in the frequency domain).
  5. The channel model in the above simulation is quasi-static, i.e. it remains constant over one OFDM symbol and then changes independently for the next, without any memory.

Shannon Capacity CDMA vs OFDMA

We have previously discussed the Shannon Capacity of CDMA and OFDMA; here we will discuss it again in a bit more detail. Let us assume that we have 20 MHz of bandwidth for both systems, which is divided amongst 20 users. For OFDMA we assume that each user gets 1 MHz of bandwidth and that there are no guard bands or pilot carriers. For CDMA we assume that each user utilizes the full 20 MHz of bandwidth. We can say that for OFDMA each user has a dedicated channel, whereas for CDMA the channel is shared between 20 simultaneous users.

We know that Shannon Capacity is given as

C=B*log2(1+SNR)

or in the case of CDMA

C=B*log2(1+SINR)

where ‘B’ is the bandwidth available to the user and SINR is the signal to interference plus noise ratio. For OFDMA the SNR is given as

SNR=Pu/(B*No)

where ‘Pu’ is the signal power of a single user and ‘No’ is the Noise Power Spectral Density. For CDMA the calculation of the SINR is a bit more complicated, as we have to take into account the Multiple Access Interference. If the total number of users is ‘u’, the SINR is calculated as

SINR=Pu/(B*No+(u-1)*Pu)

The code given below plots the capacity of CDMA and OFDMA as a function of Noise Power Spectral Density ‘No’.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% CAPACITY OF CDMA and OFDMA
% u - Number of users
% Pu - Power of a single user
% No - Noise Power Spectral Density
%
% Copyright RAYmaps (www.raymaps.com)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

clear all
close all

u=20;
Pu=1;
No=1e-8:1e-8:1e-6;

B=20e6;    % CDMA: all 20 users share the full 20 MHz band
C_CDMA=u*B*log2(1+Pu./(B*No+(u-1)*Pu));

B=1e6;     % OFDMA: each user gets a dedicated 1 MHz channel
C_OFDMA=u*B*log2(1+Pu./(B*No));

plot(No,C_CDMA/1e6);hold on
plot(No,C_OFDMA/1e6,'r');hold off
xlabel('Noise Power Spectral Density (No)')
ylabel('Capacity (Mbps)')
legend('CDMA','OFDMA')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Shannon Capacity of CDMA and OFDMA

We see that the capacity of OFDMA is much more sensitive to noise than that of CDMA. In the low noise region the capacity of OFDMA is much higher than that of CDMA, but as the noise increases the capacities of the two schemes converge. In fact, as the noise PSD is increased further, the two curves completely overlap. It can therefore be concluded that OFDMA is the preferred technique when operating in the high SNR regime.

Udemy Course

Introduction to Wireless Communications

• In this course you will learn the basic principles of wireless communications from 1G to 5G and beyond. You will learn about frequency reuse, capacity, channel coding, modulation and demodulation, OFDM, MIMO, and a host of other topics.

• This course is for you if you are a student who has just started learning about wireless communications, or if you are working in the field and want to get a better handle on the fundamental concepts of wireless communications.

Here is the link to the course.

Does Shannon Capacity Increase by Dividing a Frequency Band into Narrow Bins

Somebody recently asked me this question: “Does Shannon Capacity increase by dividing a frequency band into narrow bins?” To be honest I was momentarily confused and thought that this might be the case, since many modern digital communication systems, such as LTE, do use narrow frequency bins. But on closer inspection I found that the Shannon Capacity does not change; in fact it remains exactly the same. The reasoning is as follows.

Shannon Capacity is calculated as:

C=B*log2(1+SNR)

or

C=B*log2(1+P/(B*No))

Now if the bandwidth ‘B’ is divided into 10 equal blocks, then the transmit power ‘P’ for each block is also divided by 10 to keep the total transmit power for the entire band constant. The SNR of each block is then (P/10)/((B/10)*No), which is the same as P/(B*No), so the SNR remains unchanged. The total capacity for the 10 blocks is therefore calculated as:

C=10*(B/10)*log2(1+P/(B*No))

So the Shannon Capacity for the entire band remains the same.
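
A quick numerical check makes this concrete. The values of B, P and No below are assumed example numbers, not taken from any particular system:

import numpy as np

# Assumed example values: 20 MHz band, 1 W transmit power, No = 1e-9 W/Hz
B, P, No = 20e6, 1.0, 1e-9

C_full = B*np.log2(1 + P/(B*No))                      # one wide channel
C_bins = 10*(B/10)*np.log2(1 + (P/10)/((B/10)*No))    # ten narrow bins

print(C_full/1e6, C_bins/1e6)   # both print the same capacity (about 113.4 Mbps)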

PS: The reason for the narrower channels is that a narrow channel appears relatively flat in the frequency domain, and the process of equalization is thus simplified (a simple multiplication/division would do).

Note: ‘No’ is the Noise Power Spectral Density and ‘B*No’ is the Noise Power.

Fast Fourier Transform – Code

Most of us have used the FFT routine in MATLAB. This routine has become increasingly important in the simulation of communication systems as it is used in Orthogonal Frequency Division Multiplexing (OFDM), which is employed in 4G technologies like LTE and WiMAX. We will not go into the theoretical details of the FFT; rather, we will present the MATLAB code for it and leave the theoretical discussion for a later time.

The underlying technique of the FFT algorithm is to divide a big problem into several smaller problems that are much easier to solve, and then to combine the results at the end.
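
This divide and conquer idea can be written recursively in a few lines of Python; the sketch below is for illustration only, while the full iterative routine that follows is in MATLAB and is based on Numerical Recipes [1].

import numpy as np

def fft_recursive(x):
    # Radix-2 decimation-in-time FFT; the length of x must be a power of two
    N = len(x)
    if N == 1:
        return x
    # Solve two half-size problems ...
    E = fft_recursive(x[0::2])
    O = fft_recursive(x[1::2])
    # ... then combine them using the twiddle factors
    W = np.exp(-2j*np.pi*np.arange(N//2)/N)
    return np.concatenate([E + W*O, E - W*O])

# Quick check against numpy's built-in FFT
x = np.random.randn(16) + 1j*np.random.randn(16)
print(np.allclose(fft_recursive(x), np.fft.fft(x)))   # True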

%%%%% INITIALIZATION %%%%%
clear all
close all

fm=100;       % frequency of the complex exponential (Hz)
fs=1000;      % sampling frequency (Hz)
Ts=1/fs;
nn=1024;      % number of complex points (must be a power of two)
isign=1;      % transform direction flag

t=0:Ts:(nn-1)*Ts;

% Pack the complex signal into a real array of length 2*nn:
% imaginary parts in the odd elements, real parts in the even elements
x_c=exp(1i*2*pi*fm*t);
x(1:2:2*nn-1)=imag(x_c);
x(2:2:2*nn)=real(x_c);

%%%%% BIT REVERSAL %%%%%
n=2*nn;
j=1;
for i=1:2:n-1
    if(j>i)
        temp=x(j);
        x(j)=x(i);
        x(i)=temp;

        temp=x(j+1);
        x(j+1)=x(i+1);
        x(i+1)=temp;
    end
    m=nn;
    while (m >= 2 && j > m)
        j=j-m;
        m=m/2;
    end
    j=j+m;
end

%%%%% DANIELSON-LANCZOS ALGO %%%%%
mmax=2;
while (n > mmax)
    istep=2*mmax;
    theta=isign*(2*pi/mmax);
    wtemp=sin(0.5*theta);
    wpr=-2*wtemp*wtemp;
    wpi=sin(theta);
    wr=1.0;
    wi=0.0;
    for m=1:2:mmax-1
        for i=m:istep:n
            j=i+mmax;
            tempr=wr*x(j)-wi*x(j+1);
            tempi=wr*x(j+1)+wi*x(j);
            x(j)=x(i)-tempr;
            x(j+1)=x(i+1)-tempi;
            x(i)=x(i)+tempr;
            x(i+1)=x(i+1)+tempi;
        end
        wtemp=wr;
        wr=wr*wpr-wi*wpi+wr;
        wi=wi*wpr+wtemp*wpi+wi;
    end
mmax=istep;
end
F=x(2:2:n)+1i*x(1:2:n-1);
plot((0:nn-1)*fs/nn,abs(F))

In the above example we have calculated an ‘nn’-point complex FFT of an ‘nn’-point complex time domain signal. The algorithm takes its input in a special arrangement: the ‘nn’-point complex input signal is converted into a ‘2*nn’-point real sequence where the imaginary components are placed in the odd elements and the real components in the even elements of the input sequence. A similar arrangement is used for the output sequence.
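
A small illustration of this packing convention (written here with numpy purely for clarity; it is not part of the routine above):

import numpy as np

# A complex sequence of length nn is stored as 2*nn reals: imaginary parts in
# the odd positions and real parts in the even positions (counting from 1,
# as in the MATLAB listing above)
nn = 8
x_c = np.exp(2j*np.pi*0.1*np.arange(nn))

packed = np.empty(2*nn)
packed[0::2] = np.imag(x_c)    # MATLAB elements 1, 3, 5, ... (odd)
packed[1::2] = np.real(x_c)    # MATLAB elements 2, 4, 6, ... (even)

# The output is unpacked with the same convention
unpacked = packed[1::2] + 1j*packed[0::2]
print(np.allclose(unpacked, x_c))   # True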

Shown below is the FFT of a complex exponential with a frequency of 100 Hz. The plot is shown from 0 Hz to 1000 Hz, which is the sampling frequency. A signal with multiple frequencies would have to be passed through a Low Pass Filter (LPF) so that the signal components above 500 Hz (fs/2) are filtered out. When the FFT of a real signal is performed, an image frequency is produced between 500 Hz and 1000 Hz.

Fast Fourier Transform of a Complex Exponential

Here we have discussed the case of a complex input sequence. Simplifications can be made for a real sequence or for special signals such as pure sine and cosine waves. We will discuss these in later posts.

[1] Numerical Recipes in C: The Art of Scientific Computing, Cambridge University Press.

Detecting Sinusoids in Noise

We have previously discussed the problem of detecting two closely spaced sinusoids using the Discrete Fourier Transform (DFT). We assumed that the data set we got was pure, i.e. there was no noise. However, in reality this is seldom the case; there is always some noise corrupting the signal. Let us now see how it affects the detection problem.

We consider Additive White Gaussian Noise (AWGN) as the corrupting source. The noise power is set equal to the power of each of the two sinusoids, i.e. we have a per-tone SNR of 0 dB. This is quite a severe case; the noise power is usually a few dB below the signal power. We are also bounded by the number of samples, N=64, giving us a resolution of 15.87 Hz.

clear all
close all

fm1=100;
fm2=120;
fs=1000;
Ts=1/fs;
N=64;
t=0:Ts:(N-1)*Ts;

% Two unit-power tones plus unit-variance AWGN (0 dB SNR per tone)
x1=sqrt(2)*cos(2*pi*fm1*t);
x2=sqrt(2)*cos(2*pi*fm2*t);
wn=randn(1,N);
x=x1+x2+wn;

% N-point DFT computed directly from the twiddle factor W
W=exp(-1i*2*pi/N);

n=0:N-1;
k=0:N-1;

X=x*(W.^(n'*k));

plot(k/N,20*log10(abs(X)))
xlabel ('Normalized Frequency')
ylabel ('X(dB)')

The data is plotted on a logarithmic scale so that we can compare the signal and noise power levels.

DFT of Two Tones in Noise

It is observed that we still have two peaks around the required frequency bins, but there are also a number of spurious peaks. These noise peaks are around 10 dB lower than the signal peaks and should not cause a false detection. So the signal to noise ratio of 0 dB in the time domain translates to a signal to noise ratio of about 10 dB in the frequency domain (a gain that could also be realized in the time domain using an appropriately narrow filter).