WINNER-II Path Loss Model

In simple terms the path loss is the difference between the transmitted power and the received power of a wireless communication system. This may range from tens of dB to more than 100 dB. For example, if the transmitted power of a wireless communication system is 30 dBm and the received power is -90 dBm then the path loss is calculated as 30-(-90)=120 dB. Path loss is sometimes categorized as a large scale effect (in contrast to fading, which is a small scale effect).

According to the WINNER-II model the path loss can be calculated as:

PL = A*log10(d) + B + C*log10(fc/5) + X

Here d is the separation between the transmitter and receiver in meters, fc is the frequency in GHz, A is the path loss exponent, B is the intercept and C is the frequency dependent parameter. X is the environment specific parameter such as path loss due to a wall. PLfree is the path loss in a free space line of sight environment (here A=20, B=46.4 and C=20).

The table below describes the different environments defined in the WINNER-II model. Once an environment is selected the path loss parameters A, B and C can be selected from the table further down e.g. A1 is the in-building scenario with A=18.7, B=46.8 and C=20 for the LOS case. The PL for a T-R separation of 100 m and frequency of 2 GHz is calculated as:

PL=18.7*log10(100)+46.8+20*log10(2/5)=76.24 dB
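The example above can be sketched in a few lines of Python (the function name and signature are mine, not part of the model):

```python
import math

def winner2_path_loss(d_m, fc_ghz, A, B, C, X=0.0):
    """WINNER-II path loss in dB.

    d_m: transmitter-receiver separation in meters
    fc_ghz: carrier frequency in GHz
    A: path loss exponent term, B: intercept, C: frequency dependence
    X: environment specific loss (e.g. wall penetration), 0 if none
    """
    return A * math.log10(d_m) + B + C * math.log10(fc_ghz / 5.0) + X

# A1 in-building LOS parameters: A=18.7, B=46.8, C=20, at d=100 m, fc=2 GHz
pl = winner2_path_loss(100, 2.0, A=18.7, B=46.8, C=20)  # ≈ 76.24 dB
```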

A separate equation for the path loss is given where the parameters A, B and C are not sufficient to describe the scenario.

Note:

1. Here CG is the concept group that developed the particular scenario. This is either Local Area (LA), Metropolitan Area (MA) or Wide Area (WA).

2. For more details visit:

L. Hentilä, P. Kyösti, M. Käske, M. Narandzic, and M. Alatossava. (2007, December). MATLAB implementation of the WINNER Phase II Channel Model ver1.1 [Online]. Available: https://www.ist-winner.org/phase_2_model.html

Soft Frequency Reuse in LTE

Frequency Reuse is a well known concept that has been applied to wireless systems over the past two decades e.g. in GSM systems. As the name suggests, Frequency Reuse implies using the same frequencies over different geographical areas. If we have a 25 MHz band then we can have 125 GSM channels and 125*8=1000 time multiplexed users in a given geographical area. Now if we want to increase the number of users we would have to reuse the same frequency band in a geographically separated area. The technique usually adopted is to use a fraction of the total frequency band in each cell such that no two neighboring cells use the same frequencies. Typically the frequency band is divided among a cluster of 3 or 7 cells.

The division of the frequency band into smaller chunks reduces the system capacity e.g. one cell with 25 MHz bandwidth would have much higher capacity than 7 cells having 3.5 MHz each. To overcome this problem a frequency reuse of 1 has been proposed i.e. each cell has (nearly) the full system bandwidth. The problem of co-channel interference at the cell boundaries is resolved by dedicating a small chunk of the available spectrum to the cell edges.

In Soft Frequency Reuse (SFR) the cell area is divided into two regions: a central region where the full frequency band is available, and a cell edge area where only a small fraction of the spectrum is available. The spectrum dedicated to the cell edge may also be used in the central region if it is not being used at the cell edge. The lack of spectrum at the cell edge may result in a much reduced Shannon Capacity for that region. This is overcome by allocating high power carriers to the users in this region, thus improving the SINR and the Shannon Capacity.

Note:
1. The Signal to Interference and Noise Ratio is given as:
SINR=Signal Power/(Intercell Interference+Intracell Interference+AWGN Noise)
2. Typically the term capacity was used to describe the number of voice channels (or users) that a system can support. But with modern digital communication systems it usually refers to the Shannon Capacity that can be achieved (in bits/sec/Hz).
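The SINR expression and its link to Shannon Capacity can be sketched numerically; the power values below are purely illustrative assumptions, not measurements:

```python
import math

def shannon_capacity(sinr_linear):
    """Shannon capacity in bits/sec/Hz for a linear (non-dB) SINR."""
    return math.log2(1 + sinr_linear)

# Illustrative received powers in watts (assumed values)
signal = 1e-9       # desired signal power
intercell = 4e-10   # inter-cell interference
intracell = 0.0     # intra-cell interference (zero for orthogonal allocation)
noise = 1e-10       # AWGN power

sinr = signal / (intercell + intracell + noise)  # = 2.0 (about 3 dB)
cap = shannon_capacity(sinr)                     # ≈ 1.58 bits/sec/Hz
```

Raising the carrier power at the cell edge increases `signal` relative to the interference and noise terms, which is exactly the SFR mechanism described above.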

[1] Yiwei Yu, Eryk Dutkiewicz, Xiaojing Huang, Markus Mueck and Gengfa Fang, “Performance Analysis of Soft Frequency Reuse for Inter-cell Interference Coordination in LTE Networks”, ISCIT 2010.

WiMAX Path Loss and Antenna Height

As discussed previously the SUI (Stanford University Interim) model can be used to calculate the path loss of a WiMAX link. The SUI model is given as:

PL = A + 10*n*log10(d/do) + Xf + Xh + s

It has five components:

1. The free space path loss (A) up to the reference distance ‘do’.
2. Additional path loss for distance ‘d’ with path loss exponent ‘n’.
3. Additional path loss (Xf) for frequencies above 2000 MHz.
4. Path gain (Xh) for receive antenna heights greater than 2 m.
5. Shadowing factor (s) with a lognormal distribution.

The most important factor in this equation is the distance dependent path loss. The impact of this factor is controlled by the path loss exponent ‘n’. It is well known that in free space the path loss exponent has a value of 2. In more realistic channels its value ranges anywhere from 2 to 6. For SUI model the path loss exponent is calculated as:

n=a-(b*hb)+(c/hb)

where a, b and c are SUI model specific parameters. It is obvious that the path loss exponent decreases as the base station antenna height ‘hb’ increases. The path loss exponent for various antenna heights is shown below.
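The exponent formula can be evaluated for a range of antenna heights as a quick sketch. The terrain parameters a, b and c below are commonly cited SUI values and are assumed here, since the parameter table is not reproduced in the text:

```python
def sui_exponent(hb, a, b, c):
    """SUI path loss exponent n = a - b*hb + c/hb, with hb in meters."""
    return a - b * hb + c / hb

# Commonly cited SUI terrain parameters (a, b, c) -- assumed values
terrains = {"A": (4.6, 0.0075, 12.6),
            "B": (4.0, 0.0065, 17.1),
            "C": (3.6, 0.005, 20.0)}

for name, (a, b, c) in terrains.items():
    n_vals = [round(sui_exponent(hb, a, b, c), 2) for hb in (10, 30, 80)]
    print(name, n_vals)  # exponent falls as the antenna height rises
```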

Path Loss Exponent

It is observed that as the base station antenna height is varied from 10 m to 80 m the path loss exponent for the three scenarios decreases from around 5.5-6.0 to 3.5-4.5. Basically what this means is that for higher base station antenna heights the cell radius would be larger. However we need to be careful when making this statement. Higher antenna heights can also result in a weak signal area close to the base station. This is where the antenna downward tilt becomes an important factor. Antenna downward tilt usually has a value of around 5-10 degrees. It is somewhat surprising that although it is such an important factor none of the well known empirical models take it into account.

Note: SUI Model was initially formulated based upon the data collected by AT&T Wireless across the United States in 95 existing macrocells at 1.9 GHz.

WiMAX Path Loss Calculation

Calculation of the path loss is fundamental to Wireless System Design. There are many models available for calculating the path loss such as Okumura Model, Hata Model, COST-231 Model and more recently the SUI (Stanford University Interim) Model. The SUI Model has been specifically proposed for Broadband Wireless Access Systems such as WiMAX. It defines three types of environments namely A, B and C which are equivalent to the urban, suburban and rural environments defined in the earlier models. According to this model the path loss can be calculated as:

PL=A+10*n*log10(d/do)+Xf+Xh+s

where

n=a-(b*hb)+(c/hb)
A=20*log10(4*pi*do/lambda)
Xf=6.0*log10(f/2000)
Xh=-10.8*log10(hr/2) for A&B
Xh=-20.0*log10(hr/2) for C

and

frequency of operation = f > 2000 MHz
transmit receive separation = d = 100 m to 8000 m
reference distance = do = 100 m
base station antenna height = hb = 10 m to 80 m
receive antenna height = hr = 2 m to 10 m
shadowing factor with lognormal distribution = s = 8.2 dB to 10.6 dB

The values of the parameters a, b and c for the three environments are given in the table below.

Doing a quick calculation for f=2500 MHz, hb=30 m, hr=2 m, s=8.2 dB, do=100 m and d=1000 m gives us a path loss of 137.13 dB for the Type-A channel. Increasing the frequency to 3500 MHz (another WiMAX band) increases the path loss to 140.93 dB i.e. there is a 3.8 dB increase in the path loss.
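The calculation above can be reproduced with a short sketch. As before, the terrain parameters (a, b, c) are commonly cited SUI values, assumed here because the table is not shown in the text:

```python
import math

def sui_path_loss(d, f_mhz, hb, hr, s, do=100.0, terrain="A"):
    """SUI path loss in dB. d, do, hb, hr in meters; f_mhz in MHz; s in dB."""
    a, b, c = {"A": (4.6, 0.0075, 12.6),
               "B": (4.0, 0.0065, 17.1),
               "C": (3.6, 0.005, 20.0)}[terrain]
    n = a - b * hb + c / hb                       # path loss exponent
    lam = 3e8 / (f_mhz * 1e6)                     # wavelength in meters
    A = 20 * math.log10(4 * math.pi * do / lam)   # free space loss up to do
    Xf = 6.0 * math.log10(f_mhz / 2000.0)         # correction for f > 2000 MHz
    if terrain == "C":
        Xh = -20.0 * math.log10(hr / 2.0)         # receive antenna height gain
    else:
        Xh = -10.8 * math.log10(hr / 2.0)
    return A + 10 * n * math.log10(d / do) + Xf + Xh + s

pl_2500 = sui_path_loss(1000, 2500, hb=30, hr=2, s=8.2)  # ≈ 137.13 dB
pl_3500 = sui_path_loss(1000, 3500, hb=30, hr=2, s=8.2)  # ≈ 140.93 dB
```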

So to recap the path loss given by the SUI model is composed of 5 elements:

1. The free space path loss (A) up to the reference distance of ‘do’.
2. Additional path loss for distance ‘d’ with path loss exponent ‘n’.
3. Additional path loss (Xf) for frequencies above 2000 MHz.
4. Path gain (Xh) for receive antenna heights greater than 2 m.
5. Shadowing factor (s) with a lognormal distribution.

LTE Path Loss at 700 MHz

In the previous post we compared the path loss of LTE at 728 MHz and 1805 MHz in a free space line of sight channel. This is a very simplistic channel model which tells us that the ratio of the received signal strengths at these frequencies can be found simply as:

(f1/f2)^2=(1805/728)^2=6.15

That is the received signal strength at 728 MHz is 6.15 times higher than the received signal strength at 1805 MHz.
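This free space ratio can be checked in a couple of lines (in free space the received power scales as 1/f^2):

```python
# Received power in free space scales as 1/f^2, so the signal strength
# at 728 MHz relative to 1805 MHz is (1805/728)^2.
f_low = 728.0    # MHz
f_high = 1805.0  # MHz
ratio = (f_high / f_low) ** 2
print(round(ratio, 2))  # 6.15
```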

Now let us consider a more realistic channel model known as the COST-231 model. According to this model the path loss (difference between the transmit power and receive power) is given as:

L=46.3+33.9*log10(f)-13.82*log10(ht)-a+(44.9-6.55*log10(ht))*log10(d)+C

where

f=frequency in MHz (1500 MHz – 2000 MHz)

ht=base station antenna height in m (30 m – 200 m)

hr=mobile station antenna height in m (1 m – 10 m)

d=transmit receive separation in km (1 km – 20 km)

C=3 dB for metropolitan centres

and mobile station antenna correction factor is given as:

a=3.2*(log10(11.75*hr))^2-4.97

Using the above equations with ht=30 m, hr=1 m and d=1 km the path loss at 728 MHz and 1805 MHz is found to be 127.22 dB and 140.59 dB respectively i.e. there is a gain of 13.37 dB when using the lower frequency. In simpler terms the received signal at 728 MHz would be 21.72 times stronger than the signal at 1805 MHz.
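The COST-231 calculation can be sketched as follows, with all logarithms taken base-10 as the model specifies (the function name is mine):

```python
import math

def cost231_path_loss(f_mhz, ht, hr, d_km, C=3.0):
    """COST-231 (Hata extension) path loss in dB.

    f_mhz: frequency in MHz, ht: base station antenna height (m),
    hr: mobile antenna height (m), d_km: T-R separation (km),
    C: 3 dB for metropolitan centres, 0 dB otherwise.
    """
    a = 3.2 * (math.log10(11.75 * hr)) ** 2 - 4.97   # mobile antenna correction
    return (46.3 + 33.9 * math.log10(f_mhz) - 13.82 * math.log10(ht) - a
            + (44.9 - 6.55 * math.log10(ht)) * math.log10(d_km) + C)

l_728 = cost231_path_loss(728, ht=30, hr=1, d_km=1)
l_1805 = cost231_path_loss(1805, ht=30, hr=1, d_km=1)
gain = l_1805 - l_728   # ≈ 13.37 dB in favor of the lower frequency
```

Note that the frequency-dependent gain comes entirely from the 33.9*log10(f) term, so it does not depend on ht, hr or d.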

Such a remarkable improvement in signal strength, or in signal to noise ratio (SNR), has the potential of increasing the throughput fourfold. For example at an SNR of 1.5 dB QPSK 1/2 would give a throughput of 6.00 Mbps whereas at an SNR of 14.7 dB a modulation and coding scheme (MCS) of 64QAM 2/3 would result in a throughput of 24.01 Mbps.

Modulation Coding Schemes

Ray-Tracing for Network Planning-II

It’s very easy to get lost in the jargon when selecting a simulation tool for planning your wireless network. You will be faced with complex terminology which may not make much sense at first. At one end of the spectrum are solutions based on simple empirical models while at the other end are solutions based on ray-tracing techniques. Empirical models are based on measurement data and are your best bet if you want a quick and cheap solution, whereas ray-tracing techniques are based on the laws of physics and promise more accurate results. In principle ray-tracing techniques are quite simple: just transmit a bunch of rays in all directions and see how they behave. However when the number of rays and their interactions becomes large the simulation time may become prohibitively long. The simulation time for complex geometries may vary from a few hours to several days.

Following are some of the factors that you must consider when selecting a ray-tracing simulator.

1. Upper limit on the number of interactions

Ray-tracing simulators essentially generate a bunch of rays (image based techniques are an exception) and then follow them around as they reflect, refract, diffract and scatter. Each interaction decreases the strength of a ray, and the strength also decays with distance. As a result the simulator needs to decide when to terminate a ray path. This is usually done based upon the number of interactions that a ray undergoes (typically 8-10 interactions are considered) or based upon its strength (once the strength of a ray falls below -110 dBm there is no point following it any further). The higher the number of interactions considered, the greater the accuracy of the simulation, but the higher the computational complexity.
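A minimal sketch of such a termination rule is shown below; the function name and the exact thresholds are assumptions for illustration, not taken from any particular simulator:

```python
def should_terminate(interactions, power_dbm,
                     max_interactions=10, min_power_dbm=-110.0):
    """Stop tracing a ray using the two criteria described above:
    the interaction count and the received power threshold."""
    return interactions >= max_interactions or power_dbm <= min_power_dbm

assert should_terminate(10, -80.0)      # too many interactions
assert should_terminate(3, -115.0)      # too weak to matter
assert not should_terminate(3, -80.0)   # keep tracing
```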

2. Granularity in field calculations

Field calculations cannot be performed at each and every point within the simulation space. The usual approach is to divide the region under study into a grid such that locations closer to a transmitter are covered more finely and the regions further away are covered in lesser detail. The rays are then combined within each block of the grid to get the resultant field strength. The level of granularity determines the computation load. It would be prohibitively expensive to have a very high level of granularity for a large network.

3. Accuracy in modeling the various propagation phenomena

As mentioned previously an accurate modeling of all propagation phenomena is required including reflection, refraction, diffraction and scattering. Some ray-tracing simulators might model reflection and refraction only while ignoring other phenomena such as diffraction. Furthermore some ray-tracing simulators might consider all reflections to be specular (no scattering). This is a good approximation for large smooth surfaces but not such a good assumption for irregular terrain.

4. Granularity of the terrain database

Most state of the art ray-tracing tools use some sort of terrain database to perform their calculations. These terrain databases are required for determining the paths of the rays as they travel in dense urban environments. They may contain simple elevation data or actual 3D building data, and may have a resolution of 10 m, 30 m or coarser. The accuracy of the simulation is highly dependent on the granularity of the terrain database.

5. Accuracy in representation of building materials

The wireless signal propagation within cities is governed by complex phenomena such as reflection, refraction, diffraction and scattering. Let’s take the example of reflection. The percentage of the signal reflected back at a particular interface depends on the permittivity and permeability of the object. Based on these properties only 10% of the signal may be reflected, or 50% of the signal may be reflected. So, for accurate simulation not only should we have a high level of granularity in the 3D building data, we also need an accurate description of the building materials.

6. Dynamic Channel Behavior

A wireless channel is continuously changing i.e. the channel is dynamic (as opposed to being static). However the ray-tracing techniques available in the literature do not capture this dynamic behavior. The dynamic behavior of the channel is mainly due to the motion of the transmitter or receiver as well as motion of the surroundings. While the position of the transmitter and receiver can be varied in the ray-tracing simulation the surroundings are always stationary. Hence a ray-tracing simulator is unable to capture the time-varying behavior of the channel.

The accuracy of ray-tracing simulators is bound to increase as the computational power of computers increases and as accurate 3D building databases become available throughout the world. Until that time we would have to fall back to approximate simulations or maybe measurement results.