Category Archives: Network Planning

Eclipse

Eclipse 1.0 – A Paradigm Shift in RF Planning

NEW: Simulation of a Moving Transmitter (such as a car)

NEW: Simulation of a Moving Transmitter (such as a pedestrian)

Radio frequency planning is an essential component of network planning, roll-out, upgrades and expansion. Several methods can be adopted for this, ranging from simple free space and empirical path loss models to the significantly more complicated, time consuming and expensive drive testing. Drive testing gives very accurate results, but these results can be rendered useless by a change in antenna position, tilt or transmit power, requiring another run in the field. One solution to this problem is ray-tracing, which is very accurate but is usually considered too computationally expensive to be of much practical value. However, recent advances in the computational power of machines, coupled with efficient techniques, have given this method a new lease of life.

Eclipse is a near real-time simulation software for prediction of signal strength in urban areas. The software uses the shooting and bouncing ray (SBR) method of ray tracing with 1 degree ray separation, 1 m step size and 9 interactions per ray path. The simulation parameters can be varied according to the resolution required. The code is highly optimized to give results in the shortest possible time. It is especially useful for network planning of ultra-dense wireless networks where a dense network of antennas is placed on lamp posts instead of telecom towers. Various frequency bands can be simulated, along with different antenna radiation patterns and MIMO configurations.


Helsinki 3D Building Data

Path Followed by a Single Ray

Paths Followed by Multiple Rays

Received Signal Strength Over Area of Interest

Note: If you would like to run a test simulation send us a request at info@raymaps.com

Android Apps

1. Rx Signal Meter

Received signal strength calculation is required by RF engineers working in the field. This application provides an easy-to-use interface to calculate the received signal strength. Input parameters include the transmit power, transmit antenna gain, receive antenna gain, transmit-receive separation and frequency of operation. The output is the received signal strength in dBm.

https://play.google.com/store/apps/details?id=com.raymaps.path.loss
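
For reference, the calculation behind an app like this is the Friis free-space equation. Below is a minimal Python sketch (not the app's actual source, which is not published here) that computes the received power from the same inputs.

import math

def rx_signal_dbm(tx_power_dbm, tx_gain_dbi, rx_gain_dbi, distance_m, freq_hz):
    """Received signal strength from the Friis free-space equation."""
    wavelength = 3e8 / freq_hz
    fspl_db = 20 * math.log10(4 * math.pi * distance_m / wavelength)  # free space path loss
    return tx_power_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db

# Example: 30 dBm transmitter, 0 dBi antennas, 1 km separation at 728 MHz
print(rx_signal_dbm(30, 0, 0, 1000, 728e6))  # approximately -59.7 dBm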

2. Rx Signal Pro

A simple application that can be used to calculate the Received Signal Strength using one of the following six models.
1. Free Space Path Loss
2. Hata Model
3. COST-231 Model
4. ECC-33 Model
5. Ericsson Model
6. SUI Model
Input parameters include the transmit power, frequency, transmit-receive separation, base station antenna height and mobile station antenna height. The output parameter is the Received Signal Strength in dBm. The application is also available in a desktop version.

https://play.google.com/store/apps/details?id=com.raymaps.signal.me…

3. Path Loss Calculator

A simple application that can be used to calculate the Path Loss using one of the following six models.

1. Free Space Path Loss
2. Hata Model
3. COST-231 Model
4. ECC-33 Model
5. Ericsson Model
6. SUI Model

Input parameters include the frequency, transmit-receive separation, base station antenna height and mobile station antenna height. The output parameter is the Path Loss in dB. The application is also available in a desktop version.

https://play.google.com/store/apps/details?id=com.raymaps.path.loss…
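
For illustration, here is a sketch of the second model in the list, the standard Okumura-Hata urban formula for small and medium cities; the app's exact implementation is not reproduced here, and the other models take the same style of inputs.

import math

def hata_urban_path_loss_db(f_mhz, hb_m, hm_m, d_km):
    """Okumura-Hata urban path loss; f in MHz, antenna heights in m, distance in km."""
    # Mobile station antenna correction factor (small/medium city variant)
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * hm_m - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(hb_m)
            - a_hm + (44.9 - 6.55 * math.log10(hb_m)) * math.log10(d_km))

# Example: 900 MHz, 30 m base station, 1.5 m mobile, 1 km separation
print(hata_urban_path_loss_db(900, 30, 1.5, 1))  # roughly 126 dB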

4. RF Planner

This is a simple application which can be used to do RF planning of a GSM, WCDMA, LTE or WiMAX site. The input parameters are the Tx Power, Rx Sensitivity, Tx Height, Rx Height and Frequency of operation. The output parameter is the Cell Radius which is used to plot the coverage area of the cell site on Google Maps. The coverage scenario can be selected from the following three options:
1. Urban
2. Suburban
3. Rural
The underlying model is applicable to frequencies of up to 3500 MHz, distances of 100-8000 m, BS antenna heights of 10-80 m and MS antenna heights of 2-10 m.

https://play.google.com/store/apps/details?id=com.raymaps.rf.planner1
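
At its core such a planner converts the link budget into a maximum allowable path loss and then inverts a propagation model to obtain the cell radius. The app's internal model is not reproduced here; the sketch below inverts the free-space formula purely as an illustration (an urban, suburban or rural model such as those listed above would give a much smaller radius).

import math

def cell_radius_m(tx_power_dbm, rx_sensitivity_dbm, freq_hz):
    """Cell radius from a link budget, assuming free-space propagation (illustrative only)."""
    max_path_loss_db = tx_power_dbm - rx_sensitivity_dbm
    wavelength = 3e8 / freq_hz
    # Invert L = 20*log10(4*pi*d/lambda) to solve for d
    return (wavelength / (4 * math.pi)) * 10 ** (max_path_loss_db / 20)

# Example: 43 dBm transmit power, -100 dBm receiver sensitivity, 1805 MHz
print(cell_radius_m(43, -100, 1805e6))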

Antenna Radiation Pattern and Antenna Tilt

An introductory text in Communication Theory would tell you that an antenna radiates uniformly in all directions and that the power received at a given distance ‘d’ is proportional to 1/d^2. Such an antenna is called an isotropic radiator. However, real world antennas are not isotropic radiators. They transmit energy only in those directions where it is needed. The Gain of an antenna is defined as the ratio of the power transmitted (or received) in a given direction to the power transmitted in that direction by an isotropic source, and is expressed in dBi.

Although antenna Gain is a three-dimensional quantity, the Gain is usually given along horizontal and vertical planes passing through the center of the antenna. The Horizontal and Vertical Gain patterns for a popular base station antenna, the Kathrein 742215, are shown in the figure below.

Kathrein 742215 Gain Pattern

The actual Gain is given with respect to the maximum Gain, which is a function of the frequency; e.g. in the 1710-1880 MHz band the maximum Gain has a value of 17.7 dBi. Another important parameter is the Half Power Beam Width (HPBW), which has values of 68 degrees and 7.1 degrees in the horizontal and vertical planes respectively. HPBW is defined as the angular width in degrees within which the power level is equal to or above the -3 dB level of the maximum.

Also shown in the above figure are approximate Horizontal Gain patterns for two antennas that have been rotated by 120 degrees and 240 degrees. Together these three antennas cover the region defined as a cell. Coverage is obviously weaker in the areas around the intersection of two beams.

A somewhat more interesting pattern is in the vertical direction, where the HPBW is only 7.1 degrees. It is therefore very important to point this beam in the right direction. A perfectly horizontal beam would result in a large cell radius but may also leave weak signal areas close to the base station. A solution to this problem is to give the antenna a small downward tilt, usually 5-10 degrees. This reduces the cell radius but allows for a more uniform distribution of energy within the cell. In reality the signal from the main beam and side lobes (one significant side lobe at around -15 dB) bounces off the ground and the buildings around the cell site and spreads the signal around the cell.

Antenna Tilt of 10 Degrees

The above figure gives a 2D view of signal propagation from an elevated antenna with a downward tilt of 10 degrees in an urban environment.

Base Station Antenna Tilt and Path Loss

Path loss is basically the difference between the transmit and receive powers of a wireless communication link. In a Free Space Line of Sight (LOS) channel the path loss is given as:

L=20*log10(4*pi*d/lambda)

where ‘d’ is the transmit-receive separation and ‘lambda’ is the wavelength. It is also possible to include the antenna gains in the link budget calculation to find the end-to-end path loss (cable and connector losses may also be factored in). Antenna gains are usually defined along a horizontal plane and a vertical plane passing through the center of the antenna. The antenna gain at any angle in 3D can then be approximated from the gains in these two planes.

Although 3D antenna gains are quite complex quantities, simplified models are usually used in simulations; e.g. a popular antenna, the Kathrein 742215, has the following antenna gain models [1] along the horizontal and vertical planes:

Gh(phi)=-min(12*(phi/HPBWh)^2, FBRh)+Gm

Gv(theta)=max(-12*((theta-theta_tilt)/HPBWv)^2, SLLv)

where

Gm=18 dBi
HPBWh=65 degrees
HPBWv=6.2 degrees
SLLv=-18 dB

We are particularly interested in the gain in the vertical plane and the effect of base station antenna tilt on the path loss. We assume that the mobile station antenna has uniform gain in all directions. The path loss can then be calculated as:

L=20*log10(4*pi*d/lambda)-Gv(theta)-Gh(phi)

where we have assumed that Gh(phi)=0 for all phi (this is a reasonable simplification since, along a fixed azimuth from the antenna, changing the distance does not change Gh(phi)). Using the above expression, the path loss in free space is calculated for a frequency of 1805 MHz, a base station antenna height of 30 m and an antenna tilt of 5 degrees.
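
A short sketch of this calculation is given below. It is a direct transcription of the expressions above, with Gh set to zero and with the simplifying flat-ground assumption that theta is the depression angle from the antenna down to a point at horizontal distance d.

import math

def path_loss_with_tilt_db(d_m, freq_hz=1805e6, hb_m=30.0, tilt_deg=5.0,
                           hpbw_v_deg=6.2, sll_v_db=-18.0):
    """Free-space path loss combined with the vertical antenna pattern of [1]."""
    wavelength = 3e8 / freq_hz
    fspl_db = 20 * math.log10(4 * math.pi * d_m / wavelength)
    # Depression angle from the antenna to the ground point (flat ground assumed)
    theta_deg = math.degrees(math.atan2(hb_m, d_m))
    gv_db = max(-12 * ((theta_deg - tilt_deg) / hpbw_v_deg) ** 2, sll_v_db)
    return fspl_db - gv_db   # Gh(phi) taken as 0 along the chosen azimuth

# The main beam hits the ground near d = hb/tan(5 deg), about 343 m, where the loss dips
for d in (50, 100, 200, 343, 500, 1000):
    print(d, round(path_loss_with_tilt_db(d), 1))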

Effect of Antenna Tilt on Path Loss

It is observed that there is a sudden decrease in path loss at the distances towards which the antenna main beam is directed. If the antenna tilt is increased, this behavior is observed at smaller distances. Since we have used a side lobe level fixed at -18 dB, we see a rapid change in behavior at around 100 m. If a more realistic antenna model were used, the change in path loss around this critical distance would be more gradual.

[1] Fredrik Gunnarsson, Martin N. Johansson, Anders Furuskär, Magnus Lundevall, Arne Simonsson, Claes Tidestav, Mats Blomgren, “Downtilted Base Station Antennas – A Simulation Model Proposal and Impact on HSPA and LTE Performance”, Ericsson Research, Ericsson AB, Sweden. Presented at VTC 2008.

Qualcomm In Muddy Waters In India

Remember Qualcomm CEO Paul Jacobs proudly claiming that his company had prevented WiMAX from gaining a foothold in India by acquiring BWA licenses in four regions of the country? Well, Qualcomm is now in a bit of bother as the Department of Telecommunications (DoT) in India has raised objections to the license application filed by Qualcomm. According to news circulating on the internet, the DoT has objected to Qualcomm filing four separate applications through its nominee companies in the four regions (Delhi, Mumbai, Kerala and Haryana) where it won licenses on June 12, 2010. Secondly, the DoT has objected to the application being filed outside the three-month period required by the rules.

Qualcomm has rejected these objections, saying that it has followed all rules in letter and spirit. According to Qualcomm, the license application was filed in August 2010, within the three-month period required by the rules. However this is disputable as Qualcomm also submitted a revised application in December 2010. Qualcomm has countered the other objection by saying that it plans to merge the four nominee companies so that there is no breach of the rules. As per the rules, “if at any stage the spectrum allocation is revoked, withdrawn, varied or surrendered, no refund will be made”. So if an understanding is not reached between Qualcomm and the DoT, Qualcomm stands to lose the more than $1 billion it paid for the BWA spectrum.

Qualcomm Inc. is one of the world’s leading wireless chip manufacturers. It is the pioneer of CDMA technology and its chipsets have been embedded in more than a billion cell phones. Qualcomm has invested heavily in UMTS technology and is a strong proponent of the WCDMA, HSPA and LTE standards. It paid about a billion dollars for the right to use a 20 MHz chunk of spectrum in the 2.3 GHz band. It plans to bring TDD LTE to India, which is considered to be a comparatively economical 4G technology.

WiMAX Path Loss and Antenna Height

As discussed previously the SUI (Stanford University Interim) model can be used to calculate the path loss of a WiMAX link. The SUI model is given as:

SUI Path Loss Equation: PL = A + 10*n*log10(d/do) + Xf + Xh + s (for d > do), where A = 20*log10(4*pi*do/lambda) and do = 100 m

It has five components:

1. The free space path loss (A) up to the reference distance of ‘do’.
2. Additional path loss for distance ‘d’ with path loss exponent ‘n’.
3. Additional path loss (Xf) for frequencies above 2000 MHz.
4. Path gain (Xh) for receive antenna heights greater than 2 m.
5. Shadowing factor (s).

The most important factor in this equation is the distance-dependent path loss, whose impact is controlled by the path loss exponent ‘n’. It is well known that in free space the path loss exponent has a value of 2; in more realistic channels its value ranges anywhere from 2 to 6. For the SUI model the path loss exponent is calculated as:

n=a-(b*hb)+(c/hb)

where a, b and c are SUI model specific parameters. It is clear that the path loss exponent decreases with increasing base station antenna height ‘hb’. The path loss exponent for various antenna heights is shown in the figure below.
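
The sketch below evaluates this expression over the antenna height range discussed, using the commonly quoted SUI parameters for the three terrain categories (A: hilly with heavy tree density, B: intermediate, C: flat with light tree density); treat the exact parameter values as indicative.

# Commonly quoted SUI terrain parameters: a (dimensionless), b (1/m), c (m)
SUI_PARAMS = {
    "Terrain A": (4.6, 0.0075, 12.6),
    "Terrain B": (4.0, 0.0065, 17.1),
    "Terrain C": (3.6, 0.0050, 20.0),
}

def path_loss_exponent(hb_m, terrain):
    """SUI path loss exponent n = a - b*hb + c/hb for base station height hb (10-80 m)."""
    a, b, c = SUI_PARAMS[terrain]
    return a - b * hb_m + c / hb_m

for terrain in SUI_PARAMS:
    print(terrain, [round(path_loss_exponent(h, terrain), 2) for h in (10, 30, 50, 80)])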

Path Loss Exponent

It is observed that as the base station antenna height is varied from 10 m to 80 m, the path loss exponent for the three scenarios drops from around 5.5-6.0 to 3.5-4.5. Basically this means that higher base station antenna heights give a larger cell radius. However, we need to be careful when making this statement: higher antenna heights can also result in a weak signal area close to the base station. This is where the antenna downtilt becomes an important factor. Antenna downtilt usually has a value of around 5-10 degrees. It is somewhat surprising that, although it is such an important factor, none of the well-known empirical models take it into account.

Note: The SUI model was initially formulated based upon data collected by AT&T Wireless across the United States in 95 existing macrocells at 1.9 GHz.

LTE Path Loss at 700 MHz

In the previous post we compared the path loss of LTE at 728 MHz and 1805 MHz in a free space line of sight channel. This is a very simplistic channel model which tells us that the ratio of the received signal strengths at these frequencies can be found simply as:

(f1/f2)^2=(1805/728)^2=6.15

That is, the received signal strength at 728 MHz is 6.15 times higher than at 1805 MHz.

Now let us consider a more realistic channel model known as the COST-231 model. According to this model the path loss (the difference between the transmit power and the receive power) is given as:

L=46.3+33.9*log10(f)-13.82*log10(ht)-a+(44.9-6.55*log10(ht))*log10(d)+C

where

f=frequency in MHz (1500 MHz – 2000 MHz)

ht=base station antenna height in m (30 m – 200 m)

hr=mobile station antenna height in m (1 m – 10 m)

d=transmit receive separation in km (1 km – 20 km)

C=3 dB for metropolitan centres

and the mobile station antenna correction factor is given as:

a=3.2*(log10(11.75*hr))^2-4.97

Using the above equations with ht=30 m, hr=1 m and d=1 km, the path loss at 728 MHz and 1805 MHz works out to 127.22 dB and 140.59 dB respectively, i.e. there is a gain of 13.37 dB when using the lower frequency (note that 728 MHz lies below the model’s nominal frequency range, so the absolute values should be treated as indicative). In simpler terms, the received signal at 728 MHz would be 21.72 times stronger than the signal at 1805 MHz.
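
The comparison can be reproduced directly from the expressions given above; a short sketch follows.

import math

def cost231_path_loss_db(f_mhz, ht_m, hr_m, d_km, c_db=3.0):
    """COST-231 Hata path loss with the metropolitan correction term C (dB)."""
    a = 3.2 * (math.log10(11.75 * hr_m)) ** 2 - 4.97   # mobile antenna correction factor
    return (46.3 + 33.9 * math.log10(f_mhz) - 13.82 * math.log10(ht_m) - a
            + (44.9 - 6.55 * math.log10(ht_m)) * math.log10(d_km) + c_db)

l_728 = cost231_path_loss_db(728, 30, 1, 1)
l_1805 = cost231_path_loss_db(1805, 30, 1, 1)
print(round(l_728, 2), round(l_1805, 2), round(l_1805 - l_728, 2))  # ~127.22, ~140.59, 13.37 dB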

Such a remarkable improvement in signal strength, or in signal to noise ratio (SNR), has the potential of increasing the throughput fourfold. For example, at an SNR of 1.5 dB, QPSK 1/2 would give a throughput of 6.00 Mbps, whereas at an SNR of 14.7 dB a modulation and coding scheme (MCS) of 64QAM 2/3 would result in a throughput of 24.01 Mbps.

Modulation Coding Schemes

Propagation and In-Building Penetration at 700 MHz

It is quite well known that wireless signals travel further at lower frequencies. This phenomenon has become particularly important in the context of LTE, where a frequency band has been allocated at 700 MHz. We would like to quantify the benefits that can be achieved by using this frequency band.

First we find the received signal power at 728 MHz (lowest downlink frequency) and at 3600 MHz (highest downlink frequency) in a free space line of sight channel. The transmit power is set to 1 W and omnidirectional antennas are assumed at the transmitter and receiver. The received power for these two frequencies at a distance of 1000 m is found to be -59.68 dBm and -73.57 dBm respectively, i.e. there is a gain of 13.88 dB from using the lower frequency band. In simpler terms the signal power would be more than 20 times stronger at the lower frequency. This result can also be obtained simply by taking the square of the ratio of the two frequencies.

(3600/728)^2 = 24.45

Similarly compared to a frequency of 1805 MHz, the signal at 728 MHz would be more than 6 times stronger.

(1805/728)^2 = 6.1474

Now we turn our attention to the penetration loss, i.e. how much the signal attenuates when passing through a concrete wall. For this we need to calculate the attenuation constant (alpha), which is obtained from the propagation constant:

gamma = alpha + j*beta = sqrt(j*omega*mu*(sigma + j*omega*epsilon))

Alpha, the attenuation constant, is the real part of the propagation constant gamma, whereas Beta, the phase constant, is the imaginary part. These quantities depend upon the frequency, relative permittivity, relative permeability and conductivity of the material. The penetration loss through a wall of a given thickness can then be found as -20*log10(exp(-alpha*thickness)). Using the properties of concrete and a wall thickness of 10 cm, the penetration loss at 728 MHz and at 1805 MHz is found to be 4.16 dB and 10.38 dB respectively, i.e. there is a gain of 6.22 dB when using the lower frequency. In simpler terms, the signal at the lower frequency would be more than 4 times stronger after passing through the wall.

It is quite evident that the frequency of operation plays a big role in determining both the propagation loss and the penetration loss. The 728-746 MHz band is thus a prized commodity and operators are willing to pay handsome amounts to secure it.

Note:

1. We have ignored the reflection that occurs at the air-concrete interface as its effect is comparatively small.

2. The following material properties of concrete were used in the penetration loss calculation.

728 MHz
Relative permittivity = 4.5775
Relative permeability = 1.0000
Conductivity = 0.055 S/m

1805 MHz
Relative permittivity = 4.1000
Relative permeability = 1.0000
Conductivity = 0.1300 S/m
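
The calculation described above can be reproduced in a few lines of code; the sketch below evaluates the propagation constant of a lossy medium exactly (no low-loss approximation) using the concrete properties listed in the note.

import cmath, math

EPS0 = 8.854e-12          # permittivity of free space (F/m)
MU0 = 4 * math.pi * 1e-7  # permeability of free space (H/m)

def penetration_loss_db(freq_hz, eps_r, mu_r, sigma_s_per_m, thickness_m):
    """Loss through a lossy slab; alpha is the real part of the propagation constant."""
    omega = 2 * math.pi * freq_hz
    gamma = cmath.sqrt(1j * omega * mu_r * MU0 * (sigma_s_per_m + 1j * omega * eps_r * EPS0))
    alpha = gamma.real                                    # attenuation constant (Np/m)
    return 20 * math.log10(math.e) * alpha * thickness_m  # equals -20*log10(exp(-alpha*t))

print(round(penetration_loss_db(728e6, 4.5775, 1.0, 0.055, 0.10), 2))   # ~4.16 dB
print(round(penetration_loss_db(1805e6, 4.1000, 1.0, 0.130, 0.10), 2))  # ~10.38 dB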

4G LTE Coverage within Virginia

Since our last post on Verizon LTE coverage within California, Verizon has removed the LTE coverage map from its site. Now it only gives a list of cities that have 4G LTE service (just like T-Mobile). So we move from the West Coast to the East Coast, i.e. Virginia, the state that is home to Virginia Tech, one of the finest schools in the country and a breeding ground for wireless engineers. It is thus somewhat of a shock to see that Verizon Wireless has no 4G LTE footprint in the state of Virginia. The only place where it intends to deploy 4G in the near future is Bristol, Virginia. It claims that by the end of 2013 it will have 4G coverage throughout the US wherever 3G service is currently available.

As in California, T-Mobile has much wider coverage, with many smaller cities getting 4G service. The list includes: Alexandria, McLean, Newport News, Norfolk, Petersburg, Portsmouth, Reston, Richmond, Roanoke and Lynchburg. So although Verizon might be winning the speed race, it is definitely not winning the coverage race (at least in CA and VA). And with an AT&T T-Mobile merger also a possibility early next year, Verizon is set to face some stiff competition.

Given below are the results of a 4G speed test conducted by PC Magazine in the Northeast.

4G LTE Speed Test

The above results show that in areas where 4G coverage is available, Verizon delivers average download speeds that are twice those of T-Mobile. The upload speeds are similar. Overall, Verizon is by far the best in terms of the Mobile Speed Index, with T-Mobile in second place and AT&T in third.

Ray-Tracing for Network Planning-II

It’s very easy to get lost in the jargon when selecting a simulation tool for planning your wireless network. You will be faced with complex terminology that may not make much sense at first. At one end of the spectrum are solutions based on simple empirical models, while at the other end are solutions based on ray-tracing techniques. Empirical models are based on measurement data and are your best bet if you want a quick and cheap solution, whereas ray-tracing techniques are based on the laws of physics and promise more accurate results. In principle ray-tracing is quite simple: just transmit a bunch of rays in all directions and see how they behave. However, when the number of rays and their interactions becomes large, the simulation may become prohibitively expensive; the simulation time for complex geometries may vary from a few hours to several days.

Following are some of the factors that you must consider when selecting a ray-tracing simulator.

1. Upper limit on the number of interactions

Ray-tracing simulators essentially generate a bunch of rays (image-based techniques are an exception) and then follow them around as they reflect, refract, diffract and scatter. Each interaction decreases the strength of a ray, and the strength also decays with distance, so the simulator needs to decide when to terminate a ray path. This is usually done based upon the number of interactions a ray undergoes (typically 8-10 interactions are considered) or based upon its strength (once the strength of a ray falls below, say, -110 dBm there is no point following it any further). The higher the number of interactions considered, the greater the accuracy of the simulation, but also the higher the computational complexity.
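
In code, the termination rule is just a pair of checks applied after every interaction. Below is a minimal sketch (illustrative only, not any particular simulator's implementation) in which the per-interaction loss is an assumed average value.

def trace_ray(start_power_dbm, max_interactions=9, min_power_dbm=-110.0,
              loss_per_interaction_db=6.0):
    """Follow a single ray, terminating on the interaction count or the power threshold."""
    power_dbm = start_power_dbm
    for interaction in range(1, max_interactions + 1):
        # A real simulator would find the next surface hit here and compute the
        # reflection/diffraction loss from the geometry and material properties.
        power_dbm -= loss_per_interaction_db   # assumed average loss per interaction
        if power_dbm < min_power_dbm:
            return interaction, power_dbm      # dropped: ray too weak to matter
    return max_interactions, power_dbm         # dropped: interaction limit reached

print(trace_ray(-40.0))   # terminates after 9 interactions at -94 dBm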

2. Granularity in field calculations

Field calculations cannot be performed at each and every point within the simulation space. The usual approach is to divide the region under study into a grid such that locations closer to a transmitter are covered more finely and regions further away are covered in less detail. The rays are then combined within each block of the grid to get the resultant field strength. The level of granularity determines the computational load; it would be prohibitively expensive to use a very fine grid for a large network.

3. Accuracy in modeling the various propagation phenomena

As mentioned previously, accurate modeling of all propagation phenomena is required, including reflection, refraction, diffraction and scattering. Some ray-tracing simulators might model reflection and refraction only, while ignoring other phenomena such as diffraction. Furthermore, some ray-tracing simulators might consider all reflections to be specular (no scattering). This is a good approximation for large smooth surfaces but not such a good assumption for irregular terrain.

4. Granularity of the terrain database

Most state-of-the-art ray-tracing tools use some sort of terrain database to perform their calculations. These databases are required for determining the paths of the rays as they travel through dense urban environments. They may contain simple elevation data or actual 3D building data, and may have a resolution of 10 m, 30 m or coarser. The accuracy of the simulation is highly dependent on the granularity of the terrain database.

5. Accuracy in representation of building materials

Wireless signal propagation within cities is governed by complex phenomena such as reflection, refraction, diffraction and scattering. Let’s take the example of reflection. The fraction of the signal reflected back at a particular interface depends on the permittivity, permeability and conductivity of the material. Based on these properties, perhaps only 10% of the signal power is reflected, or perhaps 50%. So for an accurate simulation we not only need a high level of granularity in the 3D building data, we also need an accurate description of the building materials.
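
The dependence on material properties can be made concrete with the normal-incidence reflection coefficient: the sketch below estimates the fraction of incident power reflected at an air-to-material boundary from the permittivity, permeability and conductivity (the same quantities used in the penetration loss calculation earlier on this page).

import cmath, math

EPS0 = 8.854e-12
MU0 = 4 * math.pi * 1e-7

def reflected_power_fraction(freq_hz, eps_r, mu_r, sigma_s_per_m):
    """Fraction of power reflected at normal incidence from air onto a lossy material."""
    omega = 2 * math.pi * freq_hz
    eta_air = math.sqrt(MU0 / EPS0)
    # Intrinsic impedance of the lossy material
    eta_mat = cmath.sqrt(1j * omega * mu_r * MU0 / (sigma_s_per_m + 1j * omega * eps_r * EPS0))
    reflection_coeff = (eta_mat - eta_air) / (eta_mat + eta_air)
    return abs(reflection_coeff) ** 2

# Concrete at 1805 MHz, using the properties quoted earlier on this page
print(round(reflected_power_fraction(1805e6, 4.1, 1.0, 0.13), 2))  # ~0.13, i.e. about 13% reflected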

6. Dynamic Channel Behavior

A wireless channel is continuously changing, i.e. the channel is dynamic (as opposed to static). However, the ray-tracing techniques available in the literature generally do not capture this dynamic behavior. The dynamic behavior of the channel is mainly due to the motion of the transmitter or receiver as well as the motion of the surroundings. While the positions of the transmitter and receiver can be varied in a ray-tracing simulation, the surroundings are usually assumed stationary; hence a ray-tracing simulator is unable to capture the time-varying behavior of the channel.

The accuracy of ray-tracing simulators is bound to increase as the computational power of computers increases and as accurate 3D building databases become available throughout the world. Until then we will have to fall back on approximate simulations or on measurement results.