A vast amount of activity is occurring on a global basis to upgrade electrical power grids to make the delivery of electricity more efficient, reliable, environmentally friendly and cost effective. This includes a wide variety of equipment and technology within the generation, transmission, distribution and metering portions of the grid. One key aspect of these upgrades is the inclusion of communications capability within a variety of monitoring and metering equipment. Various wireless and wired communication technologies are being evaluated and deployed across the world. RF communications is already the technology of choice in various regions and applications, but is not without its own challenges.
The addition of communications to the grid is driven by the general principle of providing two-way communication between generation, distribution and consumption points in the network. Such communication links are a vital tool in order to operate the grid more efficiently. However, this is not the basis on which the deployment of RF technology first began. Initially, the motivation was to automate the reading of electric, water and gas meters, and to eliminate the need for humans to manually record consumption data.
These automatic meter reading (AMR) systems were designed to support a one-way flow of electricity from the utility to the consumer. The AMR systems also provided a one-way flow of information back to the utility for billing purposes, transmitting the electric consumption over a given time period. Data rates were low, as was the total amount of data communicated, typically less than 1 kbit per month. There are approximately 150 million electric, water and gas meters in the field that already have communication capability; the majority of them have this low-data-rate, one-way communications capability.
Figure 1 World net electricity generation by fuel (US Energy Information Administration, 2010).
A combination of forces has driven a new vision of a grid that is fundamentally different from the one developed over the past century. With worldwide electricity demand rising rapidly and a strong desire to limit dependence on fossil fuel, new generation sources will increasingly come from renewable sources such as wind and solar. The US Energy Information Administration predicts that worldwide electricity consumption will grow by over 80 percent over the next 25 years and that renewables will grow to become the second leading source of power generation (see Figure 1).
Wind and solar generation are more unpredictable than traditional fossil fuel generation plants and, therefore, create a more complex system to manage. In addition, a significant increase in demand is expected to come from electric vehicles, which creates an uneven demand profile. Finally, to slow the demand growth for electricity, business and home consumers will need to manage their usage in new ways.
Therefore, a vision of a “smart grid” is needed, a name signifying a higher level of measurement, communication, control and protection capability that involves a two-way flow of information on a more frequent basis for purposes beyond just meter reading. This enables a wide array of capabilities, including:
- balancing of generation among centralized power plants;
- optimizing electrical power distribution;
- improving power quality monitoring and outage response;
- enabling control of end-user load profiles;
- implementing time-of-day tariffs;
- accommodating multiple sources of energy including renewable sources;
- enabling remote connect/disconnect;
- providing consumers with real-time feedback on their demand profiles.
Once such communications are in place, utilities and consumers can collectively reduce consumption, increase grid efficiency and reliability, and enable the broader use of electric vehicles and renewable sources of energy.
Driving Requirements and Challenges for the Smart Grid
Smart grid initiatives have been launched, targeting a range of ambitious goals. In many cases, these goals require building an infrastructure to solve existing and future challenges, in advance of full understanding of both the problems and the solutions. The US Energy Independence and Security Act of 2007 defines no fewer than 10 key policy goals for the smart grid, ranging from an overarching goal to “use digital technology to improve reliability, security and efficiency of the electric grid” to more specific targets of “deploying automated metering, grid operations and status and distribution grid management.” These goals anticipate a grid that integrates distributed renewable resources, puts in place demand response resources, enables electricity storage, accommodates plug-in electric vehicles and integrates “smart” appliances and other devices that provide the consumer with timely information and control of their energy usage.
Along with these challenges, there are also anticipated risks driving the need for cyber security and interoperability standards. Standards will allow utilities to begin deployment of an initial network infrastructure in anticipation of millions of future interconnected devices developed by many companies. This same image of millions of easily interconnecting nodes, some with the ability to turn power on or off at homes and substations or to add unmanageable loads to the grid, raises the specter of malicious attacks paralyzing electricity-dependent countries. Added to these broad risks are concerns for personal privacy. Minute-by-minute tracking of energy usage is viewed by some as yet another unwanted digital window into people’s homes and lives.
Added together, these goals and risks represent a complex closed-loop systems problem. When comparing just the required data rates of the smart grid to the Internet or cellular networks, existing communications technology seems more than capable. The real challenge is the balance between longevity, reliability, cost and future requirements. The smart grid will be a huge utility and government investment (Pike Research estimates cumulative global spending will reach $200 B by 2015) and utilities traditionally require a 20 year or longer lifetime on their equipment investments. These communications systems require future-proofing, with designed-in headroom, for problems that will not appear for a decade or more. Utilities and equipment providers are continually weighing immediate implementation costs against hazier future requirements.
Smart Grid Communications: Multiple Interconnected Networks
Figure 2 Smart grid communications: A connection of multiple networks.
As the smart grid extends two-way communications from the various forms of generation out to the electricity consumer, there are multiple systems with different degrees of control, measurement, data recording, protection and optimization. The main systems can be divided into separate groupings: 1) distribution and transmission field and wide area networks; 2) smart meter to data collector or network access point; and 3) smart meter to appliances or charging stations within the home (see Figure 2).
Smart Meter Communications: Dominating the Investment
The communications systems centered in the smart meters are attracting the most attention from competing technologies. With a potential replacement of over 140 million meters in the US and over one billion meters worldwide, smart meters represent the vast majority of nodes, and therefore cost, within the smart grid communications systems being deployed by the utilities.
The two main competing technologies, RF and power line carrier (PLC), are being adopted to different degrees across the world. RF solutions have dominated the North American market. The availability of spectrum in the license-free, high-power ISM band and the fact that there are relatively few meters per transformer have generally meant that RF can be deployed at lower cost in North America.
Smart Meter RF Communications Technology Adoption to Date
Focusing further on the RF systems currently being deployed to connect meters to utilities, another level of competing technology is seen between mesh and star-based systems. These two system approaches attempt to address the RF challenges of the smart grid differently.
Connecting millions of meters back to the utility represents a unique challenge. Connecting existing meters means creating an RF link to over 100 million predetermined locations in the US alone. Many of these locations are in difficult environments for RF communications, such as in basements and behind cement walls. Many are also in urban areas with dense and changing RF interferers.
Mesh systems create multiple short paths (through neighboring meters) for a single meter to reach a central collector, which acts as a gateway into the utility wide area network. Mesh systems, available from multiple vendors, are typically delivering data rates of 100 to 150 kbits/sec, operate using FSK or spread spectrum modulation schemes, are typically within the ISM band centered at 915 MHz and have channel bandwidths of 50 to 200 kHz.
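As a rough illustration of what these data rates mean for channel occupancy, the short Python sketch below estimates the on-air time per hop and the cumulative airtime across several hops. The packet size, overhead and hop counts are assumed, illustrative values, not parameters of any particular vendor’s system.

```python
# Rough per-hop airtime and multi-hop latency for a mesh metering link.
# Packet size, overhead and hop counts are assumptions for illustration only.

def hop_airtime_s(payload_bytes, overhead_bytes, data_rate_bps):
    """Time on air for one packet on one hop."""
    return 8 * (payload_bytes + overhead_bytes) / data_rate_bps

payload = 64      # bytes of metering data per packet (assumed)
overhead = 24     # preamble, sync word, header and CRC (assumed)
rate = 100_000    # 100 kbps, the low end of the data rates cited above

t_hop = hop_airtime_s(payload, overhead, rate)
for hops in (1, 4, 8):
    # Ignores CCA back-off, acknowledgements and retries, all of which add delay.
    print(f"{hops} hop(s): ~{1000 * hops * t_hop:.1f} ms on air")
```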
Star-based systems primarily use narrow band signals on licensed channels to bridge a longer distance to fewer central collectors. Fewer central collectors are required, albeit with higher transmit powers, located in clear line-of-sight positions such as mountaintops or tall buildings. Star-based systems typically operate with FSK-based modulation and utilize lower data rates than wider band mesh systems. In addition to the need for fewer central collectors, proponents of star-based systems cite the benefits of in-band interference-free spectrum and simpler network protocols.
Smart Meter RF Networks: How to Successfully Deploy Within an Interference-Limited Environment
In the US, both the ISM and licensed bands are becoming increasingly crowded. For both mesh and star systems, this means that interference becomes the central challenge to overcome. The deployment of large scale metering networks significantly increases this crowding since a system’s biggest source of interference could be the system itself.
This crowding has a significant impact on the radio and network requirements. For example, good clear channel assessment (CCA) and frequency hopping routines can make it easier to find a clear channel. Increasing the data rate reduces the time each node spends transmitting, but at the cost of reduced link margin. For the radio, this crowding also means that blocking and adjacent channel rejection are often more critical than receiver sensitivity.
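The data rate versus link margin trade-off can be made concrete with a simple sensitivity estimate. The sketch below assumes a noise bandwidth roughly equal to the data rate, a 5 dB noise figure and a fixed required SNR of 10 dB; all three are illustrative assumptions, not figures from a specific radio.

```python
import math

# Simplified sensitivity estimate to show the data rate vs. link margin trade-off.
# Assumes a noise bandwidth roughly equal to the data rate and a fixed required
# SNR, which is a simplification for FSK-type modulation.

def sensitivity_dbm(data_rate_bps, noise_figure_db=5.0, required_snr_db=10.0):
    """Approximate sensitivity: -174 dBm/Hz + 10*log10(BW) + NF + required SNR."""
    return -174.0 + 10 * math.log10(data_rate_bps) + noise_figure_db + required_snr_db

for rate in (50_000, 100_000, 200_000):
    print(f"{rate // 1000:>3d} kbps -> sensitivity ~ {sensitivity_dbm(rate):.1f} dBm")
# Each doubling of the data rate costs roughly 3 dB of sensitivity, and
# therefore 3 dB of link margin, in exchange for shorter time on air.
```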
As described above, a primary challenge for the metering infrastructure is that the location of the meters is fixed. Interference problems cannot be solved by adjusting the meter’s orientation, height or location, as can be done with a wireless router in a home. Furthermore, because the meters are very often retrofits that fit within an existing meter housing, there is little or no flexibility to modify the package for the purposes of RF performance enhancement. The meters are often secured against a thick reinforced concrete wall only a meter or so above the ground. Very seldom can a simple line-of-sight model legitimately describe the channel.
The interferers of interest vary by band and region of the world. In the US, significant interferers exist for both licensed band and ISM band systems. Common interferers include signals in the TV white spaces, cellular carriers and other devices operating in the same bands, which may or may not be part of the same system. Devices operating in the same band may not have been designed for peaceful coexistence, in terms of both packet structure and modulation choices as well as emission spectra that only marginally meet FCC limits.
Communication is relatively easy for 80 to 90 percent of the meters that are deployed. The last 10 to 20 percent present the most difficult RF challenges, due to geography and physical obstructions, significant local interference, or nearby noise sources. Since the problematic meters cannot be moved, the only system-level solutions are the addition of data collectors or repeaters, the addition of a second, alternate communications device, such as PLC or cellular, or enhanced radio performance. The widespread metering infrastructure ultimately needs to be extremely robust, with 100 percent coverage. To put this in context, a large scale deployment may be five million units; if the utility coverage is only 99 percent, 50,000 meters cannot be read.
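A quick calculation shows how quickly small coverage shortfalls add up at this scale; the five million unit figure is taken from the example above.

```python
# Unread meters at various coverage levels for a five million unit deployment.
deployment = 5_000_000
for coverage in (0.99, 0.999, 0.9999):
    unread = round(deployment * (1 - coverage))
    print(f"{coverage:.2%} coverage -> {unread:,} meters that cannot be read")
```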
Radio Architecture and Design
A typical radio module contains an antenna, an external power amplifier for higher output applications, an RF band select filter, a radio IC, a communications processor and various discrete elements for matching and bypassing. Many modules also contain an external low noise amplifier (LNA), usually co-packaged with the RF band select filter, transmit/receive switch and power amplifier in a front-end module (FEM). Because of the physical constraints of both the meter housing and the location of the meter itself, the antenna is often located in a non-optimal position or built into the meter housing itself.
The external RF band select filter, such as a surface acoustic wave (SAW) filter, is often used to help attenuate interferers outside the band of interest. For example, in a 902 to 928 MHz system, the filter would help remove interference from the 850 MHz cellular bands, the 896 to 901/935 to 940 MHz licensed land mobile bands and the 901 to 902 MHz Part 24 Personal Communication Services (PCS) band. Typical RF band select filters attenuate such signals by 30 to 60 dB, depending on the quality of the filter and how far the interfering signal is from the pass band.
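The sketch below illustrates the trade involved in adding such a filter, using assumed values for the blocker level, stop-band rejection, insertion loss and receiver noise figure; it is not data for any specific SAW filter.

```python
# Effect of an RF band select (SAW) filter on an out-of-band blocker and on the
# cascade noise figure. All values below are assumed for illustration.

blocker_at_antenna_dbm = -20.0  # nearby cellular uplink, for example (assumed)
filter_rejection_db = 40.0      # stop-band rejection at that offset (assumed)
insertion_loss_db = 2.5         # pass-band insertion loss (assumed)
receiver_nf_db = 5.0            # radio IC noise figure without the filter (assumed)

blocker_after_filter_dbm = blocker_at_antenna_dbm - filter_rejection_db
# A passive filter ahead of the receiver adds its insertion loss directly
# to the cascade noise figure.
cascade_nf_db = receiver_nf_db + insertion_loss_db

print(f"Blocker after filter: {blocker_after_filter_dbm:.1f} dBm")
print(f"Cascade noise figure: {cascade_nf_db:.1f} dB "
      f"({insertion_loss_db:.1f} dB of sensitivity given up)")
```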
In any transceiver design, there are many architectural and design trade-offs to be made among performance parameters, cost and power dissipation. Many of the performance challenges faced in metering communications environments can be eased with the application of additional money or power (voltage and/or current). However, utilities already face difficult cost justifications for deploying large-scale fixed network systems, so additional cost may not be an option.
Additional power is a more complex argument. Many radios are designed to operate in both line-powered and battery-powered systems, and the power dissipation is driven by the battery-powered requirements. Most radio modules in electrical meters have a “last gasp” capability so they can operate for a limited time even if mains power is lost, and the radio power requirements are a major contributor to sizing the charge storage needed for this capability. Also, any increase in power consumption is multiplied by the scale of the deployment. Using the same five million node example above, a relatively benign current increase of 100 mA from a 2.5 V supply leads to an additional load of 1.25 MW, or 30 MWh per day.
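The fleet-level arithmetic behind those figures is straightforward, as the short calculation below shows, using the values quoted in the text.

```python
# Fleet-level impact of a 100 mA increase in radio supply current at 2.5 V,
# reproducing the figures in the text.

nodes = 5_000_000
delta_current_a = 0.100   # additional supply current per node (A)
supply_v = 2.5            # supply voltage (V)

extra_power_w = nodes * delta_current_a * supply_v
print(f"Additional load:   {extra_power_w / 1e6:.2f} MW")            # 1.25 MW
print(f"Additional energy: {extra_power_w * 24 / 1e6:.0f} MWh/day")  # 30 MWh per day
```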
A low-IF architecture can be advantageous for metering applications, where the fixed receiver often faces very strong electromagnetic fields that generate spurs at multiples of 50 or 60 Hz, which could affect a zero-IF architecture. The data rates on the control and measurement side of the meter will likely stay under 1 Mbps, so the associated filtering requirements are narrowband; a zero-IF architecture would therefore need to address the challenging issues of DC offset and CMOS flicker noise.
Example: How Phase Noise Impacts Blocking Performance
There are many parameters that influence the blocking performance of a transceiver. Usually, the narrowband blocking performance is set by the rejection profile of the receiver’s channel filter and the wideband blocking performance is set by the phase noise performance of the receiver’s local oscillator (LO).
Blocking is typically measured with a bit error rate (BER) test by finding the maximum blocker power that degrades the BER to 10⁻³ with the receiver’s input signal set 3 dB above its sensitivity level. The receiver’s wideband blocking performance can be estimated as:
MaxBlocker (dBm) = -174 dBm/Hz + 3 dB + Noise Figure - Phase Noise - 6 dB
where MaxBlocker is the maximum blocker level in dBm that degrades the BER to 10⁻³, Phase Noise is the receiver’s LO phase noise (in dBc/Hz) at the blocker offset and Noise Figure is the receiver’s noise figure. Table 1 contrasts maximum blocking levels derived from the phase noise performances of transceivers from various suppliers.
In this simplified example it is easy to see the effect LO phase noise has on the maximum blocking level. At a 10 MHz offset, an LO phase noise of -142 dBc/Hz results in a maximum blocking level of -30 dBm. In comparison, a transceiver with an LO phase noise of -130 dBc/Hz at the same offset would have a maximum blocking level of -42 dBm. For the transceiver with the poorer phase noise to achieve similar performance, an external RF band select or SAW filter would be required at the antenna input to improve the blocking. However, this increases the overall system cost and degrades the receiver’s noise figure due to the insertion loss of the filter.
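The estimate above can be reproduced with a few lines of code. The expression does not state a noise figure; the 5 dB value below is an assumption chosen so that the results match the -30 dBm and -42 dBm levels quoted.

```python
# Maximum blocker level from LO phase noise, using the expression above.
# The 5 dB noise figure is an assumption chosen so that the results match the
# -30 dBm and -42 dBm levels quoted in the text.

def max_blocker_dbm(phase_noise_dbc_hz, noise_figure_db=5.0):
    """MaxBlocker = -174 dBm/Hz + 3 dB + NF - PhaseNoise - 6 dB."""
    return -174.0 + 3.0 + noise_figure_db - phase_noise_dbc_hz - 6.0

for pn in (-142.0, -130.0):
    print(f"LO phase noise {pn:.0f} dBc/Hz at 10 MHz offset -> "
          f"max blocker ~ {max_blocker_dbm(pn):.0f} dBm")
```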
A receiver’s IIP2 and IIP3 performance must also be considered in wireless metering applications, especially in urban areas where high levels of in-band and out-of-band interferers can give rise to unwanted spectral products at the mixer output and thus limit the receiver’s usable dynamic range. As a benchmark of what may be required for resilience to two-tone interferers, the transceiver described in this example demonstrates a measured IIP2 of +18.5 dBm and an IIP3 of -11.5 dBm while drawing only 12.8 mA of supply current. Moreover, on-chip Reed-Solomon forward error correction (FEC), supported by the on-chip RISC processor, offers additional resilience to the burst errors that result from transient interference or rapid signal fading.
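To relate these figures to interferer levels, the sketch below applies the standard input-referred two-tone relations (IM3 = 3·Pin - 2·IIP3 and IM2 = 2·Pin - IIP2) to the IIP2 and IIP3 values quoted above; the -40 dBm tone level is an assumed example.

```python
# Two-tone intermodulation products for the IIP2/IIP3 figures quoted above,
# using the standard input-referred relations. The -40 dBm interferer level
# is an assumed example.

iip3_dbm = -11.5
iip2_dbm = 18.5
tone_dbm = -40.0  # each of two equal-level interfering tones (assumed)

im3_dbm = 3 * tone_dbm - 2 * iip3_dbm  # IM3 = 3*Pin - 2*IIP3
im2_dbm = 2 * tone_dbm - iip2_dbm      # IM2 = 2*Pin - IIP2
print(f"IM3 product: {im3_dbm:.1f} dBm")  # -97.0 dBm
print(f"IM2 product: {im2_dbm:.1f} dBm")  # -98.5 dBm
# If these products fall in the desired channel, they add directly to the
# noise seen by the demodulator.
```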
The importance of phase noise and its impact on the receiver’s dynamic range also has implications for clear channel assessment (CCA). In “listen before talk,” a contention-based MAC protocol, the meter’s receiver makes a power measurement on the channel it wants to use to transmit. A clear channel is one that the receiver “sees” with a power measurement near the thermal noise floor. A radio with poor linearity and/or phase noise may perceive a large interferer as elevated noise and consistently fail the CCA test; significant interference or inadequate linearity and phase noise may therefore prevent some nodes from communicating with the network at all.
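The sketch below captures the CCA decision conceptually. The noise bandwidth, noise figure and clear-channel margin are illustrative assumptions; real thresholds and dwell times come from the system’s MAC specification and regional regulations.

```python
import math

# Conceptual listen-before-talk (CCA) decision. Noise bandwidth, noise figure
# and the clear-channel margin are assumed, illustrative values.

NOISE_BW_HZ = 200_000
NOISE_FIGURE_DB = 5.0
CCA_MARGIN_DB = 10.0  # "clear" if measured power is within this margin of the floor

noise_floor_dbm = -174.0 + 10 * math.log10(NOISE_BW_HZ) + NOISE_FIGURE_DB  # ~ -116 dBm

def channel_is_clear(measured_power_dbm):
    """True if the measured channel power is near the thermal noise floor."""
    return measured_power_dbm <= noise_floor_dbm + CCA_MARGIN_DB

# A strong interferer, or reciprocal-mixing noise from a poor LO, raises the
# measured power and can make the channel look permanently busy.
for p in (-115.0, -95.0, -60.0):
    print(f"measured {p:.0f} dBm -> clear: {channel_is_clear(p)}")
```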
In conclusion, most transceivers for the metering infrastructure are designed to deliver maximum performance within cost and power constraints, which requires careful design and challenging tradeoffs across many parameters. System designers should ensure that, in addition to sensitivity and channel selectivity, the receiver’s nonlinear performance and interference resilience are analyzed as part of wireless network design for the smart grid.
Future Issues That Impact Today’s Deployment
Increase in interferers with AM components
The environment in which the deployed metering communication system operates will be constantly changing. The growing demand for higher bandwidth, whether for streaming applications or for relieving bottlenecks caused by high aggregate traffic concentration, means there will likely be more interferers using more complex modulation schemes, often with significant amplitude modulated (AM) components. As these challenging interferers proliferate, IIP2 and IIP3 specifications will become increasingly critical to radio and network performance.
Narrow band is getting narrower
Recently the FCC tightened the spectral efficiency requirements for the Part 90 licensed bands, which are used in many metering systems in the US. The Part 90 Private Land Mobile Services band was previously allocated in 12.5 and 25 kHz channels; an option for 6.25 kHz channels was added later. The 25 kHz channels are being phased out, and an additional spectral efficiency requirement now calls for a data rate of 4.8 kbps per 6.25 kHz of bandwidth. It is expected that, over time, additional spectral efficiency regulations will be put in place worldwide, forcing the use of more complex modulations or lower modulation indices.
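The implied spectral efficiency target is easy to compute, as the snippet below shows; the closing comment simply restates the modulation point made above.

```python
# Spectral efficiency implied by the narrowbanding requirement above.
required_bps = 4800.0
channel_hz = 6250.0
print(f"Required efficiency: {required_bps / channel_hz:.3f} bits/s/Hz")  # ~0.768 bits/s/Hz
# Reaching this in 6.25 kHz pushes designs toward more spectrally efficient
# modulations (e.g. 4-FSK) or lower modulation indices, as noted above.
```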
Increasing headroom for new protocol and security standards
The smart grid RF challenges can be summarized as anticipating current and future spectral challenges and making the proper tradeoffs between performance, power and cost of the RF components. The drive to adopt industry-wide protocols and network security standards forces additional future-proofing strategies that extend to all resources, spanning RF, processing and memory. One example is the anticipated transition from ZigBee SEP 1.0 to ZigBee SEP 2.0. Many smart meters are being deployed with certified ZigBee networks using the Smart Energy Profile 1.0 to communicate with future devices in the home. A new profile, ZigBee Smart Energy Profile 2.0, is under development; it supports IP-based addressing and redefines the layers above the MAC and PHY. Utilities have a strong desire to deploy meters that support the existing SEP 1.0 standard, yet can download the new profile when it is finalized. This is one of a number of examples of upgradeability to new standards driving an increased need for both processing and memory headroom.
Conclusion
The smart grid presents many technical, market and societal challenges. RF designers must aim for new optimizations, satisfying a need for industrial strength products that will last for decades and perform in an increasingly crowded spectrum. The payoff will be a grid infrastructure with the flexibility to adapt as the demand and supply of electricity evolve.
James Frame received his BS, MS and PhD degrees, all in electrical engineering, from the University of Illinois at Urbana/Champaign in 1995, 1995 and 1998, respectively. He has been with Analog Devices for five years and is currently a system applications engineer in the energy segment team. Prior to this he spent five years working as a senior development engineer at Advantest America.
Jeritt Kent received his BSEE and MEngEE degrees from the University of Idaho. He has been with Analog Devices for 11 years and is currently the RF and Energy Specialist for ADI’s Industrial and Instrumentation segment. He was previously employed at Allegro Microsystems, and before that, spent six years in CMOS ASIC design at American Microsystems Inc.
Ian Lawee received his BS and MSEE degrees in computer science from the University of Pennsylvania in 1988 and his MSEE and MBA degrees from the Massachusetts Institute of Technology in 1995. He is currently a marketing manager at Analog Devices, responsible for energy communications, power and measurement ICs. Prior to this position, he spent 14 years in the semiconductor test and measurement industry holding various positions in engineering and marketing management.