Lamp color temperature is rated in kelvins (K), and the term describes the “whiteness” of the lamp light. In incandescent lamps, color temperature is related to the physical temperature of the filament.

In fluorescent lamps, where no hot filament is involved, the color temperature rating describes the light as though the fluorescent discharge were operating at that temperature. The lower the kelvin rating, the “warmer” the color tone. Conversely, the higher the kelvin rating, the “cooler” the color tone.

Incandescent lamps provide pleasant color tones, bringing out the warm red flesh tones similar to those of natural light. This is particularly true for the “soft” and “natural” white lamps.

Tungsten filament halogen lamps have a gas filling and an inner coating that reflects heat. This keeps the filament hot with less electricity. Their light output is “whiter.” They are more expensive than the standard incandescent lamp.

Fluorescent lamps are available in a wide range of “coolness” to “warmth.” Warm fluorescent lamps bring out the red tones. Cool fluorescent lamps tend to give a person’s skin a pale appearance.

Fluorescent lamps might be marked daylight D (very cool), cool white CW (cool), white W (moderate), warm white WW (warm). These categories break down further into a deluxe X series (i.e., deluxe warm white—deluxe cool white), specification SP series, and specification deluxe SPX series.

Typical color temperature ratings for lamps are 2800K (incandescent), 3000K (halogen), 4100K (cool white fluorescent), and 5000K (fluorescent that simulates daylight). Note that a halogen lamp is “whiter” than a typical incandescent lamp. Catalogs from lamp manufacturers provide detailed information about lamp characteristics.

Fluorescent lamps and ballasts are a moving target. In recent years, there have been dramatic improvements in both lamps and electronic ballast efficiency.

First, the now-antiquated T12 fluorescent lamps (40 watts) were replaced by energy-saving T8 fluorescent lamps. These original T8 lamps are becoming a thing of the past. The latest high-efficiency, energy-saving T8 lamps (25 watts vs. 32 watts) have an expected 50% longer life than the original T8 lamps.

The newer T8 lamps use approximately 40% less energy than the older T12 lamps. At $0.06 per kWh, one manufacturer claims a savings of $27.00 per lamp over the life (30,000 hours) of the lamp. At $0.10 per kWh, the savings is said to be $45.00 per lamp over the life of the lamp. Using the newer T8 lamps on new installations and as replacements for existing installations makes the payback time pretty attractive.
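The claimed savings can be checked with simple arithmetic, assuming the quoted figures (a 25 W high-efficiency T8 replacing a 40 W T12 over the 30,000-hour rated life):

```python
# Verify the manufacturer's per-lamp savings claim: a 25 W high-efficiency T8
# replacing a 40 W T12, over the lamp's 30,000-hour rated life.

def lamp_savings(old_watts, new_watts, life_hours, dollars_per_kwh):
    """Dollar savings per lamp over its rated life."""
    kwh_saved = (old_watts - new_watts) * life_hours / 1000
    return kwh_saved * dollars_per_kwh

print(round(lamp_savings(40, 25, 30_000, 0.06), 2))  # 27.0 -- the $27.00 claim
print(round(lamp_savings(40, 25, 30_000, 0.10), 2))  # 45.0 -- the $45.00 claim
```

The 15 W difference over 30,000 hours is 450 kWh per lamp, which reproduces both quoted figures exactly.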

One electronic ballast can operate up to four lamps, whereas the older style magnetic ballast could operate only two lamps. For a three- or four-lamp luminaire, one ballast instead of two results in quite a saving. Some electronic ballasts can operate six lamps.

Hard to believe! You now can have reduced power consumption and increased light output using electronic ballasts. Today’s high-efficiency ballasts are available with efficiencies of 98% to 99%.

The only way you can stay on top of these rapid improvements is to check out the Web sites of the various lamp and ballast manufacturers. Today’s magnetic and electronic ballasts handle most of the fluorescent lamp types sold, including standard and energy-saving preheat, rapid start, slimline, high output, and very high output. Again, check the label on the ballast.


The market for magnetic (core and coil) ballasts is shrinking! The National Appliance Energy Conservation Amendments of 1988 (Public Law 100-357) prohibited manufacturers from producing ballasts having a power factor of less than 90%.

Ballasts that meet or exceed the federal standards for energy savings are marked with a letter “E” in a circle. Dimming ballasts and ballasts designed specifically for residential use were exempted.

Today’s electronic ballasts are much lighter in weight and considerably more energy efficient than older style magnetic ballasts (core and coil). Energy saving ballasts might cost more initially, but the payback is in the energy consumption saving over time.

Old-style fluorescent ballasts get very warm and might consume 14 to 16 watts, whereas an electronic ballast might consume 8 to 10 watts. Combined with energy-saving fluorescent lamps that use 32 or 34 watts instead of 40 watts, energy savings are considerable. You are buying light, not heat.

When installing fluorescent luminaires, check the label on the ballast that shows the actual volt amperes that the ballast and lamp will draw in combination. Do not attempt to use lamp wattage only when making load calculations because this could lead to an overloaded branch circuit.

For example, a high-efficiency ballast might draw a total of 42 volt-amperes, whereas an old-style magnetic ballast might draw 102 volt-amperes.
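To see why this matters for load calculations, consider a hypothetical 15-A, 120-V branch circuit (1800 VA) filled with single-lamp luminaires using the illustrative VA figures above:

```python
# Branch-circuit loading must use ballast nameplate volt-amperes, not lamp watts.
import math

circuit_va = 15 * 120      # 15 A x 120 V = 1800 VA available

old_ballast_va = 102       # old-style magnetic ballast, nameplate VA
new_ballast_va = 42        # high-efficiency ballast, nameplate VA
lamp_watts = 40            # lamp wattage alone -- NOT a valid basis for load calcs

print(math.floor(circuit_va / old_ballast_va))  # 17 luminaires maximum
print(math.floor(circuit_va / new_ballast_va))  # 42 luminaires maximum
print(math.floor(circuit_va / lamp_watts))      # 45 -- would overload the circuit
```

This sketch ignores the 80% continuous-load limitation; the point is the VA-versus-watts comparison.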

The higher the power factor rating of a ballast, the more energy efficient. Look for a power factor rating in the mid to high 90s.

Various line currents, volt-amperes, wattages, and overall power factors for various single-lamp fluorescent ballasts:

Ballast   Line Current (A)   Line Voltage (V)   Line Volt-Amperes   Lamp Wattage   Line Power Factor
No. 1     0.35               120                 42                  40             0.95 (95%)
No. 2     0.45               120                 54                  40             0.74 (74%)
No. 3     0.55               120                 66                  40             0.61 (61%)
No. 4     0.85               120                102                  40             0.39 (39%)
No. 5     0.22               120/277             26                  30             0.99 (99%)


Be sure to use the proper lamp for a given ballast. Mismatching a lamp and ballast may result in poor starting and poor performance, as well as shortened lamp and/or ballast life.

The manufacturer’s ballast and/or lamp warranty may be null and void. T8 lamps are designed to be used interchangeably on magnetic or electronic rapid-start ballasts or electronic instant-start ballasts. Lamp life is reduced slightly when used with an instant-start ballast.

Operating a ballast at an over-voltage condition will cause it to run hot and shorten its life. Operating a ballast at an under-voltage condition can result in premature lamp failure and unreliable starting.

Most ballasts today will operate satisfactorily within a range of ±5% to ±7% of their rated voltage. The higher quality CBM-certified ballasts will operate satisfactorily within a range of ±10%.

Ballast Sound Rating
Most ballasts will hum, some more than others. Ballasts are sound rated and are marked with letters “A” through “F.” “A” is the quietest, and “F” is the noisiest. Look for an “A” or “B” sound rating for residential applications.

Magnetic ballasts (core and coil) hum when the metal laminations vibrate because of the alternating current reversals. This hum can be magnified by the luminaire itself, and/or the surface the luminaire is mounted on. Electronic ballasts have little, if any, hum.

CAUTION: Do not insert spacers, washers, or shims between a ballast and the luminaire to make the ballast quieter. This will cause the ballast to run much hotter and could result in shortened ballast life and a possible fire hazard.

Instead, replace the noisy ballast with a quiet, sound-rated one. Sometimes checking and tightening the many nuts, bolts, and screws of the luminaire will solve the problem.


Preheat ballasts are connected in a simple series circuit. They are easily identified because they have a “starter.” One type of starter is automatic and looks like a small roll of Lifesavers with two “buttons” on one end.

Another type of starter is a manual “ON–OFF” switch that has a momentary “make” position just beyond the “ON” position. When you push the switch on and hold it there for a few seconds, the lamp filaments glow. When the switch is released, the start contacts open, an arc is initiated within the lamp, and the lamp lights up.

Preheat lamps have two pins on each end. Preheat lamps and ballasts are not used for dimming applications.

Rapid Start.
Probably the most common type used today. Rapid start ballasts/lamps do not require a starter. The lamps start in less than 1 second. For reliable starting, ballast manufacturers recommend that there be a grounded metal surface within ½ in. (12.7 mm) of the lamp and running the full length of the lamp, that the ballast be grounded, and that the supply circuit originate from a grounded system.

T5 rapid start lamps do not require a grounded surface for reliable starting. Rapid start lamps have two pins on each end. Rapid start lamps can be dimmed using a special dimming ballast.

Instant Start.
Instant start lamps do not require a starter. Instant start ballasts provide a high-voltage “kick” to start the lamp instantly. They require special fluorescent lamps that do not need preheating of the lamp filaments.

Because instant start fluorescent lamps are started by brute force, they have a shorter life (as much as 40% less) than rapid start lamps when older style magnetic ballasts are used. With electronic ballasts, satisfactory lamp life can be expected.

Instant start lamps have one pin on each end. Instant start ballasts/lamps cannot be used for dimming applications.

Dimming Ballasts.
Special dimming ballasts and dimmers are needed for controlling the light output of fluorescent lamps. Rapid start lamps are used. Incandescent lamp dimmers cannot be used to control fluorescent lamps. An exception is that dimmers marked “Incandescent Only” can be used to dim compact fluorescent lamps that are specifically designed to be dimmable.


NEC defines a luminaire as a complete lighting unit consisting of a light source such as a lamp or lamps, together with the parts designed to position the light source and connect it to the power supply.*
Luminaire is the international term for “lighting fixture” and is used throughout the NEC.

There are literally thousands of different types of luminaires from which to choose to satisfy certain needs, wants, desires, space requirements, and, last but not least, price considerations. Whether the luminaire is incandescent or fluorescent, the basic categories are surface mounted, recessed mounted, and suspended ceiling mounted.

The Code Requirements
Article 410 sets forth the requirements for installing luminaires. The electrician must “meet Code” with regard to mounting, supporting, grounding, live-parts exposure, insulation clearances, supply
conductor types, maximum lamp wattages, and so forth.

Probably the two biggest contributing factors to fires caused by luminaires are installing lamps of higher wattage than the luminaire has been designed for, and burying recessed luminaires under thermal insulation when the luminaire has not been designed for such an installation.

Mountings for basic categories of luminaires.

Fluorescent                Incandescent
• Surface                     • Surface
• Recessed                   • Recessed
• Suspended Ceiling   • Suspended Ceiling

Nationally Recognized Testing Laboratories (NRTLs) test, list, and label luminaires that conform to the applicable UL safety standards. Always install luminaires that bear the label of a qualified NRTL.

In addition to the NEC, the UL Electrical Construction Materials Directory (Green Book), the UL Guide Information for Electrical Equipment (White Book), and manufacturers’ catalogs and literature are excellent sources of information about luminaires.

NEC 110.3(B) states that Listed or labeled equipment shall be installed and used in accordance with any instructions included in the listing or labeling.* It is important to carefully read the label and any instructions furnished with a luminaire.  Most Code requirements can be met by simply following this information. Here are a few examples of label and instruction information:

• Maximum lamp wattage
• Type of lamp
• For supply connections, use wire rated for at least 90°C
• Type-IC
• Type Non-IC
• Suitable for wet locations
• Thermally protected


No single cable characteristic should be emphasized to the serious detriment of others. A balance of cable characteristics, as well as good installation, design, and construction practices, is necessary to provide a reliable cable system.

Service conditions
a) Cables should be suitable for all environmental conditions that occur in the areas where they are installed.

b) Cable operating temperatures in substations are normally based on 40 °C ambient air or 20 °C ambient earth.

Special considerations should be given to cable installed in areas where ambient temperatures differ from these values.

c) Cables may be direct buried, installed in duct banks, conduits, and trenches below grade, or in cable trays, conduits, and wireways above ground. Cable should be suitable for operation in wet and dry locations.

High-voltage power cables are designed to supply power to substation utilization devices, other substations, or customer systems rated higher than 1000 V.

NOTE — Oil-filled and gas-insulated cables are excluded from this definition.

Low-voltage power cables are designed to supply power to utilization devices of the substation auxiliary systems rated 1000 V or less.

Control cables are applied at relatively low current levels or used for intermittent operation to change the operating status of a utilization device of the substation auxiliary system.

NOTE — Leads from current and voltage transformers are considered control cables, since in most cases they are used in relay protection circuits. However, when current transformer leads are in a primary voltage area exceeding 600 V, they should be protected as required by the NESC, Rule 150.

As used in this document, instrumentation cables consist of cables for Supervisory Control and Data Acquisition (SCADA) systems or event recorders, and thermocouple and resistance temperature detector cables.

Instrumentation cables are used for transmitting variable current or voltage signals (analog) or transmitting coded information (digital).


The selection of the cable voltage rating is based on the service conditions of 2.1, the electrical circuit frequency, phasing, and grounding configuration, and the steady-state and transient conductor voltages with respect to ground and other energized conductors.

A voltage rating has been assigned to each standard configuration of shield and insulation material and thickness in NEMA WC 3-1980, NEMA WC 5-1973, NEMA WC 7-1988, NEMA WC 8-1988, and in AEIC CS5-1987, AEIC CS6-1987, and AEIC CS7-1987.

The selected voltage rating must result in a cable insulation system that maintains the energized conductor voltage, without insulation breakdown, under normal operating conditions.

For high-voltage cables, it is usual practice to select an insulation system that has a voltage rating equal to or greater than the expected continuous phase-to-phase conductor voltage. The NEMA standards provide for a cable voltage rating that is only 95% of the actual continuous voltage.

For solidly grounded systems, it is usual to select the 100 Percent Insulation Level, but the 133 Percent Insulation Level is often selected where additional insulation thickness is desired. The 133 Percent Insulation Level is also applied on systems without automatic ground fault protection.
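The selection logic can be sketched as a simple rule. The function below is illustrative only; the 100%/133% levels come from the standard ICEA/NEMA insulation-level definitions, and the prompt-clearing criterion is the usual industry convention:

```python
def insulation_level_percent(solidly_grounded, ground_faults_cleared_promptly):
    """Illustrative rule of thumb for picking a cable insulation level.

    100%: solidly grounded systems where ground faults clear promptly
          (conventionally, within about one minute).
    133%: systems lacking prompt automatic ground-fault clearing, or
          where extra insulation thickness is desired.
    """
    if solidly_grounded and ground_faults_cleared_promptly:
        return 100
    return 133

print(insulation_level_percent(True, True))    # 100
print(insulation_level_percent(True, False))   # 133
```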

Distribution substations often utilize cable for the distribution circuits from the substation secondary switch-yard (substation getaways). The insulation system selected for this distribution cable may have a voltage rating that is a class above the minimum NEMA rating for the actual circuit voltage and ground fault protection, because it is believed that the additional insulation will result in a lower probability of insulation failure.

Research conducted by the Electric Power Research Institute has led to cable construction recommendations published in EPRI EL-6271 [B10]. The EPRI recommendations for cable insulation systems have insulation thicknesses that are the same as those of the NEMA and AEIC standards.

For power and control cables applied at 600 V and below, some engineers use 1000 V-rated insulation because of past insulation failures caused by inductive voltage spikes from de-energizing electromechanical devices, e.g., relays, spring winding motors.

The improved dielectric strength of today's insulation materials prompted some utilities to return to using 600 V rated insulation for this application. Low voltage power and control cable rated 600 V and 1000 V is currently in use.

The selection of the power cable insulation system also includes consideration of cost and performance under normal and abnormal conditions. Dielectric losses, resistance to flame propagation, and gas generation when burned are the most common performance considerations.


The following site-dependent parameters have been found to have substantial impact on the grid design: maximum grid current IG, fault duration tf, shock duration ts, soil resistivity ρ, surface material resistivity (ρs), and grid geometry.

Several parameters define the geometry of the grid, but the area of the grounding system, the conductor spacing, and the depth of the ground grid have the most impact on the mesh voltage, while parameters such as the conductor diameter and the thickness of the surfacing material have less impact.

Fault duration (tf) and shock duration (ts)
The fault duration and shock duration are normally assumed equal, unless the fault duration is the sum of successive shocks, such as from reclosures. The selection of tf should reflect fast clearing time for transmission substations and slow clearing times for distribution and industrial substations.

The choice of tf and ts should result in the most pessimistic combination of fault current decrement factor and allowable body current. Typical values for tf and ts range from 0.25 s to 1.0 s.

Soil resistivity (ρ)
The grid resistance and the voltage gradients within a substation are directly dependent on the soil resistivity. Because in reality soil resistivity will vary horizontally as well as vertically, sufficient data must be gathered for a substation yard.

Because the equations given for Em and Es assume uniform soil resistivity, they can employ only a single value for the resistivity.

Resistivity of surface layer (ρs)
A layer of surface material helps in limiting the body current by adding resistance to the equivalent body resistance.
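The benefit of a surface layer can be quantified with the IEEE Std 80 tolerable-touch-voltage expression for a 50 kg person; the site values below are illustrative:

```python
import math

def surface_derating(rho, rho_s, h_s):
    """IEEE 80 empirical surface-layer derating factor Cs."""
    return 1 - 0.09 * (1 - rho / rho_s) / (2 * h_s + 0.09)

def touch_limit_50kg(rho, rho_s, h_s, t_s):
    """Tolerable touch voltage (V) for a 50 kg person, shock duration t_s (s)."""
    cs = surface_derating(rho, rho_s, h_s)
    return (1000 + 1.5 * cs * rho_s) * 0.116 / math.sqrt(t_s)

# Illustrative site: 100 ohm-m soil, 3000 ohm-m crushed rock 0.1 m thick,
# 0.5 s shock duration.
print(round(touch_limit_50kg(100, 3000, 0.1, 0.5)))   # 681 V with surface layer
print(round(touch_limit_50kg(100, 100, 0.1, 0.5)))    # 189 V on bare soil (Cs = 1)
```

The high-resistivity layer raises the tolerable touch voltage several-fold, which is exactly the body-current-limiting effect described above.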

Grid geometry
In general, the limitations on the physical parameters of a ground grid are based on economics and the physical limitations of installing the grid. The economic limitation is obvious: it is impractical to install a copper-plate grounding system.

Clause 18 describes some of the limitations encountered in the installation of a grid. For example, the digging of the trenches into which the conductor material is laid limits the conductor spacing to approximately 2 m or more.

Typical conductor spacings range from 3 m to 15 m, while typical grid depths range from 0.5 m to 1.5 m. For the typical conductors ranging from 2/0 AWG (67 mm2) to 500 kcmil (253 mm2), the conductor diameter has negligible effect on the mesh voltage.

The area of the grounding system is the single most important geometrical factor in determining the resistance of the grid. The larger the area grounded, the lower the grid resistance and, thus, the lower the GPR.
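This area effect shows up directly in the simplified grid-resistance estimate from IEEE Std 80 (uniform soil, grid buried near the surface): Rg ≈ ρ/(4r) + ρ/LT, where r is the radius of a circle with the same area as the grid and LT is the total buried conductor length. The numbers below are illustrative:

```python
import math

def grid_resistance(rho, area, total_conductor_length):
    """Simplified IEEE 80 estimate: Rg = rho/(4r) + rho/L, with r = sqrt(A/pi)."""
    r = math.sqrt(area / math.pi)
    return rho / (4 * r) + rho / total_conductor_length

# 100 ohm-m soil, 500 m of buried conductor: quadrupling the grid area
# substantially lowers the grid resistance (and thus the GPR).
print(round(grid_resistance(100, 900, 500), 2))    # 1.68 ohms, 30 m x 30 m grid
print(round(grid_resistance(100, 3600, 500), 2))   # 0.94 ohms, 60 m x 60 m grid
```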


What Are The Benefits Of Installing Capacitors?

Power capacitors provide several benefits to power systems. These include power factor correction, system voltage support, increased system capacity, reduction of power system losses, reactive power support, and power oscillation damping.

Power Factor Correction.
In general, the efficiency of power generation, transmission, and distribution equipment is improved when it is operated near unity power factor. The least expensive way to achieve near unity power factor is with the application of capacitors.

Capacitors provide a static source of leading reactive current and can be installed close to the load. Thus, the maximum efficiency may be realized by reducing the magnetizing (lagging) current requirements throughout the system.

System Voltage Support.
Power systems are predominantly inductive in nature, and during peak load conditions or during system contingencies there can be a significant voltage drop between the voltage source and the load. Applying capacitors to a power system raises the voltage from the application point back toward the source, and also beyond the application point in a radial system.

The actual percentage increase of the system voltage is dependent upon the inductive reactance of the system at the point of application of the capacitors. The short-circuit impedance at that point is approximately the same as the inductive reactance; therefore, the 3-phase short-circuit current at that location can be used to determine the approximate voltage rise.
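That approximation can be applied directly: percent voltage rise ≈ capacitor kvar ÷ three-phase short-circuit kVA × 100. The bus values below are illustrative:

```python
import math

def short_circuit_kva(kv_ll, sc_amps):
    """Three-phase short-circuit kVA from line-line kV and fault current."""
    return math.sqrt(3) * kv_ll * sc_amps

def percent_voltage_rise(cap_kvar, sc_kva):
    """Approximate % voltage rise from adding shunt capacitors at a bus."""
    return 100 * cap_kvar / sc_kva

# Illustrative 12.47 kV bus with 8000 A of available three-phase fault current.
sc = short_circuit_kva(12.47, 8000)              # about 172,789 kVA
print(round(percent_voltage_rise(1200, sc), 2))  # 0.69 % for a 1200 kvar bank
```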

Increased System Capacity.
The application of shunt or series capacitors can affect the power system capacity. Application of shunt capacitors reduces the inductive reactive current on the power system, and thus reduces the system kVA loading. This can have the effect of increasing system capacity to serve additional load.

Series capacitors are typically used to increase the power carrying capability of transmission lines. Series capacitors insert a voltage in series with the transmission line that is opposite in polarity to the voltage drop across the line, which decreases the apparent reactance and increases the power transfer capability of the line.

Power System Loss Reduction.
The installation of capacitors can reduce the current flow in a power system. Since losses are proportional to the square of the current, a reduction in current will lead to reduced system losses.
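A quick illustration: correcting a feeder from 0.70 to 0.95 power factor at constant real power reduces line current in proportion, and losses by the square of that ratio (the power factor values are illustrative):

```python
def loss_reduction_pct(pf_old, pf_new):
    """Percent reduction in I^2*R losses when PF is corrected at constant kW.

    For the same real power, current scales as 1/PF, so losses scale as (1/PF)^2.
    """
    current_ratio = pf_old / pf_new        # new current / old current
    return 100 * (1 - current_ratio ** 2)

print(round(loss_reduction_pct(0.70, 0.95), 1))  # 45.7 % fewer losses
```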

Reactive Power Support.
Capacitors can help support steady-state stability limits and reactive power requirements at generators.

Power Oscillation Damping.
Controlled series capacitors can provide an active damping for power oscillations that many large power systems experience. They can also provide support after significant disturbances to the power system and allow the system to remain in synchronous operation.


NEC rules for the ends of a wire differ from those for the middle. (Adapted from Practical Electrical Wiring, 20th edition, © Park Publishing, 2008, all rights reserved).

The key to applying these rules, and the new NEC Example D3(a) in Annex D on this topic is to remember that the end of a wire is different from its middle. Special rules apply to calculating wire sizes based on how the terminations are expected to function.

Entirely different rules aim at assuring that wires, over their length, don’t overheat under prevailing loading and conditions of use. These two sets of rules have nothing to do with each other—they are based on entirely different thermodynamic considerations.

Some of the calculations use, purely by coincidence, identical multiplying factors. Sometimes it is the termination requirements that produce the largest wire, and sometimes it is the requirements to prevent conductor overheating.

You can’t tell until you complete all the calculations and then make a comparison. Until you are accustomed to doing these calculations, do them on separate pieces of paper.

Current is always related to heat.
Every conductor has some resistance and as you increase the current, you increase the amount of heat, all other things being equal. In fact, as is covered in Sec. 110 of Div. 1 and elsewhere, you increase the heat by the square of the current.
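That square-law relationship (P = I²R) is easy to check numerically; the resistance value is illustrative:

```python
def heat_watts(current_amps, resistance_ohms):
    """Power dissipated as heat in a conductor: P = I^2 * R."""
    return current_amps ** 2 * resistance_ohms

r = 0.1                      # ohms of conductor resistance (illustrative)
print(heat_watts(10, r))     # 10.0 W
print(heat_watts(20, r))     # 40.0 W -- doubling the current quadruples the heat
```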

The ampacity tables in the NEC reflect heating in another way. As the reproduction of NEC Table 310.16 (see Table 18 in Div. 12) shows, the tables tell you how much current you can safely (meaning without overheating the insulation) and continuously draw through a conductor under the prevailing conditions—which is essentially the definition of ampacity in NEC Article 100: The current in amperes that a conductor can carry continuously under the conditions of use without exceeding its temperature rating.

Ampacity tables show how conductors respond to heat.
The ampacity tables (such as Table 18 in Div. 1) do much more than what is described in the previous paragraph. They show, by implication, a current value below which a wire will run at or below a certain temperature limit.

Remember, conductor heating comes from current flowing through metal arranged in a specified geometry (generally, a long flexible cylinder of specified diameter and metallic content). In other words, for the purposes of thinking about how hot a wire is going to be running, you can ignore the different insulation styles.

As a learning tool, let’s make this into a “rule” and then see how the NEC makes use of it: A conductor, regardless of its insulation type, runs at or below the temperature limit indicated in an ampacity column when, after adjustment for the conditions of use, it is carrying equal or less current than the ampacity limit in that column.

For example, a 90°C THHN 10 AWG conductor has an ampacity of 40 amps. Our “rule” tells us that when 10 AWG copper conductors carry 40 amps under normal-use conditions, they will reach a worst-case, steady-state temperature of 90°C just below the insulation.

Meanwhile, the ampacity definition tells us that no matter how long this temperature continues, it won’t damage the wire. That’s not true of the device, however. If a wire on a wiring device gets too hot for too long, it could lead to loss of temper of the metal parts inside, cause instability of nonmetallic parts, and result in unreliable performance of overcurrent devices due to calibration shift.

Termination rules protect devices.
Because of the risk to devices from overheating, manufacturers set temperature limits for the conductors you put on their terminals. Consider that a metal-to-metal connection that is sound in the electrical sense probably conducts heat as efficiently as it conducts current. If you terminate a 90°C conductor on a circuit breaker, and the conductor reaches 90°C (almost the boiling point of water), the inside of the breaker won’t be much below that temperature.

Expecting that breaker to perform reliably with even a 75°C heat source bolted to it is expecting a lot. Testing laboratories take into account the vulnerability of devices to overheating, and there have been listing restrictions for many, many years to prevent use of wires that would cause device overheating. These restrictions now appear in the NEC.

Smaller devices (generally 100 amp and lower, or with termination provisions for 1 AWG or smaller wire) historically weren’t assumed to operate with wires rated over 60°C such as TW. Higher-rated equipment assumed 75°C conductors but generally no higher for 600-volt equipment and below. This is still true today for the larger equipment. (Note that medium-voltage equipment, over 600 volts, has larger internal spacings and the usual allowance is for 90°C, but that equipment will not be further considered at this point.)

Today, smaller equipment increasingly has a “60/75°C” rating, which means it will function properly even where the conductors are sized based on the 75°C column of Table 18, Div. 1.
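Putting the two rule sets together: the usable ampacity is the smaller of (a) the conductor ampacity after adjustment for the conditions of use and (b) the termination-column ampacity. A sketch, using 10 AWG copper values in the style of the ampacity tables and hypothetical correction factors:

```python
# Illustrative "both rules must pass" check for a 10 AWG copper THHN conductor.
# Column values are examples in the style of the NEC ampacity tables.

ampacity_90C = 40    # 90 degC column: starting point for the conductor-heating rule
ampacity_75C = 35    # 75 degC column: limit imposed by 60/75 degC terminations

def allowable_load(correction=1.0, adjustment=1.0):
    """Usable ampacity = min(adjusted 90C ampacity, termination-column ampacity)."""
    adjusted = ampacity_90C * correction * adjustment
    return min(adjusted, ampacity_75C)

print(allowable_load())                     # 35 -- the terminations govern
print(round(allowable_load(0.91, 0.8), 2))  # 29.12 -- conditions of use govern
```

Whichever rule yields the smaller number governs, which is why both calculations must be completed before a comparison is made.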


There can be completely different definitions for power quality, depending on one’s frame of reference. For example, a utility may define power quality as reliability and show statistics demonstrating that its system is 99.98 percent reliable.

Criteria established by regulatory agencies are usually in this vein. A manufacturer of load equipment may define power quality as those characteristics of the power supply that enable the equipment to work properly. These characteristics can be very different for different criteria.

Power quality is ultimately a consumer-driven issue, and the end user’s point of reference takes precedence. Therefore, the following definition of a power quality problem is used:

"Any power problem manifested in voltage, current, or frequency deviations that results in failure or misoperation of customer equipment."

There are many misunderstandings regarding the causes of power quality problems. The utility’s and
customer’s perspectives are often much different. While both tend to blame about two-thirds of the events on natural phenomena (e.g., lightning), customers, much more frequently than utility personnel, think that the utility is at fault.

When there is a power problem with a piece of equipment, end users may be quick to complain to the utility of an “outage” or “glitch” that has caused the problem. However, the utility records may indicate no abnormal events on the feed to the customer.

We recently investigated a case where the end-use equipment was knocked off line 30 times in 9 months, but there were only five operations on the utility substation breaker. It must be realized that there are many events resulting in end-user problems that never show up in the utility statistics.

One example is capacitor switching, which is quite common and normal on the utility system, but can cause transient overvoltages that disrupt manufacturing machinery.

Another example is a momentary fault elsewhere in the system that causes the voltage to sag briefly at the location of the customer in question. This might cause an adjustable-speed drive or a distributed
generator to trip off, but the utility will have no indication that anything was amiss on the feeder unless it has a power quality monitor installed.

In addition to real power quality problems, there are also perceived power quality problems that may actually be related to hardware, software, or control system malfunctions. Electronic components can degrade over time due to repeated transient voltages and eventually fail due to a relatively low magnitude event.

Thus, it is sometimes difficult to associate a failure with a specific cause. It is becoming more common that designers of control software for microprocessor-based equipment have an incomplete knowledge of how power systems operate and do not anticipate all types of malfunction events.

Thus, a device can misbehave because of a deficiency in the embedded software. This is particularly common with early versions of new computer-controlled load equipment.

One of the main objectives of this site is to educate utilities, end users, and equipment suppliers alike to reduce the frequency of malfunctions caused by software deficiencies.

In response to this growing concern for power quality, electric utilities have programs that help them respond to customer concerns. The philosophy of these programs ranges from reactive, where the utility responds to customer complaints, to proactive, where the utility is involved in educating the customer and promoting services that can help develop solutions to power quality problems.

The regulatory issues facing utilities may play an important role in how their programs are structured. Since power quality problems often involve interactions between the supply system and the customer facility and equipment, regulators should make sure that distribution companies have incentives to work with customers and help customers solve these problems.

The economics involved in solving a power quality problem must also be included in the analysis. It is not always economical to eliminate power quality variations on the supply side.

In many cases, the optimal solution to a problem may involve making a particular piece of sensitive equipment less sensitive to power quality variations. The level of power quality required is that level which will result in proper operation of the equipment at a particular facility.

Power quality, like quality in other goods and services, is difficult to quantify. There is no single accepted definition of quality power. There are standards for voltage and other technical criteria that may be measured, but the ultimate measure of power quality is determined by the performance and productivity of end-user equipment.

If the electric power is inadequate for those needs, then the “quality” is lacking. Perhaps nothing has been more symbolic of a mismatch in the power delivery system and consumer technology than the “blinking clock” phenomenon.

Clock designers created the blinking display of a digital clock to warn of possible incorrect time after loss of power and inadvertently created one of the first power quality monitors. It has made the homeowner aware that there are numerous minor disturbances occurring throughout the power delivery system that may have no ill effects other than to be detected by a clock.

Many appliances now have a built-in clock, so the average household may have about a dozen clocks that must be reset when there is a brief interruption. Older-technology motor-driven clocks would simply lose a few seconds during minor disturbances and then promptly come back into synchronism.


Electric power quality has emerged as a major area of electric power engineering. The predominant reason for this emergence is the increase in sensitivity of end-use equipment. The material that follows treats the various aspects of power quality as they impact utility companies and their customers, including (1) grounding, (2) voltage sags, (3) harmonics, (4) voltage flicker, and (5) long-term monitoring.

While these five topics do not cover all aspects of power quality, they provide the reader with a broad-based overview that should serve to increase overall understanding of problems related to power quality.

Proper grounding of equipment is essential for the safe and proper operation of sensitive electronic equipment. In times past, some thought that equipment grounding as specified in the United States by the National Electrical Code conflicted with the methods needed to ensure power quality.

Since those early times, significant evidence has emerged to support the position that, in the vast majority of instances, grounding according to the National Electrical Code is essential both to ensure proper, trouble-free equipment operation and to ensure the safety of associated personnel.

Other than poor grounding practices, voltage sags due primarily to system faults are probably the most significant of all power quality problems. Voltage sags due to short circuits are often seen at distances very remote from the fault point, thereby affecting a potentially large number of utility customers.

Coupled with the wide-area impact of a fault event is the fact that there is no effective means of preventing all power system faults. End-use equipment will therefore be exposed to short periods of reduced voltage that may or may not lead to malfunctions.
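As a concrete illustration of the reduced-voltage exposure described above, the following sketch flags sag cycles by computing the RMS voltage of each cycle and comparing it to a 0.9 per-unit threshold, a commonly cited sag boundary. The sample rate, waveform, sag depth, and threshold are illustrative assumptions, not values from this text.

```python
import math

SAMPLES_PER_CYCLE = 64  # assumed sampling density per fundamental cycle

def cycle_rms(samples):
    """RMS of one cycle of instantaneous voltage samples."""
    return math.sqrt(sum(v * v for v in samples) / len(samples))

def make_waveform(n_cycles, sag_cycles, sag_depth_pu):
    """1.0-pu sine wave with a rectangular sag over the given cycle indices."""
    samples = []
    for c in range(n_cycles):
        amp = sag_depth_pu if c in sag_cycles else 1.0
        for k in range(SAMPLES_PER_CYCLE):
            # peak = amp * sqrt(2) so that the per-cycle RMS equals amp
            samples.append(amp * math.sqrt(2) * math.sin(2 * math.pi * k / SAMPLES_PER_CYCLE))
    return samples

def sag_cycles_detected(samples, threshold_pu=0.9):
    """Return the indices of cycles whose RMS falls below the threshold."""
    hits = []
    for c in range(len(samples) // SAMPLES_PER_CYCLE):
        block = samples[c * SAMPLES_PER_CYCLE:(c + 1) * SAMPLES_PER_CYCLE]
        if cycle_rms(block) < threshold_pu:
            hits.append(c)
    return hits

# Ten cycles of nominal voltage with a three-cycle sag to 0.6 pu.
wave = make_waveform(10, sag_cycles={4, 5, 6}, sag_depth_pu=0.6)
print(sag_cycles_detected(wave))  # [4, 5, 6]
```

A real monitor would track RMS on a sliding half-cycle or one-cycle window and also record sag duration; this sketch keeps only the threshold comparison.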

Like voltage sags, the concerns associated with flicker are also related to voltage variations. Voltage flicker, however, is tied to the likelihood that a human observer will become annoyed by variations in the output of a lamp when the supply voltage amplitude varies.

In most cases, voltage flicker considers (at least approximately) periodic voltage fluctuations of small amplitude with frequencies below about 30–35 Hz. Human perception, rather than equipment malfunction, is the relevant factor when considering voltage flicker.
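The small periodic fluctuations described above can be modeled as low-frequency amplitude modulation of the supply voltage. The sketch below is a minimal illustration of that model; the 8.8 Hz modulation frequency (often cited as near the peak of human flicker sensitivity) and the 1% modulation depth are assumptions chosen for demonstration, not values from this text.

```python
import math

def modulated_voltage(t, v_nom=1.0, f_carrier=60.0, f_flicker=8.8, depth=0.01):
    """Instantaneous voltage: a 60 Hz carrier whose amplitude is slowly
    modulated at the flicker frequency.  `depth` is the fractional
    modulation (0.01 = 1%)."""
    envelope = v_nom * (1.0 + depth * math.sin(2 * math.pi * f_flicker * t))
    return envelope * math.sqrt(2) * math.sin(2 * math.pi * f_carrier * t)

def relative_voltage_change(depth):
    """Peak-to-peak envelope excursion as a fraction of nominal (delta-V/V),
    the quantity usually plotted against frequency on flicker curves."""
    return 2.0 * depth

print(relative_voltage_change(0.01))  # 0.02, i.e., a 2% peak-to-peak fluctuation
```

Flicker severity in practice is assessed with a standardized flickermeter rather than a raw delta-V/V figure; the point of the sketch is only the modulation picture.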

For many periodic waveform variations (in either voltage or current), the power of classical Fourier series theory can be applied. The terms in the Fourier series are called harmonics; relevant harmonic terms may have frequencies above or below the fundamental power system frequency.

In most cases, nonfundamental-frequency currents drawn by equipment produce voltages in the power delivery system at those same frequencies. This voltage distortion is present in the supply to other end-use equipment and can lead to improper operation of that equipment.
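The Fourier-series view above can be sketched numerically: given exactly one cycle of a periodic current, a discrete Fourier transform evaluated at integer multiples of the fundamental yields the harmonic magnitudes, from which total harmonic distortion (THD) follows. The 20% third-harmonic test signal below is an illustrative assumption.

```python
import cmath
import math

def harmonic_magnitudes(samples, max_harmonic=5):
    """Peak amplitude of each harmonic h = 1..max_harmonic.
    Assumes `samples` spans exactly one fundamental cycle."""
    n = len(samples)
    mags = {}
    for h in range(1, max_harmonic + 1):
        # DFT bin h corresponds to h times the fundamental frequency
        X = sum(samples[k] * cmath.exp(-2j * math.pi * h * k / n) for k in range(n))
        mags[h] = 2.0 * abs(X) / n
    return mags

def thd(mags):
    """Total harmonic distortion relative to the fundamental."""
    return math.sqrt(sum(m * m for h, m in mags.items() if h > 1)) / mags[1]

# One cycle of a current with a 20% third harmonic.
n = 256
current = [math.sin(2 * math.pi * k / n) + 0.2 * math.sin(2 * math.pi * 3 * k / n)
           for k in range(n)]
mags = harmonic_magnitudes(current)
print(round(thd(mags), 3))  # 0.2
```

In practice an FFT over a window of several cycles is used, and the same machinery applied at non-integer multiples of the fundamental captures interharmonics.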

Harmonics, like most other power quality problems, require significant amounts of measured data in order for the problem to be diagnosed accurately. Monitoring may be short- or long-term, may be relatively cheap or very costly, and often represents the majority of the work required to develop solutions to power quality problems.

In summary, the power quality problems associated with grounding, voltage sags, harmonics, and voltage flicker are those most often encountered in practice. It should be recognized that the voltage and current transients associated with common events like lightning strokes and capacitor switching can also negatively impact end-use equipment.