Why is a power plant boiler like a Chevy with its hood welded shut? And what does this have to do with increasing power plant efficiency by 30 to 100 percent? No, it's not a trick question: the answer is here.

A Conversation on the US Power Industry:
The State of The Onion

An onion has many layers, and the health of all of them, including the core, is essential to its well-being. How healthy is the US power industry's core? Read on.

There are multiple differences between the US power grid (and its many functions) and its far more conservative cousins in Western Europe. To help you understand these differences, I will lay out some sensible facts and guidelines so that you can benefit from my many years of wise and wonderful experience. The entire infrastructure, covering a geography many times the size of Western Europe, was built out across the US very quickly. As a result, many things one might expect to find in the infrastructure — such as preventative maintenance and 25/50-year design-cycle criteria — simply do not exist. Cost considerations forced this.

When power transmission first originated with Mr. Edison and his DC lines, a small power plant was built to serve a narrow coverage area. This gradually led to bigger plants, still following the small-service-area model. As cities grew and real estate in major cities became more valuable, a new parameter came into play: it became cheaper to send the energy along transmission lines, so power plants moved to isolated locations on top of the fuel source, growing in size until you reached some of the monster fossil plants approaching 1,300 MW. That's right, in excess of 1,000 MW from a single unit. Also keep in mind that these monsters are often 4-packs (four in a row).

Another factor to consider is that 50 percent of a utility's operating cost is its fuel, and although all US plants were, until recently, closed monopolies, they still had shareholders and had to show specific profits. That is why there was such wide variation and such sudden fluctuations in the types of plants being built. When bituminous coal fell to a certain bulk price level, those plants became more popular. Same with lignite. Same with gas. Same with oil. As prices rose and fell, so did the number of each type of plant being built. They even built crude lignite units with efficiency ratings of only 20 percent, below the fossil average of 35 percent — a cost exercise our conservative Western European cousins would never consider. Until a few years ago, the whole emphasis of power grids was supplying ever-increasing demand curves. Until very, very recently, no US power company even had energy conservation or similar measures on its radar screen — with one exception.

The exception is Idaho Power, which is heavily dependent on hydro-electric plants, which in turn rely on a limited natural resource. They realized that by analyzing how their customers could use electricity more efficiently — by doing thermal scans of buildings and plants to see where increasing insulation could save power, for example — and then providing assistance to implement efficiency improvements, they could improve service to their customers at far lower cost than by building new plants using imported fuel. Idaho Power thus became a partner with their customers in seeking greater efficiency to conserve energy. With their largest industrial customers, they may suggest not only adding insulation or other commonly considered measures, but also different industrial equipment with more efficient motors, for example. Idaho Power is doing it right, and very successfully so.

The cost of fuel was also a major reason that a number of utilities jumped head first into nuclear reactors. Imagine the promise of reducing fuel costs from 50 percent to 5 percent of your budget. Well, Murphy's Law decreed it wasn't going to be so, and the cause was too much abundant choice of everything. If your credit card could take it, you could order a complete power plant over the phone. The last time I researched this, there were some 30 different types of turbines available, 70 different types of boilers, more than 1,700 types of water pumps, and on and on. Every truck fleet manager and airline knows that if you standardize your fleet on a single type, or a restricted number of unit types, you get better utilization and more effective maintenance. In the good old US power industry, that did not happen; hence the hodgepodge of multiple types you see today. (Remember, this was all created in a protected, quasi-monopoly, guaranteed-profit environment.) Even the French confined their Framatome nuclear reactors to a single source type. If you have many kinds of animals in your zoo, they become very difficult to control and you need different keepers for each type — and power plants are no different. But what happens if you do not budget for lots of keepers and instead try to cross disciplines? That's the rub. A low-efficiency fossil unit is vastly different to run and operate from an oil- or gas-fired unit. That is why, when the price of oil rose to $25 a barrel and gas escalated at the same time while coal and other fossil fuels stayed reasonably stable, massive coal and shale projects were started. Fossil fuel still powers more than 60 percent of US plants, and the reserves in the US alone extend into the year 3000.

Now we come to our famous preventative maintenance statement. When I first came to the States in 1979, I worked closely with heavy industrial Fortune 500 customers such as power utilities and petrochemical plants. Inevitably, the first question I would get was, "What is this weird beast 'pre-maintenance'? When it breaks, we fix it. When we have an outage — planned or otherwise — we do the same." It is all crammed into the outage slot. Even if the turbine swings the outage, you can bet there is three times that amount of work still needing to be done on the rest of the plant. (It was built by the low bidder, after all.)

Who are the major financial contributors to this sad state of affairs? Unbelievably, it's the power utilities' INSURANCE companies! Standard industry practice across the US is to run the equipment into the ground until a catastrophic failure occurs, and that includes the most costly items on the list: turbines, boilers, water pumps, and so on. Then you call your insurance company (or the equipment vendors) and claim against the policy for the damage. The vendors go along with this because it puts new replacement equipment in the pipeline. The insurers don't care because the added premium costs are simply charged against the utility's bottom line and approved by the Public Utility Commission (PUC) as normal operating costs. The shock wave is going to come as utilities move further into the non-regulated market, where this fiasco can no longer be covered up. Watch for a massive backlash from the insurers and a frantic juggling of plant assets by the worst offenders, together with major grid failures.

Another unfortunate factor was senior utility management's concentration on the Wall Street share price. As any lame duck knows, what do you cut first to show a sharp increase in profits and an immediate decrease in costs? Maintenance, that's what. As for the preventative type, fergetaboutit! When I pointed out that in South Africa, which follows Europe's preventative maintenance cycle, plans were based on 10-to-50-year projections and drove the entire utility, US industry executives were stunned.

Another fun factor is the alternate fuel cost (the power you have to buy from another utility to supply your grid while your own plants are down for maintenance), which runs about $40,000 per hour per 600-megawatt unit. But remember, it's a pass-through cost guaranteed by the PUC and its controlling authority. Because of all of the above, if you planned an outage well and tried to fix a multitude of problems, many of them beyond your scope of control, you became liable for the problem. That's why the front-line troops, the maintenance techs, would often let the system self-destruct and let the insurance company pick up the tab. It was a lot easier than fighting the tide. C'est la vie! Nuts, but there you are.
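The replacement-power figure above is easy to sanity-check. A minimal sketch, assuming (as a simplification) that the unit's full 600 MW of output has to be bought in for the duration of the outage:

```python
# Sanity check of the alternate (replacement) fuel cost quoted above:
# $40,000 per hour for a 600 MW unit. Assumes the unit's full output
# must be bought in, which is a simplification.

def replacement_power_cost(mw, dollars_per_hour, outage_days):
    """Return (equivalent $/MWh, total cost of the outage in dollars)."""
    per_mwh = dollars_per_hour / mw              # $ per megawatt-hour replaced
    total = dollars_per_hour * 24 * outage_days  # full bill for the outage
    return per_mwh, total

per_mwh, total = replacement_power_cost(600, 40_000, outage_days=30)
print(f"~${per_mwh:.2f}/MWh")                 # ~$66.67/MWh
print(f"${total:,.0f} for a 30-day outage")   # $28,800,000 for a 30-day outage
```

At roughly $67 per megawatt-hour, even a single month-long outage on one big unit runs to tens of millions of dollars, which is why the pass-through guarantee matters so much.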

Now we throw in the dreaded "green factor." We need to keep our air, sky, etc. pure and untouched. Great idea in theory, but silly in application unless you are prepared to pay for it. Massive emissions controls went in all over the place, including on the nukes. Then came the dreaded Three Mile Island and, later, its Soviet brother, and, oh yeah, where do we put all these spent fuel rods that the federal government stopped us from reprocessing (even though 85 percent of the energy in a rod is still there and could be exploited) and told us it would take care of? Yet the highest risk of death and destruction from power plant failure comes from the failure of dams used to supply hydro-electric energy! At Three Mile Island, no one was killed and no one was injured, not even by the very minor release of radioactivity, which was less than the background radiation normally emitted by a pile of coal such as those that sit outside fossil-fired plants every day. Panic, panic everywhere, not a drop of logic to drink. I saw a 1,200 MW nuclear reactor in Texas with a completed containment dome. They had to build a second dome around it to satisfy new NRC (Nuclear Regulatory Commission) requirements issued after the first dome was completed. But they had to comply. Chernobyl, which was so poorly designed it could never even have been presented for approval in the US, let alone received it, had no containment dome at all.

This madness extended through the entire industry until a typical $4 billion plant carried a $12 billion cost overrun. So many things crept out of the woodwork. What if terrorists attacked the plant? The result of that question: 450 tons of temporary concrete blocks around the dome entrance. (Imagine how long it would take a crane to move them for a standard outage, let alone a forced one.) Such an attack would also have to be high-tech, but terrorists are largely low-tech creatures; they could simply place additives in the steam turbine's water flow and the whole unit would be down. What if the fuel rod retraction system failed? What if Elvis came back to life? You name it, and they had to consider it, no matter how far-fetched. And don't forget that, for every different design criterion, a new contingency had to be prepared. How much time do you think went into pre-maintenance? Very little. Same for customer service. Today, if you are a giant US conglomerate, you cannot get consolidated billing with drill-down verification and inspection for all your facilities across the US.

There were, as in every industry, some progressive companies that did the best they could, but the above is a good indicator of the major forces at play. Now, if you are still awake: the industry titan shudders and we head in a new direction: deregulation. Monopolies must disband. Competition must prevail. Efficiency is God. The net result is that the old industry dinosaurs are dying fast (getting rich in the sellout, but dying out all the same), while the few utilities I call progressive are soaring to the top of the industry. Duke Power is an example of the best.

There are new players in the field. The power is now in the hands of HE WHO BROKERS the demand. A completely new team is coming onto the market: Kombi brokers, producers, alternative-technology players, overseas market developers, new products, and so on. There are even some foreign (read: British) vendors moving in, and so the new battle cry becomes: "Efficiency, efficiency, efficiency and more efficiency!" Nevertheless, remember that much of the infrastructure you would normally count on is not there, and information sources are scarce and mostly incorrect, so it has to be created with nifty innovative technology.

One major problem everyone is encountering is the state, or lack, of quality information available from any source on the actual stability and capacity of the basic power production resources (the power plants and high-voltage transmission lines). Without this information, most power supply capacity planning is a pure wild-a#* guess. Unfortunately, this scenario fits right in with trading-centric organizations. Ignorance of data means trading opportunities, and political endeavors can create any hypothesis they want and work backwards to justify it without fear that system information will prove them wrong. I won't even mention FERC (Federal Energy Regulatory Commission) data, because of the dark secret that everyone knows: 40 percent of FERC data is inaccurate.

Imagine, if you will, that all the US power plants are represented by a metaphorical fleet of vehicles and that you want to check that each has received its regular oil change and service according to the manufacturer's recommendations. Well, guess what: there is no way this can be done without collecting tons of erroneous data. This simple snapshot of the plants' health is the primary step toward establishing a capacity plan, but, as we will see, historical precedent has completely hidden this vital information. To an extent, this situation was created by the protected, monopolistic corporate structure of the power utilities themselves. Pre-deregulation, a power utility was guaranteed a service area in exchange for a fixed profit return controlled by a quasi-federal state body, the Public Utility Commission (PUC). What the PUC did not see is that this led to a three-part company structure, with (1) power production (plants) separated from (2) power transmission (power lines) separated from (3) power distribution and service (the customer). Inter-trading and operations among the three often made any efficiency examination impossible. To further compound the problem, shareholder and analyst pressure on the foremost part of the company (part 3), where utility executive management operated, led to multiple disastrous decisions that benefited short-term apparent value, often at the expense of transmission (2) and production (1).

The rate of return was governed by the size of the business volume cash flow, not by efficiency. Indeed, many utilities that pushed hyper-efficiency early on, like Duke Power, were penalized by the PUC for doing so. This, in turn, led to all sorts of misinformation flowing up and down the tri-partite corporate ladder. A classic case involved preventative maintenance (PM), as we saw earlier, in the units farthest down the totem pole (power production first, then transmission). Good long-term preventative maintenance costs money in the short term to offset massive costs in the long term. These long-term benefits, however, fell outside Wall Street's short-term window, and so major equipment was literally allowed to self-destruct through lack of financial support from utility corporate. And when it did break down, they turned to the insurance vendor to pick up the cost. This was, and is, an endemic problem for all US utilities. Knowledgeable sources inside the industry have calculated that more than 60 percent of the data collected is either erroneous or of no value, which makes any type of infrastructure analysis worthless.

The Myth That Nuclear Power is Dying

This common belief, if looked at from a very pragmatic point of view — just the numbers — is incorrect. For the purposes of this discussion, we will concern ourselves just with US / Western Europe / Russia and the old USSR satellites.

Fact A: US inventory, model year 2000 = 136 units. Decommissioned, cancelled, etc. = 26 units. Attrition = 19 percent.

Fact B: Europe inventory, model year 2000 = 215 units. Decommissioned, cancelled, etc. = 49 units. Attrition = 22 percent. If any one country in Europe (except Italy) forcibly decommissioned a substantial part of its nuclear power production resources, it would automatically affect the country's GNP, triggering violation clauses in its EEC charter.

Fact C: Russia + old USSR inventory, model year 2000 = 119 units. Decommissioned, cancelled, etc. = 25 units. Attrition = 21 percent. In this region, even in Ukraine where Chernobyl occurred, the inventory is growing, as most of the monitoring and redevelopment costs are treated as a US-financed problem. The new countries' major focus is power production, not efficient design, except when someone else pays.
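The attrition figures in Facts A-C follow directly from the inventory counts; truncating to whole percentages reproduces the numbers quoted:

```python
# Recomputing the attrition figures in Facts A-C from the inventory counts.
# Percentages are truncated to whole numbers, matching how the figures
# above appear to have been rounded.

fleets = {
    "US":               (136, 26),  # (year-2000 inventory, units decommissioned/cancelled)
    "Europe":           (215, 49),
    "Russia + ex-USSR": (119, 25),
}

for region, (inventory, attrition_units) in fleets.items():
    pct = attrition_units / inventory * 100
    print(f"{region}: {int(pct)} percent attrition")
```

The point of the comparison: all three regions lost roughly a fifth of their planned fleets, so no region's attrition is dramatically out of line with the others.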

Each area has different reasons for revival and so let’s first examine the US.

  1. US: In the first part of this paper, we examined how the US power market grew, what preferences and problems existed and what the controlling swing factors were — predominantly that the fuel cost is 50 percent of the operations cost and that all companies operated in a protected monopoly environment.
  2. Deregulation, Profit Margins and Efficiency Rewards: The abolition of the monopoly structure means moving away from incompetent management with guaranteed markets, guaranteed profit levels, and utilities determining all operational criteria, toward a supply-and-demand market in which the new brokers act as the industry's stock exchange, determining efficient pricing and distribution. This places extreme emphasis on planning, management, and efficiency for the first time.
  3. Stranded Costs, Recoverable Costs, Liability Distributed: The average nuclear plant suffered cost overruns of 400-500 percent, and the political PUC/NRC forced most utilities to carry the overruns on their books as non-chargeable, non-recoverable costs. A typical $2 billion, 1,000 MW unit carried an $8 billion liability, without even counting spent fuel rod disposition and other abnormal liability costs. Today, if the plant is sold to a third party for its base cost of $1-2 billion, the stranded costs left on the old utility's books become allowable or chargeable. That is why you see a hot market for these reactors as a new way to bury and offset abnormal, politically contentious expenses. The federal government is also issuing waivers to these new third parties to offset its promise and commitment to take care of all spent fuel problems. Ex-naval personnel with detailed, broad experience in reactor use and management are being employed at all levels by corporations that specialize in nuclear power. Efficiency, design and standardization of operation are now the buzzwords.
  4. Detailed planning and resource management are now predominant. Data management and operational control now drive the plant instead of the reverse. An interesting point is the design and engineering criteria used to build these systems: inside-containment reactor-critical areas = 5:1 / 10:1 versus all systems; exterior containment areas = 5:1 versus all systems; all data networks = 1:1 — with no flexibility for future expansion or development. Guess where the potential markets are?
  5. The nuclear model still makes sense, albeit with a much more detailed and developed management strategy involved. It was not politically viable to consider extending plant licenses under the original model, but with new ownership and a highly skilled concentrated management approach, this will now be feasible.
  6. Fuel cost increases, with oil and gas at $50/barrel: As other fuel sources rise to and beyond pre-OPEC price levels (early-2005 oil prices had already reached them), the nuclear model becomes even more workable.

All the above in the US alone mean a very highly receptive new type of potential customer for any company that is supplying advanced maintenance and management tools for all aspects of nuclear power plant operation.

The Holy Grail: The Real Reactive Parametric Model and Informatics (RRPMI)

Most monitoring and data recording in power production and similar industries requires large amounts of interpretive and management expertise to understand. Accordingly, the data is subject to distortion and misinterpretation depending on its source and its audience. Future clarity will come from the ability to first design the entire plant's infrastructure using a real-world parametric data source, and then build the actual plant using the digital model as a baseline against which performance and use are monitored.

A completely new set of metrics will have to be created that clearly illustrates the infrastructure's past and present use for future capacity planning. These metrics will in turn communicate constantly with the parametric digital representation of the real-world model, updating it so that efficient operation is clearly visible. One of the primary beneficiaries of this corrected data will be the insurance companies that have for so long been the deep pockets funding inefficient operations. A second natural consequence of correct data flow will be trust in REAL performance metrics, provided the collecting entity is itself trusted. On the opposite side, forces that have benefited from slanted statistical data can be counted on to oppose any model that tells the truth. Read: Enron.
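The core mechanism of such a baseline comparison can be sketched very simply: hold a parametric "as-designed" value for each plant metric, stream in measured values, and flag readings that drift outside a tolerance band. All the names below, the heat-rate figure, and the 5-percent tolerance are illustrative assumptions, not an existing system:

```python
# Minimal sketch of comparing live plant data against a parametric
# design baseline. Names, values, and tolerance are hypothetical.

from dataclasses import dataclass

@dataclass
class BaselineMetric:
    name: str
    design_value: float      # value predicted by the parametric plant model
    tolerance: float = 0.05  # fractional deviation allowed before flagging

    def check(self, measured: float) -> bool:
        """Return True if the measured value is within tolerance of design."""
        return abs(measured - self.design_value) <= self.tolerance * self.design_value

heat_rate = BaselineMetric("turbine heat rate (BTU/kWh)", design_value=9_800)
print(heat_rate.check(9_900))   # within 5% of design -> True
print(heat_rate.check(11_000))  # >5% drift -> flag for inspection -> False
```

The value of the approach is that the flag is computed against the design model rather than against last month's (possibly already degraded) readings, which is exactly the distortion the paragraph above describes.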

Any technology that moves in this direction should be viewed as a precious resource and pursued accordingly. LANL and Sandia have prime existing sources, especially in modeling, informatics, and the search for and identification of real knowledge. (Note: 55 percent of the critical operational staff at utilities will retire in the next 10 years, and once they are gone, their entire working knowledge goes with them.)

New Directions

When thinking about the capital costs and deployment of VPN and similar data transfer technology, new sources of revenue should be considered. Major customers for accurate, uncorrupted data about the operating patterns of all types of industrial facilities, including power plants, are the NRC and other federal/state bodies, primary and secondary insurance companies, product development groups, parties to the many legal and litigation cases, and any company dealing in futures, both capital and goods, for the specific industry. What better way to predict definitive trends and real supply capability than to measure a plant's current and future capacity and have a valuable market for the data? The typical US fossil unit runs at around 30-40 percent efficiency when operating properly, which means that for every .0001 percentage point increase in efficiency, the industry as a whole would save $10 billion. Real innovation is needed here, not rocket science. The primary problem with early brokers such as Enron was that their main business revenue came from trading advantages built on confused information sources. The theory is correct; the application is currently wrong.
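To show the scale of the efficiency leverage being discussed, here is a rough fuel-cost model. The generation volume, fuel price, and baseline efficiency are illustrative assumptions, not figures from the text:

```python
# Rough model of fuel savings from an efficiency gain. All inputs
# (generation volume, fuel price, baseline efficiency) are assumptions
# chosen only to illustrate the order of magnitude.

MMBTU_PER_MWH = 3.412  # energy conversion: 1 MWh of electricity = 3.412 MMBtu

def annual_fuel_cost(generation_mwh, efficiency, fuel_price_per_mmbtu):
    """Yearly fuel bill: electrical output divided by plant efficiency."""
    fuel_mmbtu = generation_mwh / efficiency * MMBTU_PER_MWH
    return fuel_mmbtu * fuel_price_per_mmbtu

GEN = 2.5e9   # assumed annual US fossil generation, MWh (~2,500 TWh)
PRICE = 2.0   # assumed average fossil fuel price, $/MMBtu

before = annual_fuel_cost(GEN, 0.33, PRICE)  # fleet at 33% efficiency
after = annual_fuel_cost(GEN, 0.34, PRICE)   # fleet at 34% efficiency
print(f"one efficiency point saves ~${(before - after) / 1e9:.1f}B/year")
```

Under these assumed inputs, a single full percentage point of fleet-wide efficiency is worth on the order of a billion dollars a year in fuel alone, which is the kind of leverage the paragraph above is pointing at.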

The US's real future is in coal and nuclear power. With coal alone, there is the capability to improve operations by 40 percent with known technology and locally available innovation. (Think purified low coal slurry and Canadian coal tar sands with innovative emission controls.) We keep looking for the magic wand (wind, solar, Mickey Mouse on a bike) when the answer is right in front of us: do what we do with local resources, better.

The US became great through its ability to provide cheap energy, and it needs to continue on that course. Most of the overseas bickering currently going on is purely an economic development weapon targeted against this country. The caveat is that we could easily meet many of these standards by doing what we already do far more efficiently with current technology. We don't need to reinvent the wheel and ignore systems that have made a tremendous contribution to our country.

For a more detailed analysis on possible data deployment and new market revenue see the scenario paper Risonanza™.