“Be kind, for everyone you meet is fighting a hard battle” - Often attributed to Plato but likely from Ian Maclaren (pseudonym of Reverend John Watson)

Sunday, September 27, 2015

How much storage is needed, part 4

Image credit: Unknown
My previous post in this series related some of the drawbacks of my simplistic analysis, the objectives I want to achieve, and a sketch of the methodology I've employed. Briefly, I've used a Monte Carlo simulation (yes, I linked to something other than Wikipedia, you're welcome Doctor Steve) to determine a likely outcome for generation of energy by a single hypothetical wind turbine in Dalhart, TX.

I ran 1000 simulations of 8760 data points of wind speed from what turned out to be a mixture distribution combining a normal and a Gamma distribution. This generated a total of 8,760,000 wind speeds. Each was delivered to an interpolating function generated from a digitized power curve of a 3MW nameplate capacity wind turbine. This resulted in 8,760,000 data points, each representing the average power delivered by the turbine for a hypothetical hour.
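For concreteness, here's a rough Python sketch of that pipeline (I actually worked in Mathematica; the mixture weight, the distribution parameters, and the crude power_curve stand-in below are illustrative placeholders, not the fitted values — a sketch of building an interpolating function from a digitized power curve appears with part 3, below):

```python
import numpy as np

rng = np.random.default_rng(0)

N_RUNS, HOURS = 1000, 8760  # 1000 simulated years of hourly data

def sample_wind_speeds(n):
    """Draw wind speeds (m/s) from a hypothetical normal+Gamma mixture.
    The weight and parameters are placeholders, not the fitted values."""
    from_normal = rng.random(n) < 0.4            # mixture weight (assumed)
    normal_draws = rng.normal(7.0, 2.5, n)       # placeholder normal component
    gamma_draws = rng.gamma(3.0, 2.5, n)         # placeholder Gamma component
    speeds = np.where(from_normal, normal_draws, gamma_draws)
    return np.clip(speeds, 0.0, None)            # no negative speeds

def power_curve(v):
    """Crude stand-in for the interpolated 3 MW power curve (kW)."""
    v = np.asarray(v)
    p = np.minimum(3000.0, 3000.0 * (v / 12.5) ** 3)   # rough cubic ramp to rated
    p[(v < 3.0) | (v > 25.0)] = 0.0                    # cut-in / cut-out
    return p

speeds = sample_wind_speeds(N_RUNS * HOURS)      # 8,760,000 hourly speeds
hourly_kw = power_curve(speeds)                  # average power for each hour
mean_kw = hourly_kw.mean()
print(f"mean power {mean_kw:.0f} kW, capacity factor {mean_kw / 3000:.1%}")
```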

From there, finding the average power delivered was a simple process, and the result of this simulation was a mean power of 788 kW. This equates to a capacity factor of ~788*100/3000=26.3\%~. This is a surprisingly low number for the location and turbine chosen, given published figures at sites such as this that yield an implied capacity factor of 33.1%.  Further, the model estimate has no allowance for planned and unplanned maintenance outages. And, of course, the 33.1% number is ostensibly from measured data, so, as Dr. Steve might say, "who ya gonna believe, me or your lyin' eyes?"

All that said, perhaps Dalhart isn't the ideal location, perhaps the wind gradient is steeper than the model I used, perhaps they've used more highly optimized equipment, perhaps the measured year had, for some reason, particularly strong (but not too strong) winds. I'm going to proceed with my analysis based on the model data.

So, the next step is to determine the storage required for the ability to deliver a given power at, say, 99.99% reliability. That is, the system should be able to supply the specified power for all but ~8760/10000=0.876~ hours/year. This is actually less than the typical SAIDI (system average interruption duration index, i.e., the average total outage duration per customer per year, equal to the average outage duration multiplied by SAIFI, the system average interruption frequency index, the average number of outages per customer per year), and so sounds quite reasonable if not overly conservative.

One assumption will be that, when the turbine is delivering more than the power under consideration and the storage facility is "topped off," we can send the power to the grid. Another will be that, for the level of power being considered, the storage system is capable of delivering power at that level. As I've discussed in previous posts, there are two primary characteristics of an energy storage installation: the quantity of energy that the system can store; and the rate at which it can deliver that energy.

Of note, approximately 4.0% of the time, the wind is below the cut-in speed of the turbine and thus all energy delivered by the system must come from storage. The modeled wind exceeded the cut-out speed of the turbine a negligibly small 0.0004% of the time. But there are no black swan events in the distribution (think tornadoes).

It took me a little time to decide on an effective way to proceed, but ultimately I decided to start with a guess at the maximum storage capacity and loop through each hourly increment of generation (since the power is in kilowatts and the increments are hours, no conversion between power and energy is necessary). If the stored energy plus the increment minus the steady committed draw exceeded the maximum available storage, the excess was discarded and the maximum was kept for the next iteration; if the sum was less, that sum was kept for the next iteration. Upon completion, I counted the number of iterations at which storage was zero or less, adjusted the maximum storage if and as necessary, and tried again. Committing to the mean power (788 kW) from all of the trials, no amount of storage sufficed, but reducing the commitment to 725 kW gave me what I wanted.
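A minimal Python sketch of that loop, assuming hourly_kw holds the simulated hourly average powers (in kW) from the sketch above; whether an empty store should be held at zero is a modeling choice I've made explicit here:

```python
def hours_short(hourly_kw, committed_kw, max_storage_kwh):
    """Count the hours in which turbine plus storage cannot meet the committed
    power. With hourly increments, kW and kWh are numerically interchangeable."""
    stored = max_storage_kwh          # assume the store starts topped off
    shortfalls = 0
    for kw in hourly_kw:
        stored += kw - committed_kw   # charge or discharge over this hour
        if stored > max_storage_kwh:
            stored = max_storage_kwh  # topped off: surplus spilled to the grid
        elif stored <= 0:
            shortfalls += 1           # hour in which the commitment is missed
            stored = 0.0              # assumption: an empty store stays at zero
    return shortfalls

# e.g., a 725 kW commitment against 40 MWh of storage:
misses = hours_short(hourly_kw, committed_kw=725.0, max_storage_kwh=40_000.0)
print(f"{misses} shortfall hours of {len(hourly_kw)} "
      f"({1 - misses / len(hourly_kw):.4%} reliability)")
```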

And finally, the result: If our 3MW turbine plus storage system is committed to delivering 725 kilowatts and we can provide 40MWh* of storage, there's effectively zero chance of not having the committed power available. Of course, the system can deliver greater power than that when the wind blows and/or when plenty of energy is stored but committing to greater power than 725kW or installing less storage than 40MWh means that there will be times when the system cannot deliver. Obviously, installing it in an integrated grid system can offset this, but the goal here was to determine what storage will enable what level of reliable base load power for a single turbine so the result is likely to be conservative. This is a virtue in the world of engineering. Below is a chart showing the first 100,000 increments with increment number on the x-axis and energy stored on the y-axis.




One widely discussed concept in energy generation is "capacity value," a very different concept (and number) than capacity factor. Basically, this number represents how much other generating capacity can be avoided with the installation of a generator and, for wind in particular, it is typically much lower than the capacity factor. Since there are times when no wind is blowing and demand does not abate, for an unaided turbine, sufficient generating capacity must be available to meet the demand, even though it may only be used sporadically. The goal of adding storage in this analysis is to bring the capacity value of the wind turbine close to the capacity factor.

As I noted in my previous post (on another topic), most utilities are not looking for days of storage (my analysis above determined that roughly 55 hours of storage at 725 kW, i.e., 24.2% of the turbine's nameplate capacity, would provide that power continuously and reliably), they're looking for a few hours. And, of course, the myriad complexities of transmission constraints, demand side variability, planned and unplanned generator outages, etc. have not been considered. Others have taken some of these into account using a similar methodology (i.e., Monte Carlo simulation). None that I've found, however, incorporate storage into the analysis. If I were a professor at a research institution or an NREL researcher or, perhaps, if I worked for a turbine manufacturer or a storage technology firm, I'd implement a much more sophisticated model incorporating the above factors as well as a wind farm as opposed to a single turbine.

Next in this series (which, as readers may have noted, may be interrupted by posts on other topics) will be an analysis of the economics of such a system, or at least the beginning of such an analysis. I anticipate that the cost will be prohibitive without pricing the externalities of fossil fuel generation (i.e., without implementing a carbon tax).




*In several trials, 35 MWh would have sufficed with no increments less than 0, but this run had a particularly calm stretch and, even with 40 MWh, had 0.0088% of the increments less than 0. However, this still met the criterion of 99.99% reliability, at 99.9912%.

Saturday, September 26, 2015

Rocks on rails

Image credit: Advanced Rail Energy Storage North America
In an earlier post I covered a concept of utilizing a massive rock piston over pressurized water to store energy. Another firm uses a concept analogous to pumped hydro storage but, rather than massive amounts of water and pumps and turbines, they use large solid masses and motor/generators. That firm is ARES, an acronym for Advanced Rail Energy Storage.

First, the headline numbers: ARES claims that their technology allows storage facilities ranging from 200 MWh of energy deliverable at a rate of 100 MW (i.e., it can run at full power for two hours) up to 16-24 GWh deliverable at a rate of 2-3 GW (i.e., it can run at full power for eight hours). It's claimed to have a round trip efficiency of 80% (or 85%, depending on which interviewee you're listening to). The claimed ramp-up time is on the order of 8 seconds, dramatically better than any fossil fuel plant or pumped hydro storage system; the only storage technology to better that number is electrochemical (battery) storage. Finally, ARES says that the cost of an Advanced Rail Energy Storage facility is about 60% of that of an equivalent pumped hydro installation. All this sounds pretty good.

OK, what actually happens? During times of plentiful generation by intermittent generators, or of low electricity prices if arbitrage is the name of the game, rail cars full of rocks are transported by rail up inclines via axle-mounted motor-generators on the cars. Unfortunately, their technical page has scant information regarding the specifics of the system; that information must be gleaned from other articles.

Nevertheless, we can see that ARES envisions three classes of system:
  • Ancillary services: The system is used as a Limited Energy Storage Resource (LESR) for frequency stabilization, spinning reserves, VAR (volt ampere reactive) support, etc.
  • Intermediate scale: The system is used for ancillary services as above, as well as for short duration storage to facilitate intermittent generation integration. Such a system is envisioned as capable of delivering 50 to 200 MW and having a two hour capacity.
  • Grid scale storage as described above, with 200 MW to 3GW delivery for up to 16 hours.
While the system cannot compete with pumped hydro for systems requiring days of storage, it is far less complex to construct, suitable sites are dramatically easier to find, and it should be far easier to shepherd through the myriad review and permitting processes. And many systems don't require several days of storage. William Peitzke, ARES Founder and Director of Technology Development, is quoted as saying, "Generally, the market for storage tends to be an 8 hour requirement and in fact a lot of the utilities we talk with really only require five to six hours of discharge."
Image credit: ARES

The cars carry a mass consisting of concrete and rock, and utilize electric traction motors to lift the masses up inclines. The same motors then act as generators when descending. Complex, automated control systems enable quick adjustments to suit system requirements, and the system can have some cars ascending while others descend. Scale can be increased simply by adding more mass. Energy is received and delivered via electrified rails. The cars themselves are modified ore cars. ARES holds patents on the system, but the individual components and systems are mature technologies with no technological breakthroughs needed.

ARES has constructed a pilot system in Tehachapi at about 1:3.75 scale (see photo at right) but, according to various reports, in Pahrump, Nevada, the Valley Electric Association has agreed to work with ARES to implement a 50 MW system. The projected cost is $40M. The objective is actually to accomplish frequency stabilization for the California ISO (Independent System Operator, known as "Cal-ISO"). The planned system would use 34 cars on a 9.2 km track with approximately a 7% incline. The difference in elevation between the top and bottom will be approximately 640 meters. Each shuttle will transport a mass of 230 tons (209 tonnes). A quick calculation [(34 cars)*(209 tonnes)*(1000 kg/tonne)*(9.8 m/s²)*(640 meters)*(80%)/(3.6*10^9 joules/MWh)] shows that this system may be able to store and deliver a maximum of just under 10 MWh. However, this is an "Ancillary Services" installation and thus not designed primarily for storage per se, but rather for the regulation goals mentioned above. Unfortunately, I'm not able to find recent information on progress to date. The Valley Electric Association web site is silent on ARES with the exception of a pdf magazine from October of 2014.
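The same arithmetic as a quick Python sanity check, using only the figures quoted above:

```python
# Back-of-the-envelope check of the quoted Pahrump figures.
cars = 34
mass_kg = 209 * 1000           # 209 tonnes per shuttle
g = 9.8                        # gravitational acceleration, m/s^2
drop_m = 640                   # elevation difference, metres
round_trip_efficiency = 0.80

energy_joules = cars * mass_kg * g * drop_m * round_trip_efficiency
energy_mwh = energy_joules / 3.6e9   # 3.6e9 joules per MWh
print(f"{energy_mwh:.1f} MWh")       # roughly 9.9 MWh
```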
Image credit: www.gearedsteam.com

I'd not go so far as to say that rail energy storage is the silver bullet for solving the integration of intermittent renewables into the grid, but it certainly seems to have significant benefits and few drawbacks, assuming that it hasn't jumped the track.



Update: A great set of photos of the pilot project in Tehachapi can be found at gizmag.

Sunday, September 20, 2015

How much storage is needed, part 3

Image credit: www.theyogakids.com
Previously, I imported the daily mean wind speed recorded at Dalhart, TX (at the airport, I assume) for the period beginning January 1, 2000 through the most recent day recorded in Wolfram's curated weather data. I adjusted the wind speeds using a relatively standard model to estimate the wind speed at a turbine hub height of 120 meters from the data (presumably measured at 10 meters). I have also digitized the power curve for a 3 MW nameplate capacity wind turbine. Further, I've used Mathematica's Interpolation capability to provide a "plug and play" function whose input is a wind speed and whose output is power from the turbine.
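As an illustration of that "plug and play" function, here's a hedged Python analogue (I used Mathematica's Interpolation; the (speed, power) pairs below are placeholders rather than my digitized values, and I've used linear rather than cubic interpolation to keep the sketch simple):

```python
import numpy as np
from scipy.interpolate import interp1d

# (wind speed m/s, power kW) pairs digitized from the turbine's power curve.
# These values are illustrative placeholders, not the digitized data.
curve_points = [(3, 0), (5, 350), (7, 900), (9, 1800), (11, 2700),
                (12.5, 3000), (25, 3000)]
speeds, powers = map(np.array, zip(*curve_points))

_interp = interp1d(speeds, powers, kind="linear",
                   bounds_error=False, fill_value=0.0)

def power_curve(v):
    """Average power (kW) at wind speed v (m/s); zero below cut-in, above cut-out."""
    v = np.asarray(v, dtype=float)
    return np.where((v < 3.0) | (v > 25.0), 0.0, _interp(v))

# e.g. power_curve([2.0, 8.0, 14.0]) -> array([   0., 1350., 3000.])
```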

The plan from here is to do a Monte Carlo simulation from the smooth kernel distribution that's the best fit to the wind data. I'll use 8,760 points per simulation (the number of hours in a non-leap year) and use the speeds and the turbine data to determine power available over that hour.

Now, there are quite a few "yeahbutz" here, among them:

  • I've not done an analysis of any periodicity in the wind data; at some point that will need to be done via a Fast Fourier Transform from the time domain into the frequency domain to determine whether adjustments are necessary (a rough sketch of such a check follows this list).
  • Wind speed is a continuous variable, assuming a constant speed for each hour will lead to inaccuracies.
  • There will never be 120 meter hub height towers with 108 meter diameter rotors at an airport. As a pilot, I certainly support this! Thus, any real wind turbine will be at some other location.
  • The Hellman exponent in the estimation of wind at hub height was chosen rather arbitrarily. Actual wind data at that height would provide a much better estimation.
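As mentioned in the first bullet, here is a rough sketch of such a periodicity check (daily_speeds is assumed to hold the daily mean wind-speed series; this is a quick power-spectrum scan, not a full spectral analysis):

```python
import numpy as np

def dominant_periods(daily_speeds, top_n=5):
    """Rough periodicity check: FFT of the demeaned daily series,
    returning the strongest periods in days with their spectral power."""
    x = np.asarray(daily_speeds, dtype=float)
    x = x - x.mean()                        # remove the DC component
    spectrum = np.abs(np.fft.rfft(x)) ** 2  # power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0)  # cycles per day
    order = np.argsort(spectrum[1:])[::-1] + 1   # skip the zero frequency
    return [(1.0 / freqs[i], spectrum[i]) for i in order[:top_n]]

# e.g. dominant_periods(daily_speeds) might flag an annual (~365 day) cycle.
```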
Nevertheless, this calculation should provide a baseline estimate for the order of magnitude of storage necessary for a single turbine to deliver some amount of base load power.

From Czisch & Ernst 2001
And it's likely that the estimate will be conservative, given that the most likely scenario is a wind farm rather than a single turbine and that several wind farms with reasonably wide geographic separation are most likely to be feeding energy to the grid. And many studies have shown that the correlation of power produced by groups of wind farms decreases with increasing geographic separation at all time scales (see chart at right, h/t to Dr. Steve Carson). To be clear, low correlation is a good thing when considering base load power because we desire that, when turbine/wind farm A suffers a low wind speed, turbine/wind farm B takes up some of the slack and vice versa.

Next, it's time to state the specific goals of the simulation:
  • Determine the average power (and thus the capacity factor) of the turbine.
  • For a series of specified base load capacities, determine the storage necessary to provide this power through the periods when the turbine is not providing that power.
    • Determine the minimum acceptable reliability (i.e., how many hours per year can be tolerated during which the turbine/storage combination cannot deliver the base load capacity; such shortfalls will either have to be tolerated or covered by some other, typically natural gas fired, power plant).
OK, enough preamble. Next will come the actual results of the simulation and conclusions, with suggestions for where to go from here, both with respect to the model and with respect to some speculation on what it means for the combination of renewables and storage as they penetrate the grid at greater levels.

Sunday, September 13, 2015

How much storage is needed, part 2

In my previous post, I started the process of attempting to estimate how much storage would be needed in order for wind in a favorable part of the US to be able to provide a stable source of base load power. I retrieved data on mean daily wind speed from Dalhart, TX and made an adjustment for wind speed at a hub height of 120 meters (the retrieved data was almost certainly for a 10 meter height).

If I plot a histogram of the daily mean wind speeds and from there determine a likely probability distribution, it's straightforward to estimate the probability that the mean wind speed for a day will exceed the cut-in speed of the Siemens SWT-3.0-108 turbine (3 meters/second as shown in the technical specifications) postulated in the previous post. I can also, if I wish, just use each daily mean wind speed, assume that that applies for 24 hours, map it onto the data sheet for the turbine, and sum that data for any particular period to come up with an estimate of generated energy over that period.

While that's a simple and straightforward exercise, it doesn't necessarily provide an excellent estimate. Turbines have cut-in speeds (below which no electricity is generated), cut-out speeds (above which turbine blades are feathered or otherwise protected from overstress and no electricity is generated), and a non-linear (and, for that matter, non-cubic) response between those speeds. A typical curve relating wind speed to power generated for a 3 MW turbine is shown at right.

Further, power in wind is proportional to the cube of wind speed, so a variable wind averaging, say, 6 meters/second will deliver more power than a steady 6 meters/second wind. For example, suppose the wind blows at a constant 6 meters/second for one 24 hour period; then, for the next, it is still for 12 hours and then blows at 12 meters/second for the remaining 12 hours. There will be four times as much kinetic energy in the wind through a given swept area in the second case. Then, to further complicate matters, the chart shows that the turbine reaches its rated output at about 12 meters/second, and so it would generate far less than four times the energy in the second case. I digitized the graph using the excellent WebPlotDigitizer and so I can calculate that, for this turbine, the first scenario would deliver 22.8 MWh while the second would deliver 36.1 MWh.
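Explicitly, since the energy flux through the rotor scales as the cube of the speed times the duration, the kinetic-energy ratio between the two days is ~\frac{12\times 12^{3}}{24\times 6^{3}}=\frac{20736}{5184}=4~.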

All of these factors play into what can be expected from a turbine and are captured in the "capacity factor" of an installed turbine. This is the electricity generated over a period, expressed as a percentage of what the turbine's "nameplate capacity" (here, 3 MW) would deliver over that same period.

What I'd REALLY like is a continuous stream of data, but such data isn't available. Instead, I'll use a Monte Carlo simulation with pseudorandom numbers drawn from a distribution similar to that of the wind in Dalhart. I won't be able to generate a continuous data stream; instead, I'll simulate hourly data for a one year period (8760 samples).

So what does the distribution look like based on the data from Wolfram? To the left is a probability density histogram of the extracted data along with a smooth kernel distribution fitted over the range of speeds reported. I confirmed that the null hypothesis that the wind speed data follows this smooth kernel distribution is not rejected at the 5% level (p-value = 0.612), so this is what I'll be using.
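For readers without Mathematica, here's a rough Python analogue of this step (hub_speeds is assumed to hold the adjusted daily mean wind speeds in m/s; scipy's gaussian_kde and a Kolmogorov-Smirnov test stand in for Mathematica's SmoothKernelDistribution and its built-in distribution test, so the exact numbers will differ):

```python
import numpy as np
from scipy import stats

# hub_speeds: adjusted daily mean wind speeds (m/s), assumed already loaded
kde = stats.gaussian_kde(hub_speeds)      # smooth kernel density estimate

def kde_cdf(x):
    """CDF of the fitted kernel distribution (needed for the KS test)."""
    return np.array([kde.integrate_box_1d(-np.inf, xi) for xi in np.atleast_1d(x)])

# Note: testing against a distribution fitted to the same data is only a
# sanity check, not a strict goodness-of-fit test.
ks_stat, p_value = stats.kstest(hub_speeds, kde_cdf)
print(f"KS statistic {ks_stat:.3f}, p-value {p_value:.3f}")

# Probability that a daily mean exceeds the turbine's 3 m/s cut-in speed:
print(f"P(v > 3 m/s) = {1 - kde_cdf(3.0)[0]:.3f}")
```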

Again, I don't want to try my readers' patience, so I'll stop here for this post. The next one will show the results of the simulation with some analysis of what I've found. Subsequently, I'll discuss a wind farm consisting of such turbines and what is needed for storage in order to assure that base load power and peaking power is available at some level yet to be determined.

But in the meantime, keep in mind that we're


Monday, September 07, 2015

How much storage is needed?

I've published several posts on energy storage but have not yet dived into any of the practicalities of what might be needed for various scenarios. While that's a very broad topic that could involve everything from what a homeowner might need to go "off grid" to what might be needed for load following, peaking, and base load for utility scale renewable electricity generation, I'm going to look at a specific scenario. Perhaps more will follow.

Years ago I was flying back from a convention in Montreal in a small airplane and made a fuel stop in Dalhart, TX. This is a small city in the northwest corner of the Texas panhandle. And according to the National Renewable Energy Laboratory, it ought to be an excellent wind resource. I'd like to understand what storage would be necessary, both in terms of power (rate of delivery) and energy (total capacity) to have the wind resource provide reliable base load power. To this end, I'd consider a system whereby the momentary load would be provided by a wind farm if possible, and available energy above that load would be stored and delivered when available wind energy was insufficient. Two things should be kept in mind here: 1) I'm not an electrical engineer; and 2) I'm sure that I'm not the first to carry out such an analysis. Nevertheless, I'll not be deterred from giving it a shot.

My first chore was to understand the wind regime in Dalhart. To do this, I utilized Mathematica's curated WeatherData to download a time series of average daily wind speeds from January 1, 2000 through September 4, 2015. The data is in kilometers/hour and a plot is shown below:


But this data is (I confidently assume) from the Dalhart Municipal Airport (KDHT) and, if taken according to standards, is measured at a height of 10m. But in this day and age, the hub height of a modern turbine of, say, 3MW nameplate capacity is likely to be 110m or even higher. I'm going to base my analysis on the Siemens SWT-3.0-108 turbine. As the designation suggests, this turbine has a nameplate capacity of 3.0 MW and a diameter of 108 meters. I'll put it on a tower yielding a hub height of 120m.

Now, given the wind at 10 meters, what do I do to estimate the wind at 120 meters? The wind gradient has been well-studied and I'm going to use ~v_{w}(h)=v_{10}(\frac{h}{h_{10}})^a~ where ~v_{w}(h)~ is the wind velocity at hub height ~h~ meters (here, 120), ~v_{10}~ is the velocity at 10 meters, ~h_{10}~ is the measurement or reference height (10 meters), and ~a~ is the so-called "Hellman exponent." I'll use 0.2 (revised from my original choice of 0.34, the exponent for "neutral air above human inhabited areas"); 0.2 is the exponent for "tall crops, hedges, and shrubs." Thus, ~(\frac{h}{h_{10}})^a=(\frac{120}{10})^{0.2}=1.644~, and each wind speed in the data above will be multiplied by 1.644 rather than the 2.328 that 0.34 would have given.* Additionally, I've converted the data to meters/second. The resulting plot is below:
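A minimal sketch of that adjustment in Python (v10_kmh stands for the downloaded 10 meter daily means in km/h; the function name is mine, introduced for illustration):

```python
import numpy as np

def hub_height_speed(v10_kmh, hub_m=120.0, ref_m=10.0, hellman_a=0.2):
    """Convert 10 m wind speeds in km/h to m/s and scale them to hub height
    using the power-law profile v(h) = v10 * (h / h10)^a."""
    v10_ms = np.asarray(v10_kmh, dtype=float) / 3.6   # km/h -> m/s
    return v10_ms * (hub_m / ref_m) ** hellman_a      # (120/10)^0.2 ≈ 1.644
```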





This should yield conservative results with respect to rate of energy delivery because power in wind (that is, rate of energy conversion) is proportional to the cube of velocity. The data shown is daily average, and variances above the average have a greater effect than variances below due to the cubic scaling.

Lest the length of this post get out of hand, I'll stop it here. Next up will be determination of capacity factor based on the data above and using the data for the selected turbine. From there, we'll look at intermittency and the storage required to deliver steady power. Finally, we'll look at the land required and the costs so that a Dalhart, TX wind farm can deliver base load power.

Meanwhile...



*Thanks to Michael Tobis for questioning the exponent. Comparing these numbers with a 120 meter wind map yields a better (and more conservative) agreement.