This will be a short one. LightSail Energy, a startup with innovations in compressed air energy storage, has already appeared in my writing twice, the most recent being after co-founder Danielle Fong was generous enough with her time to take me on a tour of their facility in Berkeley. In the first article I mentioned that I'd delve deeper into the thermodynamics of their process. After the second Ms. Fong mentioned that, in general, she felt that I'd been fair and accurate but that my skepticism regarding LightSail's ability to manufacture systems in the facility I visited was unfounded. Hmm... "uncalled for" is the actual quote. I'd like to take a look at both, but first the manufacturing capacity.
Image courtesy of LightSail Energy
As it happened, it turned out that I hadn't seen the whole facility. Further conversation with Ms. Fong revealed that LightSail's existing Berkeley facility does have some manufacturing capabilities and, in fact, has had production runs of tanks in particular. It should be noted (and I've seen Fong tout this publicly in some of the amazingly large number of video presentations in which she's featured) that LightSail states that they have designed, prototyped, tested, and produced tanks with unprecedentedly high merit indices. These tanks can be sold for uses other than compressed air energy storage. Further, LightSail is able to assemble, commission, and sell their compressor expanders out of their R&D facility. Thus, while LightSail is unlikely to be able to meet full-scale production of their storage units at the Berkeley facility in the event that their compressed air energy storage technology takes off at the scale that the Company envisions, it's clear that they do, in fact, have significant manufacturing capabilities there. I stand corrected.
My previous post in this series related some of the drawbacks of my simplistic analysis, the objectives I want to achieve, and a sketch of the methodology I've employed. Briefly, I've used a Monte Carlo simulation (yes, I linked to something other than Wikipedia, you're welcome Doctor Steve) to determine a likely outcome for generation of energy by a single hypothetical wind turbine in Dalhart, TX.
I ran 1000 simulations of 8760 data points of wind speed from what turned out to be a mixture distribution combining a normal and a Gamma distribution. This generated a total of 8,760,000 wind speeds. Each was delivered to an interpolating function generated from a digitized power curve of a 3MW nameplate capacity wind turbine. This resulted in 8,760,000 data points, each representing the average power delivered by the turbine for a hypothetical hour.
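As a sketch of this machinery: the code below draws hourly wind speeds from a normal/gamma mixture and maps them through an interpolated power curve. The mixture weight and parameters are illustrative placeholders (not my fitted values), and the piecewise curve merely stands in for the digitized Siemens curve.

```python
import numpy as np
from scipy import interpolate

rng = np.random.default_rng(42)

# Assumed mixture parameters -- placeholders, not the fitted Dalhart values.
n = 8760  # hours in a non-leap year
use_normal = rng.random(n) < 0.6           # mixture weight (assumed)
speeds = np.where(use_normal,
                  rng.normal(9.0, 3.0, n),  # normal component (m/s)
                  rng.gamma(4.0, 2.0, n))   # gamma component (m/s)
speeds = np.clip(speeds, 0.0, None)

# Simplified stand-in for the digitized 3 MW power curve (kW vs m/s);
# speeds outside [0, 25] (below cut-in handled by the curve, above
# cut-out by fill_value) produce zero power.
curve_speeds = [0, 3, 5, 8, 11, 13, 25]
curve_power = [0, 0, 300, 1300, 2700, 3000, 3000]
power_fn = interpolate.interp1d(curve_speeds, curve_power,
                                bounds_error=False, fill_value=0.0)

hourly_kw = power_fn(speeds)               # average power for each hour
capacity_factor = hourly_kw.mean() / 3000  # fraction of nameplate
```

Repeating this 1,000 times and pooling the results gives the 8,760,000 hourly data points described above.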
From there, finding the average power delivered was a simple process, and the result of this simulation was a mean power of 788 kW. This equates to a capacity factor of ~788 \times 100/3000=26.3\%~. This is a surprisingly low number for the location and turbine chosen, given published figures at sites such as this that imply a capacity factor of 33.1%. Further, the model estimate makes no allowance for planned and unplanned maintenance outages. And, of course, the 33.1% number is ostensibly from measured data, so, as Dr. Steve might say, "who ya gonna believe, me or your lyin' eyes?" All that said, perhaps Dalhart isn't the ideal location, perhaps the wind gradient is steeper than the model I used, perhaps they've used more highly optimized equipment, or perhaps the measured year had, for some reason, particularly strong (but not too strong) winds. I'm going to proceed with my analysis based on the model data. So, the next step is to determine the storage required for the ability to deliver a given power at, say, 99.99% reliability. That is, the system should be able to supply the specified power for all but ~8760/10000=0.876~ hours/year. This is actually less than the product SAIDI*SAIFI (system average interruption duration index, measured as the average duration of outages, times system average interruption frequency index, measured as the average number of outages per customer per year) and so sounds quite reasonable, if not overly conservative. One assumption will be that, when the turbine is delivering more than the power under consideration and the storage facility is "topped off," we can send the excess power to the grid. Another will be that, for the level of power being considered, the storage system is capable of delivering power at that level. As I've discussed in previous posts, there are two primary characteristics of an energy storage installation: the quantity of energy that the system can store; and the rate at which it can deliver that energy.
Of note, approximately 4.0% of the time the wind is below the cut-in speed of the turbine and thus all energy delivered by the system must come from storage. The modeled wind exceeded the cut-out speed of the turbine a negligibly small 0.0004% of the time. But there are no black swan events in the distribution (think tornadoes). It took me a little time to decide on an effective way to proceed, but ultimately I decided to start with a guess of maximum storage and loop through each increment (i.e., each hour's worth) of power (since the power is in kilowatts and the increments are hours, no conversion is necessary). If the stored energy plus the increment minus the steady draw exceeded the maximum available storage, the excess was discarded and the maximum was kept for the next iteration; if the sum was less, that sum was kept for the next iteration. Upon completion, I determined the number of iterations at which storage was zero or less, adjusted maximum storage if and as necessary, and tried again. Committing the mean power from all of the trials, no amount of storage sufficed, but reducing the commitment to 725 kW gave me what I wanted. And finally, the result: if our 3 MW turbine plus storage system is committed to delivering 725 kilowatts and we can provide 40 MWh* of storage, there's effectively zero chance of not having the committed power available. Of course, the system can deliver greater power than that when the wind blows and/or when plenty of energy is stored, but committing to greater power than 725 kW or installing less storage than 40 MWh means that there will be times when the system cannot deliver. Obviously, installing it in an integrated grid system can offset this, but the goal here was to determine what storage will enable what level of reliable base load power for a single turbine, so the result is likely to be conservative. This is a virtue in the world of engineering. Below is a chart showing the first 100,000 increments with increment number on the x-axis and energy stored on the y-axis.
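The storage-sufficiency loop described above can be sketched as follows. The synthetic hourly output is a stand-in for the simulated data (the actual run used 8,760,000 hours), and resetting an exhausted reservoir to zero is my modeling choice for the shortfall hours.

```python
import numpy as np

def shortfall_fraction(hourly_kw, committed_kw, max_storage_kwh,
                       start_fraction=1.0):
    """Fraction of hours in which the turbine-plus-storage system
    cannot supply the committed power (storage hits zero or below)."""
    storage = max_storage_kwh * start_fraction
    short = 0
    for p in hourly_kw:                   # each entry is one hour: kW*1h = kWh
        storage += p - committed_kw       # charge or discharge this hour
        if storage > max_storage_kwh:     # topped off: excess goes to the grid
            storage = max_storage_kwh
        if storage <= 0:                  # can't deliver the committed power
            short += 1
            storage = 0.0                 # reservoir can't go below empty
    return short / len(hourly_kw)

# Illustrative run with assumed stand-in data (mean ~800 kW);
# the post's figures were 725 kW committed against 40 MWh of storage.
rng = np.random.default_rng(0)
demo = rng.gamma(2.0, 400.0, 8760)
frac = shortfall_fraction(demo, committed_kw=725, max_storage_kwh=40_000)
```

One then adjusts `max_storage_kwh` (or the commitment) until the fraction meets the 99.99% reliability target.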
One widely discussed concept in energy generation is "capacity value," a very different concept (and number) than capacity factor. Basically, this number represents how much other generating capacity can be avoided with the installation of a generator and, for wind in particular, it is typically much lower than the capacity factor. Since there are times when no wind is blowing and demand does not abate, for an unaided turbine, sufficient generating capacity must be available to meet the demand, even though it may only be used sporadically. The goal of adding storage in this analysis is to bring the capacity value of the wind turbine close to the capacity factor. As I noted in my previous post (on another topic), most utilities are not looking for days of storage (my analysis above determined that 48 hours of storage at 24.2% of the turbine's nameplate capacity would provide that power continuously and reliably), they're looking for a few hours. And, of course, the myriad complexities of transmission constraints, demand side variability, planned and unplanned generator outages, etc. have not been considered. Others have taken some of these into account using a similar methodology (i.e., Monte Carlo simulation). None that I've found, however, incorporate storage into the analysis. If I were a professor at a research institution or an NREL researcher or, perhaps, if I worked for a turbine manufacturer or a storage technology firm, I'd implement a much more sophisticated model incorporating the above factors as well as a wind farm as opposed to a single turbine. Next in this series (which, as readers may have noted, may be interrupted by posts on other topics) will be an analysis of the economics of such a system, or at least the beginning of such an analysis. I anticipate that the cost will be prohibitive without pricing the externalities of fossil fuel generation (i.e., without implementing a carbon tax).
*In several trials, 35 MWh would have sufficed with no increments less than zero, but this run had a particularly calm stretch and, even with 40 MWh, had 0.0088% of the increments less than zero. However, this still met the criterion of 99.99% reliability, at 99.9912%.
Image credit: Advanced Rail Energy Storage North America
In an earlier post I covered a concept of utilizing a massive rock piston over pressurized water to store energy. Another firm uses a concept analogous to pumped hydro storage but, rather than massive amounts of water and pumps and turbines, they use large solid masses and motor/generators. That firm is ARES, an acronym for Advanced Rail Energy Storage. First, the headline numbers: ARES claims that their technology allows storage facilities ranging from 200 MWh of energy deliverable at a rate of 100 MW (i.e., able to run at full power for two hours) to 16-24 GWh deliverable at a rate of 2-3 GW (i.e., able to run at full power for eight hours). It's claimed to have a round trip efficiency of 80% (or 85%, depending on which interviewee you're listening to). The claimed ramp-up time is on the order of 8 seconds, dramatically better than any fossil fuel plant or pumped hydro system; the only storage technology to better that number is electrochemical (battery) storage. Finally, ARES says that the cost of an Advanced Rail Energy Storage facility is about 60% of that of an equivalent pumped hydro installation. All this sounds pretty good. OK, what actually happens? During times of plentiful generation by intermittent generators, or of low electrical prices if arbitrage is the name of the game, rail cars full of rocks are transported up inclines via axle mounted motor/generators on the cars. Unfortunately, their technical page has scant information regarding the specifics of the system; that information must be gleaned from other articles. Nevertheless, we can see that ARES envisions three classes of system:
Ancillary services: The system is used as a Limited Energy Storage Resource (LESR) for frequency stabilization, spinning reserves, VAR (volt ampere reactive) support, etc.
Intermediate scale: The system is used for ancillary services as above, as well as for short duration storage to facilitate intermittent generation integration. Such a system is envisioned as capable of delivering 50 to 200 MW and having a two hour capacity.
Grid scale: Storage as described above, with 200 MW to 3 GW delivery for up to 16 hours.
While the system cannot compete with pumped hydro for systems requiring days of storage, it is far less complex to construct, appropriate sites are dramatically easier to find, and it should be far easier to shepherd through the myriad review and permitting processes. And many systems don't require several days of storage. William Peitzke, ARES Founder and Director of Technology Development, is quoted as saying "Generally, the market for storage tends to be an 8 hour requirement and in fact a lot of the utilities we talk with really only require five to six hours of discharge."
Image credit: ARES
The cars carry a mass consisting of concrete and rock, and utilize electric traction motors to lift the masses up inclines. The same motors then act as generators when descending. Complex, automated control systems enable quick adjustments to suit system requirements, and the system can have some cars ascending while others descend. Scale can be increased simply by adding more mass. Energy is received and delivered via electrified rails. The cars themselves are modified ore cars. ARES holds patents on the system, but the individual components and systems are mature technologies with no technological breakthroughs needed. ARES has constructed a pilot system in Tehachapi at about 1:3.75 scale (see photo at right) and, according to various reports, the Valley Electric Association in Pahrump, Nevada has agreed to work with ARES to implement a 50 MW system. The projected cost is $40M. The objective is actually to accomplish frequency stabilization for the California ISO (Independent System Operator, known as "Cal-ISO"). The planned system would use 34 cars on a 9.2 km track with approximately a 7% incline. The difference in elevation between the top and bottom will be approximately 640 meters. Each shuttle will transport a mass of 230 tons (209 tonnes). A quick calculation [(34 cars)*(209 tonnes)*(1,000 kg/tonne)*(9.8 m/s^2)*(640 meters)*(80%)/(3.6*10^9 joules/MWh)] shows that this system may be able to store and deliver a maximum of just under 10 MWh. However, this is an "Ancillary Services" installation and thus not designed primarily for storage per se, but rather for the regulation goals mentioned above. Unfortunately, I'm not able to find recent information on progress to date. The Valley Electric Association web site is silent on ARES with the exception of a pdf magazine from October of 2014.
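That back-of-the-envelope calculation can be reproduced directly, using only the figures quoted above:

```python
# Potential energy of the Pahrump fleet, derated by round-trip efficiency.
cars = 34
mass_kg = 209 * 1000      # 209 tonnes per car, in kilograms
g = 9.8                   # gravitational acceleration, m/s^2
drop_m = 640              # elevation difference, meters
efficiency = 0.80         # claimed round-trip efficiency
joules_per_mwh = 3.6e9

energy_mwh = cars * mass_kg * g * drop_m * efficiency / joules_per_mwh
# energy_mwh comes out just under 10 MWh, as stated
```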
I'd not go so far as to say that rail energy storage is the silver bullet for solving the integration of intermittent renewables into the grid, but it certainly seems to have significant benefits and few drawbacks, assuming that it hasn't jumped the track.
Update: A great set of photos of the pilot project in Tehachapi can be found at gizmag.
Previously, I imported the daily mean wind speed recorded at Dalhart, TX (at the airport, I assume) for the period beginning January 1, 2000 through the most recent day recorded in Wolfram's curated weather data. I adjusted the wind speeds using a relatively standard model to estimate the wind speed at a turbine hub height of 120 meters from the (presumably) 10 meter data. I also have digitized the power curve data for a 3 MW nameplate capacity wind turbine. Further, I've used Mathematica's Interpolation capability to provide a "plug and play" function whose input is a wind speed and whose output is power from the turbine. The plan from here is to do a Monte Carlo simulation from the smooth kernel distribution that's the best fit to the wind data. I'll use 8,760 points per simulation (the number of hours in a non-leap year) and use the speeds and the turbine data to determine power available over each hour. Now, there are quite a few "yeahbutz" here, among them:
I've not done an analysis of any periodicity in the wind data; at some point that will need to be done via a Fast Fourier Transform from the time domain into the frequency domain to determine whether adjustments are necessary.
Wind speed is a continuous variable; assuming a constant speed for each hour will lead to inaccuracies.
There will never be 120 meter hub height towers with 108 meter diameter rotors at an airport. As a pilot, I certainly support this! Thus, any real wind turbine will be at some other location.
Nevertheless, this calculation should provide a baseline estimate for the order of magnitude of storage necessary for a single turbine to deliver some amount of base load power.
From Czisch & Ernst 2001
And it's likely that the estimate will be conservative, given that the most likely scenario is a wind farm rather than a single turbine and that several wind farms with reasonably wide geographic separation are most likely to be feeding energy to the grid. And many studies have shown that the correlation of power produced by groups of wind farms decreases with increasing geographic separation at all time scales (see chart at right, h/t to Dr. Steve Carson). To be clear, low correlation is a good thing when considering base load power because we desire that, when turbine/wind farm A suffers a low wind speed, turbine/wind farm B takes up some of the slack and vice versa.
Next, it's time to state the specific goals of the simulation:
Determine the average power (and thus the capacity factor) of the turbine.
For a series of specified base load capacities, determine the storage necessary to provide this power through the periods when the turbine is not providing that power.
Determine minimum reliability (i.e., how many hours per year can be tolerated during which the turbine/storage cannot deliver the base load capacity; such shortfalls will either need to be tolerated or supplemented with some other, typically natural gas fired, power plant).
OK, enough preamble, next will come the actual results of the simulation and conclusions, with suggestions for where to go from here, both with respect to the model and with respect to some speculation on what it means for the combination of renewables and storage as they penetrate the grid at greater levels.
In my previous post, I started the process of attempting to estimate how much storage would be needed in order for wind in a favorable part of the US to be able to provide a stable source of base load power. I retrieved data on mean daily wind speed from Dalhart, TX and made an adjustment for wind speed at a hub height of 120 meters (the retrieved data was almost certainly for a 10 meter height).
If I plot a histogram of the daily mean wind speeds and from there determine a likely probability distribution, it's straightforward to estimate the probability that the mean wind speed for a day will exceed the cut-in speed of the Siemens SWT-3.0-108 turbine (3 meters/second as shown in the technical specifications) postulated in the previous post. I can also, if I wish, just use each daily mean wind speed, assume that that applies for 24 hours, map it onto the data sheet for the turbine, and sum that data for any particular period to come up with an estimate of generated energy over that period.
While that's a simple and straightforward exercise, it doesn't necessarily provide an excellent estimate. Turbines have cut-in speeds (below which no electricity is generated), cut-out speeds (above which turbine blades are feathered or otherwise protected from over stress and no electricity is generated), and a non-linear (and, for the matter of that, a non-cubic) response between those speeds. A typical curve relating wind speed to power generated for a 3 MW turbine is shown at right.
Further, power in wind is proportional to the cube of wind speed, so a variable wind averaging, say, 12 meters/second will deliver more power than a stable 12 meters/second wind. For example, suppose wind blows at a constant 6 meters/second for one 24 hour period; then, for the next, it is still for 12 hours and then blows at 12 meters/second for the other 12 hours. There will be four times as much kinetic energy in the wind through a given swept area in the second case. Then, to further complicate matters and looking at the chart, the turbine reaches its maximum efficiency at about 12 meters/second, and so the turbine would generate far less than four times the energy in the second case. I digitized the graph using the excellent WebPlotDigitizer and so I can calculate that, for this turbine, the first scenario would deliver 22.8 MWh while the second would deliver 36.1 MWh.
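The factor-of-four claim follows directly from the cubic scaling, since kinetic energy through a swept area goes as ~v^3~ times duration:

```python
# Energy through a fixed swept area scales as v^3 * time (arbitrary units).
steady = 6**3 * 24                 # constant 6 m/s for 24 hours
gusty = 0**3 * 12 + 12**3 * 12     # still for 12 h, then 12 m/s for 12 h
ratio = gusty / steady             # four times the available energy
```

The turbine, of course, captures far less than four times as much, since it's already near peak efficiency at 12 m/s.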
All of these factors play into what can be expected from a turbine and are captured in the "capacity factor" of an installed turbine. This is the ratio of the generated electricity over a period as a percentage of the electricity that the "nameplate capacity" of the turbine (here, 3 MW) would deliver.
What I'd REALLY like is a continuous stream of data, but such data isn't available. Instead, I'll use a Monte Carlo simulation with pseudorandom numbers drawn from a distribution similar to the wind in Dalhart. I'll not be able to generate a continuous data stream; instead, I'll simulate hourly data for a one year period (8,760 samples).
So what does the distribution look like based on the data from Wolfram? To the left is a probability density histogram of the data extracted along with a smooth kernel distribution for the range of speeds reported. I confirmed that the null hypothesis that the wind speed data is distributed according to a Smooth Kernel Distribution is not rejected at the 5% level (P-Value=0.612) so this is what I'll be using.
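For readers without Mathematica, scipy's gaussian_kde provides a comparable smooth kernel estimate. The gamma-distributed sample below is synthetic stand-in data, not the Dalhart record:

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for the daily mean wind speeds (m/s); the real data
# came from Mathematica's curated WeatherData.
rng = np.random.default_rng(1)
sample = rng.gamma(4.0, 2.0, 5000)

# Fit a Gaussian kernel density estimate and evaluate it over the
# range of speeds observed.
kde = stats.gaussian_kde(sample)
xs = np.linspace(0.0, sample.max(), 200)
density = kde(xs)

# The estimated pdf should integrate to roughly 1 over the support.
area = density.sum() * (xs[1] - xs[0])
```

One can then draw the 8,760 hourly speeds per simulation with `kde.resample(8760)`.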
Again, I don't want to try my readers' patience, so I'll stop here for this post. The next one will show the results of the simulation with some analysis of what I've found. Subsequently, I'll discuss a wind farm consisting of such turbines and what is needed for storage in order to assure that base load power and peaking power is available at some level yet to be determined.
I've published several posts on energy storage but have not yet dived into any of the practicalities of what might be needed for various scenarios. While that's a very broad topic that could involve everything from what a homeowner might need to go "off grid" to what might be needed for load following, peaking, and base load for utility scale renewable electricity generation, I'm going to look at a specific scenario. Perhaps more will follow.
Years ago I was flying back from a convention in Montreal in a small airplane and made a fuel stop in Dalhart, TX. This is a small city in the northwest corner of the Texas panhandle. And according to the National Renewable Energy Laboratory, it ought to be an excellent wind resource. I'd like to understand what storage would be necessary, both in terms of power (rate of delivery) and energy (total capacity) to have the wind resource provide reliable base load power. To this end, I'd consider a system whereby the momentary load would be provided by a wind farm if possible, and available energy above that load would be stored and delivered when available wind energy was insufficient. Two things should be kept in mind here: 1) I'm not an electrical engineer; and 2) I'm sure that I'm not the first to carry out such an analysis. Nevertheless, I'll not be deterred from giving it a shot. My first chore was to understand the wind regime in Dalhart. To do this, I utilized Mathematica's curated WeatherData to download a time series of average daily wind speeds from January 1, 2000 through September 4, 2015. The data is in kilometers/hour and a plot is shown below:
But this data is (I confidently assume) from the Dalhart Municipal Airport (KDHT) and, if taken according to standards, is measured at a height of 10 m. But in this day and age, the hub height of a modern turbine of, say, 3 MW nameplate capacity is likely to be 110 m or even higher. I'm going to base my analysis on the Siemens SWT-3.0-108 turbine. As the designation suggests, this turbine has a nameplate capacity of 3.0 MW and a rotor diameter of 108 meters. I'll put it on a tower yielding a hub height of 120 m. Now, given the wind at 10 meters, what do I do to estimate the wind at 120 meters? The wind gradient has been well-studied and I'm going to use ~v_{w}(h)=v_{10}(\frac{h}{h_{10}})^a~ where ~v_{w}(h)~ is wind velocity at height ~h~ meters, ~v_{10}~ is the velocity at 10 meters, ~h~ is the hub height in meters (here, 120), ~h_{10}~ is the measurement or reference height (10 meters), and ~a~ is the so-called "Hellman exponent." I'll use 0.2, the exponent for "tall crops, hedges, and shrubs." Thus, ~(\frac{h}{h_{10}})^a=(\frac{120}{10})^{0.2}=1.644~. Thus, each wind speed in the data above will be multiplied by 1.644.* Additionally, I've converted the data to meters/second. The resulting plot is below:
This should yield conservative results with respect to rate of energy delivery because power in wind (that is, rate of energy conversion) is proportional to the cube of velocity. The data shown is daily average, and variances above the average have a greater effect than variances below due to the cubic scaling. Lest the length of this post get out of hand, I'll stop it here. Next up will be determination of capacity factor based on the data above and using the data for the selected turbine. From there, we'll look at intermittency and the storage required to deliver steady power. Finally, we'll look at the land required and the costs so that a Dalhart, TX wind farm can deliver base load power. Meanwhile...
*Thanks to Michael Tobis for questioning the exponent. Comparing these numbers with a 120 meter wind map yields a better (and more conservative) agreement.
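For reference, the height adjustment amounts to a one-liner (with ~a=0.2~ per the correction above):

```python
# Hellman power-law adjustment from the 10 m measurement height to a
# 120 m hub height, converting km/h (as downloaded) to m/s.
def adjust_to_hub(v10_kmh, hub_m=120.0, ref_m=10.0, a=0.2):
    """Scale a 10 m wind speed in km/h to hub height, returning m/s."""
    v_hub_kmh = v10_kmh * (hub_m / ref_m) ** a
    return v_hub_kmh / 3.6      # km/h -> m/s

factor = (120 / 10) ** 0.2      # the multiplier used in the post
```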
I've blogged on a few occasions regarding energy storage, most recently a "LightSail Energy redux" (though I'll be updating that in the near future to revise an inaccuracy with respect to LightSail's ability to produce commercially in their existing Berkeley facility). But there are lots of ideas for storage out there, from molten salt to bags of air beneath the sea and many others. But a new one caught my eye not too long ago that seemed out of cloud cuckoo land. The firm touting this technology is Heindl Energy and their concept is to cut an annulus around rock and space beneath the rock, thereby creating a rock mass piston in a rock mass cylinder. Water would be pumped in beneath the rock to lift it when excess energy (or cheap energy, if arbitrage is the name of the game) is available; when energy is needed or expensive, the rock descends, pushing the water through turbines. The idea is that the piston diameter is equal to its length, and it would be raised and lowered a length equal to its radius, so half or more of the piston would always be beneath the ground surface. One claimed advantage is that, unlike pumped hydro or underground compressed air energy storage, the geological feature necessary for the storage is more easily found (though, obviously, they can't be built just any old place). Another is that the density of rock is greater than that of water and so a smaller volume of rock is needed for a given potential energy availability (though the factor is only about 2.5 or so). It's easy to show that the available energy, ignoring efficiency losses of various kinds, is ~E=(2\rho_{r}+\frac{3}{2}\rho_w)\pi gr^{4}~ where ~\rho_r~ is rock density, ~\rho_w~ is water density, ~\pi~ is, well, ~\pi~, ~g~ is the acceleration of gravity, and ~r~ is the radius of the rock piston. Note that the length of the piston is ~2r~ and the height to which it's raised is ~r~. Thus, the storage available scales with the fourth power of the radius.
But, since construction is really all about the surface of the piston, construction time and cost scale approximately with the square of the radius. So, in this case, size truly matters! Doubling the radius gives approximately 16 times the capacity for "only" four times the construction cost and difficulty. A bit of an issue is that the Heindl site's "Idea & Function" page gives the energy as ~(2\rho_r\frac{3}{2}\rho_w)\pi gr^{4}~ (note the missing "+"). I'm sure it's a typo and I've emailed them to mention it but still, it doesn't lend confidence. Nevertheless, I used Wolfram Mathematica to check on the validity of the table shown on that page and it's actually conservative. They claim efficiency of 85% but I have to reduce efficiency to 57% or so to hit their numbers. Update: I received an email from Dr. Eduard Heindl recognizing the typo and stating that it has now been fixed. Dr. Heindl agreed that such an error on the technical page should not have taken place. Unlike many storage scheme sites and descriptions, Heindl limits their discussion to quantity (gigawatt hours) and doesn't, as far as I could find, discuss power (the rate at which energy can be delivered by such a system). Both metrics are, of course, crucial and this site has the opposite of my typical frustration. They also provide no discussion of any load following capabilities. We're talking here about a very large project. Using their numbers, 8 gigawatt hours of storage requires a 125 meter radius piston (250 meters or over 2 1/2 football fields across). Such a piston would weigh about 35.2 million (short) tons! To lift it would require water pressure of about 64 bar (though their table shows 52 bar). Such pressures would demand a lot from the seals between the piston and cylinder walls, otherwise, the system would act as a 250 meter diameter circular fountain!
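Running the (corrected) formula for their 125 meter example reproduces both figures. The rock density of 2,600 kg/m^3 is my assumption, since Heindl doesn't state one:

```python
import math

# Ideal stored energy E = (2*rho_r + 3/2*rho_w) * pi * g * r^4,
# with an assumed granite-like rock density.
rho_rock, rho_water = 2600.0, 1000.0   # kg/m^3 (rock density assumed)
g = 9.81                                # m/s^2
r = 125.0                               # piston radius, meters

energy_j = (2 * rho_rock + 1.5 * rho_water) * math.pi * g * r**4
energy_gwh = energy_j / 3.6e12          # ideal capacity, ~14 GWh

# Their table's 8 GWh for this radius implies roughly 57% net efficiency.
implied_eff = 8.0 / energy_gwh
```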
Heindl discusses the seal system here and it appears to be quite innovative (a rolling seal against, apparently, a steel cylinder sleeve) but I see no indication that a pilot plant has confirmed that it will work. The devil, as always, is in the details. Finally, how many? If a single 250 meter diameter rock piston can store 8 gigawatt hours, what is required to provide stable delivery from renewable sources so that the need for fossil fuel burning base load, spinning reserve, and peaker plants can be minimized with increasing penetration of intermittent renewable sources? According to Heindl's site, Germany currently needs 1,600 gigawatt hours of storage, of which only 40 have so far been provided. Therefore, Germany currently needs 195 such storage facilities! In the Q & A, Heindl estimates that the system is of comparable cost to pumped hydro storage at a radius of 100 meters, and less expensive above that due to the scaling factors mentioned above. They also estimate a minimum of 2 years of planning and 3 to 4 years of construction per facility. And my experience is that the grander the scale of the project, the less likely it is to meet budget and schedule estimates. And this has never been tried. While I think that it's a long shot that this type of system will ever see widespread use, the bigger picture is the scale of the undertaking needed to provide sufficient storage for deep grid scale intermittent penetration regardless of the system used. I think that distributed generation and local storage are key to providing a sustainable energy future through renewable sources. Note: R.I.P. BB King.
Physicists (one of which I am not) are quite concerned about units and dimensions and use dimensional analysis for a variety of purposes. And if the units in an equation don't match in all terms and across the equals sign, you've erred. But sometimes this can lead to confusion. For example, torque (exerting a force about an axis) is measured in dimensions of [force]*[length]. It could be pound feet or newton meters or, for the matter of that, dyne centimeters or ton furlongs. But work and energy are also measured in such units. A joule is a newton meter. Thus, in thinking about my fuel economy as a U.S. resident, I'm accustomed to thinking of miles/gallon but in countries that have adopted the SI (metric) system, people make a very sensible inversion of this and, rather than distance/volume, they use volume/distance, typically liters/100 kilometers. While this isn't an SI unit, it is metric. But it's also a volume divided by a length or [length]^3/[length] which is [length]^2 or an area. So I converted my 50 m.p.g. to an area to note that my fuel economy is 4.704*10^(-8) m^2 or 0.04704 mm^2. Next time someone asks about my fuel economy, I'm going to say "a bit over 47 thousandths of a square millimeter." It's not surprising to note that Randall Munroe*, of xkcd fame, has beat me to it. I will state, for the record, that I noted his post after composing all but this portion of this one.
*This is the first time I've noticed a photo of Munroe. It always amazes me how little resemblance there is between what I picture someone to look like in my mind and what they actually look like when I see them or their photo. And I always picture them.
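The fuel-economy-as-area conversion above checks out numerically:

```python
# Convert 50 miles/gallon to an area: volume per distance is [length]^2.
miles_to_m = 1609.344               # meters per statute mile
gallon_to_m3 = 3.785411784e-3       # cubic meters per U.S. gallon

mpg = 50.0
area_m2 = gallon_to_m3 / (mpg * miles_to_m)   # m^3 / m = m^2
area_mm2 = area_m2 * 1e6                      # ~0.047 mm^2
```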
Typically, I ignore James Kunstler and, in fact, I posted the reasons for this. And this is despite the fact that I agree with many of his positions, particularly with respect to the exigency of our energy situation. But his bombastic prose, his need to reuse the same symbols of his disdain for modern U.S. culture (tattoo parlors and tattooed people, suburbs, cars, etc.) to the point of exhaustion, and his extreme repetitiveness finally caused me to stop reading him except on an occasional basis. And, on these occasions, it was the "same old same old." But Kunstler's most recent post in his "Clusterfuck Nation-Blog" is different. He doesn't rail on about tattoos, cars, suburbs, salad shooters, cheez doodles, banker boyz, etc. He states without bombast the things that he believes a sitting and any future U.S. President should do and the reasons why. I find it to be compelling and accurate. I'm sure that I'm far to the conservative side in comparison to Kunstler, but as I've posted repeatedly, the idiocy, ignorance, obstinacy, and intransigence that passes for conservative thought these days disgusts me. The positions taken by Kunstler should be celebrated by true conservative thinkers. He deplores the security state, the militarization of police forces, the thievery engaged in by the financial sector, the revolving door in Washington, DC for lobbyists, cabinet members, congresspersons, etc. between government and private interests, Citizens United, the state of perpetual war and foreign adventurism, etc. A true conservative should applaud every one of these and I do. Further, Kunstler points out that not one candidate, announced or otherwise, has adopted any significant fraction of these positions, from the buffoonish Trump and the supercilious slime-ball Cruz on the right to the self-appointed rightful heir Clinton and the daft Sanders on the left. Do I think that Kunstler's post, let alone mine, will move the populace to demand better?
No, even though I believe that most would agree with the grievances that Kunstler suggests be "nailed to the White House gate" (though I suspect that anyone approaching the White House with a hammer and nails would be whisked off to Guantanamo at best and shot at worst, particularly were that person not Caucasian - full disclosure, I'm Caucasian). Kunstler typically generates several hundred comments to each of his posts, and this one is no different. Reading them is a somewhat sadder experience; there are a lot of "eat the rich" and "burn it down" comments, which serve no purpose. But I understand the frustration exemplified by such sentiments. I don't guess that this deviation from his usual smug, self-satisfied, and precious cleverness indicates a new seriousness, but I'll check tomorrow (his posts come out on Mondays). It's well worth the five minutes it takes to read.
It's difficult finding post topics if I insist that they meet certain criteria: can't take so long to research and write that my family/work/school life is disrupted; must be something about which I have some level of knowledge and the ability to do sufficient research and investigation to post intelligently; must be something that I think my (very few) readers will find to be interesting; and must not simply parrot something someone else said (other than the clear links to other sites with, perhaps, a comment). But it's clear that LightSail Energy and, in particular, Danielle Fong are of interest to the sort of person who may follow my blog. I posted an article about Ms. Fong and her firm a few months ago and was surprised to receive a Tweet from her. In the interchange that followed, she was kind enough to agree to show me LightSail's facility should I be in her area. As it happened, this week I had a meeting with another firm (to be described in a subsequent post) at UC Berkeley. LightSail is located in Berkeley and Ms. Fong, despite being jet-lagged, agreed to show me around.
Because her Company is in a developmental stage and I was simply a visitor, I felt that it would have been inappropriate to photograph the facility or record Ms. Fong's replies to my many questions, so readers will need to take my word for what I saw and heard. I mentioned to her that I would be completely respectful of any proprietary information; she said that she was OK with my mentioning pretty much anything (though, on two occasions, she asked that I be circumspect). I want to be, if anything, overly cautious in this regard. My first impression was that LightSail's facility looked very much like a prototyping facility. They have a large machine shop with CNC machining equipment, a robust metrology lab, assembly areas, and design facilities. There's certainly no hint of "vaporware."
Image Credit: Progressmedia.ca
Since I have a machine shop in my garage and my Company has a large amount of material testing equipment, I was surprised that, while seeing the facility and the components being assembled was fascinating, Ms. Fong's answers to my endless questioning were even more interesting. As one can find from various biographical sketches (Fong's autobiographical sketch is here), her educational background is that of a physical scientist and LightSail certainly incorporates this background into its business. But, as our conversation continued, I was compelled to ask how much of her working time is spent on politics, fundraising, mechanical engineering issues, science, and management. Her answer, unsurprisingly, was that "it depends," but I got a strong impression that it's less science and engineering these days than it is management and fundraising. I asked Fong about her plans with respect to commercializing the technology. It's clear to me that LightSail can't be a manufacturer, at least in their present facility. She said that they do have the ability to produce a limited number of systems, using components from various vendors and others that they produce internally. I'm rather skeptical of that aspect. In discussing field implementation, Fong mentioned that LightSail has three sites currently in their backlog. One of them has been mentioned in other news articles; she didn't name the others. I'll certainly be looking forward to following the development of these projects. Fong responded to my questions about some somewhat negative press regarding layoffs at LightSail by stating that the need for these layoffs was more operational than financial in nature. She believes that LightSail is a stronger and more efficient firm as a result. As a partner in a mid-sized business, I can certainly acknowledge that this is plausible. 
I asked Fong about the merger between two firms, General Compression and SustainX, that could be regarded as competitors in the area of compressed air energy storage (CAES). She generally dismissed the viability of SustainX's system (though SustainX also claims to use water during the compression process to attempt to approximate isothermal compression, one key to LightSail's technology). Among other things, Fong mentioned the sheer size of the SustainX system as being non-viable. Reading between the lines of the various articles discussing the proposed merger, it appears as though Fong is correct, as it seems that SustainX will not move forward with above-ground storage (current CAES systems store air in underground caverns, typically salt domes, and General Compression follows this model). On the political and policy front, surprisingly to me, Fong was not enthusiastic about LightSail's participation in California's energy storage mandate. She is of the opinion that the commercial marketplace is the best filter for storage technologies. Fong mentioned to me that someone from the Department of Energy asked her what policy initiatives might jump start a sound energy path for the U.S. Fong answered that immigration reform would top her list, pointing out the very strong representation by immigrants in successful startups. She pooh-poohed tax reform.
Fong mentioned on several occasions that many of LightSail's technical personnel come from a background in auto racing. While I was surprised to hear this, in thinking about it further, it makes sense. Racing is a world of extremely demanding performance of mechanical components under great pressure, one where innovation in mechanical design can win the day (a good driver and lots of money help too). Fong introduced me to a couple of these mechanical technicians in front of a huge valve (perhaps 60 centimeters in diameter) made of what was likely a titanium alloy with a titanium nitride coating.
I asked Fong about the time frame her firm's investors anticipate for a so-called "liquidity event," i.e., for them to see a financial return on their investments. These investors include Vinod Khosla's "Khosla Ventures," Bill Gates (needs no hyperlink), Peter Thiel, Total Energy Ventures, and now, apparently, Chinese VC firm Haiyin Capital, and others. She replied that they all have a relatively long investment time frame and that LightSail does not feel pressured to deliver an immediate return on investment (though she did mention a couple of very specific time frames).
Finally, we spoke in general about the need for storage and the wastefulness of burning fossil fuels (ultimately limited in supply, fracking, sands, shales, etc. notwithstanding) to generate electricity when various renewable sources can ultimately meet our needs. LightSail's technology, along with many other developing storage technologies, can make this possible by overcoming the inherent intermittency of wind and solar to enable these sources to provide "base load power" and reliable peaking power (though some will argue that the need for base load power is a myth). Following the progress of LightSail Energy will be fascinating, combining my interests in energy, thermodynamics, entrepreneurship, and finance. Note for the video below: LightSail's technology involves removing the heat (poor thermodynamic phraseology, I know) during compression of air with a water mist and reusing the heat either during expansion or for building heating, process heat, etc. Danielle Fong seems pretty "rad" to me, so "Cool It" by She's So Rad seemed a perfect fit.
John Michael Greer, the Grand Archdruid of the Ancient Order of Druids in America, publishes a weekly post at his site, The Archdruid Report. Greer is a much deeper thinker than I, particularly with respect to the interplay between past, present, and future. He describes his area of thought as the "history of ideas," and his grasp is broad and deep, regardless of whether you agree with him or not. A steady diet of The Archdruid Report would certainly depress me, though Greer does not strike me as depressed. He's certainly "cocksure" though, in the sense of certainty of his conclusions. Nevertheless, I read his posts from time to time and invariably find them thought provoking. Today, in reading his post entitled "Peak Meaninglessness," I found him citing Ivan Illich from "Energy and Equity" (available as a free pdf download) contending that
"Illich’s discussion focused on automobiles; he pointed out that if you take the distance traveled by the average American auto in a year, and divide that by the total amount of time spent earning the money to pay for the auto, fuel, maintenance, insurance, etc., plus all the other time eaten up by tending to the auto in various ways, the average American car goes about 3.5 miles an hour: about the same pace, that is, that an ordinary human being can walk."
Is it true? If it is, does it have any meaning? Since, for reasons that should be obvious, I'm not interested in using my personal financial details for such a calculation, I'll posit an American household earning the median income of $52,000/year. The household consists of a husband, wife, and two children. The husband works 2,000 hours/year and earns $40,000 and the wife works 1,000 hours/year and earns $12,000. As an aside, this family doesn't live in Southern California. The average hourly earning is thus about $17.30/hour. Of course they'll only bring home, at best, perhaps $14/hour after taxes. I'll assume two cars traveling a total of 26,000 miles per year with an average of 1.2 people in the vehicle for 31,200 passenger miles per year. The vehicles average 25 m.p.g. and gas costs $3.80/gallon, so they spend about $3,950/year on gas. One car is a relatively new (say, two years old and purchased for $29,000 on a five year loan at 5% APR with 20% down) five passenger sedan. The monthly payment is around $440, or $5,280/year. The other is a minivan on a three year lease with $3,000 at signing and $300/month lease payments, or $3,600/year. They paid $8,800 up front to have the vehicles but, since interest rates on money market investments are close to zero, the opportunity costs are quite low, so I'll use the up front cash divided by the respective term in months. This adds about $2,160 to the annual total. The total to finance the cars is $11,040/year. They take the vehicles in for scheduled maintenance a total of eight times per year and spend $300 each time (sometimes less, sometimes more depending on the service required by the schedule). The total is $2,400/year. This family has decent driving records and no teen drivers so the annual insurance premium is about $1,500. The grand total (leaving out car washes, aftermarket accessories, etc.) of annual expenses is $18,890. Wow, that IS a lot of money! 
Now, this $18,890 takes 1,350 hours (45% of their working hours) of this family's time to earn. So the final result is in: 26,000 miles/1,350 hours is about 19.25 miles per hour. This is about 5.5 times faster than Illich's estimate. So the answer to the first question, "is it true?", appears to be "no." The second question is not so easily answered. The entirety of this family's lifestyle revolves around the vehicles. Without them, it's unlikely (though certainly not impossible) that the $52,000 would be earned. And, while a vehicle undoubtedly constrains them financially, it also enables them to do many things that would otherwise be difficult or impossible. A vehicle-free lifestyle is certainly possible (I've lived such a lifestyle at various times and for various reasons), but this family has decided that the tradeoff is worth it. I WILL say, however, that they'd have been much better off with different vehicle choices. Another way of saying this is that I believe I've made assumptions that are generous to Illich's claim as repeated by Greer.
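For anyone who'd like to rerun the arithmetic with their own numbers, the household model above reduces to a few lines of Python. Every figure below is one of the illustrative assumptions from the paragraphs above, not real data:

```python
# Effective travel speed for the hypothetical household above.
# All inputs are the illustrative assumptions from the text.

annual_miles = 26_000
after_tax_wage = 14.0                 # $/hour brought home after taxes

# Annual vehicle expenses ($/year)
gas = annual_miles / 25 * 3.80        # 25 m.p.g. at $3.80/gallon
loan_payments = 440 * 12              # sedan: $440/month loan payment
lease_payments = 300 * 12             # minivan: $300/month lease payment
# Up-front cash ($5,800 down + $3,000 at signing) amortized over each term
up_front = (5_800 / 60) * 12 + (3_000 / 36) * 12
maintenance = 8 * 300                 # eight service visits at $300 each
insurance = 1_500

total = gas + loan_payments + lease_payments + up_front + maintenance + insurance
hours_to_earn = total / after_tax_wage
effective_mph = annual_miles / hours_to_earn

print(f"annual cost: ${total:,.0f}")
print(f"hours to earn it: {hours_to_earn:,.0f}")
print(f"effective speed: {effective_mph:.1f} m.p.h.")
```

Running it reproduces the totals in the text: roughly $18,890/year, about 1,350 hours of work, and an effective speed near 19.3 m.p.h. Changing any single assumption (the lease terms, the after-tax wage) shows how sensitive Illich's figure is to the inputs.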
In round numbers, there are 7 billion people eating on planet Earth each day. It's pretty clear that my diet here in Southern California is very different from that of a subsistence farmer in Namibia or a subsistence fisherman (or fisherperson) in Madagascar. It's also true that, as a fully grown adult of age 60, I probably consume more calories than my one-year-old grandson. But I estimate that my daily intake is on the order of 2000 kilocalories per day of food energy and I don't think that long term survival for an adult is possible on much under 1000 kilocalories per day. Of course, not all humans are adults and estimates of population in each quinquennial group are available from the UN. Still, for rough estimates, 7 billion people consuming 1000 kilocalories per day will work. Thus, in raw terms, this equates to humanity ingesting food energy at the rate of 7*10^12 kilocalories per day or about 10 "quads"/year (for some reason, lots of analysis of world primary energy is done in quads, where a quad is 10^15, or a quadrillion, BTU (a so-called "short scale" quadrillion)). Of course, the amount of chemical energy in our food as measured by bomb calorimetry exceeds this number since we cannot oxidize 100% of the mass that we ingest, the unburned residue leaves us in ... various ways... ahem. And we certainly don't ingest 100% of the food plants we eat. Further, many of us eat the meat of animals who have ingested the plants, or even the animals who have eaten the animals who... But, in my simplistic model world, I'm going to estimate that 20% of the mass of a food plant is edible, that we burn 50% of it for energy, that meat represents 20% of the kilocalories consumed by humanity, and that the "hit" on losses due to an animal intermediary is the square of the losses inherent in eating plants directly. Thus, P = 10*0.8*C + 100*0.2*C = 28*C, where P is the available kilocalories of "primary burnable energy" ("PBE") and C is kilocalories ingested as metabolizable food energy. 
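The plant/meat mix works out as follows. This is a sketch of the toy model only, using just the multipliers assumed in the text (20% edible, 50% metabolized, meat losses squared):

```python
# Toy model from the text: 20% of a food plant's mass is edible and 50% of
# that is burned for energy -> 10 kcal of primary burnable energy (PBE)
# per plant kcal eaten. The loss for meat is the square of the plant loss
# -> 100 kcal of PBE per meat kcal eaten.
plant_multiplier = 1 / (0.20 * 0.50)      # 10 kcal PBE per plant kcal
meat_multiplier = plant_multiplier ** 2   # 100 kcal PBE per meat kcal

plant_share, meat_share = 0.8, 0.2        # fractions of dietary kilocalories
pbe_per_dietary_kcal = (plant_multiplier * plant_share
                        + meat_multiplier * meat_share)
print(pbe_per_dietary_kcal)  # 28.0
```

The weighted sum gives the 28:1 ratio of PBE to ingested food energy used in the next paragraph.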
So we have that 28 kilocalories of PBE are required for every dietary kilocalorie in this model. So, we're now talking about 28*10 or 280 quads per year in photosynthetically created PBE. Depending on the plant species involved, the efficiency of using solar energy to convert carbon dioxide and water to biomass is in the range of 3% to 6%. I'll use 5% since it makes the arithmetic easy, and thus 5,600 quads of solar energy per year are needed to feed us. Let's move to SI units: the 5,600 quads are 5.9*10^9 terajoules and 5,600 quads/year are 187 terawatts. This energy comes, of course, from the sun. There are approximately 14 million km^2 or 14*10^12 m^2 of arable land on our planet so, on average, each arable square meter must be responsible for converting (5.9*10^9 terajoules)/(14*10^12 m^2)=422 megajoules/year or an average rate of 13.4 watts of incoming solar energy into ingested food energy. And, as we see in the graphic at right, this is something like an order of magnitude away from the total incoming energy absorbed by the surface of the Earth. While I've looked at other articles that come to different conclusions about solar energy embodied in our food, I'd be shocked if I were off by an order of magnitude. The lesson? There's not a lot of spare capacity in our system for squandering our biota's ability to feed us.
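The chain of unit conversions in this paragraph is easy to get wrong, so here it is as a short script. All inputs are the rough assumptions from the text (10 quads/year ingested, the 28:1 PBE ratio, 5% photosynthetic efficiency, 14 million km^2 of arable land):

```python
# Rough solar-budget arithmetic from the text.
QUAD_J = 1.055e18            # joules per quad (10^15 BTU)
SECONDS_PER_YEAR = 3.156e7

dietary_quads = 10           # quads/year ingested by humanity
pbe_quads = 28 * dietary_quads          # 280 quads/year of PBE
solar_quads = pbe_quads / 0.05          # at 5% photosynthetic efficiency

solar_joules = solar_quads * QUAD_J               # J/year of solar input
solar_watts = solar_joules / SECONDS_PER_YEAR     # average power

arable_m2 = 14e12            # ~14 million km^2 of arable land
per_m2_MJ = solar_joules / arable_m2 / 1e6        # MJ per m^2 per year
per_m2_W = solar_watts / arable_m2                # average W per m^2

print(f"{solar_quads:.0f} quads/yr, {per_m2_MJ:.0f} MJ/m^2/yr, "
      f"{per_m2_W:.1f} W per arable m^2")
```

This reproduces the figures above: 5,600 quads/year, about 422 MJ per arable square meter per year, and roughly 13.4 watts per square meter of continuous solar input devoted to feeding us.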
There have been a few seemingly simplistic or tautological things that I've incorporated into various situation analyses.
If something's wrong, something's wrong. This is a conclusion reached jointly with my closest friend, Dr. Captain Right Reverend Frank Hanna (Frank is second from right in this photo from 2000 of the Northern Arizona University Department of Geography and Public Planning), may he rest in peace. It came to us while flying together but has wide applicability in business, at home, and elsewhere. For example, I'm flying at 11,000 feet density altitude, power is at 27" manifold pressure and 2,400 r.p.m. I usually get 142 knots indicated airspeed in this situation, today it's 132. "Huh, oh well, la de da." 10 minutes later: "Oh $*)^%*, I forgot to put the flaps up." If something's wrong, something's wrong.
If it did, then it can. Frank was a professor of geology and took students on field trips, sometimes to map geological features. Students would look at, measure, and map some feature, scrunch their brows and say, "But can it do that?" Answer: If it did, then it can.
If they are, then they do. I got my start in the business I'm now in via performing inspection of various facets of building construction. I'd see some very shady things apparently involving various quid pro quos (quids pro quo?) between contractors and construction managers (who ostensibly represent the interests of the owner of the development). A newer inspector would say, "Do they really do that?" If they are, then they do.
Everything is dramatically more difficult and complex than you thought it would be, even when you allow for things being dramatically more difficult and complex than you thought they would be. This applies to everything from installing drywall or toilets to welding to applying conservation of momentum to a dynamics problem. Self explanatory.
Now, along with always ensuring that you're wearing clean underwear (about which I wrote a song decades ago, motivated by a friend who said she couldn't bring herself to commit suicide because she didn't want to be found in her unkempt apartment), applying these should come close to ensuring a smooth sail through life's various meanders.