Bernie. Bernie. BERNIE. 2016.


I’m a one issue voter: Climate disruption.

Minimizing it.

Little else matters.

“All the good that you have done, all the good you can imagine doing, will be wiped out by floods, by fires, by superstorms, if you fail to act now to deal with this crisis that is a gun, pointed at the head of the future.”

That’s from Van Jones, at the Forward on Climate rally, D.C., 17th February 2013. I was there. As was Bill McKibben. I’ve “been there” since 1971 (*).

And, to my progressive friends, it’s triage. This is an emergency. We cannot do it all at once. We need to make choices, to do what needs to be done, putting the other important things later. Things just have to get out of the way.

Vote Bernie Sanders, 2016.

(*) Ecology Action for Rhode Island, founded as one of the Ecology Action groups in Berkeley, CA, and a local chapter in Providence.

Posted in adaptation, Anthropocene, bridge to nowhere, carbon dioxide, Carbon Worshipers, citizenship, civilization, clean disruption, climate, climate change, climate disruption, climate models, coastal communities, compassion, corporate litigation on damage from fossil fuel emissions, corporate supply chains, decentralized electric power generation, decentralized energy, demand-side solutions, destructive economic development, Eaarth, ecology, economics, environment, exponential growth, fossil fuel divestment, geophysics, global warming, greenhouse gases, Hyper Anthropocene, investing, investment in wind and solar energy, liberal climate deniers, meteorology, physical materialism, physics, rationality, reasonableness, risk, science, Spaceship Earth, sustainability, T'kun Olam, temporal myopia, the right to know, the value of financial assets, zero carbon | Leave a comment

Phytoplankton-delineated oceanic eddies near Antarctica

Excerpt, from NASA:

Phytoplankton are the grass of the sea. They are floating, drifting, plant-like organisms that harness the energy of the Sun, mix it with carbon dioxide that they take from the atmosphere, and turn it into carbohydrates and oxygen. Phytoplankton are critical to marine life, being the primary producers of food for the oceanic food web, from zooplankton to fish and shellfish to whales.

Like plants and trees on land, phytoplankton give us a lot more than food. It is estimated that 50 to 80 percent of the oxygen in our atmosphere has been produced by phytoplankton. At the same time, they are responsible for drawing down significant portions of the carbon dioxide from the air. The tiniest of living organisms exert an outsized influence on the planet.

In mid-summer 2016 in the Southern Hemisphere, they took up an outsized parcel of real estate in the Southern Ocean, as shown in the image above. On January 13, 2016, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi NPP satellite captured this view of extensive phytoplankton blooms stretching from the tip of South America across to the Antarctic Peninsula. The image was built with data from the red, green, and blue wavelength bands on VIIRS, plus chlorophyll data. A series of image-processing steps highlighted the color differences and more subtle features, such as nearly a dozen eddies of varying sizes.

See also Professor John Marshall’s work on the role of eddies in oceanic circulation.

See also work by the Woods Hole Oceanographic Institution on currents, gyres, and eddies and on eddies as means of oceanic transport.

And, finally, work by Dr Emily Shuckburgh on the mathematical dynamics of ocean mixing.

Posted in AMETSOC, Antarctica, Arctic, bacteria, Carbon Cycle, complex systems, differential equations, diffusion, diffusion processes, dynamic linear models, dynamical systems, Emily Shuckburgh, environment, fluid dynamics, geophysics, GLMs, John Marshall, marine biology, Mathematics and Climate Research Network, NASA, numerical analysis, numerical software, oceanic eddies, oceanography, physics, phytoplankton, science, thermohaline circulation, WHOI, Woods Hole Oceanographic Institution | Leave a comment

That two degree limit is closer than it appears

The UNFCCC’s COP21 concluded with goals that aim to limit global warming to +1.5°C, and certainly to keep it below +2.0°C, both measured with respect to pre-industrial temperatures.

Bad news.

According to the United States National Center for Atmospheric Research (“NCAR”), in cooperation with the University Corporation for Atmospheric Research (“UCAR”), we are already at +1.5°C. The explanation?

“The physics of Earth’s climate system means that warming continues long after greenhouse gases enter the atmosphere. That’s because the oceans continue to warm up for decades in response to greenhouse gases already in the atmosphere, creating a lag in the climate system.

“As a result, we’re committed to about 0.5°C of additional warming even if we could immediately stabilize levels of carbon dioxide in the atmosphere, research has shown. The extra warming means that the planet is effectively three-quarters of the way to the +2°C target.”
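The lag NCAR describes can be illustrated with a toy one-box energy balance model, C dT/dt = F − λT: hold the forcing fixed and the temperature still creeps upward for decades toward its equilibrium. This is only a sketch; the heat capacity, feedback, and forcing values below are illustrative assumptions, not NCAR’s numbers.

```python
# Toy one-box energy balance model: C * dT/dt = F - lam * T.
# Forcing F is held fixed ("immediately stabilize" greenhouse gases),
# yet warming continues toward the equilibrium T_eq = F / lam.
# All parameter values are illustrative assumptions.

C = 8.0     # effective heat capacity, W yr m^-2 K^-1 (ocean mixed layer)
lam = 1.25  # climate feedback parameter, W m^-2 K^-1
F = 2.5     # radiative forcing after stabilization, W m^-2

dt = 0.1    # time step, years
T = 1.0     # warming already realized at stabilization, K
for _ in range(int(100 / dt)):      # integrate 100 years forward (Euler)
    T += dt * (F - lam * T) / C

T_eq = F / lam
print(round(T, 2), T_eq)            # T has crept up to ~2.0 K, the equilibrium
```

Starting at +1.0 K with stabilized forcing, the box still warms another half degree and more on its way to F/λ, which is the qualitative point of the NCAR quote.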

Posted in adaptation, Anthropocene, carbon dioxide, carbon dioxide capture, carbon dioxide sequestration, Carbon Worshipers, chaos, civilization, climate, climate change, climate disruption, complex systems, COP21, corporate litigation on damage from fossil fuel emissions, corporate supply chains, critical slowing down, differential equations, Eaarth, ecology, environment, evidence, exponential growth, extended supply chains, forecasting, fossil fuel divestment, fossil fuels, geoengineering, geophysics, global warming, greenhouse gases, Hyper Anthropocene, IPCC, James Hansen, meteorology, mitigation, NCAR, NOAA, oceanography, physics, Principles of Planetary Climate, rationality, reasonableness, risk, science, temporal myopia, the right to be and act stupid, the right to know, the value of financial assets, Wally Broecker, zero carbon | Leave a comment

“Grid shading by simulated annealing” [Martyn Plummer]


Source: Grid shading by simulated annealing (or what I did on my holidays), aka “fun with GCHQ job adverts”, by Martyn Plummer, developer of JAGS.


I wanted to solve the puzzle but did not want to sit down with a pencil and paper. So I decided to write an R program to solve it instead. Is this cheating? Crafty Alison seems to think so, but I was inspired by Christian Robert’s regular series of R-based solutions to the weekly Le Monde puzzle. He often uses probabilistic solutions to logical problems and I realised that this grid shading puzzle could be treated as a Bayesian inference problem.
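The idea generalizes: encode shaded/unshaded cells as a 0/1 grid, define an energy counting constraint violations, and anneal. Below is a minimal sketch in that spirit, on an invented toy problem (matching given row and column sums, simpler than the actual run-length clues of the GCHQ puzzle); it is not Plummer’s R code.

```python
import math
import random

random.seed(1)

# Toy grid-shading problem: find a 5x5 0/1 grid matching given row and
# column sums. (The real puzzle has run-length clues; this simplified
# energy function is an assumption for illustration.)
n = 5
row_sums = [2, 3, 1, 4, 2]
col_sums = [3, 2, 2, 3, 2]

grid = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]

def energy(g):
    """Total constraint violation: zero iff all sums match."""
    e = sum(abs(sum(g[i]) - row_sums[i]) for i in range(n))
    e += sum(abs(sum(g[i][j] for i in range(n)) - col_sums[j])
             for j in range(n))
    return e

T = 2.0
for _ in range(200_000):
    if energy(grid) == 0:
        break
    i, j = random.randrange(n), random.randrange(n)
    before = energy(grid)
    grid[i][j] ^= 1                       # propose flipping one cell
    delta = energy(grid) - before
    if delta > 0 and random.random() >= math.exp(-delta / T):
        grid[i][j] ^= 1                   # reject uphill move: undo flip
    T = max(0.05, T * 0.999)              # geometric cooling schedule

print(energy(grid))   # 0 once a valid shading has been found
```

The Metropolis-style accept/reject step is what connects this to the Bayesian view Plummer mentions: annealing is sampling from a distribution proportional to exp(−energy/T) while T shrinks.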

Posted in approximate Bayesian computation, Bayesian, Bayesian inversion, Boltzmann, BUGS, Christian Robert, Gibbs Sampling, JAGS, likelihood-free, Markov Chain Monte Carlo, Martyn Plummer, mathematics, maths, MCMC, Monte Carlo Statistical Methods, optimization, probabilistic programming, SPSA, stochastic algorithms, stochastic search | Leave a comment

Ah, Hypergeometric!

(“Ah, Hypergeometric!” To be said with the same resignation and acceptance as in “I’ll burn my books–Ah, Mephistopheles!” from Faust.) ;)

Dr John Cook, eminent all ’round statistician (with a specialty in biostatistics) and statistical consultant, took up a comment I posted at his site, and produced a very fine treatment connecting the hypergeometric distribution and hypergeometric series. And he kindly gave me a hat tip.



What’s the connection between the hypergeometric distributions, hypergeometric functions, and hypergeometric series?

The hypergeometric distribution is a probability distribution with parameters N, M, and n. Suppose you have an urn containing N balls, M of them red and the rest, N − M, blue, and you select n balls at once. The hypergeometric distribution gives the probability that exactly k of the selected balls are red.
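In code, the PMF is just a ratio of binomial coefficients, and the connection Cook draws shows up in the fact that the ratio of successive terms is a rational function of k, the defining property of a hypergeometric series. A quick sketch:

```python
from math import comb

def hypergeom_pmf(k, N, M, n):
    """Probability of k red balls when drawing n without replacement
    from an urn of N balls, M of them red."""
    return comb(M, k) * comb(N - M, n - k) / comb(N, n)

# Urn with N = 20 balls, M = 7 red; draw n = 5.
N, M, n = 20, 7, 5
pmf = [hypergeom_pmf(k, N, M, n) for k in range(n + 1)]

# The probabilities sum to 1, which is Vandermonde's identity:
# sum_k C(M, k) * C(N - M, n - k) = C(N, n).
print(round(sum(pmf), 12))   # 1.0
```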

Posted in card decks, card draws, card games, games of chance, John Cook, mathematics, maths, probability, sampling, sampling without replacement, statistical dependence | Leave a comment

high dimension Metropolis-Hastings algorithms

(Apologies, but I thought this posting was still in draft and had not been sent out. It was, by my apparently clicking the wrong button. The writing was hastily done, with the intent of finishing today. In fact, as written, it was wrong. Corrections are now included below.)


… If attempting to simulate from a multivariate standard normal distribution in a large dimension, when starting from the mode of the target, i.e., its mean γ, leaving the mode γ is extremely unlikely, given the huge drop between the value of the density at the mode γ and at likely realisations …

Source: high dimension Metropolis-Hastings algorithms, by Professor Christian Robert

While this is an intriguing and important observation, in retrospect it seems to me to be another manifestation of the curse of dimensionality. As the number of dimensions increases, the hypervolume around a mode, even a Gaussian one, becomes an ever smaller fraction of the total volume. Surely a properly formed Markov Chain Monte Carlo (“MCMC”) sampler will eventually defeat this and find the rest of the surface, but that could take an arbitrarily long time.
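The “huge drop” is easy to see numerically: a draw from a standard normal in dimension d has norm concentrated near √d, so its log-density sits about d/2 nats below the mode.

```python
import math
import random

random.seed(0)

d = 1000
# One draw from a d-dimensional standard normal: its norm concentrates
# near sqrt(d), far from the mode at the origin.
x = [random.gauss(0.0, 1.0) for _ in range(d)]
r = math.sqrt(sum(xi * xi for xi in x))
print(round(r, 1), round(math.sqrt(d), 1))   # both close to 31.6

# Log-density drop from the mode to a typical draw is ||x||^2 / 2,
# roughly d / 2 = 500 nats -- the "huge drop" in question.
print(round(r * r / 2.0))
```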

I’ve always seen the purpose of MCMC as exploring the posterior, not finding a feature like a mode. So, yes, the behavior cited, if it proves out with standard proposals, is disappointing. Perhaps an alternative would be to use stochastic search algorithms, like James Spall’s SPSA. I’m not saying they would definitely have an easier time here: I have not done the experiment. But SPSA’s proposals are quite different from the usual proposals of MCMC and friends (see Section 7.3 of Introduction to Stochastic Search and Optimization, 2003, especially the paragraphs discussing Figure 7.2 on page 185). Indeed, at least for SPSA, proposals which are symmetric Gaussians and Uniforms don’t work well, and can be theoretically shown (there) to fail a certain inverse moment condition in Spall’s inequality (7.8). SPSA’s best proposals are U-shaped.
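For concreteness, here is a minimal SPSA sketch on a toy quadratic, using the Rademacher (±1) perturbations Spall recommends. The gain-sequence exponents follow his standard guidance, but the specific constants and the toy objective are illustrative assumptions.

```python
import random

random.seed(2)

def loss(theta):
    # Toy objective: quadratic bowl with minimum at (1, -2).
    return (theta[0] - 1.0) ** 2 + (theta[1] + 2.0) ** 2

theta = [5.0, 5.0]
for k in range(1, 2001):
    a = 0.1 / k ** 0.602          # gain sequences with Spall's standard
    c = 0.1 / k ** 0.101          # exponents; constants are assumptions
    delta = [random.choice((-1.0, 1.0)) for _ in theta]   # Rademacher
    plus = [t + c * d for t, d in zip(theta, delta)]
    minus = [t - c * d for t, d in zip(theta, delta)]
    # Simultaneous-perturbation gradient estimate: only TWO loss
    # evaluations regardless of dimension.
    g = (loss(plus) - loss(minus)) / (2.0 * c)
    theta = [t - a * g / d for t, d in zip(theta, delta)]

print([round(t, 2) for t in theta])   # near [1.0, -2.0]
```

Note that all coordinates are perturbed simultaneously, which is what makes the per-iteration cost dimension-independent and why the perturbation distribution must satisfy the inverse moment condition (Gaussians fail it; ±1 does not).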

SPSA attempts to estimate a stochastic gradient. In fact, I suspect, but have not yet proved, that there is a connection between this kind of gradient estimation and the gradient boosting of Friedman (1999).

The situation which Professor Robert describes is another instance where, in practice, it’s a good idea to initialize the MCMC from several places. Of course, it might then have a difficult time finding the mode, if the plateau around it has a shallow gradient.

Posted in Bayes, Bayesian, Bayesian inversion, boosting, chance, Christian Robert, computation, ensembles, Gibbs Sampling, James Spall, Jerome Friedman, Markov Chain Monte Carlo, mathematics, maths, MCMC, Monte Carlo Statistical Methods, multivariate statistics, numerical software, numerics, optimization, reasonableness, Robert Schapire, SPSA, state-space models, statistics, stochastic algorithms, stochastic search, stochastics, Yoav Freund | Leave a comment

Tesla. No, not the car.

From Climate Denial Crock of the Week.





First let us ask: Whence comes all the motive power? What is the spring that drives all? We see the ocean rise and fall, the rivers flow, the wind, rain, hail, and snow beat on our windows, the trains and steamers come and go; we hear the rattling noise of carriages, the voices from the street; we feel, smell, and taste; and we think of all this. And all this movement, from the surging of the mighty ocean to that subtle movement concerned in our thought, has but one common cause. All this energy emanates from one single center, one single source—the sun. The sun is the spring that drives all. The sun maintains all human life and supplies all human energy. Another answer we have now found to the above great question: To increase the force accelerating human movement means to turn to the uses of man more of the sun’s energy. We honor and revere those great men of bygone times whose names are linked with immortal achievements, who have proved themselves benefactors of humanity—the religious reformer with his wise maxims of life, the philosopher with his deep truths, the mathematician with his formulæ, the physicist with his laws, the discoverer with his principles and secrets wrested from nature, the artist with his forms of the beautiful; but who honors him, the greatest of all,—who can tell the name of him,—who first turned to use the sun’s energy to save the effort of a weak fellow-creature? That was man’s first act of scientific philanthropy, and its consequences have been incalculable.

From the very beginning three ways of drawing energy from the sun were open to man. The savage, when he warmed his frozen limbs at a fire kindled in some way, availed himself of the energy of the sun stored in the burning material. When he carried a bundle of branches to his cave and burned them there, he made use of the sun’s stored energy transported from one to another locality. When he set sail to his canoe, he utilized the energy of the sun applied to the atmosphere or the ambient medium. There can be no doubt that the first is the oldest way. A fire, found accidentally, taught the savage to appreciate its beneficial heat. He then very likely conceived of the idea of carrying the glowing members to his abode. Finally he learned to use the force of a swift current of water or air. It is characteristic of modern development that progress has been effected in the same order. The utilization of the energy stored in wood or coal, or, generally speaking, fuel, led to the steam-engine. Next a great stride in advance was made in energy-transportation by the use of electricity, which permitted the transfer of energy from one locality to another without transporting the material. But as to the utilization of the energy of the ambient medium, no radical step forward has as yet been made known.

But, whatever our resources of primary energy may be in the future, we must, to be rational, obtain it without consumption of any material. Long ago I came to this conclusion, and to arrive at this result only two ways, as before indicated, appeared possible—either to turn to use the energy of the sun stored in the ambient medium, or to transmit, through the medium, the sun’s energy to distant places from some locality where it was obtainable without consumption of material. At that time I at once rejected the latter method as entirely impracticable, and turned to examine the possibilities of the former.

It is difficult to believe, but it is, nevertheless, a fact, that since time immemorial man has had at his disposal a fairly good machine which has enabled him to utilize the energy of the ambient medium. This machine is the windmill. Contrary to popular belief, the power obtainable from wind is very considerable. Many a deluded inventor has spent years of his life in endeavoring to “harness the tides,” and some have even proposed to compress air by tide- or wave-power for supplying energy, never understanding the signs of the old windmill on the hill, as it sorrowfully waved its arms about and bade them stop. The fact is that a wave- or tide-motor would have, as a rule, but a small chance of competing commercially with the windmill, which is by far the better machine, allowing a much greater amount of energy to be obtained in a simpler way. Wind-power has been, in old times, of inestimable value to man, if for nothing else but for enabling him to cross the seas, and it is even now a very important factor in travel and transportation. But there are great limitations in this ideally simple method of utilizing the sun’s energy. The machines are large for a given output, and the power is intermittent, thus necessitating the storage of energy and increasing the cost of the plant.

A far better way, however, to obtain power would be to avail ourselves of the sun’s rays, which beat the earth incessantly and supply energy at a maximum rate of over four million horsepower per square mile. Although the average energy received per square mile in any locality during the year is only a small fraction of that amount, yet an inexhaustible source of power would be opened up by the discovery of some efficient method of utilizing the energy of the rays. The only rational way known to me at the time when I began the study of this subject was to employ some kind of heat- or thermodynamic-engine, driven by a volatile fluid evaporated in a boiler by the heat of the rays. But closer investigation of this method, and calculation, showed that, notwithstanding the apparently vast amount of energy received from the sun’s rays, only a small fraction of that energy could be actually utilized in this manner. Furthermore, the energy supplied through the sun’s radiations is periodical, and the same limitations as in the use of the windmill I found to exist here also. After a long study of this mode of obtaining motive power from the sun, taking into account the necessarily large bulk of the boiler, the low efficiency of the heat-engine, the additional cost of storing the energy and other drawbacks, I came to the conclusion that the “solar engine,” a few instances excepted, could not be industrially exploited with success.

Another way of getting motive power from the medium without consuming any material would be to utilize the heat contained in the earth, the water, or the air for driving an engine. It is a well-known fact that the interior portions of the globe are very hot, the temperature rising, as observations show, with the approach to the center at the rate of approximately 1 degree C. for every hundred feet of depth. The difficulties of sinking shafts and placing boilers at depths of, say, twelve thousand feet, corresponding to an increase in temperature of about 120 degrees C., are not insuperable, and we could certainly avail ourselves in this way of the internal heat of the globe. In fact, it would not be necessary to go to any depth at all in order to derive energy from the stored terrestrial heat. The superficial layers of the earth and the air strata close to the same are at a temperature sufficiently high to evaporate some extremely volatile substances, which we might use in our boilers instead of water. There is no doubt that a vessel might be propelled on the ocean by an engine driven by such a volatile fluid, no other energy being used but the heat abstracted from the water. But the amount of power which could be obtained in this manner would be, without further provision, very small.

Electricity produced by natural causes is another source of energy which might be rendered available. Lightning discharges involve great amounts of electrical energy, which we could utilize by transforming and storing it. Some years ago I made known a method of electrical transformation which renders the first part of this task easy, but the storing of the energy of lightning discharges will be difficult to accomplish. It is well known, furthermore, that electric currents circulate constantly through the earth, and that there exists between the earth and any air stratum a difference of electrical pressure, which varies in proportion to the height.
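Tesla’s figure of “over four million horsepower per square mile” in the passage above checks out against the modern solar constant:

```python
# Check Tesla's "over four million horsepower per square mile"
# against the modern solar constant (top of atmosphere).
solar_constant = 1361.0      # W / m^2
sq_mile = 1609.344 ** 2      # m^2 in one square mile
hp = 745.7                   # W per mechanical horsepower

power_hp = solar_constant * sq_mile / hp
print(round(power_hp / 1e6, 1))   # about 4.7 million horsepower
```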

Posted in adaptation, Anthropocene, Cape Wind, Carbon Worshipers, civilization, clean disruption, climate, climate change, climate disruption, conservation, consumption, decentralized electric power generation, decentralized energy, demand-side solutions, destructive economic development, economics, efficiency, electricity, electricity markets, energy, energy reduction, energy utilities, engineering, environment, fossil fuels, games of chance, global warming, greenhouse gases, Hyper Anthropocene, investment in wind and solar energy, Mark Jacobson, meteorology, microgrids, Nikola Tesla, rationality, reasonableness, solar energy, solar power,, wind energy, wind power, zero carbon | Leave a comment

Underestimated Rates of Sea Level Rise

Posted in adaptation, Antarctica, Anthropocene, Arctic, bridge to nowhere, carbon dioxide, civilization, climate, climate change, climate disruption, coastal communities, ecology, floods, Florida, fossil fuels, geophysics, glaciers, global warming, greenhouse gases, Hyper Anthropocene, icesheets, IPCC, James Hansen, John Englander, Richard Alley, Scituate, sea level rise, Stefan Rahmstorf, temporal myopia, the right to know, Wally Broecker, zero carbon | Leave a comment

Techno Utopias

Professor Kevin Anderson on Techno Utopias. The Paris “COP21” agreement is/was not only expecting miracles, it was counting on them.

Y’think climate disruption causes ecosystem disruption: Try geoengineering.

Well the answer was simple. If we choose to continue our love affair with oil, coal and gas, loading the atmosphere with evermore carbon dioxide, then at some later date when sense prevails, we’ll be forced to attempt sucking our carbon back out of the atmosphere. Whilst a plethora of exotic Dr Strangelove options vie for supremacy to deliver on such a grand project, those with the ear of governments have plumped for BECCS (biomass energy carbon capture and storage) as the most promising “negative emission technology”. However these government advisors (Integrated Assessment Modellers – clever folk developing ‘cost-optimised’ solutions to 2°C by combining physics with economic and behavioural modelling) no longer see negative emission technologies as a last ditch Plan B – but rather now promote it as central pivot of the one and only Plan.

So what exactly does BECCS entail? Apportioning huge swathes of the planet’s landmass to the growing of bio-energy crops (from trees to tall grasses) – which, as they grow, absorb carbon dioxide through photosynthesis. Periodically these crops are harvested; processed for worldwide travel; shipped all around the globe and finally combusted in thermal powerstations. The carbon dioxide is then stripped from the waste gases; compressed (almost to a liquid); pumped through large pipes over potentially very long distances; and finally stored deep underground in various geological formations (from exhausted oil and gas reservoirs through to saline aquifers) for a millennium or so.

And I think some of my progressive, environmentally-minded friends are laughable, worrying about the ecosystem impacts of flooding from hydropower, when Those In The Know (UNFCCC) are proceeding with plans for geoengineering later this century, and commissioning scientific investigations (IPCC) to more deeply examine them and calibrate exactly how much time humanity has before things get really bad (read that as meaning the wealthy start to lose coastal properties and the like).

Posted in adaptation, Anthropocene, bollocks, bridge to nowhere, carbon dioxide, carbon dioxide capture, carbon dioxide sequestration, civilization, climate, climate change, climate disruption, coastal communities, complex systems, consumption, COP21, corporate supply chains, denial, disingenuity, economics, environment, ethics, evidence, exponential growth, extended supply chains, FEMA, finance, fossil fuels, games of chance, geophysics, glaciers, global warming, greenhouse gases, greenwashing, Hyper Anthropocene, icesheets, ignorance, IPCC, James Hansen, Kevin Anderson, Lenny Smith, liberal climate deniers, living shorelines, MA, meteorology, Neill deGrasse Tyson, oceanography, physics, planning, population biology, rationality, Ray Pierrehumbert, reasonableness, regime shifts, Sankey diagram, science, sea level rise, selfishness, silly tech devices, Techno Utopias, the right to know, the value of financial assets | Leave a comment

Causal Diagrams

Like Sankey diagrams, causal diagrams are a useful tool for assessing and communicating complicated systems and their interrelationships:

It’s possible to use these for analysis and prescription:

Here is the (promised) presentation on reinforcing loops:

So how can these techniques be used? And why are they powerful?

How about looking at the adoption of methane, advertised as “natural gas” by fossil fuel companies?

There are other techniques similar to this, like analysis with stock and flow models:
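A stock-and-flow model is, at bottom, just numerical integration of a stock by its inflows and outflows. A minimal sketch (the numbers below are illustrative assumptions, loosely patterned on the atmospheric carbon stock in GtC, not calibrated values):

```python
# Minimal stock-and-flow sketch: one stock integrated by inflow minus
# outflow. All numbers are illustrative assumptions, loosely patterned
# on atmospheric carbon in GtC -- not calibrated values.
stock = 870.0          # initial stock in the atmosphere
inflow = 10.0          # emissions per year (assumed constant)
uptake_rate = 0.006    # fraction removed by sinks per year (assumed)

history = [stock]
for year in range(50):
    outflow = uptake_rate * stock     # sink flow proportional to stock
    stock += inflow - outflow         # integrate one year (Euler step)
    history.append(stock)

print(round(history[-1]))   # stock grows while inflow exceeds outflow
```

The balancing loop is visible in the code: the outflow grows with the stock, so the stock heads toward the level where outflow equals inflow, exactly the kind of feedback a causal diagram makes explicit.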

Posted in adaptation, bridge to nowhere, Carbon Cycle, carbon dioxide, carbon dioxide sequestration, Carbon Tax, Carbon Worshipers, causal diagrams, clean disruption, climate, climate change, climate disruption, climate education, climate models, demand-side solutions, differential equations, dynamical systems, ecology, economics, energy utilities, environment, exponential growth, fossil fuel divestment, fossil fuels, global warming, greenhouse gases, greenwashing, Hyper Anthropocene, mathematics, mathematics education, maths, methane, mitigation, natural gas, planning, prediction, rationality, reasonableness, recycling, Sankey diagram, sustainability, the right to know, zero carbon | Leave a comment

K-Nearest Neighbors: dangerously simple

Yeah, Mathbabe’s got it right: People who use kNN often don’t think about these things.

For those who aren’t familiar with this technique, here’s a description from Zhi-Hua Zhou in Ensemble Methods: Foundations and Algorithms (section 1.2.5):

“The k-nearest neighbor (kNN) algorithm relies on the principle that objects similar in the input space are also similar in the output space. It is a lazy learning approach since it does not have an explicit training process, but simply stores the training set instead. For a test instance, a k-nearest neighbor learner identifies the k instances from the training set that are closest to the test instance. Then, for classification, the test instance will be classified to the majority class among the k instances; while for regression, the test instance will be assigned the average value of the k instances.”
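Zhou’s description translates almost line for line into code. A minimal sketch (Euclidean distance, majority vote, and no attention to feature scaling, which is exactly the sort of trap the Mathbabe post warns about):

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label) pairs. A lazy learner:
    no training step, just store the set and search it per query."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), label)
        for x, label in train
    )
    top = [label for _, label in dists[:k]]      # the k nearest instances
    return Counter(top).most_common(1)[0][0]     # majority class

train = [((0, 0), "blue"), ((0, 1), "blue"), ((1, 0), "blue"),
         ((5, 5), "red"), ((5, 6), "red"), ((6, 5), "red")]
print(knn_classify(train, (1, 1)))   # "blue"
print(knn_classify(train, (5, 4)))   # "red"
```

Note that if one feature were measured in, say, millimeters and another in kilometers, the distance would be dominated by the first; that silent dependence on scaling is one reason kNN is “dangerously simple.”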


I spend my time at work nowadays thinking about how to start a company in data science. Since there are tons of companies now collecting tons of data, and they don’t know what to do with it, nor who to ask, part of me wants to design (yet another) dumbed-down “analytics platform” so that business people can import their data onto the platform, and then perform simple algorithms themselves, without even having a data scientist to supervise.

After all, a good data scientist is hard to find. Sometimes you don’t even know if you want to invest in this whole big data thing, you’re not sure the data you’re collecting is all that great or whether the whole thing is just a bunch of hype. It’s tempting to bypass professional data scientists altogether and try to replace them with software.

I’m here to say, it’s not clear that’s…


Posted in big data, data science, evidence, machine learning | Leave a comment

Massachusetts Solar Suburbs (a Google group)

I have just created the Massachusetts Solar Suburbs Google group.

Its welcome message reads:

Welcome to the Massachusetts Solar Suburbs!

This group exists to provide a forum for owners of solar installations, typically residential, or serving residences, to share their experiences, compare notes, and work together to advance adoption of solar energy in the suburbs of Massachusetts, whether by power purchase agreements, leasing solar panels, owning solar panels, having a partnership interest in a solar farm, or contracting to obtain power from other solar installations in Massachusetts.

This is a moderated Web forum, although members can receive notices of posts.

While the moderator expects many will seek to pursue solar installations for environmental reasons, notably helping to forestall climate disruption, there are many other excellent reasons for doing solar, ranging from financial motivations (with or without federal and state incentives), to seeking self-sufficiency in energy, to working to minimize the authority which the state and large corporate monopolies, such as utility companies, have over individuals, families, neighborhoods, and towns.

All with these interests are welcome. A civilized discussion will be expected.

Debates about climate change should be taken elsewhere (recommendation in the link).

People who don’t have solar installations but are curious, interested, or are seeking advice are welcome. Note, for legal and risk reasons, no one here can give you personal advice regarding taxes and credits. Seek your own accountant for that kind of information.

Thank you for your interest, and WELCOME!

Posted in adaptation, Anthropocene, Carbon Tax, Carbon Worshipers, citizenship, civilization, clean disruption, climate, climate change, climate disruption, corporate litigation on damage from fossil fuel emissions, decentralized electric power generation, decentralized energy, destructive economic development, diffusion, diffusion processes, economics, education, efficiency, energy, energy reduction, energy utilities, engineering, environment, ethics, fear uncertainty and doubt, fossil fuel divestment, fossil fuels, global warming, Google, greenhouse gases, Hyper Anthropocene, investing, investment in wind and solar energy, MA, meteorology, microgrids, optimization, physics, planning, politics, public utility commissions, rationality, reasonableness, risk, Sankey diagram, solar energy, solar power,, sustainability, Tony Seba, wind energy, wind power, zero carbon | Leave a comment

Life cycle analysis of emissions from various forms of energy converted to electricity

There was a recent discussion regarding the life cycle analysis of various forms of energy, principally as converted to electricity. Given that everything I know about sustainability and life cycle analysis suggests it is a very complicated business, I decided to dig into this a bit. (See also.) The discussion only considers a few major scholarly references, notably Stanford’s Mark Jacobson, whom I’ve written a lot about on these pages, his colleague, Mark Delucchi, and then critics of their work, Barry Brook, Charles Barton, and Howard Hayden.

This is not a summary. While I have read the references below, I have not analyzed them.

I do have some impressions. Even if my impressions amount to what is essentially a meta-analysis, the point estimates of life cycle (“LCA”) emissions are all over the place, possibly due to different methods and assumptions. If that sounds demeaning, it is not. The same kind of scatter can be found in most meta-analyses of medical treatments and procedures. Consider, for example, the studies leading to the setting of the “normal” range for diastolic blood pressure when diagnosing hypertension, even though there are large cohort studies as well, namely, Sesso, et al, “Systolic and diastolic blood pressure, pulse pressure, and mean arterial pressure as predictors of cardiovascular disease risk in men“. The point is that sometimes decisions need to be made, and the guidance of a meta-analysis is all we’ve got. But, if done, it needs to be done with great care. And it needn’t remain that way.
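The “all over the place” scatter is why pooling methods matter. Here is a toy fixed-effect, inverse-variance pooling of discrepant estimates; the numbers are invented for illustration, not drawn from the LCA studies cited.

```python
# Toy fixed-effect meta-analysis: inverse-variance pooling of point
# estimates that are "all over the place". All numbers are invented
# for illustration -- NOT from the LCA studies discussed here.
estimates = [12.0, 66.0, 9.0, 110.0, 32.0]   # hypothetical gCO2e/kWh
variances = [9.0, 400.0, 4.0, 900.0, 100.0]  # hypothetical variances

weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
se = (1.0 / sum(weights)) ** 0.5
print(round(pooled, 1), round(se, 1))   # 11.2 1.6
```

Precise studies dominate the pooled estimate, which is exactly why the care in choosing and weighting studies, noted above, matters so much.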

In particular, if informed policy is the goal, empirical measurements of greenhouse gas emissions and their allocation to activities ought to be much better done and accounted for. What’s being used above is a bunch of bounds and guesstimates, derived from either ab initio calculations (which can go very wrong with engineered systems), self-reporting by emitters, or inferences based upon economic activity. A hidden benefit of a Carbon Tax (or “fee-and-dividend scheme”, for those afraid of Tea Partyers) is that this kind of accounting might begin to be possible. Detailed time and spatial series of emissions at global scale down to neighborhoods ought to be possible, but that is an enormous undertaking.

But, also, the Hatch analysis says nuclear power has LCA GHG emissions almost twice those of wind turbines.

The World Nuclear Association (2015) summarizes (based upon reports from three countries) that nuclear has half the life cycle GHG emissions of solar, and (except for Finland) roughly the same as wind. But it has double the life cycle GHG emissions of hydropower. NREL concludes life cycle emissions from nuclear and wind are comparable, as are those for concentrating solar power, but those for solar PV are greater (though significantly less than methane’s).

For China, Aden, Marty, and Muller found onshore wind turbines to have LCA emissions greater than nuclear, with MOX and reprocessing considered, and much greater if they were not. Offshore wind turbines have lower LCA emissions (which seems strange to me). They found concentrated solar to be in the ballpark, with solar PV 4x bigger. Hydropower again wins.

Weisser gives comparable results to the other findings, except that his assessment of LCA emissions for solar PV is lower, though still 3x higher than the nuclear-wind LCAs. Weisser also cautions that hydropower may not yield the low LCA emissions estimates often touted for it, as these sometimes neglect the flooding of new land, with its decomposition and emissions, as well as ecosystem changes which produce higher lifetime contributions.

Sovacool looks at the LCA emissions for nuclear energy exclusively, reviewing a large number of articles and summarizing them.

Like I said, I’m not trying to summarize anything. But a significant point is that one doesn’t have to assume a limited nuclear war to arrive at a rationalized conclusion that nuclear power has life cycle GHG emissions in excess of wind and solar. Also, the ranges of reported per unit energy emissions should be considered in comparisons. For some reason, some of the reported ranges for nuclear power LCAs are huge. Sovacool provides some explanation for why.
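What a meta-analysis buys, given such scatter, can be sketched with a standard random-effects pooling, the DerSimonian-Laird estimator. To be clear, every number below is a purely hypothetical stand-in, not a value from any of the reports discussed; this is a minimal sketch of the technique, nothing more:

```python
import math

# Hypothetical LCA point estimates (gCO2e/kWh) and within-study variances.
# These numbers are illustrative only, NOT taken from the reports above.
estimates = [12.0, 16.0, 66.0, 9.0, 32.0]
variances = [4.0, 9.0, 400.0, 2.0, 100.0]

def dersimonian_laird(y, v):
    """Random-effects pooled estimate via the DerSimonian-Laird method."""
    w = [1.0 / vi for vi in v]                      # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    k = len(y)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)              # between-study variance
    w_star = [1.0 / (vi + tau2) for vi in v]        # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return mu, se, tau2

mu, se, tau2 = dersimonian_laird(estimates, variances)
print(f"pooled: {mu:.1f} gCO2e/kWh (s.e. {se:.1f}, tau^2 = {tau2:.1f})")
```

The between-study variance \tau^2 is the formal expression of the “all over the place” scatter: when it is large relative to the within-study variances, the pooled point estimate deserves much less trust, which is exactly the care and judgment urged above.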

Posted in adaptation, anemic data, Anthropocene, biofuels, bridge to nowhere, carbon dioxide, carbon dioxide sequestration, Carbon Worshipers, citizenship, civilization, clean disruption, climate, climate change, climate disruption, complex systems, corporate litigation on damage from fossil fuel emissions, corporate supply chains, decentralized electric power generation, decentralized energy, demand-side solutions, destructive economic development, economics, efficiency, energy, energy reduction, energy utilities, engineering, environment, evidence, forecasting, fossil fuel divestment, fossil fuels, global warming, Hyper Anthropocene, investment in wind and solar energy, IPCC, James Hansen, Life Cycle Assessment, Mark Jacobson, methane, natural gas, nuclear power, nuclear weapons, pipelines, Sankey diagram, solar energy, solar power, Tea Party, transparency, wind energy, wind power, zero carbon | Leave a comment

Google’s DeepMind consistently beats Fan Hui, the European Go champion

This is pretty amazing news.

DeepMind’s program AlphaGo beat Fan Hui, the European Go champion, five times out of five in tournament conditions, the firm reveals in research published in Nature on 27 January. It also defeated its silicon-based rivals, winning 99.8% of games against the current best programs. The program has yet to play the Go equivalent of a world champion, but a match against South Korean professional Lee Sedol, considered by many to be the world’s strongest player, is scheduled for March. “We’re pretty confident,” says DeepMind co-founder Demis Hassabis.

The technique combines deep convolutional neural networks with reinforcement learning and Monte Carlo tree search.

Posted in artificial intelligence, deep neural networks, Go, machine learning, perceptrons | Leave a comment

Professor Marvin Minsky dies at 88: What a noble mind is here o’erthrown

Having been a prospective and then an actual graduate student in MIT’s Artificial Intelligence Laboratory during the years 1974-1976, I find it difficult to convey the draw and the incisiveness of Minsky’s mind. As an undergraduate in Physics with a very keen interest in computing and programming, I was drawn to Professor Minsky’s ideas in articles like his “Steps Toward Artificial Intelligence” (see also). I had attended a lecture by him at MIT, as a high school student, asking him afterwards (paraphrase),

Would your studies lead to a set of principles for describing intelligence? How would you define ‘intelligence’?

Minsky replied:

You don’t define something unless it is a microcomponent of a theory (not the object of the theory itself).

His recorded thoughts, analysis, and the work of his students and colleagues convinced me to choose Artificial Intelligence as a subject in which to pursue graduate study, the alternative being Astronomy and the Search for Extraterrestrial Intelligence (“SETI”). I applied, and became a graduate student member of MIT-AI in September 1974.

Professor Minsky was brilliant and amazingly child-like, something he insisted was a common if not essential aspect of anyone who retained a notion of being a genius. During seminars at MIT AI, where I learned the vicious and gentle patterns and mannerisms of scholarly debate, he was almost always in attendance, and almost always silent, leaving the discussion to people like Carl Hewitt and Gerry Sussman and others.

He had a singular vision of how intelligence could be embodied in machines, and, disappointed as he was in the paths and progress of this goal’s pursuit, continued to investigate and record. I have read his book, Perceptrons, written with Seymour Papert, possibly a dozen times, and learned a great deal from it. It is ironic, then, that today’s deep learning networks are descendants of perceptrons. Still, it is not at all clear to me that the technology of deep learning has successfully overcome most of the criticisms Minsky and Papert assigned to them. They also gave me a deep respect for original theoretical work, including derivations and theorems, an organic respect much deeper than I got from my studies in maths and Physics.

I also was, in part, inspired to convert to Judaism as I did later, because of Professor Minsky’s example, even if that proved to be just another of my extended explorations of religion.

There are many things about Marvin Minsky even his students and fans don’t know, such as that he was, for a time, a Fellow of Walt Disney Imagineering.

On a personal level, Minsky’s Society of Mind helped me through many tough emotional episodes, those requiring something like 15 years of psychological counselling. It is interesting that Professor Minsky foresaw a world where people could systematically replace their failing biological components with artificial ones, and eventually cheat death.

I will say I miss the idea of Professor Minsky being in the world. But, knowing, I think, his sentiments and his lessons, he would embrace nothing other than his death as being the proper and natural order of things. In that sense, while his memories and knowledge will evaporate, Marvin Minsky is never gone. And, as an inspiration for a physical materialist like myself, that’s sayin’ somethin’.

Update: 2016-01-26

Posted in artificial intelligence, machine learning, Marvin Minsky, neural nets, perceptrons, Seymour Papert | Leave a comment

Generating supports for classification rules in black box regression models

Inspired by the extensive and excellent work in approximate Bayesian computation (see also), especially that done by Professors Christian Robert and colleagues (see also), and Professor Simon Wood (see also), it occurred to me that the complaints regarding lack of interpretability of “black box regression models” for, say, the binary logistic regression problem, could be readily resolved using these techniques. Such complaints are offered by:

Essentially, if the classifier, \mathcal{C}, is trained to the investigator’s satisfaction, and is assumed to produce a binary outcome from a space of attributes, \mathcal{A}, an estimate of the support for \mathcal{C} can be had by simulating draws from \mathcal{A}, testing them with \mathcal{C}, and retaining a collection of draws for which the classification outcome is affirmative.

Efficient generation of draws from \mathcal{A} is the key question, but these can be done using many methods, including those described in the comprehensive textbook, Monte Carlo Statistical Methods by Robert and Casella (see also). But actually, in many cases, the generation can be simpler than that.

If independence of the attributes, \mathcal{A}, from one another is assumed, then a sample of each attribute’s range is available in the training data used to train \mathcal{C}. Empirical methods for estimating each attribute’s distribution function can be applied, and if the quantile function can be derived from these, then generators of values for each attribute are in hand: generate uniform deviates on the unit interval and transform them by these quantile functions. It is then possible to produce a very large number of such draws, subjecting each to a classification by \mathcal{C}. Those classified in the affirmative are retained. Assuming independence can never cause a miss of a portion of the support, for its hypervolume must necessarily be larger than the volume of any portion having contingencies or constraints. That is, dependency means that the space conditional upon another variable is smaller than the independent version.

Once the large collection of accepted attributes are in hand, these can be described by any of the modern means of multivariate presentation and analysis, and these descriptions interpreted as appropriate for the problem domain.
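A minimal sketch of this recipe follows. The classifier here is a toy stand-in (any trained black box exposing the same interface would do), and the training data are simulated, not from any real problem:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical training attributes (n samples x p attributes) and a stand-in
# black-box classifier; affirmative inside the unit disc, for illustration.
train = rng.normal(size=(500, 2))
def classifier(x):
    return (x[:, 0] ** 2 + x[:, 1] ** 2 < 1.0).astype(int)

def draw_support(train, classify, n_draws=100_000):
    """Sample each attribute independently via its empirical quantile
    function, classify each draw, and retain the affirmative ones."""
    n, p = train.shape
    u = rng.uniform(size=(n_draws, p))        # uniform deviates on (0, 1)
    draws = np.column_stack([
        np.quantile(train[:, j], u[:, j])     # empirical quantile transform
        for j in range(p)
    ])
    keep = classify(draws) == 1
    return draws[keep]

support = draw_support(train, classifier)
print(support.shape)   # retained draws describing the classifier's support
```

The retained `support` array is exactly the collection described above, ready for whatever multivariate presentation suits the problem domain.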

Posted in approximate Bayesian computation, Bayes, Bayesian, Bayesian inversion, generalized linear models, machine learning, numerical analysis, numerical software, probabilistic programming, rationality, reasonableness, state-space models, statistics, stochastic algorithms, stochastic search, stochastics, support of black boxes | Leave a comment

On friction and the duplicity

(Hat tip to Peter Sinclair at Climate Denial Crock of the Week.)

Has Senator Cruz called Dr Carl Mears (video) of Remote Sensing Systems, the producers and interpreters of the satellite temperature series Senator Cruz used for his Spencer-Christy-Curry carnival? No. Of course not.

The refutation.

And the practice is called cherry picking.

Posted in AMETSOC, anemic data, Berkeley Earth Surface Temperature project, BEST, climate, climate change, climate data, climate disruption, confirmation bias, corruption, denial, disingenuity, ecology, evidence, fear uncertainty and doubt, geophysics, global warming, greenhouse gases, hiatus, Hyper Anthropocene, ignorance, meteorology, model comparison, NCAR, NOAA, obfuscating data, oceanography, physics, rationality, reasonableness, statistics, time series | Leave a comment

“Causal feedbacks in climate change”

Today I was reviewing and re-reading the nonlinear time series technical literature I have, seeking ideas on how to go about using the statistical ensemble learning technique called “boosting” with them. (See the very nice book, R. E. Schapire, Y. Freund, Boosting: Foundations and Algorithms, 2012, The MIT Press.) There have been attempts, and I’m reviewing those as well. My progress on that, intended to be used for my work on Town of Sharon hydrological series and others, will be documented here some other day. During the review, I decided to go over the famous convergent cross mapping (“CCM”) ideas from George Sugihara and team (Ye, May, Hsieh, Fogarty, Munch, Clark, Isbell, Deyle, Cowles, Tilman, and colleagues Maher and Hernandez with their work on CauseMap). I have mentioned and written about this development in three places on this blog already,

And, along the way, I read another article, “Spatial ‘convergent cross mapping’ to detect causal relationships from short time-series”, by Clark, Ye, Isbell, Deyle, Cowles, Tilman, and Sugihara, which appeared late in December 2014 in the journal Ecology, and the striking paper in PNAS, by Ye, et al, “Equation-free mechanistic ecosystem forecasting using empirical dynamic modeling“. But today I found an article I missed, from earlier in 2015, pertaining to climate change, which has not, as far as I can tell, gotten much play. And it should.

That article appeared in March 2015 in Nature Climate Change, and is by Egbert van Nes, Marten Scheffer, Victor Brovkin, Timothy Lenton, Hao Ye, Ethan Deyle, and George Sugihara, and is titled “Causal feedbacks in climate change“.

One of the favorite lines heard in science denier circles about climate change regards an observation that, in the paleoclimate record, temperature rises appear to lead increases in CO2 concentration and, so, they conclude, CO2 increases cannot cause temperature rises. The paleoclimate record demonstrates a very strong association, and geophysicists then argue case by case in paleohistory what happened in each event, generally suggesting that the two series, temperature and CO2 concentration, are dynamically interrelated and mutually reinforcing, as a coupled system of differential equations. All that’s true. And, even if, in the past, temperature rise occurred first, that does not in any way mean that an abrupt increase in CO2 concentration now would not result in abrupt increases in temperature.

But the remarkable result from van Nes, et al (2015) is that, by applying their CCM method to the two series of temperature and CO2 concentration, they conclusively demonstrate that, yes, indeed, the increase in CO2 concentration is in fact causing the increase in temperature, even though there remains an enhancement of CO2 production by increases in temperature. That enhancement, although it definitely exists, proves to be much weaker than the forcing in the other direction. And, to top off the paper, van Nes, et al show “direct confirmation that internal Earth system mechanisms rather than orbital forcing have controlled climate dynamics over the Pleistocene cycles”.
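For the curious, the mechanics of cross mapping can be sketched minimally. The sketch below uses the standard toy of two coupled logistic maps, where x forces y and not the reverse, rather than the temperature and CO2 series of the paper; the parameter choices are illustrative only:

```python
import numpy as np

def shadow_manifold(x, E=3, tau=1):
    """Takens time-delay embedding: row i is (x_i, x_{i+tau}, ..., x_{i+(E-1)tau})."""
    n = len(x) - (E - 1) * tau
    return np.column_stack([x[j * tau : j * tau + n] for j in range(E)])

def ccm_skill(cause, effect, lib_size, E=3, tau=1):
    """Cross-map skill: reconstruct `cause` from the shadow manifold of
    `effect`; high skill indicates `cause` leaves its signature in `effect`."""
    M = shadow_manifold(effect, E, tau)[:lib_size]
    target = cause[(E - 1) * tau :][:lib_size]
    preds = np.empty(lib_size)
    for i in range(lib_size):
        d = np.linalg.norm(M - M[i], axis=1)
        d[i] = np.inf                        # exclude the point itself
        nn = np.argsort(d)[: E + 1]          # simplex: E+1 nearest neighbours
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
        preds[i] = np.sum(w * target[nn]) / np.sum(w)
    return np.corrcoef(preds, target)[0, 1]

# Toy coupled logistic maps: x forces y, y does not force x.
n = 1000
x = np.empty(n); y = np.empty(n)
x[0], y[0] = 0.4, 0.2
for t in range(n - 1):
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
    y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.3 * x[t])

# Convergence: skill mapping from M_y back to x should grow with library size
print(ccm_skill(x, y, 100), ccm_skill(x, y, 800))
```

The “convergent” in CCM is the behavior probed in the last line: if x genuinely forces y, the skill of reconstructing x from y’s shadow manifold improves as the library of points grows, which is the asymmetry van Nes, et al exploit.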

The figure below is a reproduction of Figure 3 from their paper. There is more detail in supplementary online information.
(Click on figure to see larger image. Use your browser Back button to return to article in blog.)

Another denier claim is now neatly relegated to the trash, without the need for roundabout explanation.

Update, 2016-01-22

Professor Robert May illustrates the key ideas in empirical dynamic modeling in the video below:

Posted in Anthropocene, boosting, Carbon Cycle, carbon dioxide, Carbon Worshipers, climate, climate change, climate data, climate disruption, complex systems, convergent cross-mapping, denial, differential equations, diffusion processes, dynamical systems, ecology, Egbert van Nes, empirical likelihood, ensembles, environment, Ethan Deyle, Floris Takens, forecasting, fossil fuels, geophysics, George Sugihara, global warming, greenhouse gases, Hao Ye, machine learning, Marten Scheffer, mathematics, maths, meteorology, physics, rationality, reasonableness, science, state-space models, Takens embedding theorem, time series, Timothy Lenton, Victor Brovkin | Leave a comment

Social implications of our silly experiment with Earth

Climate is changing.

Climate change is threatening to human elites. Climate solutions are “fetishizing the billionaire class”. And on the risks and damage from neoliberalism.

Exponential growth, such as the continued permitting of new homes in the suburbs, is lethal.

We’re already in an emergency.

Climate change. Climate disruption.

Welcome to the world we have created.


Posted in adaptation, Anthropocene, Bill Gates, bridge to nowhere, carbon dioxide, civilization, climate change, environment, ethics, exponential growth, global warming, Hyper Anthropocene, ignorance, James Hansen, rationality, reasonableness, zero carbon | Leave a comment

Hottest Year on Record

Reposting from Tamino’s blog.

And there still are intelligent people out there, including statistician colleagues, who don’t buy the facts of warming. Generally speaking, they have a look at a few time series and quickly become skeptical, failing to realize that obtaining the warming result is, as the BEST project discovered, a non-trivial business. But, if pursued, in the end, you get the result that Tamino, NASA, NCAR, the Met Office, Professor Hansen, and others obtain. BEST’s work, by the way, uses some great statistics, including applications of kriging, otherwise known as Best Linear Unbiased Estimation (“BLUE”) applied to spatial fields.
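For a flavor of what kriging does, here is a minimal simple-kriging sketch, assuming a squared-exponential covariance, a plugged-in mean, and hypothetical “station” observations; BEST’s actual machinery is, of course, far more elaborate:

```python
import numpy as np

rng = np.random.default_rng(11)

def sq_exp_cov(a, b, sigma2=1.0, ell=0.5):
    """Squared-exponential spatial covariance between site arrays a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return sigma2 * np.exp(-0.5 * d2 / ell ** 2)

def simple_krige(sites, values, targets, mean=0.0, nugget=1e-6):
    """Simple kriging: the best linear unbiased predictor of the field at
    `targets` given observations `values` at `sites`, with a known mean."""
    K = sq_exp_cov(sites, sites) + nugget * np.eye(len(sites))
    k_star = sq_exp_cov(sites, targets)
    weights = np.linalg.solve(K, k_star)      # solve K w = k_*
    return mean + weights.T @ (values - mean)

# Hypothetical station observations of a smooth anomaly field on [0,1]^2
sites = rng.uniform(0, 1, size=(30, 2))
values = np.sin(3 * sites[:, 0]) + np.cos(3 * sites[:, 1])
targets = np.array([[0.5, 0.5], [0.1, 0.9]])
print(simple_krige(sites, values, targets, mean=values.mean()))
```

The weights solve a linear system built from the spatial covariance, which is what makes the predictor both linear and, given the covariance model, unbiased with minimum variance; interpolating scattered station records onto a regular field is exactly this kind of exercise.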

I guess the result will need to be confirmed and reconfirmed in different ways, using different techniques, as is typical in science, and maybe, just maybe, these lingering doubters will be convinced.

As for me, I never needed an observational record to accept the reality of climate disruption: It’s all just basic Physics, people.

Open Mind

Back when Richard Muller announced the formation of the Berkeley Earth Surface Temperature project, those who deny the danger from global warming were thrilled. They thought the Berkeley project would prove once and for all just how Earth’s temperature had really changed. More to the point, Muller was skeptical about Earth’s reported temperature history, so they expected it to show that all those other guys — NASA, NOAA, HadCRU — got it wrong. Anthony Watts welcomed it, praised it, and announced that he would accept the results, whatever they were.

When the Berkeley project announced their results, deniers changed their tune. Instead of “showing the love” to Richard Muller and the gang, they turned on them like a pack of wolves. That’s because the Berkeley project showed that all those other guys — NASA, NOAA, HadCRU — got it right.

View original post 553 more words

Posted in AMETSOC, Anthropocene, Berkeley, Berkeley Earth Surface Temperature project, BEST, BLUE, carbon dioxide, climate, climate change, climate data, climate disruption, climate education, climate zombies, environment, evidence, geophysics, global warming, Hyper Anthropocene, James Hansen, kriging, meteorology, NCAR, NOAA, physics, Principles of Planetary Climate, rationality, reasonableness, Richard Muller, Robert Rohde, science, science education, Tamino, the right to know, time series, University of California Berkeley | Leave a comment

“Finding Dory”

From the scientific journal Nature, a preview:

“Finding Dory”, movie

Director: Andrew Stanton

Opens 17 June 2016

Digital-animation giant Pixar releases the much-anticipated follow-up to its 2003 “Finding Nemo”, a film so successful that clownfish are now often referred to as ‘nemos’. The original had marine biologists in raptures over its faithfulness to the science. Pixar has a mixed record when it comes to sequels, but if “Finding Dory”, featuring Nemo’s Paracanthurus friend [pictured in original article], can combine the remarkable accuracy with the superb storytelling that the company is capable of, it could join Pixar’s list of Oscar-botherers. Rumours suggest that the film was rewritten after the success of “Blackfish”, the 2013 documentary by director Gabriela Cowperthwaite that criticized the controversial keeping of killer whales in captivity.

Posted in biology, compassion, Disney, ecology, Epcot, marine biology, Pixar, population biology, science, science education, Scripps Institution of Oceanography, Spaceship Earth, Walt Disney Company, WHOI, Woods Hole Oceanographic Institution | Leave a comment

After the Decade of Dithering, the Deadly Twenties

In a recent post, after reviewing the extreme Arctic warming event of late 2015, Professor John Baez quotes an earlier interview with Dr Gregory Benford, who is arguing for a geoengineering effort to restore the frozen Arctic. I do not know whether that is a good idea or not. Surely, something like that appears to be needed, both because a disrupted Arctic appears to be affecting weather systems worldwide, and because there is increasing danger of a shutdown of the AMOC, with evidence in hand that a slowdown is underway.

Dr Benford has some choice phrases, which I like very much. For example, he refers to our present time as the Decade of Dithering, and the upcoming decade, the 2020s, as the Deadly Twenties. In fact, he expects that once the Deadly Twenties get underway, people worldwide will be screaming for somebody to do something, and thinks that this will usher in large scale geoengineering experiments.

Of course, these are, in themselves, dangerous, since it is very difficult to do a full and proper risk assessment. Also, professionally speaking, it is not clear to me how you construct an experiment to detect whether or not a particular intervention is working. It seems that it might need to run a long time. It’s also not clear how to detect unintended consequences. For example, when a new drug is tried in a cancer intervention, there are stopping rules for halting the trial if it looks like the intervention is causing fatalities or serious side effects. I don’t know what the comparable stopping rules would be in a geoengineering experiment.

I also think this would need to be set up formally and properly, with a full design, having success and failure criteria defined before the start, kept rigid, and not updated along the way. The design, process, and conduct of said experiment should be thoroughly transparent.

It is extremely unfortunate that it has come to this. I daresay I doubt countries, governments, and the public have any capability to properly judge whether or not such a thing should be attempted, given that, it appears, for the most part they don’t get the basics of climate change physics.

Some of the other proposals Dr Benford suggests, such as enriching phytoplankton growth via iron fertilization, have been tried on a limited scale, and don’t seem to work. Part of the problem is that we don’t understand the life cycle of these plankton. The “cartoon version” says that when they expire, their Carbon-laden bodies sink to the ocean floor, but the facts seem more complicated than that. In particular, it seems they are consumed on the way down, and the Carbon returned to the upper reaches of the ocean, where in principle it is formed into CO2 again, with potential for exchange with the atmosphere.

These are not the kinds of things it’s advisable to proceed with a “Let’s try it and see what happens” attitude.

Simon Nicholson from the Forum for Climate Engineering Assessment has some additional thoughts:

Posted in adaptation, AMOC, Arctic, chance, changepoint detection, climate, climate change, climate disruption, critical slowing down, ecology, engineering, geoengineering, global warming, greenhouse gases, Hyper Anthropocene, ignorance, James Hansen, MIchael Mann, mitigation, oceanography, physics, politics, rationality, reasonableness, regime shifts, science, science education, state-space models, statistics, the right to know, thermohaline circulation, time series | Leave a comment

Hunt and Anderson discuss climate change

50% of the emissions come from the richest 1% of people on the planet.

Actually, I disagree with them a bit … I suspect Western societies are much more fragile than Hunt & Anderson and most people think, in terms of supply chains, and cross-sectional exposure. But, anyway, we’ll see soon enough.

Remember exponential growth:

Posted in adaptation, Anthropocene, carbon dioxide, Carbon Worshipers, civilization, climate disruption, COP21, demand-side solutions, denial, destructive economic development, ecology, environment, exponential growth, extended supply chains, fossil fuel divestment, fossil fuels, geoengineering, global warming, greenhouse gases, Hyper Anthropocene, rationality, reasonableness, Sankey diagram, zero carbon | Leave a comment

Not too shabby: “What’s warming the world” (Bloomberg Business), and “The siege of Miami” (The New Yorker)

What’s warming the world

Infographic allowing the visitor to overlay time series of candidate causes for global warming, and thereby permitting them to draw their own conclusions.

And Elizabeth Kolbert’s piece in The New Yorker, brings home the contradictions and anguish. An excerpt:

We got back into the car. Driving with one hand, Wanless shot pictures out the window with the other. “Look at that,” he said. “Oh, my gosh!” We’d come to a neighborhood of multimillion-dollar homes where the water was creeping under the security gates and up the driveways. Porsches and Mercedeses sat flooded up to their chassis.

“This is today, you know,” Wanless said. “This isn’t with two feet of sea-level rise.” He wanted to get better photos, and pulled over onto another side street. He handed me the camera so that I could take a picture of him standing in the middle of the submerged road. Wanless stretched out his arms, like a magician who’d just conjured a rabbit. Some workmen came bouncing along in the back of a pickup. Every few feet, they stuck a depth gauge into the water. A truck from the Miami Beach Public Works Department pulled up. The driver asked if we had called City Hall. Apparently, one of the residents of the street had mistaken the high tide for a water-main break. As we were chatting with him, an elderly woman leaning on a walker rounded the corner. She looked at the lake the street had become and wailed, “What am I supposed to do?” The men in the pickup truck agreed to take her home. They folded up her walker and hoisted her into the cab.

(Emphasis added.)

Nicely done.

Posted in Anthropocene, business, climate change, climate data, climate zombies, complex systems, critical slowing down, denial, disingenuity, economics, environment, evidence, fossil fuels, global warming, greenhouse gases, Hyper Anthropocene, mitigation, model comparison, time series | Leave a comment

About that “hide the decline” thing …

(Hat tip to Climate Denial Crock of the Week)
It’s amazing what you can do when you take words out of context. Watch The Daily Show some time. But that’s entertainment, not governance.

Hide the decline?

The email doesn’t say anything of the sort.

But people hear it on the Web, and in media, and are idiots enough to believe it.

Maybe people deserve some of the effects of climate disruption, especially in the United States. They’ve had 50 years to do something about it, and there will be a lot of people elsewhere in the world who will suffer because of their inaction and selfishness.

How close are we to dangerous planetary warming?

(For a larger picture, click on image. To return to blog, use your browser Back button.)

Posted in adaptation, Anthropocene, climate, climate data, climate disruption, denial, environment, ethics, evidence, fear uncertainty and doubt, geophysics, global warming, greenhouse gases, Hyper Anthropocene, ignorance, meteorology, oceanography, physics, politics, rationality, reasonableness, science, temporal myopia | Leave a comment

“Richard Lindzen: limited understanding?” (Tamino)

Open Mind

A recent WUWT post by Richard Lindzen is a rather lame attempt to defend an equally lame opinion piece by Freeman Dyson in the Boston Globe. Evidently, Lindzen felt the need to defend Dyson’s piece because it was rather roundly refuted in a response by 8 members of the faculty of MIT.

View original post 381 more words

Posted in Anthropocene, climate, climate change, climate disruption, denial, floods, geophysics, global warming, greenhouse gases, Hyper Anthropocene, ignorance, Kerry Emanuel, meteorology, NCAR, Neil deGrasse Tyson, NOAA, physics, Principles of Planetary Climate, rationality, reasonableness, Spaceship Earth, sustainability, Tamino, zero carbon | Leave a comment

First Light

Updated: 1828 EST, 24th December 2015

Our solar PV array is up, operating, and generating. It consists of 29 SunPower X-Series X21-345 panels (specs below), each generating 345 W at peak, with optimizers. The total generation capacity is 10 kW. The array is optimized for our high-tree situation. Its controller reports results from each panel. The optimizers mean a panel can be used even if but a fraction is illuminated, something which aids in our wooded setting.

(Click on image for a larger picture. Use browser Back button to return to blog.)

(Click on image for a larger picture. Use browser Back button to return to blog.)

It was installed by RevoluSun of Burlington, MA, who navigated the permitting and inspection process of the Town of Westwood, and did an awesome and very neat job. The electrical cables are neatly tied, and the reporting signal cable to our Internet router was finely done. I highly recommend them.
(Click on image for a larger picture. Use browser Back button to return to blog.)
(Click on image for a larger picture. Use browser Back button to return to blog.)
That’s Xavier, an electrician from RevoluSun, way up atop the roof in the background. And, of course, the proud environmentalists, Jan and Claire, in the foreground. Photo was taken by our neighbor, Nina.

The panels are unusually situated, being on 3 faces of the roof: 19 on the main face, 5 each on two faces, oriented roughly southeast and southwest. The layout is intended to make maximum use of daylight, get around tree shading, and is slightly overprovisioned to do so.
(Click on image for a larger picture. Use browser Back button to return to blog.)

(Click on image for a larger picture. Use browser Back button to return to blog.)

I will shut it down, shortly, to complete its initial test. Electrical inspection on the 28th of December remains. Connection to the grid and a new meter by Eversource, our utility, also remain.
(Click on image for a larger picture. Use browser Back button to return to blog.)

Posted in adaptation, Anthropocene, citizenship, clean disruption, climate change, climate disruption, conservation, consumption, decentralized electric power generation, decentralized energy, demand-side solutions, destructive economic development, ecology, economics, efficiency, energy, energy reduction, energy utilities, environment, ethics, fossil fuel divestment, global warming, Hyper Anthropocene, investment in wind and solar energy, La Nina, microgrids, public utility commissions, PUCs, solar energy, solar power, Tony Seba, zero carbon | 2 Comments

Paris’ COP21: Great cheerleading from the diplomats, but … +ENSO is here

This target is, however, extremely demanding. Climate researchers have explored only a few scenarios that limit warming to 1.5 °C. They show that global emissions of greenhouse gases must be between 70% and 95% lower in 2050 than they were in 2010. In the second half of the century, net emissions must become negative, meaning that overall, carbon is removed from the atmosphere. It is still unclear whether this is feasible even in theory: we don’t yet know enough about the limits of negative-emission technologies with respect to, for example, the use of land and water (P. Smith et al. Nature Clim. Change; 2015).

The agreement would have been more credible if the ambitious temperature goal were matched by an equally ambitious plan for how to get there by reducing emissions. The stated aim is that global emissions should reach a peak “as soon as possible”, then decline rapidly, before achieving a “balance between anthropogenic emissions by sources and removals by sinks of greenhouse gases in the second half of this century”.


The Paris agreement is informed by science. There would be no agreement, indeed no process at all, if all countries did not recognize the importance of scientific findings on the causes and consequences of climate change. But the agreement is unlikely to be fair and efficient. The poorest people, who contribute the least to the problem, will still face the worst consequences of climatic changes, with insufficient funding from richer countries to pay for climate adaptation or to deal with loss and damage. The emerging complex of different national, local and sectoral policies and initiatives is a far cry from the uniform price on emissions for which economists have called.

Current national pledges to cut emissions are too weak, and the ramping up too vague, for us to conclude decisively that the Paris agreement will help us to avoid dangerous interference with the climate system. The agreement probably ensures that we can avoid 4 °C of warming, but it is a long way from what we need to limit warming to less than 2 °C. For the Paris agreement to deliver, the pressure is on for countries to ramp up their ambition.

This is excerpted from an editorial in the journal Nature, by Steffen Kallbekken, research director and economist at CICERO, the Center for International Climate and Environmental Research in Oslo, Norway.

Postscript on the Extreme Warmth of November-December 2015

I’m adding this postscript here because popular, meteorological answers to questions regarding recent extreme warmth are somewhat missing the point, in my opinion, and deserve a response. Yet, they don’t deserve a post unto themselves.

El Niño, or the positive phase of ENSO, is a phenomenon unique to the vast Pacific Ocean, and it exists primarily because nowhere else on Earth is there such a large stretch of water on the Equator. Essentially, the Coriolis force at this locale is zero, so oceanic motions and currents, and their associated coupling with the atmosphere, move east and west. About 90% of the excess heat from atmospheric greenhouse gas forcing goes into the ocean.
La Niña, the flip side or negative phase of ENSO, extracts heat from the atmosphere, storing it at depth.
In La Niña years, less energy remains in the atmosphere because it is being actively pumped into the ocean. In essence, this energy is getting “banked”. When there is an El Niño, not only does the energy that would have been banked remain in the atmosphere, but some of the previously banked energy is released. Accordingly, things are warmer than excess greenhouse radiative forcing alone would make them, because a portion of that stored energy is now being returned.

Accordingly, questions like “How much of this warming is due to El Niño and how much to climate change?” are poorly posed, in my opinion, and attempts to attribute a percentage to each effect are, I think, misleading. It’s about energy and its conservation. La Niña may give us a pass on excess warming for a few years, but the energy builds, and, on the flip side, El Niño returns it. Sure, there’s always been an ENSO, with effects somewhat like the present ones. But our collective contribution to the atmospheric greenhouse has stored much more energy in the oceans, and we are now seeing it.
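The “banking” argument above can be sketched as a toy two-box bookkeeping exercise. This is purely illustrative, not a climate model: all the numbers (forcing of one arbitrary unit per year, a 5-year ENSO-like cycle, a 10%/90% atmosphere/ocean split, an exchange coefficient of 0.5) are assumptions chosen for the sketch. The point it shows is that the total energy follows the forcing exactly, while ENSO only shuffles the partition between boxes:

```python
import numpy as np

# Toy two-box "bank account" sketch: steady greenhouse forcing adds energy
# every year; an ENSO-like cycle moves energy between atmosphere and ocean.
# La Niña years (enso < 0) bank extra heat in the ocean box; El Niño years
# (enso > 0) withdraw it, so atmospheric warmth overshoots the forced trend
# even though total energy just tracks the forcing.
years = np.arange(60)
forcing = 1.0                          # arbitrary energy units added per year
enso = np.sin(2 * np.pi * years / 5)   # > 0 ~ El Niño (release), < 0 ~ La Niña (bank)

atmos = np.zeros(len(years))
ocean = np.zeros(len(years))
for t in range(1, len(years)):
    exchange = 0.5 * enso[t]           # heat moved ocean -> atmosphere this year
    atmos[t] = atmos[t - 1] + 0.1 * forcing + exchange
    ocean[t] = ocean[t - 1] + 0.9 * forcing - exchange

total = atmos + ocean
# Total energy grows steadily with the forcing; only its partition oscillates.
print(np.allclose(np.diff(total), forcing))  # True
```

This is why percentage attributions between “El Niño” and “climate change” mislead: the oscillating term nets to zero over a cycle, while the forced term accumulates.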

It’s going to get worse, unless we really act, as Dr Steffen Kallbekken indicates.

Posted in adaptation, Anthropocene, carbon dioxide, carbon dioxide capture, climate, climate change, climate disruption, COP21, Eaarth, ecology, economics, El Nino, ENSO, environment, forecasting, fossil fuels, geophysics, global warming, greenhouse gases, Hyper Anthropocene, La Nina, meteorology, NCAR, NOAA, physics, planning, rationality, reasonableness, science, science education, sustainability, Svante Arrhenius, zero carbon | Leave a comment

All I do is complain, complain …

I was reviewing a presentation given as part of a short course in the machine learning genre today, and happened across the following two bullets, under the heading “Strictly Stationary Processes”:

  • Predicting a time series is possible if and only if the dependence between values existing in the past is preserved also in the future.
  • In other terms, though measures change, the stochastic rule underlying their realization does not. This aspect is formalized by the notion of stationarity.

I began to grumble. Even though the rest of the talk is quite good and useful, this kind of progression leaves a bad impression with students, suggesting that, somehow, non-stationary time series, let alone chaotic series, are impossible or at least very difficult to forecast. Some students then go on to say nonsensical things about natural time series, or, at least, can be misled by demagogues who know these distinctions are not only abstract but subtle, and not widely understood.

This matter has been addressed most comprehensively and recently by Lenny Smith of the London School of Economics, both in an encyclopedia entry on “Predictability and Chaos”, and in an excellent little book I’m very fond of, Chaos: A Very Short Introduction.

Facts are, we’ve collectively come a long way since Lorenz, whether by using the methods sketched by Slingo and Palmer, or by Sugihara and Deyle based upon work by Takens, or by Ye, Beamish, Glaser, Grant, Hsieh, Richards, and Schnute. (Nice summary here.) Even the famous Lorenz butterfly is now the subject of ordinary exposition.
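To make the point concrete that deterministic chaos is forecastable at short horizons, here is a minimal sketch in the spirit of the Sugihara-Deyle line of work, though not their full simplex method: one-step-ahead prediction of a chaotic logistic-map series using nearest-neighbour analogues in a Takens-style delay embedding. The embedding dimension, lag, library size, and weighting scheme are all illustrative choices, not prescriptions:

```python
import numpy as np

# Generate a chaotic series: the logistic map at r = 4.
n = 1000
x = np.empty(n)
x[0] = 0.3
for i in range(n - 1):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

# Takens-style delay embedding, dimension E = 2, lag 1: rows (x[t], x[t+1]).
E = 2
lib = np.column_stack([x[i:n - E + i] for i in range(E)])
targets = x[E:]                       # the value that follows each embedded point

# Forecast the last points out-of-sample from a library of the first 900.
split = 900
preds = []
for t in range(split, len(targets)):
    d = np.linalg.norm(lib[:split] - lib[t], axis=1)   # distances to library points
    k = np.argsort(d)[:E + 1]                          # E + 1 nearest analogues
    w = np.exp(-d[k] / max(d[k][0], 1e-12))            # distance-based weights
    preds.append(np.sum(w * targets[k]) / np.sum(w))

rho = np.corrcoef(preds, targets[split:])[0, 1]
print(round(rho, 3))   # one-step forecasts track the chaotic series closely
```

The correlation between forecast and truth comes out very near 1: the series is maximally chaotic, hence unpredictable at long horizons, yet entirely forecastable one step ahead. That is exactly the distinction the slide’s “if and only if” phrasing blurs.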

Stochastic search and optimization is a major industry these days. I make a good chunk of my professional living from it.

So I think presentations ought to be more careful in what they say.

Then again, maybe I’m just a grumbly old codger.

Posted in bifurcations, chaos, citizen science, convergent cross-mapping, dynamic linear models, dynamical systems, engineering, Floris Takens, generalized linear models, geophysics, George Sughihara, ignorance, Lenny Smith, Lorenz, mathematics, maths, meteorology, prediction, probability, rationality, reasonableness, statistics, stochastic algorithms, stochastic search, stochastics, Takens embedding theorem, the right to know, time series | 1 Comment

Climate Conclusions: The American Petroleum Institute (1980)

The following are excerpted from a memorandum quoted by the Inside Climate News team, documenting the minutes of a 29th February 1980 meeting of a task force on climate change at the American Petroleum Institute.

Hat tip to Climate Denial Crock of the Week. Emphasis added by me. I have also taken most of the text and reduced it to mixed case rather than the all upper case in the original document.


  • Global averaged 2.5 °C rise expected by 2038 at a 3% p.a. growth rate of atmospheric CO₂ concentration
  • Large error in this estimate — 1 in 10 chance of this change by 2005
  • No regional climate change estimates yet possible
  • Likely impacts:
    • 1 °C rise (2005): barely noticeable
    • 2.5 °C rise (2038): major economic consequences, strong regional dependence
    • 5 °C rise (2067): globally catastrophic effects

Note the proper interpretation of uncertainty when they quote “1 in 10 chance of this change by 2005”. The task force clearly understood that one wants to consider the worst possible impacts, not the least. Elsewhere in the document, the task force reports that:

  • CO₂ “doubling” date is 2038 at a 3% p.a. growth of atmospheric release rate
  • Error in this estimate is small compared with other sources of error

They also note:

  • Strong empirical evidence that rise caused by anthropogenic release of CO₂, mainly from fossil fuel burning

Bad, bad, bad.
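The memo’s 2038 “doubling” date holds up to a back-of-envelope check. The inputs below are my assumptions, not figures from the memo: a preindustrial level of 280 ppm, roughly 338 ppm in 1980, and an observed annual increment of about 1.5 ppm/yr in 1980, compounding at the memo’s 3% p.a.:

```python
# Back-of-envelope check of the API memo's "CO2 doubling by 2038" claim.
# Assumed inputs (not from the memo): preindustrial 280 ppm, 1980 level
# 338 ppm, ~1.5 ppm/yr increment in 1980 growing 3% per annum.
preindustrial = 280.0    # ppm
c = 338.0                # ppm, approximate 1980 concentration
increment = 1.5          # ppm added during 1980
growth = 1.03            # 3% p.a. growth in the annual increment

target = 2.0 * preindustrial   # "doubling" relative to preindustrial
year = 1980
while c < target:
    c += increment
    increment *= growth
    year += 1

print(year)   # → 2038
```

That a one-page compounding argument reproduces the task force’s date suggests how straightforward the core calculation already was in 1980.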

Posted in adaptation, American Petroleum Institute, AMETSOC, Anthropocene, bridge to nowhere, Carbon Cycle, carbon dioxide, Carbon Worshipers, Chevron, climate, climate change, climate data, climate disruption, climate models, corporate litigation on damage from fossil fuel emissions, denial, ecology, economics, environment, ethics, evidence, Exxon, fear uncertainty and doubt, forecasting, fossil fuel divestment, fossil fuels, geophysics, global warming, greenhouse gases, Gulf Oil, Hyper Anthropocene, meteorology, natural gas, open data, physics, pipelines, Principles of Planetary Climate, rationality, reasonableness, science, selfishness, Standard Oil of California, sustainability, Texaco, the right to know, the value of financial assets | Leave a comment