Why Science Matters

I’ve seen a lot of praise for science and technology lately and, in my opinion, it deserves even more credit than it is already getting. But it is concerning that I’m seeing people misunderstand the limitations of science and technology the same way somebody unfamiliar with checking accounts might misunderstand the limitations of a checkbook or debit card.

A debit card is not a source of money. A debit card is a method by which you can convert existing resources (money) into various other resources you need or want to live your life.

Technology is not a source of energy. Technology is a method by which we can transform finite natural resources into various other resources we need or want to live our lives.

In my opinion, the main reason #WhyScienceMatters today is that it is warning us that some of our current behaviors are unsustainable and are literally killing the living planet.

I’ve noticed that each side of the political spectrum tends to deal with this information differently.

Those who would be considered more conservative accept some science, but conclusions that go against their own deeply held beliefs get labeled as an evil conspiracy executed by the Commies or Globalists.

Those who consider themselves more liberal accept a bit more of the science than the other side but still end up dismissing the important parts as evil conspiracy if the conclusions contradict their own deeply held beliefs.

Science has conclusively proven three things about modern civilization, given its current size, complexity, and the comfortable way of life it provides (referred to from here on simply as Modern Civilization).

1) Modern Civilization’s reliance upon fossil fuels is destabilizing the climate which will eventually lead to planet-wide catastrophe.

2) Modern Civilization would not and cannot exist without the vast amounts of surplus energy provided by fossil fuels. It requires that energy to support its size and complexity.

3) Modern Civilization is temporary (finite) because its fuel source (fossil fuels) is also temporary (finite).

What this means is that Modern Civilization is unsustainable and will be diminishing in size and complexity as quickly as its fuel sources deplete.

This means that places that didn’t have electricity or drinkable running water 50 years ago probably won’t have these things 50 years from now.

This means that places that didn’t have internet 50 years ago probably won’t have internet 50 years from now.

This means that medical advances and technologies we didn’t have available to us 50 years ago will tend to be unavailable to us 50 years from now.

These things might be hard to believe sitting here close to the peak, but the deterioration process has already begun.

http://thearchdruidreport.blogspot.com/2015/05/the-whisper-of-shutoff-valve.html

The only alternative is continued and ever increasing energy consumption which is impossible because fossil fuels are finite and technology merely transforms the energy we have. It isn’t an energy source.

Now I predict I’m going to be bombarded with scientific breakthroughs brought up to contradict conclusions 2) and 3). These I warmly welcome, as I’ve been looking feverishly for any and all valuable/valid responses to this predicament for some time. A name already exists for technologies that have proven valuable and valid. The term is “Appropriate Technology.”

So I encourage you to take the time to evaluate the technology you’re praising to see if it actually is a valuable and valid response to these hard scientific facts.

For instance, there is a brilliant bit of technology that literally turns sunlight and freely available CO2 into fuel that is easily made compatible with most engines. This sounds miraculous but when one looks at the details it loses a bit of its shine. If an article fails to mention the efficiency of the technology being praised, it is probably not worth the time.

But I had some time, so I dug into it. It turns out that if we can continue the development for another decade or two* and increase its efficiency by a factor of 10, we might end up with a solar panel that produces a barrel of useful fuel over the course of a year. That is almost useless in a world that consumes 500,000 barrels every 10 minutes.
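The mismatch is easy to check with the essay’s own figures (one barrel per panel per year, 500,000 barrels consumed every 10 minutes — both numbers come from the paragraph above, not from an independent source):

```python
# Back-of-the-envelope check using the essay's own figures.
barrels_per_panel_per_year = 1       # optimistic future output of one panel
barrels_per_10_minutes = 500_000     # stated world consumption rate

# World consumption over a full year at that rate.
intervals_per_year = 6 * 24 * 365    # 10-minute intervals in a year
barrels_per_year = barrels_per_10_minutes * intervals_per_year

# Panels needed just to match current consumption.
panels_needed = barrels_per_year / barrels_per_panel_per_year
print(f"{barrels_per_year:,} barrels/year -> {panels_needed:,.0f} panels")
```

Roughly 26 billion such panels just to stand still, which is the point being made here.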

Given the conclusions above, that technology is obviously neither a valuable nor valid response to the predicament we face. A handful of acorns or black locust seeds planted in living soil would provide more of a benefit to us than that technology ever could.

Sadly, most of the large-scale solutions we call “renewable” are also considered outside the realm of “Appropriate Technology,” for reasons similar to those that knocked the “Artificial Leaf” off its pedestal dozens of words ago.

Many in the millennial generation blame the boomers for trying to maintain their own wealth and lifestyles at the expense of future generations. There may or may not be some truth to this belief.

But if our generation continues to develop inappropriate technologies in a vain attempt to continue an unsustainable modern lifestyle there is NO doubt we will be doing a far worse disservice to our own grandchildren.

So if we think science matters the way we claim then we need to accept all of its conclusions with dignity (or at least investigate these technological claims with the skepticism they deserve).

*Science has also shown that its own progress/advancement is highly dependent upon economic growth, which is completely dependent upon increasing fossil fuel consumption. The globalized economy itself owes its existence to fossil fuels. Thus technology that requires rare minerals that exist only in a cave in China is, by default, an inappropriate response.

Thermodynamic Consequences (Why the children of Boomers are going Bust)

I found this on Facebook…

There’s gotta be something more to this millennial “I can’t adult” thing than my generation simply being bad at simple tasks. An entire generation can’t unanimously be lazy, unmotivated, blase, apolitical pieces of internet trash. An entire generation can’t not know how to drive, let alone haggle affordability at a car dealership. An entire generation can’t overwhelmingly lack the know-how to find a viable apartment and continue living in it for more than 6 months. An entire fucking generation can’t be going to school more than ever and facing lower wages than ever at the same time and have it be their own fucking fault. My generation can’t possibly be the ones fucking things up for ourselves, since my generation doesn’t have political or economic control over jack shit.

It’s almost like the generation that raised us didn’t prepare us at all for the world they created. It’s almost like there wasn’t a chance in hell we could have been prepared for this in the first place. It’s almost like it’s being blamed on us before we could even have the chance to fuck up. It’s almost like it’s not our fault we’re failing.

The author of that post is right, for the most part.

The situation that the millennial generation finds itself in is one that could have been avoided. The boomers (and others) were told repeatedly that the current trajectory of the US economy was unsustainable. They continued anyway and then misinformed their children about what they could realistically expect from the future while, at the same time, preventing their children from learning from their own mistakes.

Millennials are making and learning from the same mistakes we all made. The difference is that they’re being forced to make those mistakes as adults in an economic environment where even minor failure can leave you with nothing.

It gets worse from here because there is absolutely no hope of economic recovery. The American Dream is fossil fueled and that fuel is frighteningly finite.

The idea behind this nightmare situation is pretty simple and deep down we all want to believe it.

Everybody that gets educated and works hard can become an economic success because…economic growth.

That statement is false and evokes the archetype of the perpetual motion machine. It is physically impossible.

But it does feel right when the economy actually is growing which is why so many people came to believe it despite all of the evidence against the idea.

If the economy could continue to grow there would be room for the horde of college graduates to have and hold jobs that would allow them to have the life their boomer parents dreamed of.

But it can’t because economic growth requires growth in the amount of fossil fuel consumed and as I stated earlier, fossil fuels are finite.

So what can millennials expect?

Unfortunately… a New Dark Age

Reblog: Darwin’s Casino

Originally posted: WEDNESDAY, JULY 08, 2015

http://thearchdruidreport.blogspot.com/2015/07/darwins-casino.html

Our age has no shortage of curious features, but for me, at least, one of the oddest is the way that so many people these days don’t seem to be able to think through the consequences of their own beliefs. Pick an ideology, any ideology, straight across the spectrum from the most devoutly religious to the most stridently secular, and you can count on finding a bumper crop of people who claim to hold that set of beliefs, and recite them with all the uncomprehending enthusiasm of a well-trained mynah bird, but haven’t noticed that those beliefs contradict other beliefs they claim to hold with equal devotion.

I’m not talking here about ordinary hypocrisy. The hypocrites we have with us always; our species being what it is, plenty of people have always seen the advantages of saying one thing and doing another. No, what I have in mind is saying one thing and saying another, without ever noticing that if one of those statements is true, the other by definition has to be false. My readers may recall the way that cowboy-hatted heavies in old Westerns used to say to each other, “This town ain’t big enough for the two of us;” there are plenty of ideas and beliefs that are like that, but too many modern minds resemble nothing so much as an OK Corral where the gunfight never happens.

An example that I’ve satirized in an earlier post here is the bizarre way that so many people on the rightward end of the US political landscape these days claim to be, at one and the same time, devout Christians and fervid adherents of Ayn Rand’s violently atheist and anti-Christian ideology.  The difficulty here, of course, is that Jesus tells his followers to humble themselves before God and help the poor, while Rand told hers to hate God, wallow in fantasies of their own superiority, and kick the poor into the nearest available gutter. There’s quite precisely no common ground between the two belief systems, and yet self-proclaimed Christians who spout Rand’s turgid drivel at every opportunity make up a significant fraction of the Republican Party just now.

Still, it’s only fair to point out that this sort of weird disconnect is far from unique to religious people, or for that matter to Republicans. One of the places it crops up most often nowadays is the remarkable unwillingness of people who say they accept Darwin’s theory of evolution to think through what that theory implies about the limits of human intelligence.

If Darwin’s right, as I’ve had occasion to point out here several times already, human intelligence isn’t the world-shaking superpower our collective egotism likes to suppose. It’s simply a somewhat more sophisticated version of the sort of mental activity found in many other animals. The thing that supposedly sets it apart from all other forms of mentation, the use of abstract language, isn’t all that unique; several species of cetaceans and an assortment of the brainier birds communicate with their kin using vocalizations that show all the signs of being languages in the full sense of the word—that is, structured patterns of abstract vocal signs that take their meaning from convention rather than instinct.

What differentiates human beings from bottlenosed porpoises, African gray parrots, and other talking species is the mere fact that in our case, language and abstract thinking happened to evolve in a species that also had the sort of grasping limbs, fine motor control, and instinctive drive to pick things up and fiddle with them, that primates have and most other animals don’t.  There’s no reason why sentience should be associated with the sort of neurological bias that leads to manipulating the environment, and thence to technology; as far as the evidence goes, we just happen to be the one species in Darwin’s evolutionary casino that got dealt both those cards. For all we know, bottlenosed porpoises have a rich philosophical, scientific, and literary culture dating back twenty million years; they don’t have hands, though, so they don’t have technology. All things considered, this may be an advantage, since it means they won’t have had to face the kind of self-induced disasters our species is so busy preparing for itself due to the inveterate primate tendency to, ahem, monkey around with things.

I’ve long suspected that one of the reasons why human beings haven’t yet figured out how to carry on a conversation with bottlenosed porpoises, African gray parrots, et al. in their own language is quite simply that we’re terrified of what they might say to us—not least because it’s entirely possible that they’d be right. Another reason for the lack of communication, though, leads straight back to the limits of human intelligence. If our minds have emerged out of the ordinary processes of evolution, what we’ve got between our ears is simply an unusually complex variation on the standard social primate brain, adapted over millions of years to the mental tasks that are important to social primates—that is, staying fed, attracting mates, competing for status, and staying out of the jaws of hungry leopards.

Notice that “discovering the objective truth about the nature of the universe” isn’t part of this list, and if Darwin’s theory of evolution is correct—as I believe it to be—there’s no conceivable way it could be. The mental activities of social primates, and all other living things, have to take the rest of the world into account in certain limited ways; our perceptions of food, mates, rivals, and leopards, for example, have to correspond to the equivalent factors in the environment; but it’s actually an advantage to any organism to screen out anything that doesn’t relate to immediate benefits or threats, so that adequate attention can be paid to the things that matter. We perceive colors, which most mammals don’t, because primates need to be able to judge the ripeness of fruit from a distance; we don’t perceive the polarization of light, as bees do, because primates don’t need to navigate by the angle of the sun.

What’s more, the basic mental categories we use to make sense of the tiny fraction of our surroundings that we perceive are just as much a product of our primate ancestry as the senses we have and don’t have. That includes the basic structures of human language, which most research suggests are inborn in our species, as well as such derivations from language as logic and the relation between cause and effect—this latter simply takes the grammatical relation between subjects, verbs, and objects, and projects it onto the nonlinguistic world. In the real world, every phenomenon is part of an ongoing cascade of interactions so wildly hypercomplex that labels like “cause” and “effect” are hopelessly simplistic; what’s more, a great many things—for example, the decay of radioactive nuclei—just up and happen randomly without being triggered by any specific cause at all. We simplify all this into cause and effect because just enough things appear to work that way to make the habit useful to us.

Another thing that has much more to do with our cognitive apparatus than with the world we perceive is number. Does one apple plus one apple equal two apples? In our number-using minds, yes; in the real world, it depends entirely on the size and condition of the apples in question. We convert qualities into quantities because quantities are easier for us to think with.  That was one of the core discoveries that kickstarted the scientific revolution; when Galileo became the first human being in history to think of speed as a quantity, he made it possible for everyone after him to get their minds around the concept of velocity in a way that people before him had never quite been able to do.

In physics, converting qualities to quantities works very, very well. In some other sciences, the same thing is true, though the further you go away from the exquisite simplicity of masses in motion, the harder it is to translate everything that matters into quantitative terms, and the more inevitably gets left out of the resulting theories. By and large, the more complex the phenomena under discussion, the less useful quantitative models are. Not coincidentally, the more complex the phenomena under discussion, the harder it is to control all the variables in play—the essential step in using the scientific method—and the more tentative, fragile, and dubious the models that result.

So when we try to figure out what bottlenosed porpoises are saying to each other, we’re facing what’s probably an insuperable barrier. All our notions of language are social-primate notions, shaped by the peculiar mix of neurology and hardwired psychology that proved most useful to bipedal apes on the East African savannah over the last few million years. The structures that shape porpoise speech, in turn, are social-cetacean notions, shaped by the utterly different mix of neurology and hardwired psychology that’s most useful if you happen to be a bottlenosed porpoise or one of its ancestors.

Mind you, porpoises and humans are at least fellow-mammals, and likely have common ancestors only a couple of hundred million years back. If you want to talk to a gray parrot, you’re trying to cross a much vaster evolutionary distance, since the ancestors of our therapsid forebears and the ancestors of the parrot’s archosaurian progenitors have been following divergent tracks since way back in the Paleozoic. Since language evolved independently in each of the lineages we’re discussing, the logic of convergent evolution comes into play: as with the eyes of vertebrates and cephalopods—another classic case of the same thing appearing in very different evolutionary lineages—the functions are similar but the underlying structure is very different. Thus it’s no surprise that it’s taken exhaustive computer analyses of porpoise and parrot vocalizations just to give us a clue that they’re using language too.

The takeaway point I hope my readers have grasped from this is that the human mind doesn’t know universal, objective truths. Our thoughts are simply the way that we, as members of a particular species of social primates, like to sort out the universe into chunks simple enough for us to think with. Does that make human thought useless or irrelevant? Of course not; it simply means that its uses and relevance are as limited as everything else about our species—and, of course, every other species as well. If any of my readers see this as belittling humanity, I’d like to suggest that fatuous delusions of intellectual omnipotence aren’t a useful habit for any species, least of all ours. I’d also point out that those very delusions have played a huge role in landing us in the rising spiral of crises we’re in today.

Human beings are simply one species among many, inhabiting part of the earth at one point in its long lifespan. We’ve got remarkable gifts, but then so does every other living thing. We’re not the masters of the planet, the crown of evolution, the fulfillment of Earth’s destiny, or any of the other self-important hogwash with which we like to tickle our collective ego, and our attempt to act out those delusional roles with the help of a lot of fossil carbon hasn’t exactly turned out well, you must admit. I know some people find it unbearable to see our species deprived of its supposed place as the precious darlings of the cosmos, but that’s just one of life’s little learning experiences, isn’t it? Most of us make a similar discovery on the individual scale in the course of growing up, and from my perspective, it’s high time that humanity do a little growing up of its own, ditch the infantile egotism, and get to work making the most of the time we have on this beautiful and fragile planet.

The recognition that there’s a middle ground between omnipotence and uselessness, though, seems to be very hard for a lot of people to grasp just now. I don’t know if other bloggers in the doomosphere have this happen to them, but every few months or so I field a flurry of attempted comments by people who want to drag the conversation over to their conviction that free will doesn’t exist. I don’t put those comments through, and not just because they’re invariably off topic; the ideology they’re pushing is, to my way of thinking, frankly poisonous, and it’s also based on a shopworn Victorian determinism that got chucked by working scientists rather more than a century ago, but is still being recycled by too many people who didn’t hear the thump when it landed in the trash can of dead theories.

A century and a half ago, it used to be a commonplace of scientific ideology that cause and effect ruled everything, and the whole universe was fated to rumble along a rigidly invariant sequence of events from the beginning of time to the end thereof. The claim was quite commonly made that a sufficiently vast intelligence, provided with a sufficiently complete data set about the position and velocity of every particle in the cosmos at one point in time, could literally predict everything that would ever happen thereafter. The logic behind that claim went right out the window, though, once experiments in the early 20th century showed conclusively that quantum phenomena are random in the strictest sense of the word. They’re not caused by some hidden variable; they just happen when they happen, by chance.

What determines the moment when a given atom of an unstable isotope will throw off some radiation and turn into a different element? Pure dumb luck. Since radiation discharges from single atoms of unstable isotopes are the most important cause of genetic mutations, and thus a core driving force behind the process of evolution, this is much more important than it looks. The stray radiation that gave you your eye color, dealt an otherwise uninteresting species of lobefin fish the adaptations that made it the ancestor of all land vertebrates, and provided the raw material for countless other evolutionary transformations:  these were entirely random events, and would have happened differently if certain unstable atoms had decayed at a different moment and sent their radiation into a different ovum or spermatozoon—as they very well could have. So it doesn’t matter how vast the intelligence or complete the data set you’ve got, the course of life on earth is inherently impossible to predict, and so are a great many other things that unfold from it.

With the gibbering phantom of determinism laid to rest, we can proceed to the question of free will. We can define free will operationally as the ability to produce genuine novelty in behavior—that is, to do things that can’t be predicted. Human beings do this all the time, and there are very good evolutionary reasons why they should have that capacity. Any of my readers who know game theory will recall that the best strategy in any competitive game includes an element of randomness, which prevents the other side from anticipating and forestalling your side’s actions. Food gathering, in game theory terms, is a competitive game; so are trying to attract a mate, competing for social prestige, staying out of the jaws of hungry leopards, and most of the other activities that pack the day planners of social primates.
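The game-theory point about randomness can be sketched numerically. In matching pennies, a player who shows heads with probability p can be exploited by an opponent who simply predicts the more likely face; the small function below (a hypothetical illustration of the standard result, not anything from the essay) computes that opponent’s best expected payoff:

```python
def exploit_payoff(p_heads):
    """Expected payoff to an opponent who best-responds to a player
    showing heads with probability p_heads in matching pennies
    (opponent scores +1 on a correct match, -1 otherwise)."""
    # Best response: always guess the more likely face.
    # Expected payoff is then max(2p - 1, 1 - 2p) = |2p - 1|.
    return abs(2 * p_heads - 1)

# A perfectly random strategy gives the opponent nothing to exploit...
print(exploit_payoff(0.5))   # 0.0
# ...while any predictable bias hands the opponent an edge.
print(exploit_payoff(0.8))   # roughly 0.6
```

This is the sense in which unpredictability pays: a player who mixes evenly leaves no bias for an opponent to model, which is exactly why an element of randomness belongs in the best strategy for any competitive game.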

Unpredictability is so highly valued by our species, in fact, that every human culture ever recorded has worked out formal ways to increase the total amount of sheer randomness guiding human action. Yes, we’re talking about divination—for those who don’t know the jargon, this term refers to what you do with Tarot cards, the I Ching, tea leaves, horoscopes, and all the myriad other ways human cultures have worked out to take a snapshot of the nonrational as a guide for action. Aside from whatever else may be involved—a point that isn’t relevant to this blog—divination does a really first-rate job of generating unpredictability. Flipping a coin does the same thing, and most people have confounded the determinists by doing just that on occasion, but fully developed divination systems like those just named provide a much richer palette of choices than the simple coin toss, and thus enable people to introduce a much richer range of novelty into their actions.

Still, divination is a crutch, or at best a supplement; human beings have their own onboard novelty generators, which can do the job all by themselves if given half a chance.  The process involved here was understood by philosophers a long time ago, and no doubt the neurologists will get around to figuring it out one of these days as well. The core of it is that humans don’t respond directly to stimuli, external or internal.  Instead, they respond to their own mental representations of stimuli, which are constructed by the act of cognition and are laced with bucketloads of extraneous material garnered from memory and linked to the stimulus in uniquely personal, irrational, even whimsical ways, following loose and wildly unpredictable cascades of association and contiguity that have nothing to do with logic and everything to do with the roots of creativity.

Each human society tries to give its children some approximation of its own culturally defined set of representations—that’s what’s going on when children learn language, pick up the customs of their community, ask for the same bedtime story to be read to them for the umpteenth time, and so on. Those culturally defined representations proceed to interact in various ways with the inborn, genetically defined representations that get handed out for free with each brand new human nervous system.  The existence of these biologically and culturally defined representations, and of various ways that they can be manipulated to some extent by other people with or without the benefit of mass media, make up the ostensible reason why the people mentioned above insist that free will doesn’t exist.

Here again, though, the fact that the human mind isn’t omnipotent doesn’t make it powerless. Think about what happens, say, when a straight stick is thrust into water at an angle, and the stick seems to pick up a sudden bend at the water’s surface, due to differential refraction in water and air. The illusion is as clear as anything, but if you show this to a child and let the child experiment with it, you can watch the representation “the stick is bent” give way to “the stick looks bent.” Notice what’s happening here: the stimulus remains the same, but the representation changes, and so do the actions that result from it. That’s a simple example of how representations create the possibility of freedom.

In the same way, when the media spouts some absurd bit of manipulative hogwash, if you take the time to think about it, you can watch your own representation shift from “that guy’s having an orgasm from slurping that fizzy brown sugar water” to “that guy’s being paid to pretend to have an orgasm, so somebody can try to convince me to buy that fizzy brown sugar water.” If you really pay attention, it may shift again to “why am I wasting my time watching this guy pretend to get an orgasm from fizzy brown sugar water?” and may even lead you to chuck your television out a second story window into an open dumpster, as I did to the last one I ever owned. (The flash and bang when the picture tube imploded, by the way, was far more entertaining than anything that had ever appeared on the screen.)

Human intelligence is limited. Our capacities for thinking are constrained by our heredity, our cultures, and our personal experiences—but then so are our capacities for the perception of color, a fact that hasn’t stopped artists from the Paleolithic to the present from putting those colors to work in a galaxy of dizzyingly original ways. A clear awareness of the possibilities and the limits of the human mind makes it easier to play the hand we’ve been dealt in Darwin’s casino—and it also points toward a generally unsuspected reason why civilizations come apart, which we’ll discuss next week.

Reblog: The Dream of the Machine

Originally posted: WEDNESDAY, JULY 01, 2015

http://thearchdruidreport.blogspot.com/2015/07/the-dream-of-machine.html

As I type these words, it looks as though the wheels are coming off the global economy. Greece and Puerto Rico have both suspended payments on their debts, and China’s stock market, which spent the last year in a classic speculative bubble, is now in the middle of a classic speculative bust. Those of my readers who’ve read John Kenneth Galbraith’s lively history The Great Crash 1929 already know all about the Chinese situation, including the outcome—and since vast amounts of money from all over the world went into Chinese stocks, and most of that money is in the process of turning into twinkle dust, the impact of the crash will inevitably proliferate through the global economy.

So, in all probability, will the Greek and Puerto Rican defaults. In today’s bizarre financial world, the kind of bad debts that used to send investors backing away in a hurry attract speculators in droves, and so it turns out that some big New York hedge funds are in trouble as a result of the Greek default, and some of the same firms that got into trouble with mortgage-backed securities in the recent housing bubble are in the same kind of trouble over Puerto Rico’s unpayable debts. How far will the contagion spread? It’s anybody’s guess.

Oh, and on another front, nearly half a million acres of Alaska burned up in a single day last week—yes, the fires are still going—while ice sheets in Greenland are collapsing so frequently and forcefully that the resulting earthquakes are rattling seismographs thousands of miles away. These and other signals of a biosphere in crisis make good reminders of the fact that the current economic mess isn’t happening in a vacuum. As Ugo Bardi pointed out in a thoughtful blog post, finance is the flotsam on the surface of the ocean of real exchanges of real goods and services, and the current drumbeat of financial crises are symptomatic of the real crisis—the arrival of the limits to growth that so many people have been discussing, and so many more have been trying to ignore, for the last half century or so.

A great many people in the doomward end of the blogosphere are talking about what’s going on in the global economy and what’s likely to blow up next. Around the time the next round of financial explosions start shaking the world’s windows, a great many of those same people will likely be talking about what to do about it all.  I don’t plan on joining them in that discussion. As blog posts here have pointed out more than once, time has to be considered when getting ready for a crisis. The industrial world would have had to start backpedaling away from the abyss decades ago in order to forestall the crisis we’re now in, and the same principle applies to individuals.  The slogan “collapse now and avoid the rush!” loses most of its point, after all, when the rush is already under way.

Any of my readers who are still pinning their hopes on survival ecovillages and rural doomsteads they haven’t gotten around to buying or building yet, in other words, are very likely out of luck. They, like the rest of us, will be meeting this where they are, with what they have right now. This is ironic, in that ideas that might have been worth adopting three or four years ago are just starting to get traction now. I’m thinking here particularly of a recent article on how to use permaculture to prepare for a difficult future, which describes the difficult future in terms that will be highly familiar to readers of this blog. More broadly, there’s a remarkable amount of common ground between that article and the themes of my book Green Wizardry. The awkward fact remains that when the global banking industry shows every sign of freezing up the way it did in 2008, putting credit for land purchases out of reach of most people for years to come, the article’s advice may have come rather too late.

That doesn’t mean, of course, that my readers ought to crawl under their beds and wait for death. What we’re facing, after all, isn’t the end of the world—though it may feel like that for those who are too deeply invested, in any sense of that last word you care to use, in the existing order of industrial society. As Visigothic mommas used to remind their impatient sons, Rome wasn’t sacked in a day. The crisis ahead of us marks the end of what I’ve called abundance industrialism and the transition to scarcity industrialism, as well as the end of America’s global hegemony and the emergence of a new international order whose main beneficiary hasn’t been settled yet. Those paired transformations will most likely unfold across several decades of economic chaos, political turmoil, environmental disasters, and widespread warfare. Plenty of people got through the equivalent cataclysms of the first half of the twentieth century with their skins intact, even if the crisis caught them unawares, and no doubt plenty of people will get through the mess that’s approaching us in much the same condition.

Thus I don’t have any additional practical advice, beyond what I’ve already covered in my books and blog posts, to offer my readers just now. Those who’ve already collapsed and gotten ahead of the rush can break out the popcorn and watch what promises to be a truly colorful show. Those who didn’t—well, you might as well get some popcorn going and try to enjoy the show anyway. If you come out the other side of it all, schoolchildren who aren’t even born yet may eventually come around to ask you awed questions about what happened when the markets crashed in ’15.

In the meantime, while the popcorn is popping and the sidewalks of Wall Street await their traditional tithe of plummeting stockbrokers, I’d like to return to the theme of last week’s post and talk about the way that the myth of the machine—if you prefer, the widespread mental habit of thinking about the world in mechanistic terms—pervades and cripples the modern mind.

Of all the responses that last week’s post fielded, those I found most amusing, and also most revealing, were those that insisted that of course the universe is a machine, so is everything and everybody in it, and that’s that. That’s amusing because most of the authors of these comments made it very clear that they embraced the sort of scientific-materialist atheism that rejects any suggestion that the universe has a creator or a purpose. A machine, though, is by definition a purposive artifact—that is, it’s made by someone to do something. If the universe is a machine, then, it has a creator and a purpose, and if it doesn’t have a creator and a purpose, logically speaking, it can’t be a machine.

That sort of unintentional comedy inevitably pops up whenever people don’t think through the implications of their favorite metaphors. Still, chase that habit further along its giddy path and you’ll find a deeper absurdity at work. When people say “the universe is a machine,” unless they mean that statement as a poetic simile, they’re engaging in a very dubious sort of logic. As Alfred Korzybski pointed out a good many years ago, pretty much any time you say “this is that,” unless you implicitly or explicitly qualify what you mean in very careful terms, you’ve just babbled nonsense.

The difficulty lies in that seemingly innocuous word “is.” What Korzybski called the “is of identity”—the use of the word “is” to represent  =, the sign of equality—makes sense only in a very narrow range of uses.  You can use the “is of identity” with good results in categorical definitions; when I commented above that a machine is a purposive artifact, that’s what I was doing. Here is a concept, “machine;” here are two other concepts, “purposive” and “artifact;” the concept “machine” logically includes the concepts “purposive” and “artifact,” so anything that can be described by the words “a machine” can also be described as “purposive” and “an artifact.” That’s how categorical definitions work.

Let’s consider a second example, though: “a machine is a purple dinosaur.” That utterance uses the same structure as the one we’ve just considered.  I hope I don’t have to prove to my readers, though, that the concept “machine” doesn’t include the concepts “purple” and “dinosaur” in any but the most whimsical of senses.  There are plenty of things that can be described by the label “machine,” in other words, that can’t be described by the labels “purple” or “dinosaur.” The fact that some machines—say, electronic Barney dolls—can in fact be described as purple dinosaurs doesn’t make the definition any less silly; it simply means that the statement “no machine is a purple dinosaur” can’t be justified either.
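The logic of the two statements above can be modeled with plain set inclusion: a categorical definition "every A is B" holds exactly when category A is a subset of category B. Here is a minimal sketch in Python; the membership lists are invented purely for illustration.

```python
# Toy set model of categorical definitions. The members of each
# category are HYPOTHETICAL examples, chosen only to illustrate
# the logic of "every A is B" as subset inclusion.
machine = {"steam engine", "loom", "clock", "barney doll"}
purposive = {"steam engine", "loom", "clock", "barney doll", "law code"}
artifact = {"steam engine", "loom", "clock", "barney doll", "painting"}
purple_dinosaur = {"barney doll"}

# "A machine is a purposive artifact": every machine falls inside
# both categories, so the definition holds.
print(machine <= (purposive & artifact))   # True

# "A machine is a purple dinosaur": fails, since most machines
# fall outside that category...
print(machine <= purple_dinosaur)          # False

# ...but "no machine is a purple dinosaur" fails too, because the
# categories overlap (the electronic Barney doll).
print(machine.isdisjoint(purple_dinosaur)) # False
```

The third print is the Barney-doll point from the paragraph above: a whimsical overlap between two categories justifies neither the universal claim nor its negation.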

With that in mind, let’s take a closer look at the statement “the universe is a machine.” As pointed out earlier, the concept “machine” implies the concepts “purposive” and “artifact,” so if the universe is a machine, somebody made it to carry out some purpose. Those of my readers who happen to belong to Christianity, Islam, or another religion that envisions the universe as the creation of one or more deities—not all religions make this claim, by the way—will find this conclusion wholly unproblematic. My atheist readers will disagree, of course, and their reaction is the one I want to discuss here. (Notice how “is” functions in the sentence just uttered: “the reaction of the atheists” equals “the reaction I want to discuss.” This is one of the few other uses of “is” that doesn’t tend to generate nonsense.)

In my experience, at least, atheists faced with the argument about the meaning of the word “machine” I’ve presented here pretty reliably respond with something like “It’s not a machine in that sense.” That response takes us straight to the heart of the logical problems with the “is of identity.” In what sense is the universe a machine? Pursue the argument far enough, and unless the atheist storms off in a huff—which admittedly tends to happen more often than not—what you’ll get amounts to “the universe and a machine share certain characteristics in common.” Go further still—and at this point the atheist will almost certainly storm off in a huff—and you’ll discover that the characteristics that the universe is supposed to share with a machine are all things we can’t actually prove one way or another about the universe, such as whether it has a creator or a purpose.

The statement “the universe is a machine,” in other words, doesn’t do what it appears to do. It appears to state a categorical identity; it actually states an unsupported generalization in absolute terms. It takes a mental model abstracted from one corner of human experience and applies it to something unrelated.  In this case, for polemic reasons, it does so in a predictably one-sided way: deductions approved by the person making the statement (“the universe is a machine, therefore it lacks life and consciousness”) are acceptable, while deductions the person making the statement doesn’t like (“the universe is a machine, therefore it was made by someone for some purpose”) get the dismissive response noted above.

This sort of doublethink appears all through the landscape of contemporary nonconversation and nondebate, to be sure, but the problems with the “is of identity” don’t stop with its polemic abuse. Any time you say “this is that,” and mean something other than “this has some features in common with that,” you’ve just fallen into one of the core boobytraps hardwired into the structure of human thought.

Human beings think in categories. That’s what made ancient Greek logic, which takes categories as its basic element, so massive a revolution in the history of human thinking: by watching the way that one category includes or excludes another, which is what the Greek logicians did, you can squelch a very large fraction of human stupidities before they get a foothold. What Alfred Korzybski pointed out, in effect, is that there’s a metalogic that the ancient Greeks didn’t get to, and logical theorists since their time haven’t really tackled either: the extremely murky relationship between the categories we think with and the things we experience, which don’t come with category labels spraypainted on them.

Here is a green plant with a woody stem. Is it a tree or a shrub? That depends on exactly where you draw the line between those two categories, and as any botanist can tell you, that’s neither an easy nor an obvious thing. As long as you remember that categories exist within the human mind as convenient handles for us to think with, you can navigate around the difficulties, but when you slip into thinking that the categories are more real than the things they describe, you’re in deep, deep trouble.

It’s not at all surprising that human thought should have such problems built into it. If, as I do, you accept the Darwinian thesis that human beings evolved out of prehuman primates by the normal workings of the laws of evolution, it follows logically that our nervous systems and cognitive structures didn’t evolve for the purpose of understanding the truth about the cosmos; they evolved to assist us in getting food, attracting mates, fending off predators, and a range of similar, intellectually undemanding tasks. If, as many of my theist readers do, you believe that human beings were created by a deity, the yawning chasm between creator and created, between an infinite and a finite intelligence, stands in the way of any claim that human beings can know the unvarnished truth about the cosmos. Neither viewpoint supports the claim that a category created by the human mind is anything but a convenience that helps our very modest mental powers grapple with an ultimately incomprehensible cosmos.

Any time human beings try to make sense of the universe or any part of it, in turn, they have to choose from among the available categories in an attempt to make the object of inquiry fit the capacities of their minds. That’s what the founders of the scientific revolution did in the seventeenth century, by taking the category of “machine” and applying it to the universe to see how well it would fit. That was a perfectly rational choice from within their cultural and intellectual standpoint. The founders of the scientific revolution were Christians to a man, and some of them (for example, Isaac Newton) were devout even by the standards of the time; the idea that the universe had been made by someone for some purpose, after all, wasn’t problematic in the least to people who took it as given that the universe was made by God for the purpose of human salvation. It was also a useful choice in practical terms, because it allowed certain features of the universe—specifically, the behavior of masses in motion—to be accounted for and modeled with a clarity that previous categories hadn’t managed to achieve.

The fact that one narrowly defined aspect of the universe seems to behave like a machine, though, does not prove that the universe is a machine, any more than the fact that one machine happens to look like a purple dinosaur proves that all machines are purple dinosaurs. The success of mechanistic models in explaining the behavior of masses in motion proved that mechanical metaphors are good at fitting some of the observed phenomena of physics into a shape that’s simple enough for human cognition to grasp, and that’s all it proved. To go from that modest fact to the claim that the universe and everything in it are machines involves an intellectual leap of pretty spectacular scale. Part of the reason that leap was taken in the seventeenth century was the religious frame of scientific inquiry at that time, as already mentioned, but there was another factor, too.

It’s a curious fact that mechanistic models of the universe appeared in western European cultures, and became wildly popular there, well before the machines themselves did. In the early seventeenth century, machines played a very modest role in the life of most Europeans; most tasks were done using hand tools powered by human and animal muscle, the way they had been done since the dawn of the agricultural revolution eight millennia or so before. The most complex devices available at the time were clocks, printing presses, handlooms, and the like—you know, the sort of thing that people these days use instead of machines when they want to get away from technology.

For reasons that historians of ideas are still trying to puzzle out, though, western European thinkers during these same years were obsessed with machines, and with mechanical explanations for the universe. Those latter ranged from the plausible to the frankly preposterous—René Descartes, for example, proposed a theory of gravity in which little corkscrew-shaped particles went zooming up from the earth to screw themselves into pieces of matter and yank them down. Until Isaac Newton, furthermore, theories of nature based on mechanical models didn’t actually explain that much, and until the cascade of inventive adaptations of steam power that ended with James Watt’s epochal steam engine nearly a century after Newton, the idea that machines could elbow aside craftspeople using hand tools and animals pulling carts was an unproven hypothesis. Yet a great many people in western Europe believed in the power of the machine as devoutly as their ancestors had believed in the power of the bones of the local saints.

A habit of thought very widespread in today’s culture assumes that technological change happens first and the world of ideas changes in response to it. The facts simply won’t support that claim, though. As the history of mechanistic ideas in science shows clearly, the ideas come first and the technologies follow—and there’s good reason why this should be so. Technologies don’t invent themselves, after all. Somebody has to put in the work to invent them, and then other people have to invest the resources to take them out of the laboratory and give them a role in everyday life. The decisions that drive invention and investment, in turn, are powerfully shaped by cultural forces, and these in turn are by no means as rational as the people influenced by them generally like to think.

People in western Europe and a few of its colonies dreamed of machines, and then created them. They dreamed of a universe reduced to the status of a machine, a universe made totally transparent to the human mind and totally subservient to the human will, and then set out to create it. That latter attempt hasn’t worked out so well, for a variety of reasons, and the rising tide of disasters sketched out in the first part of this week’s post unfold in large part from the failure of that misbegotten dream. In the next few posts, I want to talk about why that failure was inevitable, and where we might go from here.

Reblog: The Whisper of the Shutoff Valve

Originally Posted: WEDNESDAY, MAY 06, 2015
Link: http://thearchdruidreport.blogspot.com/2015/05/the-whisper-of-shutoff-valve.html

Last week’s post on the impending decline and fall of the internet fielded a great many responses. That was no surprise, to be sure; nor was I startled in the least to find that many of them rejected the thesis of the post with some heat. Contemporary pop culture’s strident insistence that technological progress is a clock that never runs backwards made such counterclaims inevitable.

Still, it’s always educational to watch the arguments fielded to prop up the increasingly shaky edifice of the modern mythology of progress, and the last week was no exception. A response I found particularly interesting from that standpoint appeared on one of the many online venues where Archdruid Report posts appear. One of the commenters insisted that my post should be rejected out of hand as mere doom and gloom; after all, he pointed out, it was ridiculous for me to suggest that fifty years from now, a majority of the population of the United States might be without reliable electricity or running water.

I’ve made the same prediction here and elsewhere a good many times. Each time, most of my readers or listeners seem to have taken it as a piece of sheer rhetorical hyperbole. The electrical grid and the assorted systems that send potable water flowing out of faucets are so basic to the rituals of everyday life in today’s America that their continued presence is taken for granted.  At most, it’s conceivable that individuals might choose not to connect to them; there’s a certain amount of talk about off-grid living here and there in the alternative media, for example.  That people who want these things might not have access to them, though, is pretty much unthinkable.

Meanwhile, in Detroit and Baltimore, tens of thousands of residents are in the process of losing their access to water and electricity.

The situation in both cities is much the same, and there’s every reason to think that identical headlines will shortly appear in reference to other cities around the nation. Not that many decades ago, Detroit and Baltimore were important industrial centers with thriving economies. Along with more than a hundred other cities in America’s Rust Belt, they were thrown under the bus with the first wave of industrial offshoring in the 1970s.  The situation for both cities has only gotten worse since that time, as the United States completed its long transition from a manufacturing economy producing goods and services to a bubble economy that mostly produces unpayable IOUs.

These days, the middle-class families whose tax payments propped up the expansive urban systems of an earlier day have long since moved out of town. Most of the remaining residents are poor, and the ongoing redistribution of wealth in America toward the very rich and away from everyone else has driven down the income of the urban poor to the point that many of them can no longer afford to pay their water and power bills. City utilities in Detroit and Baltimore have been sufficiently sensitive to political pressures that large-scale utility shutoffs have been delayed, but shifts in the political climate in both cities are bringing the delays to an end; water bills have increased steadily, more and more people have been unable to pay them, and the result is as predictable as it is brutal.

The debate over the Detroit and Baltimore shutoffs has followed the usual pattern, as one side wallows in bash-the-poor rhetoric while the other side insists plaintively that access to utilities is a human right. Neither side seems to be interested in talking about the broader context in which these disputes take shape. There are two aspects to that broader context, and it’s a tossup which is the more threatening.

The first aspect is the failure of the US economy to recover in any meaningful sense from the financial crisis of 2008. Now of course politicians from Obama on down have worked overtime grandstanding about the alleged recovery we’re in. I invite any of my readers who bought into that rhetoric to try the following simple experiment. Go to your favorite internet search engine and look up how much the fracking industry has added to the US gross domestic product each year from 2009 to 2014. Now subtract that figure from the US gross domestic product for each of those years, and see how much growth there’s actually been in the rest of the economy since the real estate bubble imploded.
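The arithmetic of that experiment is simple enough to sketch. Every number below is a round, hypothetical placeholder (in billions of dollars), not actual GDP or fracking-industry data; the point is only the shape of the calculation, and you would substitute the figures your own search turns up.

```python
# Back-of-the-envelope version of the experiment described above.
# All figures are ROUND, HYPOTHETICAL placeholders (billions of
# dollars), not real BEA data; plug in the figures you find.
gdp_total = {2013: 16_800, 2014: 17_400}    # total US GDP
fracking_added = {2013: 300, 2014: 400}     # fracking's contribution

# Strip the fracking boom out of each year's total.
gdp_ex_fracking = {y: gdp_total[y] - fracking_added[y] for y in gdp_total}

# Year-over-year growth of the rest of the economy.
growth = (gdp_ex_fracking[2014] - gdp_ex_fracking[2013]) / gdp_ex_fracking[2013]
print(f"ex-fracking growth 2013->2014: {growth:+.2%}")
```

With real figures in place of the placeholders, the same two-line subtraction shows how much of the headline growth number survives once the bubble sector is removed.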

What you’ll find, if you take the time to do that, is that the rest of the US economy has been flat on its back gasping for air for the last five years. What makes this even more problematic, as I’ve noted in several previous posts here, is that the great fracking boom about which we’ve heard so much for the last five years was never actually the game-changing energy revolution its promoters claimed; it was simply another installment in the series of speculative bubbles that has largely replaced constructive economic activity in this country over the last two decades or so.

What’s more, it’s not the only bubble currently being blown, and it may not even be the largest. We’ve also got a second tech-stock bubble, with money-losing internet corporations racking up absurd valuations in the stock market while they burn through millions of dollars of venture capital; we’ve got a student loan bubble, in which billions of dollars of loans that will never be paid back have been bundled, packaged, and sold to investors just like all those no-doc mortgages were a decade ago; car loans are getting the same treatment; the real estate market is fizzing again in many urban areas as investors pile into another round of lavishly marketed property investments—well, I could go on for some time. It’s entirely possible that if all the bubble activity were to be subtracted from the last five years or so of GDP, the result would show an economy in freefall.

Certainly that’s the impression that emerges if you take the time to check out those economic statistics that aren’t being systematically jiggered by the US government for PR purposes. The number of long-term unemployed in America is at an all-time high; roads, bridges, and other basic infrastructure are falling to pieces; measurements of US public health—generally considered a good proxy for the real economic condition of the population—are well below those of other industrial countries, heading toward Third World levels; abandoned shopping malls litter the landscape while major retailers announce more than 6000 store closures. These are not things you see in an era of economic expansion, or even one of relative stability; they’re markers of decline.

The utility shutoffs in Detroit and Baltimore are further symptoms of the same broad process of economic unraveling. It’s true, as pundits in the media have been insisting since the story broke, that utilities get shut off for nonpayment of bills all the time. It’s equally true that shutting off the water supply of 20,000 or 30,000 people all at once is pretty much unprecedented. Both cities, please note, have had very large populations of poor people for many decades now.  Those who like to blame a “culture of poverty” for the tangled relationship between US governments and the American poor, and of course that trope has been rehashed by some of the pundits just mentioned, haven’t yet gotten around to explaining how the culture of poverty all at once inspired tens of thousands of people who had been paying their utility bills to stop doing so.

There are plenty of good reasons, after all, why poor people who used to pay their bills can’t do so any more. Standard business models in the United States used to take it for granted that the best way to staff any company, large or small, was to offer as many full-time positions as possible and to use raises and other practical incentives to encourage employees who were good at their jobs to stay with the company. That approach has become increasingly unfashionable in today’s America, partly due to perverse regulatory incentives that penalize employers for offering full-time positions, and partly due to the emergence of attitudes in corner offices that treat employees as just another commodity. (I doubt it’s any kind of accident that most corporations nowadays refer to their employment offices as “human resource departments.” What do you do with a resource? You exploit it.)

These days, most of the jobs available to the poor are part-time, pay very little, and include nasty little clawbacks in the form of requirements that employees pay out of pocket for uniforms, equipment, and other things that employers used to provide as a matter of course. Meanwhile housing prices and rents are rising well above their post-2008 dip, and a great many other necessities are becoming more costly—inflation may be under control, or so the official statistics say, but anyone who’s been shopping at the same grocery store for the last eight years knows perfectly well that prices kept on rising anyway.

So you’ve got falling incomes running up against rising costs for food, rent, and utilities, among other things. In the resulting collision, something’s got to give, and for tens of thousands of poor Detroiters and Baltimoreans, what gave first was the ability to keep current on their water bills. Expect to see the same story playing out across the country as more people on the bottom of the income pyramid find themselves in the same situation. What you won’t hear in the media, though it’s visible enough if you know where to look and are willing to do so, is that people above the bottom of the income pyramid are also losing ground, being forced down toward economic nonpersonhood. From the middle classes down, everyone’s losing ground.

That process doesn’t extend any further up the ladder than the middle class, to be sure. It’s been pointed out repeatedly that over the last four decades or so, the distribution of wealth in America has skewed further and further out of balance, with the top 20% of incomes taking a larger and larger share at the expense of everybody else. That’s an important factor in bringing about the collision just described. Some thinkers on the radical fringes of American society, which is the only place in the US you can talk about such things these days, have argued that the raw greed of the well-to-do is the sole reason why so many people lower down the ladder are being pushed further down still.

Scapegoating rhetoric of that sort is always comforting, because it holds out the promise—theoretically, if not practically—that something can be done about the situation. If only the thieving rich could be lined up against a convenient brick wall and removed from the equation in the time-honored fashion, the logic goes, people in Detroit and Baltimore could afford to pay their water bills!  I suspect we’ll hear such claims increasingly often as the years pass and more and more Americans find their access to familiar comforts and necessities slipping away.  Simple answers are always popular in such times, not least when the people being scapegoated go as far out of their way to make themselves good targets for such exercises as the American rich have done in recent decades.

John Kenneth Galbraith’s equation of the current US political and economic elite with the French aristocracy on the eve of revolution rings even more true than it did when he wrote it back in 1992, in the pages of The Culture of Contentment. The unthinking extravagances, the casual dismissal of the last shreds of noblesse oblige, the obsessive pursuit of personal advantages and private feuds without the least thought of the potential consequences, the bland inability to recognize that the power, privilege, wealth, and sheer survival of the aristocracy depended on the system the aristocrats themselves were destabilizing by their actions—it’s all there, complete with sprawling overpriced mansions that could just about double for Versailles. The urban mobs that played so large a role back in 1789 are warming up for their performances as I write these words; the only thing left to complete the picture is a few tumbrils and a guillotine, and those will doubtless arrive on cue.

The senility of the current US elite, as noted in a previous post here, is a massive political fact in today’s America. Still, it’s not the only factor in play here. Previous generations of wealthy Americans recognized without too much difficulty that their power, prosperity, and survival depended on the willingness of the rest of the population to put up with their antics. Several times already in America’s history, elite groups have allied with populist forces to push through reforms that sharply weakened the power of the wealthy elite, because they recognized that the alternative was a social explosion even more destructive to the system on which elite power depends.

I suppose it’s possible that the people currently occupying the upper ranks of the political and economic pyramid in today’s America are just that much more stupid than their equivalents in the Jacksonian, Progressive, and New Deal eras. Still, there’s at least one other explanation to hand, and it’s the second of the two threatening contextual issues mentioned earlier.

Until the nineteenth century, fresh running water piped into homes for everyday use was purely an affectation of the very rich in a few very wealthy and technologically adept societies. Sewer pipes to take dirty water and human wastes out of the house belonged in the same category. This wasn’t because nobody knew how plumbing works—the Romans had competent plumbers, for example, and water faucets and flush toilets were to be found in Roman mansions of the imperial age. The reason those same things weren’t found in every Roman house was economic, not technical.

Behind that economic issue lay an ecological reality.  White’s Law, one of the foundational principles of human ecology, states that economic development is a function of energy per capita. For a society before the industrial age, the Roman Empire had an impressive amount of energy per capita to expend; control over the agricultural economy of the Mediterranean basin, modest inputs from sunlight, water and wind, and a thriving slave industry fed by the expansion of Roman military power all fed into the capacity of Roman society to develop itself economically and technically. That’s why rich Romans had running water and iced drinks in summer, while their equivalents in ancient Greece a few centuries earlier had to make do without either one.
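White’s Law is usually summarized with a simple formula; the notation below follows the common statement of Leslie White’s principle.

```latex
% White's Law, as commonly stated: the degree of cultural (and hence
% economic) development C is proportional to the energy harnessed
% per capita per year, E, times the efficiency T of the tools that
% put that energy to work.
C = E \times T
```

In those terms, Rome’s farmland, slaves, wind, and water raised E, while its engineering tradition raised T; together they bought the running water and summer ice that classical Greece, a few centuries earlier, could not afford.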

Fossil fuels gave industrial civilization a supply of energy many orders of magnitude greater than any previous human civilization has had—a supply vast enough that the difference remains huge even after the vast expansion of population that followed the industrial revolution. There was, however, a catch—or, more precisely, two catches. To begin with, fossil fuels are finite, nonrenewable resources; no matter how much handwaving is employed in the attempt to obscure this point—and whatever else might be in short supply these days, that sort of handwaving is not—every barrel of oil, ton of coal, or cubic foot of natural gas that’s burnt takes the world one step closer to the point at which there will be no economically extractable reserves of oil, coal, or natural gas at all.

That’s catch #1. Catch #2 is subtler, and considerably more dangerous. Oil, coal, and natural gas don’t leap out of the ground on command. They have to be extracted and processed, and this takes energy. Companies in the fossil fuel industries have always targeted the deposits that cost less to extract and process, for obvious economic reasons. What this means, though, is that over time, a larger and larger fraction of the energy yield of oil, coal, and natural gas has to be put right back into extracting and processing oil, coal, and natural gas—and this leaves less and less for all other uses.

That’s the vise that’s tightening around the American economy these days. The great fracking boom, to the extent that it wasn’t simply one more speculative gimmick aimed at the pocketbooks of chumps, was an attempt to make up for the ongoing decline of America’s conventional oilfields by going after oil that was far more expensive to extract. The fact that none of the companies at the heart of the fracking boom ever turned a profit, even when oil brought more than $100 a barrel, gives some sense of just how costly shale oil is to get out of the ground. The financial cost of extraction, though, is a proxy for the energy cost of extraction—the amount of energy, and of the products of energy, that had to be thrown into the task of getting a little extra oil out of marginal source rock.

Energy needed to extract energy, again, can’t be used for any other purpose. It doesn’t contribute to the energy surplus that makes economic development possible. As the energy industry itself takes a bigger bite out of each year’s energy production, every other economic activity loses part of the fuel that makes it run. That, in turn, is the core reason why the American economy is on the ropes, America’s infrastructure is falling to bits—and Americans in Detroit and Baltimore are facing a transition to Third World conditions, without electricity or running water.
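The squeeze described above is often quantified as EROEI, energy returned on energy invested. A minimal sketch follows; the ratios assigned to each label are illustrative assumptions, not measured values for any real oilfield.

```python
# Net energy left for the rest of the economy as a function of EROEI
# (energy returned on energy invested). The ratios below are
# ILLUSTRATIVE assumptions, not measured values for any real field.
def net_energy_fraction(eroei: float) -> float:
    """Fraction of gross energy output left over after paying the
    energy cost of extraction and processing."""
    return 1.0 - 1.0 / eroei

for label, eroei in [("easy conventional oil", 30.0),
                     ("late-stage conventional", 10.0),
                     ("tight (shale) oil", 5.0)]:
    print(f"{label}: {net_energy_fraction(eroei):.0%} of gross output is surplus")
```

The nonlinearity is the point: dropping from an EROEI of 30 to 10 barely dents the surplus, but each further step down takes a rapidly growing bite out of the energy available for everything else.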

I suspect, for what it’s worth, that the shutoff notices being mailed to tens of thousands of poor families in those two cities are a good working model for the way that industrial civilization itself will wind down. It won’t be sudden; for decades to come, there will still be people who have access to what Americans today consider the ordinary necessities and comforts of everyday life; there will just be fewer of them each year. Outside that narrowing circle, the number of economic nonpersons will grow steadily, one shutoff notice at a time.

As I’ve pointed out in previous posts, the line of fracture between the senile elite and what Arnold Toynbee called the internal proletariat—the people who live within a failing civilization’s borders but receive essentially none of its benefits—eventually opens into a chasm that swallows what’s left of the civilization. Sometimes the tectonic processes that pull the chasm open are hard to miss, but there are times when they’re a good deal more difficult to sense in action, and this is one of those latter times. Listen to the whisper of the shutoff valve, and you’ll hear tens of thousands of Americans being cut off from basic services the rest of us, for the time being, still take for granted.