Psychological barriers affect our reaction to climate change

Change is hard. Threats that seem nebulous are difficult to react to in ways that radically alter our lives.

I think most people would recognize that climate change data is complex and not always easy to wrap one’s head around. At the same time, we keep finding out that things are changing in ways that will likely have major effects on the Earth and its inhabitants. What to do?

The American Psychological Association (APA) has studied the psychological factors that have an effect on our responses to the threat of climate change.

Despite warnings from scientists and environmental experts that limiting the effects of climate change means humans need to make some severe changes now, people don’t feel a sense of urgency.

[From Psychological factors help explain slow reaction to global warming, says APA task force]

Not really a surprise here. Scientists and government leaders who insist we need to change our habits, consumer-oriented culture, lifestyle and priorities are not generally well-received. Why should we trust sources that haven’t always been accurate in assessing risks and predicting specific outcomes? And why should we upend our lives so drastically if there might not actually be a climate change crisis coming down the pipe? (For the sake of argument, ignore the compelling evidence, data trends and scientific consensus – in fact, the APA has shown that our brains often deal with the climate change threat in exactly this way, by undervaluing risks and/or moving into “denial” mode.)

In my view, the most important contribution of the study is in answer to the question, “How can psychologists assist in limiting climate change?”

Most people hate the idea of being manipulated, so think incentives instead of manipulation: economic incentives, strong social marketing, and so on.

The full report (pdf format) can be found at http://www.apa.org/releases/climate-change.pdf

The myths of “zero emission”

The Nissan Leaf has just been announced. While this electric vehicle is an extremely welcome step toward reducing our dependence on vehicles powered directly by fossil fuels, it is far too easy for us to ignore the complex web of energy and resources required to manufacture, transport and use the Leaf. We do the same thing with other items we consume, whether it be disposable shavers or a sophisticated piece of electronics. In each case, energy and resources are consumed at every stage: ores are mined and refined into metals, fossil fuels are extracted, petrochemicals and other materials are produced and manufactured into parts, transportation is required at each and every step, and the list goes on…

For the Leaf, not only do a lot of energy and resources go into the manufacture of each electric vehicle, but it doesn’t end there. The energy used to power it isn’t “zero.” Yes, zero at the tailpipe, but the electric grid still has to supply the energy (from fossil fuels, nuclear, hydroelectric, wind, solar, etc…). And the batteries used to store the energy are not easily disposed of (like any other batteries we use on a much smaller scale, they have to be treated as hazardous waste). And after 10 years or more, the vehicle is finished, and on we go to better technology and fancier bells and whistles.
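The “zero at the tailpipe, not zero overall” point can be made concrete with some back-of-the-envelope arithmetic. The sketch below uses entirely hypothetical placeholder figures (the efficiency value and the grid intensities are illustrative assumptions, not measured data for the Leaf or any real grid); the point is only that upstream CO2 per kilometre scales with the carbon intensity of whatever grid charges the battery.

```python
# Illustrative sketch: an EV's upstream CO2 depends on the grid mix.
# All numbers below are hypothetical placeholders, not real data.

def upstream_gco2_per_km(grid_gco2_per_kwh, kwh_per_km=0.2):
    """Grams of upstream CO2 per km driven, given grid carbon
    intensity (gCO2/kWh) and vehicle efficiency (kWh/km)."""
    return grid_gco2_per_kwh * kwh_per_km

# Hypothetical grid intensities (gCO2/kWh) for comparison:
grids = {"coal-heavy": 900, "mixed": 450, "mostly hydro": 30}
for name, intensity in grids.items():
    print(f"{name}: {upstream_gco2_per_km(intensity):.0f} gCO2/km")
```

Same car, same tailpipe, wildly different real-world footprint depending on what’s at the other end of the charging cable.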

This isn’t to take the wind from Nissan’s sails, but the problem of energy and resource consumption and waste on our planet is systemic. Our collective aspirations and our technological world combine to create a “consumption spiral.” We are gradually becoming more aware of the problem, and many of us have accepted that the path we find ourselves on is unsustainable. We cannot keep on using up finite energy and other non-renewable resources. These will diminish, and then there will be no more. Buying into the concept that electric vehicles (or recycling newspapers or changing our light bulbs or using extra home insulation) are going to solve the problem is an easy way to sidestep the reality and perhaps delay the inevitable just a little bit.

The truth is that if we are to stay on this planet, what is required is total waste reclamation and zero emissions from ALL sources, not just those we see at the final product.

That being said, “way to go Nissan!!” Now, what’s the next step?

Robots that kill…

Science fiction writers have known for decades that this day would eventually arrive. The militarization of robotics has been advancing at a quickening pace. Artificial intelligence can now think, reason, react, and act decisively to seek and destroy based on pre-programmed parameters.

“Robots that can decide where to kill, who to kill and when to kill is high on all the military agendas,” Professor Sharkey said at a meeting in London.

[From BBC NEWS | Technology | Call for debate on killer robots]

According to Noel Sharkey, a University of Sheffield professor of artificial intelligence and robotics, the main problems are that these military “drones” have trouble with “friend vs. foe” distinctions, and they can’t deal with the concept of “proportionality”, the determination of the amount of force that is prudent and necessary to gain the required military advantage. Until recently, these issues have not been on the radar of nations with this military capability. The result was a certain amount of collateral damage. Think Pakistan and the U.S. military drones.

Some scientists have suggested adapting Isaac Asimov’s Three Laws of Robotics from his fiction to the current realities on Earth.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
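The three laws form a strict precedence hierarchy: each law yields to the ones above it. As a minimal sketch (the function, its action fields, and the scenario below are hypothetical illustrations, not any real robotics system), the ordering might look like this:

```python
# Hypothetical sketch of the strict precedence in Asimov's Three Laws:
# each law applies only insofar as it doesn't conflict with a higher law.

def permitted(action):
    """Return True if an action is allowed under the Three Laws.

    `action` is a dict of hypothetical boolean fields describing the
    action's predicted consequences."""
    # First Law: never harm a human, by action or inaction.
    if action["harms_human"]:
        return False
    # Second Law: obey human orders, unless obeying would violate
    # the First Law (i.e. the order itself would harm a human).
    if action["disobeys_order"] and not action["order_would_harm_human"]:
        return False
    # Third Law: self-preservation is subordinate to the first two,
    # so an action sacrificing the robot for them is still permitted.
    return True

# A "seek and destroy" order fails at the First Law outright:
strike = {"harms_human": True, "disobeys_order": False,
          "order_would_harm_human": False}
print(permitted(strike))  # False
```

Which illustrates the problem neatly: any rule set with the First Law at the top vetoes the military mission before the question of orders even comes up.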

Of course, these laws preclude most military applications. Wartime and counter-insurgency use would have to be seriously curtailed and reworked to fit within them. Perhaps a focus on military intelligence? Or might even that contravene the First and Second Laws by aiding and abetting humans bent on the kill?

I guess the alternative is to change nothing in how we approach the militarization of artificial intelligence… then perhaps Battlestar Galactica and the Cylons will not be that far off.