30 April 2012

More on Levelized Tuition

Writing in the Boulder Daily Camera, Bob Greenlee, a local columnist, finds the idea of a levelized tuition model appealing:
All state universities are proud icons representing the faith, hope, and future economic health of local taxpayers whose funds make state institutions possible. The level of state funding in Colorado has declined over the years to the extent that one can question whether or not CU is state supported or merely state named. The declining level of financial support may not be keeping pace with the growing needs of this important institution that is relying more on tuition hikes to keep the lights on and retain the talent required to maintain basic academic standards. For both parents and students the rising tuition costs are becoming a burden with studies indicating the amount of debt carried by students who mortgage their future exceeds the total amount of all credit card debt held by American consumers.

A number of intriguing observations about these issues have emerged from Roger Pielke, Jr., professor of environmental studies at CU-Boulder. Last year Pielke wrote an article in the Chronicle of Higher Education attempting to gain support for completely revamping how tuition is charged. Tuition for in-state CU students currently runs around $7,700 a year. Out-of-staters pay nearly four times as much. Pielke notes the financial viability of CU depends on "securing a large proportion of non-residents (that) creates incentives to favor their admission." He notes that two-thirds of all tuition income comes from a third of those attending the university and questions why there should be a distinction because the economic benefits that accrue for someone obtaining a college education are universal. Perhaps, he argues, a flat tuition of around $14,000 should apply to everyone and rather than state higher education funds going directly to the university he proposes Colorado should provide a direct subsidy to resident students.
I'm glad that he put that last bit in. The notion of a levelized tuition does not mean eliminating the state subsidy for in-state students. Nor is it about increasing tuition. It is about adopting a model for financing a state flagship university that aligns incentives with costs, and works to elevate quality of instruction, facilities, faculty and students.
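As a rough check on the arithmetic behind the proposal, here is a back-of-envelope sketch in Python using hypothetical round numbers based on the figures quoted above (roughly $7,700 in-state tuition, out-of-state at about four times that, and one-third of students paying non-resident rates), not actual CU budget data:

    # Back-of-envelope check of the levelized tuition arithmetic.
    # All numbers are hypothetical round figures based on those quoted above.

    in_state_tuition = 7_700                      # dollars per year, approximate
    out_of_state_tuition = 4 * in_state_tuition   # "nearly four times as much"

    resident_share = 2 / 3        # share of students paying in-state rates
    nonresident_share = 1 / 3     # share paying out-of-state rates

    # Revenue-neutral flat tuition: the enrollment-weighted average tuition
    flat_tuition = (resident_share * in_state_tuition
                    + nonresident_share * out_of_state_tuition)

    # Share of total tuition income contributed by non-residents
    nonresident_revenue = nonresident_share * out_of_state_tuition / flat_tuition

    print(f"Revenue-neutral flat tuition: ${flat_tuition:,.0f}")          # about $15,400
    print(f"Tuition income from non-residents: {nonresident_revenue:.0%}")  # about 67%

Under those assumptions the revenue-neutral flat rate lands around $15,000, in the neighborhood of the figure Greenlee cites, and non-residents do indeed supply roughly two-thirds of tuition income.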

For those wanting a bit more background, here is a link to my original essay in the Chronicle of Higher Education.   I also discussed this proposal on this blog here and here and here and here and here and here.

28 April 2012

Reader Mail: Win Some, Lose Some

I received the thoughtful comment below from a reader down under about my ANU talk on The Climate Fix. At almost the exact same time, I received an email from another reader who explained that he recommended the talk to two atmospheric scientists at a US state department of environmental protection, who refused to watch it because I do not "acknowledge atmospheric science data" -- whatever that means;-) As a policy scholar you quickly learn that some people are willing to engage and others are not (and it can be surprising who falls into which category). Win some, lose some!
Reader mail

Thank you for this lecture, it’s a breath of fresh air in what I feel is an increasingly putrid political atmosphere with regards to climate change in Australia (no lame puns intended).

For example, just a few days ago, the ABC here devoted 2 hours of airtime (a 1 hour documentary followed by a 1 hour talk show discussion) to a tiresome ‘debate’ between a climate-believer and climate-sceptic (called “Can I change your mind about climate?” http://www.abc.net.au/tv/changeyourmind/), with all of the predictable and useless shenanigans resulting. Thankfully, your lecture undermines the pure silliness of such a question in the space of a few seconds. If only all of the people who wasted those 2 hours could’ve watched your 1 hour lecture instead!

Your refreshingly straightforward use of measuring sticks got me thinking about how the issue of decarbonising the economy is communicated by some people in Australia. It occurred to me that there may well be people smart enough to comprehend the enormous engineering scale of the challenge, but have found clever ways to disguise its magnitude by saying, for example, that Australia could decarbonise its economy by simply building a big 50km x 50km solar panel. Considering the enormous arid, sunny expanses of Australia, this figure can actually come across as underwhelming (!), as mentioned in this government report by ABARES (the Australian Bureau of Agriculture and Resource Economics) – http://adl.brs.gov.au/data/warehouse/pe_aera_d9aae_002/aeraCh_10.pdf (part 10.3.1, page 268), and in this presentation by a Melbourne urban planner, Rob Adams, who is talking about future energy use in Australia (though he misquotes the 50km x 50km, i.e. 2500 sq. km figure as 50 sq. km) - http://www.youtube.com/watch?v=ZYJpdH-VGwc (15:44 in).

I did some simple maths from your presentation, which I think produces a comparable figure to the one mentioned in the ABARES report and Adams' presentation (assuming that each Cloncurry solar farm is roughly a hectare, 100m x 100m, in area – though this may well be an underestimate):

25% decarbonisation = 50 776 solar farms

100% decarbonisation = 203 104 solar farms = 450 x 450 solar farms approx.

(450 solar farms x 100m) x (450 solar farms x 100m) = 45km x 45km (rounded up to 50km x 50km)

Undoubtedly, it is more appealing to say that Australia’s carbon-free future lies in building one big 50km x 50km solar panel somewhere in the middle of the desert, rather than saying that Australia’s carbon-free future lies in building more than 200 000 solar farms. This is similar to the example you give of the hard imagery of dozens of nuclear plants versus the soft imagery of clean, green initiatives in the UK – a case of same difference.

Do you think this communication barrier can ever be overcome, when it seems that communicators themselves have figured out ways to massage the use of measuring sticks to make their messages more palatable? Or are we doomed to be stuck in a situation where hard, objective realities are obfuscated by people projecting their own mentally pleasing imagery onto them? I think it's a fascinating question that goes to the heart of the efficacy of political communication.
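For what it is worth, the reader's back-of-envelope arithmetic checks out. Here is a minimal sketch in Python under the same assumption the reader makes, namely that each Cloncurry-style solar farm occupies roughly one hectare (100m x 100m):

    import math

    # Minimal check of the reader's arithmetic; the farm size is the
    # reader's assumption (roughly one hectare, 100 m x 100 m).
    farms_needed = 203_104      # farms for 100% decarbonisation, per the reader
    farm_side_m = 100           # assumed side length of one solar farm, in metres

    farms_per_side = math.ceil(math.sqrt(farms_needed))          # about 451
    footprint_side_km = farms_per_side * farm_side_m / 1000.0    # about 45 km

    print(f"Square grid: roughly {farms_per_side} x {farms_per_side} farms")
    print(f"Total footprint: roughly {footprint_side_km:.0f} km x {footprint_side_km:.0f} km")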

25 April 2012

Questionable Research Practices: The "Steroids of Scientific Competition"

A new research paper just out in the journal Psychological Science by John et al. seeks to quantify the incidence of what are called "questionable research practices" in psychological research.  They write:
Although cases of overt scientific misconduct have received significant media attention recently (Altman, 2006; Deer, 2011; Steneck, 2002, 2006), exploitation of the gray area of acceptable practice is certainly much more prevalent, and may be more damaging to the academic enterprise in the long run, than outright fraud. Questionable research practices (QRPs), such as excluding data points on the basis of post hoc criteria, can spuriously increase the likelihood of finding evidence in support of a hypothesis. Just how dramatic these effects can be was demonstrated by Simmons, Nelson, and Simonsohn (2011) in a series of experiments and simulations that showed how greatly QRPs increase the likelihood of finding support for a false hypothesis. QRPs are the steroids of scientific competition, artificially enhancing performance and producing a kind of arms race in which researchers who strictly play by the rules are at a competitive disadvantage. QRPs, by nature of the very fact that they are often questionable as opposed to blatantly improper, also offer considerable latitude for rationalization and self-deception.
John et al. used multiple methods to assess the prevalence of questionable research practices among psychology researchers.  They found a surprisingly high prevalence of such practices in their study.

They note that some questionable research practices are, indeed, merely questionable rather than clearly improper, but that the researchers surveyed also found many of the practices to be unjustifiable:
As noted in the introduction, there is a large gray area of acceptable practices. Although falsifying data (Item 10 in our study) is never justified, the same cannot be said for all of the items on our survey; for example, failing to report all of a study’s dependent measures (Item 1) could be appropriate if two measures of the same construct show the same significant pattern of results but cannot be easily combined into one measure. Therefore, not all self-admissions represent scientific felonies, or even misdemeanors; some respondents provided perfectly defensible reasons for engaging in the behaviors. Yet other respondents provided justifications that, although self categorized as defensible, were contentious (e.g., dropping dependent measures inconsistent with the hypothesis because doing so enabled a more coherent story to be told and thus increased the likelihood of publication). It is worth noting, however, that in the follow-up survey—in which participants rated the behaviors regardless of personal engagement—the defensibility ratings were low. This suggests that the general sentiment is that these behaviors are unjustifiable.
Even so, there are incentives in the research world to engage in questionable research practices:
We assume that the vast majority of researchers are sincerely motivated to conduct sound scientific research. Furthermore, most of the respondents in our study believed in the integrity of their own research and judged practices they had engaged in to be acceptable. However, given publication pressures and professional ambitions, the inherent ambiguity of the defensibility of “questionable” research practices, and the well-documented ubiquity of motivated reasoning (Kunda, 1990), researchers may not be in the best position to judge the defensibility of their own behavior. This could in part explain why the most egregious practices in our survey (e.g., falsifying data) appear to be less common than the relatively less questionable ones (e.g., failing to report all of a study’s conditions). It is easier to generate a post hoc explanation to justify removing nuisance data points than it is to justify outright data falsification, even though both practices produce similar consequences.
The authors suggest that the prevalence of questionable research practices may help to explain the finding that many studies cannot be replicated:
QRPs can waste researchers’ time and stall scientific progress, as researchers fruitlessly pursue extensions of effects that are not real and hence cannot be replicated. More generally, the prevalence of QRPs raises questions about the credibility of research findings and threatens research integrity by producing unrealistically elegant results that may be difficult to match without engaging in such practices oneself. This can lead to a “race to the bottom,” with questionable research begetting even more questionable research. If reforms would effectively reduce the prevalence of QRPs, they not only would bolster scientific integrity but also could reduce the pressure on researchers to produce unrealistically elegant results.
I think I am on safe ground when I say that the problem of questionable research practices goes well beyond the discipline of psychology.

23 April 2012

Pushing Back on Extreme Nonsense

UPDATE 4/25: Mike Wallace sends the following comment in by email:
The quote from my article shouldn't be interpreted as indicating that I'm not concerned about human-induced global warming. To put it in perspective, please read on. You can find the entire article on the LA Times web site.
Please do read the whole thing, and Mass's post as well.

Cliff Mass and Mike Wallace at the University of Washington have expressed some thoughts on the hype associated with climate change and extreme events.

Mass writes on his blog:
It is happening frequently lately.  A major weather event occurs---perhaps a hurricane, heat wave, tornado outbreak, drought or snowstorm-- and a chorus of activist groups or media folks either imply or explicitly suggest that the event is the result of human-caused (anthropogenic) global warming.  Perhaps the worst offender is the organization www.350.org and their spokesman Bill McKibben.  Close behind is Climate Central, which even has an extreme weather/climate blog.  The media has noted many times that the U.S. in 2011 experienced a record 14 billion-dollar weather disasters--and many of the articles imply or suggest a connection with human-forced global warming.  Even the NY Times has jumped into the fray recently, giving front-page coverage of an unscientific survey that found that a large majority of Americans believe recent extreme weather events are the result of anthropogenic global warming. One does not have to wonder very hard about where Americans are getting their opinions--and it is not from the scientific community.
He explains:
It is somewhat embarrassing for me to admit this, but part of the problem is that a small minority of my colleagues--people who should know better-- are feeding the extreme-weather/climate hype in the mistaken belief that by doing so they can encourage people to do the right thing--lessen their carbon footprint.
Writing in the LA Times yesterday, Mike Wallace takes issue with the cavalier linkage of the March heat wave to human-caused climate change, reminding us that climate is complex:
The cause of last month's strange weather was an extraordinarily large and persistent meander of the jet stream that swept tropical air, with temperatures reaching into the 80s as far north as southern Canada.

Likening today's climate system to a muscle-bound, drugged athlete performing feats far beyond the capabilities of straight athletes would be appropriate if the extreme and persistent distortions of the jet stream we saw in March could be demonstrated to have been caused by global warming.

But let's remember where the burden of proof lies. In the world of sports, when an athlete is accused of relying on performance-enhancing drugs, it is the prosecutor who must prove the case. The same should apply to claims that the behavior of the jet stream is being profoundly altered by global warming. Thus far, such assertions are not well supported by scientific evidence.

In the absence of proof that the jet stream's variability is human-induced, we must consider the possibility that the apparent weirdness of the weather in March isn't all that weird if viewed in a larger historical context. In this respect, it's noteworthy that large areas of the U.S. were just about as warm in March 1910 as they were in March 2012. With weather, weird things happen every now and again.

Fortunately, the flora and fauna and the human inhabitants of temperate latitudes are accustomed to dealing with huge swings in wintertime temperatures, and so most of the effects of March Madness will be short-lived.
Over the long term I have every confidence that scientific questions will be resolved using the tools of science. In the meantime, it sure is nice to see these prominent scientists standing up for the integrity of their field, even if it means sticking their necks out and risking criticism from a few overly enthusiastic scientists and reporters.

20 April 2012

Quote of the Day

Courtesy John Kay:
“How convenient it is to be a reasonable creature, since it enables one to make or find a reason for whatever one has a mind to do.”

Benjamin Franklin

19 April 2012

German Mittelstand: One of a Kind?

Above is a feature from The Economist on Germany's Mittelstand manufacturing companies. It accompanies a valuable briefing on Germany's economy. Here is an excerpt:
Many Mittelstand firms are oligopolists, argues Mr Schmiedeberg, occupying niches so narrow that they attract few rivals. Increasingly, the niches are being defended with services, in this context not the term of derision it often is in manufacturing circles. Beckhoff builds its own sales and maintenance networks, relying little on dealers—unlike some of its non-German competitors.

The next stage is “hybrid value-added”, in which the product is an outcome that the customer wants rather than the good that produces it. Wolf Heiztechnik of Bavaria is developing a contract under which it sells temperature control rather than heating equipment. “Every Chinese firm can do the industrial part, not the whole hybrid,” says Karl Lichtblau of IW Consult, a consultancy. Counting industry-related services, he reckons, manufacturing’s share of GDP is more like 30% than 20%.
The German experience, The Economist argues, may not be exportable. The whole briefing is worth a read.

18 April 2012

The Climate Fix Lecture - With Slides


Thanks to a very helpful reader (Thanks Richard!) and the ANU Crawford School, the video of my recent lecture in Canberra now has the PowerPoint slides integrated. It is above. Enjoy!

Who Cares What the Science Says?

The latest NYT story on extremes and climate change celebrates the fact that many Americans fail to understand how human-caused climate change may be related to recent extreme events. Today's NYT reports a new poll that indicates that a large portion of the public believes that specific, recent events can be attributed to greenhouse gas emissions.

Yet, rather than citing recent research on the topic -- such as the IPCC SREX report -- the NYT decides to cheer about the public misunderstanding and speculate on its possible political usefulness:
Read together, the polls suggest that direct experience of erratic weather may be convincing some people that the problem is no longer just a vague and distant threat.
Ends justify the means -- This reminds me of Dick Cheney's comments about connections between Al Qaeda and Saddam Hussein.  It is the political outcome that matters, no?

The poll reported by the NYT actually reveals nothing new; the public has long believed (for decades and even centuries, actually; see Stehr and von Storch, PDF) that the human impact on weather is much greater than the science shows.

Here is an excerpt from The Climate Fix where I discuss this very issue:
In some respects, the campaign to convince people that climate change is a threat may have been too successful, such that people have come to believe things that the science cannot support. For instance, a 2007 New York Times/CBS Poll found that of the three- quarters of people who believed that weather over the past few years had been stranger than normal, 43 percent attributed that weather to “global warming” and a further 15 percent to “pollution/damage to the environment.” Yet, as most scientists will explain, weather events and even climate patterns over a period of years simply cannot be attributed to greenhouse gas emissions. Detecting changes in climate requires decades of observations. A very cold winter or two does not disprove a decades-long warming trend, and a series of damaging hurricanes is not evidence of a human influence.

Some advocates, including some scientists, seek to have things both ways when they assert that a particular weather event is “consistent with” predictions of human-caused climate change. The snowy period of early 2010 along the U.S. East Coast saw those opposed to action suggesting that the record snow and cold cast doubt on the science of human-caused climate change, while at the same time those calling for action explained that the weather was “consistent with” the forecasts from climate models. Both lines of argument were misleading. Any and all weather is “consistent with” predictions from climate models under a human influence on the climate system. Similarly, any and all weather is also “consistent with” failing predictions of long-term climate change. Simply put, weather is not climate. Given the degree of politicization of the climate debate, we should not be surprised that even the weather gets politicized.

By the same token, it should come as no surprise that many in the public hold views about climate science that are way out in front of the scientific consensus on climate change as represented by the reports of the IPCC. The result is that when people learn what the science actually says, there is a risk that they will learn that their views are in fact incompatible with what the science can support, leading to a belief that the science has been overstated in public debate.

16 April 2012

Honest Brokering and Biosecurity Advice

Over the weekend Nature reported that Michael Osterholm, a member of the US government's National Science Advisory Board for Biosecurity, has accused the Board and its staff of biasing its advisory process in a recent high profile case. The situation raises important questions about expert advice and how it is structured to ensure quality, authority and legitimacy all at the same time.

Nature writes:
A closed meeting, convened last month by the US Government to decide the fate of two controversial unpublished papers on the H5N1 avian influenza virus was stacked in favour of their full publication, a participant now says. Michael Osterholm, who heads the University of Minnesota’s Center for Infectious Disease Research and Policy in Minneapolis, is a member of the National Science Advisory Board for Biosecurity (NSABB), which was tasked with evaluating the research. In a letter to Amy Patterson, associate director for science policy at the National Institutes of Health in Bethesda, Maryland, and sent to other members of the NSABB, Osterholm writes that the meeting agenda and presenters were “designed to produce the outcome that occurred“. The letter was leaked to Nature by an anonymous source.
Nature also put the letter online (here as DOC). Here is a passage from that letter which details Osterholm's concerns:
I believe that the agenda and speakers for the March 29 and 30th NSABB meeting as determined by the OBA staff and other USG officials was designed to produce the outcome that occurred. It represented a very “one sided” picture of the risk-benefit of the dissemination of the information in these manuscripts. The agenda was not designed to promote a balanced reconsideration of the manuscripts. While I don’t suggest that there was a sinister motive by the USG with regard to either the agenda or invited speakers, I believe there was a bias toward finding a solution that was a lot less about a robust science- and policy-based risk-benefit analysis and more about how to get us out of this difficult situation. I also believe that this same approach in the future will mean all of us, including life science researchers, journal editors and government policy makers, will just continue to “kick the can down the road” without coming to grips with the very difficult task of managing DURC and the dissemination of potentially harmful information to those who might intentionally or unintentionally use that information in a way that risks public safety. Merely providing a “minority report” in the final findings and recommendations of the meeting does nothing to address the fundamental issues of how the risk and benefits were determined, described, and considered at the meeting.
Let's see if we can disentangle some of the issues here. First, what kind of science advisory body is the NSABB?

Even though it has the phrase "science advisory" in its title, the Board is more accurately described as a "policy advisory" body. The Board is not focused on rendering expert judgments on scientific questions -- what I have called "science arbitration" -- but rather it is tasked with a much broader mission of making recommendations on action.

Here is how the NSABB charter describes its functions:
The NSABB is a federal advisory committee chartered to provide advice, guidance, and leadership regarding biosecurity oversight of dual use research, defined as biological research with legitimate scientific purpose that may be misused to pose a biologic threat to public health and/or national security.

The NSABB is charged specifically to:
  • Recommend strategies and guidance for enhancing personnel reliability among individuals with access to biological select agents and toxins.
  • Provide recommendations on the development of programs for outreach, education and training in dual use research issues for scientists, laboratory workers, students and trainees in relevant disciplines.
  • Advise on policies governing publication, public communication, and dissemination of dual use research methodologies and results.
  • Recommend strategies for fostering international engagement on dual use biological research issues.
  • Advise on the development, utilization and promotion of codes of conduct to interdisciplinary life scientists, and relevant professional groups.
  • Advise on policies regarding the conduct, communication, and oversight of dual use research and results, as requested.
  • Advise on the Federal Select Agent Program, as requested.
  • Address any other issues as directed by the Secretary of HHS.
As a body that makes policy recommendations, the committee can go one of two ways in offering advice.  One way would be to make recommendations that describe a preferred course of action -- that is, to advocate that a particular decision be made.  Where there is disagreement within the committee, a minority position might be included in a report. However, there is no commitment to surveying or evaluating a wide range of options.

A second way to handle policy advice would be not to recommend a particular course of action but to lay out the various options available for action, along with the risks and benefits associated with each fork in the road -- the honest brokering of policy alternatives. Here there would not be a single course of action favored, and thus there would be no need for majority or minority views. The key criteria for a report produced by such a process would be whether the scope of choice is adequately represented and whether each alternative has been fairly and comprehensively evaluated in terms of risks and benefits.

In his letter to the NSABB, Osterholm suggests that the US government, the entity to which the NSABB provides advice, had an interest in a particular outcome from the advisory process:
It has been two weeks since the NSABB meeting of March 29-30 where the Board was requested by the USG to reconsider our previous decision recommending the redaction of both the above referenced manuscripts before publication.
In such a situation -- characterized by a controversial issue in which the decision maker requesting advice has a vested interest -- the integrity of the advisory process is protected by approaching advice from the standpoint of an honest broker. This serves several functions.
  • Clearly delineates advice from decision
  • Places responsibility for the decision with decision makers
  • Allows for a full consideration of all sides of risks and benefits associated with different courses of action
  • Brings controversy, uncertainty, ignorance and values out into the open
  • Protects the advisory body from charges of bias in its deliberations 
In his letter, Osterholm provides a compelling list of reasons and justifications explaining why he thinks that the full range of risks and benefits was not explored by the NSABB.

For his part, Osterholm expresses a desire to have had a "disinterested subject-matter expert" involved in the process, based on his concern about the lack of diversity of perspective on the committee with respect to the subject under consideration. He explains:
The subject matter experts that addressed this issue at the meeting have a real conflict of interest in that their laboratories are involved in this same type of work and the results of our deliberations directly affect them, too. The same can be said about the attendees and outcome of the February World Health Organization consultation. In short, it was the “involved influenza research community” telling us what they should and shouldn’t be allowed to do based on their interested perspective. Such a perspective is very important and should be included in this discussion, but it shouldn’t be the only voice.
While looking for "disinterested" experts makes sense in such a situation, that alone cannot address the issues that Osterholm raises.

In this case there are two routes that the Board can follow. One is to explicitly take on the role of a partisan in the advisory process, making its best case for a particular course of action. This provides authority and cover for decisions, but leaves the advisory process vulnerable to legitimate complaints about procedure and substance.

A second route would be for the Board to explicitly consider a range of decision alternatives and then carefully evaluate the risks and benefits associated with each course of action. If the Board does not have experts or advocates representing each of these perspectives, then it should seek to have them represented in some manner.

At a minimum, bodies like the NSABB that are expected to provide policy advice should explicitly discuss and clarify how exactly they are functioning. Do they seek to reduce choice or to expand it? They can't do both at once, and leaving the issue murky can lead to exactly the situation that the NSABB finds itself in today.

15 April 2012

Upcoming Lecture in Berlin

On Tuesday evening I'll be speaking at the Berlin-Brandenburgische Akademie der Wissenschaften (Berlin-Brandenburg Academy of Sciences and Humanities, lecture at 18:00 in central Berlin, directions etc. here).
Lessons from 50 Years of Science Advice to the US President

More than ever, decision making in governments around the world depends upon expert advice. In areas such as energy policy, agricultural production, climate change and even economics, health and the military, policy makers depend upon experts to inform policy making. At the same time, many of these same issues are debated among the public and in the media, often passionately and politically. How might modern governments best utilize expert advice in policy making while at the same time respecting the authority of democratic processes?  This talk will draw on more than 50 years of experience of science advice to the US president to illustrate the challenges and opportunities for the effective use of experts in democratic decision making. Policy makers and experts each face important choices in how they relate to one another, with effective policy and politics the ultimate stakes.
It is a completely new lecture, and will be completed well before Bayern-Madrid kicks off (trust me), so if you are in the area please do stop by and say Hallo.

Details here.

12 April 2012

My Bridges Columns All in One Place

Thanks to Ami Nacu-Schmidt we have a webpage that collects all of my Bridges columns over the past 6 years in one convenient place (Thanks Ami!). My latest will be out any day and is on the debate over the importance of American manufacturing.

Meantime, here are my past columns:
Innovation Policy Lessons of the Vasa (Vol. 32, December 16, 2011)
PDF | Website | MP3 download
Lessons of the L'Aquila Lawsuit (Vol. 31, October 24, 2011)
PDF | Website | MP3 download
The Policy Advisor’s Dilemma (Vol. 30, July 20, 2011)
PDF | Website | MP3 download
Democracy’s Open Secret (Vol. 29, April 18, 2011)
PDF | Website | MP3 download
Beyond the Annual Climate Confab (Vol. 28, December 21, 2010)
PDF | Website | MP3 download  
Success is not Guaranteed (Vol. 27, October 19, 2010)
PDF | Website | MP3 download
Sport: An Academic’s Perfect Laboratory (Vol. 26, July 14, 2010)
PDF | Website | MP3 download
Inside the Black Box of Science Advisory Committee Empanelment (Vol. 25, April 21, 2010)
PDF | Website | MP3 download
Building Bridges between Europe and North America in Science Policy (Vol. 24, December 21, 2009)
PDF | Website
Understanding the Copenhagen Climate Deal: The Fix is In (Vol. 23, October 15, 2009)
PDF | Website | MP3 download
First Reflections from a Workshop on Science Policy Research and Science Policy Decisions (Vol. 22, July 17, 2009)
PDF | Website | MP3 download
Obama's Climate Policy: A Work in Progress (Vol. 21, April 10, 2009)
PDF | Website | MP3 download
An Interview with John H. Marburger, Outgoing US President's Science Advisor (Vol. 20, December 22, 2008)
PDF | Website | MP3 download
The Role of Risk Models in the Financial Crisis (Vol. 19, October 16, 2008)
PDF | Website | MP3 download
Has Technology Assessment Kept Pace with Globalization? (Vol. 18, July 1, 2008)
PDF | Website | MP3 download
Blinded by Assumptions (Vol. 17, April 28, 2008)
PDF | Website | MP3 download
Technology Assessment and Globalization (Vol. 16, December 2007)
PDF | Website | MP3 download
Late Action by Lame Ducks (Vol. 15, September 28, 2007)
PDF | Website | MP3 download
From "Is it True?" to "So What?" (Vol. 14, July 12, 2007)
PDF | Website | MP3 download
The Honest Broker (Vol. 13, April 16, 2007)
PDF | Website | MP3 download
The 2006 US Midterm Elections and Science and Technology Policy (Vol. 12, December 2006)
PDF | Website | MP3 download
Self-Segregation of Scientists by Political Predispositions (Vol. 11, September 2006)
PDF | Website | MP3 download
How to Break Up NASA (Vol. 10, June 29, 2006)
PDF | Website | MP3 download
Science Policy Without Science Policy Research (Vol. 9, April 19, 2006)
PDF | Website | MP3 download
The Role of Science Studies in Science Policy (Vol. 8, December 6, 2005)
PDF | Website
Making Sense of Trends in Disaster Losses (Vol. 7, September 20, 2005)
PDF | Website
Science Academies as Political Advocates (Vol. 6, July 13, 2005)
PDF | Website

10 April 2012

Slides from my "Wag the Dog" Talk

Due to a large number of requests I am posting up the slides from my talk yesterday - Wag the Dog (here in PDF). Since my commentary doesn't accompany the slides, the slides alone might be unclear or confusing, so please use the comments to ask questions.

Also, do have a look at these interesting comments over at Dot Earth, especially from Marty Hoerling (NOAA) and Mike Wallace (U of W).  Here is Wallace:
By exaggerating the influence of climate change on today’s weather and climate-related extreme events, a part of our community is painting itself into a rhetorical corner...

I’ve become convinced that many of the editors of the high impact journals are inclined to cast opinion pieces as salvos in the ongoing war between climate change believers and skeptics.
Such comments reinforce the optimistic tone of my talk ... though in the discussion that followed, some of my senior colleagues said that they were not so sanguine. Time will tell, and I'll watch with interest from the vantage point of a scholar studying sports governance ;-)

09 April 2012

Historical Global Tropical Cyclone Landfalls

Weinkle et al. 2012 is now online at the Journal of Climate. I provided a summary of the paper a few months ago when it was accepted, including these factoids:
  • Over 1970 to 2010 the globe averaged about 15 TC landfalls per year
  • Of those 15, about 5 are intense (Category 3, 4 or 5) 
  • 1971 had the most global landfalls with 32, far exceeding the second place, 25 in 1996
  • 1978 had the fewest with 7
  • 2011 tied for second place for the fewest global landfalls with 10 (and 3 were intense, tying 1973, 1981 and 2002)
  • 1999 had the most intense TC landfalls with 9
  • 1981 had the fewest intense TC landfalls with zero
  • There have been only 8 intense TC landfalls globally since 2008 (2009-2011), very quiet but not unprecedented (two unique 3-year periods saw only 7 intense landfalls)
  • The US is currently in the midst of the longest streak ever recorded without an intense hurricane landfall  
Here is the abstract:
Historical global tropical cyclone landfalls (PDF)

Jessica Weinkle, Ryan Maue and Roger Pielke, Jr.
Journal of Climate http://dx.doi.org/10.1175/JCLI-D-11-00719.1

In recent decades, economic damage from tropical cyclones (TCs) around the world has increased dramatically. Scientific literature published to date finds that the increase in losses can be explained entirely by societal changes (such as increasing wealth, structures, population, etc) in locations prone to tropical cyclone landfalls, rather than by changes in annual storm frequency or intensity. However, no homogenized dataset of global tropical cyclone landfalls has been created that might serve as a consistency check for such economic normalization studies. Using currently available historical TC best-track records, we have constructed a global database focused on hurricane-force strength landfalls. Our analysis does not indicate significant long-period global or individual basin trends in the frequency or intensity of landfalling TCs of minor or major hurricane strength. This evidence provides strong support for the conclusion that increasing damage around the world during the past several decades can be explained entirely by increasing wealth in locations prone to TC landfalls, which adds confidence to the fidelity of economic normalization analyses.
Enjoy!

Follow Up: Revisiting the 2010 IPCC Press Release on Economics of Disasters

As I prepared for the lunch seminar that I am giving later today, I had a chance to revisit the press release issued by the IPCC on January 25, 2010 in response to an article that appeared in the UK Sunday Times one day earlier detailing failures of the IPCC AR4 related to claims made about climate change and disasters.

The Sunday Times article was about how the 2007 IPCC AR4 mishandled the issue of the economic toll of disasters and climate change. With the advantage of hindsight, we can now see that the claims made in the Sunday Times article have been completely vindicated and the IPCC press release was full of misinformation (to put it kindly).  This post has the details.

The IPCC press release of 26 January 2010 started out as follows (PDF):
The January 24 Sunday Times ran a misleading and baseless [story] attacking the way the Fourth Assessment Report of the IPCC handled an important question concerning recent trends in economic losses from climate-related disasters
What did the Sunday Times article claim?
The United Nations climate science panel faces new controversy for wrongly linking global warming to an increase in the number and severity of natural disasters such as hurricanes and floods.

It based the claims on an unpublished report that had not been subjected to routine scientific scrutiny — and ignored warnings from scientific advisers that the evidence supporting the link was too weak. The report's own authors later withdrew the claim because they felt the evidence was not strong enough. . .

The new controversy also goes back to the IPCC's 2007 report in which a separate section warned that the world had "suffered rapidly rising costs due to extreme weather-related events since the 1970s".

It suggested a part of this increase was due to global warming and cited the unpublished report, saying: "One study has found that while the dominant signal remains that of the significant increases in the values of exposure at risk, once losses are normalised for exposure, there still remains an underlying rising trend."

The Sunday Times has since found that the scientific paper on which the IPCC based its claim had not been peer reviewed, nor published, at the time the climate body issued its report.

When the paper was eventually published, in 2008, it had a new caveat. It said: "We find insufficient evidence to claim a statistical relationship between global temperature increase and catastrophe losses."

Despite this change the IPCC did not issue a clarification ahead of the Copenhagen climate summit last month. It has also emerged that at least two scientific reviewers who checked drafts of the IPCC report urged greater caution in proposing a link between climate change and disaster impacts — but were ignored.
None of these claims is "misleading and baseless"; they are factually correct. (Note that the full text of the Times article can be found here.)

In its press release, the IPCC explained its position by re-asserting what was claimed in the report:
one study detected an increase in economic losses, corrected for values at risk, but that other studies have not detected such a trend
We now know that the "study" that was cited by the IPCC (a white paper from a workshop that I had organized) did not contain any analysis of trends. Instead, that paper was intentionally miscited by one of the chapter's authors to circumvent the deadline for inclusion of relevant publications. When the miscited paper actually did appear in the literature it said this:
“We find insufficient evidence to claim a statistical relationship between global temperature increase and normalized catastrophe losses.“
Thus, the paper that the IPCC wanted to cite did not say what was claimed; the claim instead rested on a graph specially invented for the report, which appears nowhere in the miscited paper. Further, the IPCC intentionally miscited the paper to get it into the report in the first place. Three bad moves.

The IPCC press release also said that
In writing, reviewing, and editing this section, IPCC procedures were carefully followed to produce the policy-relevant assessment that is the IPCC mandate.
The IPCC did not follow its procedures for citing grey literature, did not observe its own deadline for publications, did not properly cite its source material, and it included a graph that cannot be found in any literature anywhere. The IPCC press release was thus wrong again -- the procedures were ignored, not "carefully followed."

The bottom line is that the Sunday Times article has proven comprehensively correct on both the substantive and procedural aspects of the IPCC's failures (the substance of which has recently been reaffirmed by the IPCC SREX report).

The IPCC 26 January 2010 press release still sits uncorrected on the IPCC website (here in PDF). If the IPCC has a commitment to getting things right, shouldn't it correct "baseless and misleading" claims that it has made?

06 April 2012

Upcoming Talk

I'm giving a completely new talk next week here at CU that will invoke Dustin Hoffman, Peter Gleick, Hwang Woo-suk, Ward Churchill, Michael Mann and Steve McIntyre, Robert DeNiro, Mike Daisey, Bjorn Lomborg, Steve Schneider, Marc Hauser, Al Gore, the New York Times, NOAA, Fred Singer, IPCC and if time allows, Colin Powell, Barack Obama, George W. Bush and a host of other bit players.

I can guarantee that there will be more questions than answers, and hopefully an extended and rich discussion. Should be fun -- Experimental at least.

Here is the abstract:
CSTPR Noontime Seminar Spring 2012 Series
Mondays 12:00 - 1:00 PM
GOING TO EXTREMES: Science and Social Response
April 9, 2012

WAG THE DOG: ETHICS, ACCURACY AND IMPACT OF THE SCIENCE OF EXTREMES IN POLITICAL DEBATES
by Roger Pielke, Jr.
Center for Science and Technology Policy Research, University of Colorado Boulder

Location: CIRES Auditorium

Free and open to the public

Abstract: Wag the Dog is the title of a 1997 movie in which a political operative and a movie producer together stage a war to cover up a presidential sex scandal. In the movie one of the characters exclaims, 'What difference does it make if it's true? If it's a story and it breaks, they're gonna run with it.' This seminar is about truth, responsibility and science at the messy interface of the practice of science and the broader society of which it is a part. At that interface traditional roles are often blurred, a situation made even more complicated by the rise of new media -- we see scientists who act much as journalists, and journalists making judgments about science. In this context what does it mean to practice 'responsible science'? Does anything go? How should we act? Are there norms or guidelines for practitioners who work at the science-society interface? This talk will offer little in the way of answers, but will discuss various examples from a range of different contexts to stimulate a discussion and debate.

Biography: Roger Pielke Jr. is a Fellow of CIRES and professor at the Center of Science and Technology Policy Research. He is currently serving on the National Research Council Committee on Responsible Science which has been tasked with updating guidelines for scientists last proposed in 1992. This seminar is also part of his graduate seminar on science and technology policy, which this semester has a focus on 'responsible science.'

05 April 2012

A Primer on How to Avoid Magical Solutions in Climate Policy

By now there is really no excuse for any professional involved in climate policy not to understand the implications of the Kaya Identity. The risk of not understanding the Kaya Identity is that one can get caught out proposing magic as the main mechanism of reducing carbon dioxide emissions.

Developed by Yoichi Kaya, a Japanese scientist, in the 1980s as a means of generating emissions projections for use in climate models, the identity is also an extremely powerful tool of policy analysis, because it encompasses all of the tools in the policy toolbox that might be used to reduce emissions. The identity comprises four parts:
  • Population
  • Per capita wealth
  • Energy intensity of the economy (energy consumption/GDP)
  • Carbon intensity of energy (carbon dioxide emissions/energy consumption)
If we wish to reduce emissions of carbon dioxide with the goal of stabilizing its concentrations in the atmosphere, then we only have four levers, represented by each of the factors in the Kaya Identity.
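Written out, the identity is simply the product of those four factors. A minimal sketch, using rough, illustrative orders of magnitude rather than actual data, looks like this:

    # Kaya Identity:
    #   CO2 = Population x (GDP / Population) x (Energy / GDP) x (CO2 / Energy)
    # The inputs below are rough, illustrative orders of magnitude only.

    population = 7.0e9                 # people
    gdp_per_capita = 10_000            # dollars of GDP per person per year
    energy_intensity = 2.1e-3          # MWh of primary energy per dollar of GDP
    carbon_intensity_of_energy = 0.22  # tonnes of CO2 per MWh

    emissions = (population * gdp_per_capita
                 * energy_intensity * carbon_intensity_of_energy)

    print(f"Implied CO2 emissions: about {emissions / 1e9:.0f} billion tonnes per year")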

In The Climate Fix, I simplify even further by combining population and per capita wealth, the result of which is simply GDP, and by combining energy intensity and carbon intensity, the product of which is carbon intensity of GDP.

That means that there are only two ways to reduce emissions to a level consistent with stabilization of concentrations at a low level (pick your favorite number, 350, 450, 550 ppm -- the policy implications are identical). One is to reduce GDP. The second is to reduce the carbon intensity of GDP -- to decarbonize. While there are a few brave/foolish souls who advocate a willful imposition of poverty as the remedy to accumulating carbon dioxide, that platform has not gathered much political steam. (See discussion of the Iron Law in TCF).

Instead, the only option left is innovation in how we produce and consume energy. That is it -- innovation is the only game in town. Consequently, the correct metric of progress in innovation is a decrease in the ratio of carbon to GDP. For those who wish to stabilize carbon dioxide emissions, the proper policy debate is thus how do we stimulate energy innovation?
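To see why the rate of change in the carbon-to-GDP ratio is the yardstick that matters, consider a simple sketch in which the growth rates are purely illustrative assumptions:

    # Why the rate of decarbonization (the fall in CO2 per unit of GDP) is the
    # metric that matters. The growth rates below are illustrative assumptions.

    gdp_growth = 0.03           # assume GDP grows 3% per year
    decarbonization = 0.02      # assume CO2/GDP falls 2% per year

    emissions_growth = (1 + gdp_growth) * (1 - decarbonization) - 1
    print(f"Emissions still grow about {emissions_growth:.1%} per year")

    # To hold emissions flat while GDP grows, decarbonization must at least
    # match GDP growth: (1 + g) * (1 - d) = 1  =>  d = g / (1 + g)
    required = gdp_growth / (1 + gdp_growth)
    print(f"Roughly {required:.1%} per year of decarbonization is needed just to hold emissions steady")

The point of the arithmetic is simply that unless the rate of decarbonization exceeds the rate of economic growth, emissions keep rising.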

Pricing carbon (or energy) is not itself a point of dispute; the debate is over what the price is for. Some argue that putting a price on carbon will motivate the necessary innovation: the causal mechanism is that higher-priced energy will cause economic discomfort throughout the economy, which will consequently motivate investments in innovation on both the consumption and production sides of energy. Others, me and my Hartwell colleagues included, argue that the point of putting a price on carbon is not to cause economic discomfort, but rather to raise resources to invest in innovation, with the benefits of those investments securing the political capital necessary to sustain the approach. Obviously, if your goal is to cause economic discomfort you'll favor a much higher carbon price than those who seek to raise money for investment without causing economic discomfort.

Another point of debate is whether it makes sense to advocate for emissions reductions directly or to focus on policies that lead to an accelerated rate of decarbonization but can be justified on a broader basis than emissions reductions alone (examples include the economic benefits of improving efficiency and the economic and social benefits of dramatically expanding energy access around the world). Again, the Hartwell group looks at the evidence and sees that the political infeasibility of dramatically increasing the costs of energy (not to mention the social and economic consequences of higher-priced energy) means that we don't really have much of a choice about which strategy makes more sense -- the answer is obvious.

(See TCF for a book-length treatment of these issues and more.)

So if you ever read anyone arguing that "innovation is not enough" and "emissions intensity — emissions per unit of economic output ... is fundamentally the wrong metric" then you know that they haven't done their homework, and instead are invoking magic.  Don't invoke magic, be informed.

The Climate Fix Lecture and Slides


I have recently received a bunch of requests for the slides which accompany the lecture that I gave at the Australian National University in Canberra in February.

The lecture can be seen above and here is a link to a PDF of the slides. Comments of course welcomed, and here is a link to the book.

Enjoy!

Innovation not Simply Manufacturing

UPDATE: At ITIF Rob Atkinson has a go at the Porter article here.  Frequent readers will know that I side with Porter on the big picture here, but as Atkinson says, correctly, all is not black or white in this debate.  Have a look.

Yesterday's New York Times had an absolutely brilliant article by Eduardo Porter on manufacturing, the economy and innovation. The figure above accompanies the article, which deserves to be read in full, but here is an excerpt:
Things have not looked this promising for manufacturing jobs in a long while. Rising costs in China — where the government is letting the currency gain against the dollar and wages are rising at a double-digit pace — are making it more attractive for American companies to produce at home. Expensive oil adds to the cost by pushing up the price of freight.

Yet a revolution in manufacturing employment seems far-fetched. Most of the factory jobs lost over the last three decades in this country are gone for good. In truth, they are not even very good jobs.

As much as the administration needs a jobs strategy, one narrowly focused on manufacturing is unlikely to deliver.

Much of the anxiety about factory jobs is based on the misconception that job losses have been due to a sclerotic manufacturing sector, unable to compete against cheap imports. Until the Great Recession clobbered the world economy, manufacturing production was actually holding its own. Real value added in manufacturing, the most precise measure of its contribution to the economy, has grown by more than two thirds since its heyday in 1979, when manufacturing employed almost 20 million Americans — eight million more than today.

American companies make a smaller share of the world’s stuff, of course. But what else could one expect? Thirty years ago China made very little of anything. Today its factory output is almost 20 percent of world production and about 15 percent of manufacturing value added.

What’s surprising is how little the United States lost in that time. American manufacturers contribute more than a fifth to global value added.

Manufacturers are shedding jobs around the industrial world. Germany lost more than a fifth of its factory jobs from 1991 to 2007, according to the United Nations Industrial Development Organization, about the same share as the United States. Japan — the manufacturing behemoth of the 1980s — lost a third.

This was partly because of China’s arrival on the world scene after it joined the World Trade Organization in 2001. Since then, China has gained nearly 40 million factory jobs. But something else happened too: companies across the developed world invested in labor-saving technology.
The article includes this interesting nugget about the government role in innovation in agriculture:
Remember agriculture? In the 1960s, plant scientists at the University of California, Davis, developed an oblong tomato that ripened uniformly, and its engineers developed a machine to harvest it with one pass through the fields. By the 1970s the number of workers hired for the tomato harvest in California had fallen by 90 percent.

In the book “Promise Unfulfilled,” Philip Martin, an economist at the university, says that in 1979 the worker advocacy group California Rural Legal Assistance sued the university for using public money on research that helped agribusiness at the expense of farm workers. And in 1980, Jimmy Carter’s agriculture secretary, Bob Bergland, declared that the government wouldn’t finance any more projects aimed at replacing “an adequate and willing work force with machines.” It’s hard to say that workers won this battle, however. After Mr. Bergland pulled the plug, research on agricultural mechanization came to a near-halt.
The article's take-home point is a gem:
[E]ach job in an “innovation” industry, broadly understood, creates five other local jobs, about three times the number for an average job in manufacturing. Two of them are highly paid professional positions and three are low-paid jobs as waiters or clerks.

Innovation — not manufacturing — has always propelled this country’s progress. A strategy to reward manufacturers who increase their payroll in the United States may not be as effective as one to support the firms whose creations — whether physical stuff or immaterial services — can conquer world markets and pay for the jobs of the rest of us.
Do read the whole thing.

04 April 2012

Be Careful What You Wish For

When US President Barack Obama announced a government-wide effort to protect federal science from political interference, the US Department of the Interior (DOI) took an early lead. In 2011, it became the first agency to finalize a new policy on scientific integrity and it has hired ten scientific-integrity officers to work with staff in its various bureaus. But the DOI may also be the first to run into a problem with the way the policies are implemented, as one of those officers claims to have been fired for upholding the guidelines.

“I thought I was doing the job I was hired to do and was doing the right thing. I was stifled,” says Paul Houser, a hydrologist at George Mason University in Fairfax, Virginia, who was appointed as scientific-integrity officer for the DOI’s Bureau of Reclamation in April 2011. Houser was fired on 10 February and filed a complaint under the DOI’s scientific-integrity policy two weeks later.
What is the issue in this case?
Houser says that he was asked by a press officer to check some material that the DOI planned to make public about the probable environmental impact of the dams’ removal. But the material painted an overly rosy picture of the benefit, Houser says. For example, in a summary document, the DOI said that studies had shown that the annual production of Chinook salmon (Oncorhynchus tshawytscha) would rise by 83% a year after the dams were removed. However, it did not include any of the uncertainties about how the population would respond that an expert panel commissioned by the DOI had listed. In the final version of the summary — which is now on a government website — the number was changed to 81.4%. “That number expresses an accuracy that’s ludicrous,” says Houser. The figure comes from an unpublished computer-modelling study and had an uncertainty range of −59.9% to 881.4%, which was not reported in the summary.

Houser says that last September, his supervisor, deputy commissioner for external and intergovernmental affairs Kira Finkler, chided him for documenting his concerns. He says Finkler told him that “the secretary wants to remove those dams”. Finkler did not respond to questions from Nature about the situation, but the scientific-integrity officer who is over­seeing implementation of the department’s policy, Ralph Morgenweck, confirms that Houser’s complaint is being investigated.

Kelly says that the DOI is looking forward to the outcome of the investigation. “We believe all actions will be proven to be fully justified,” she says, adding that the studies the agency is using about the impact of the dam removal are available on the Internet for anyone to see and review. However, members of the expert panel contacted by Nature have said that they, too, felt that the materials flagged by Houser played down the uncertainties in scientific predictions.
While the various allegations are under investigation, Houser's complaint (which you can read in full here in PDF) signals that the implementation of scientific integrity policies is going to be a rocky road for federal agencies.  In effect, the policy empowers all agency employees (and in some cases, like this one, a watchdog) to challenge all agency communications and decisions based on how information is presented.  The Department of the Interior was the first out of the gate with its guidelines (presented at the White House here in PDF), so it is not unexpected that a controversy shows up under that agency.

Because all communication involves discretion in the selection of material to include and not to include, the notion of an agency watchdog over the use of science in agency communications is likely to lead to many more such disputes.

For instance, one of Houser's complaints (detailed in the excerpt above) is that a DOI document did not include the full range of uncertainties found in the underlying study. Because the document was a summary, a great deal of information from the original study was presumably left out.
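To see why the omitted range matters, here is a minimal sketch in Python using the figures quoted in the Nature excerpt above; the presentation is mine and purely illustrative, not anything produced by the DOI or Houser.

```python
# Minimal sketch: how the same modelled estimate reads with and without its
# uncertainty range. The figures are those quoted in the Nature excerpt above;
# the formatting choices are mine and purely illustrative.

point_estimate = 81.4        # reported % increase in Chinook production
low, high = -59.9, 881.4     # reported uncertainty range, in %

print(f"Point value only: +{point_estimate}%")
print(f"With the range:   +{point_estimate}% (range {low}% to +{high}%)")

# The interval spans roughly 940 percentage points, so quoting the estimate
# to one decimal place conveys a precision the underlying model cannot support.
print(f"Width of the range: {high - low:.0f} percentage points")
```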

Who decides what information should and should not be included?  Who gets to second guess agency policy makers and the press office on what information should be included or not included? I testified on this issue in 2007 before the House Government Reform Committee during the period when the Bush Administration received similar criticisms (here in PDF). At the time I wrote:
[N]o information management policy can ever hope to eliminate political considerations in the preparation of government reports with scientific content.
The issues remain much the same today under a different administration: the political context has changed, but the underlying dynamics have not.

The Houser case will likely prompt some additional thinking about these issues and what it means to try to regulate or otherwise manage the scientific content of agency information. I suspect that eventually agencies will have to accept the reality that in many if not most cases the proper place for debate over agency decisions and communications is simply in the broader political arena as part of ongoing policy debate.

As it stands, the DOI scientific integrity policy may foreshadow ever more disputes over science between career government employees and political appointees, and perhaps even a further politicization of agency science. This is probably not the outcome expected or desired by the Obama Administration when putting forward a call for agency integrity guidelines.

Why Regulation of Financial Innovation Cannot Follow an FDA Model

The Economist points towards an interesting new paper by Posner and Weyl titled, "An FDA for Financial Innovation: Applying the Insurable Interest Doctrine to Twenty-First-Century Financial Markets." The paper argues that innovation in finance ought to be regulated in a manner similar to how the Food and Drug Administration regulates innovation in medicine:
We make two contributions. First, we propose a simple test for determining whether a financial instrument is socially valuable or socially costly, and argue that socially costly financial instruments should be banned... Second, we propose ex ante regulation of the market in financial derivatives, where financial innovators must submit proposed new financial products to the government for approval before they may sell them to the public. We will refer to this agency as the Financial Products Agency (FPA), although we are agnostic as to whether a new agency should be created or existing agencies, such as the SEC or CFTC, should be given these powers. We draw on the analogy of the Food and Drug Administration (FDA), which similarly has the power to ban new pharmaceuticals that do not meet stringent safety standards.
The Economist takes issue with the values that Posner and Weyl seek to regulate -- speculation is bad, hedging is good (a focus of 35 of the paper's 47 pages) -- and notes that the distinction is not so clear in practice. While I agree with this argument, let's set it aside as there is a more fundamental problem with the idea of a financial FDA.

The Economist explains:
The Posner and Weyl argument provides a sense of what’s wrong with the FDA idea. It requires the regulator to have near-perfect foresight about how a product will be used in the future. According to our recent survey, it’s not new products that cause harm. Problems primarily develop when products mutate and become so prevalent they pose systemic risk. Many new products never get to this stage; if they’re ill-conceived they die an early market death. The challenge for regulators should not be predicting the evolution of every new product, but rather monitoring financial markets to detect innovations that have become large enough to pose real danger. For example moving widely traded derivatives to an exchange is a good idea because that imposes more transparency and limits counter-party risk.

Even if a benevolent and omnipotent financial regulator existed, new products are often created in response to an individual client's need. If an American or British bank can’t sell it to the client until it undergoes a lengthy government review, the client will go somewhere else. To be fair, Messrs Posner and Weyl admit this issue merits “further research”.

It’s tempting to see the harm financial products can cause and liken it to medication which, if not thoroughly tested, can also cause serious illness or death. But it’s a bad analogy. New drugs can be tested in trials and in a lab. There is no equivalent for financial products. The only laboratory is the market. To effectively balance efficiency and safety, a good regulatory system should observe financial products closely in the wild and only then determine which pose a threat.
This critique is spot on.

To cite a practical example, I have often been critical of so-called "catastrophe models" that are used in the insurance and reinsurance industry, which are a form of financial innovation. The nub of my critique is not that such models are inherently bad or good, but rather that no one in industry or the public sector is providing an evaluation of the use and impacts of such models in decision making. Are they being well used? Do they create systemic risks along the lines of the risks created by the injudicious use of VaR models? Both? Who knows? With the exception of a cursory review done by a Florida commission, there is no systematic assessment of the models, their predictions (and yes, they offer predictions, just like VaR models) or their use and impact.
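For readers unfamiliar with the VaR comparison, here is a minimal sketch of the kind of prediction a simple historical-simulation VaR model issues. The returns and portfolio below are invented; the point is only that such models make testable forecasts that could, in principle, be evaluated against outcomes, just as catastrophe model predictions could be.

```python
import random

# Minimal sketch of a historical-simulation Value-at-Risk (VaR) estimate.
# The daily returns below are randomly generated stand-ins for real data;
# everything here is hypothetical and for illustration only.

random.seed(42)
daily_returns = [random.gauss(0.0005, 0.01) for _ in range(500)]  # 500 trading days

confidence = 0.99
losses = sorted(-r for r in daily_returns)    # losses expressed as positive numbers
index = int(confidence * len(losses)) - 1
var_99 = losses[index]                        # 99% one-day VaR, as a fraction of value

portfolio_value = 1_000_000
print(f"1-day 99% VaR: {var_99:.2%} of portfolio, about ${var_99 * portfolio_value:,.0f}")

# This is a prediction: on roughly 1 trading day in 100 the loss should exceed
# the VaR figure. Counting actual exceedances is one way to evaluate the model,
# which is the kind of ongoing assessment the post argues is missing for
# catastrophe models.
```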

Understanding the impacts of financial innovation in a broad societal context is not at all like assessing a pharmaceutical intervention in the human body, and by its nature it cannot ever be treated in the same manner. The regulation of financial innovation necessarily has to focus on monitoring and response. Arguably, right now we don't do a very good job on the monitoring, except after crashes occur.
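To make the monitoring-and-response idea concrete, here is a minimal sketch of the threshold-based surveillance The Economist describes; the product names, sizes and the threshold itself are all hypothetical.

```python
# Minimal sketch of ex post monitoring rather than ex ante approval:
# new products trade freely, but a regulator watches how large each market
# becomes and flags those that cross a systemic-risk threshold.
# All names, notionals, and the threshold are invented for illustration.

outstanding_notional = {        # billions of dollars, hypothetical figures
    "bespoke swap A": 0.4,
    "index CDS B": 180.0,
    "structured note C": 12.0,
    "synthetic CDO D": 650.0,
}

SYSTEMIC_THRESHOLD = 100.0      # flag markets above this size (hypothetical)

def flag_for_review(markets, threshold):
    """Return the products whose outstanding size exceeds the threshold."""
    return [name for name, size in markets.items() if size > threshold]

for product in flag_for_review(outstanding_notional, SYSTEMIC_THRESHOLD):
    print(f"Review: {product} exceeds ${SYSTEMIC_THRESHOLD:.0f}bn notional outstanding")
```

The contrast with the FDA model is that nothing here requires approval before a product is sold; review is triggered only by observed scale, which is the "in the wild" approach The Economist endorses.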

03 April 2012

An Interview with an Activist Journalist

The Columbia Journalism Review has an informative and eye-opening interview with Justin Gillis, who covers climate change for the New York Times. I have been fairly critical of Gillis' reporting on this blog on several occasions (even awarding one of his stories the title of worst climate story ever in the NYT for its uncritical reporting of the NOAA billion-dollar disaster nonsense). It is of course perfectly understandable that Gillis hasn't much appreciated my critiques, but that goes with being a reporter at one of the most prominent media outlets in the world. He won't like this post either.

In the CJR interview, Gillis is remarkably candid, and in being so he provides a clear sense of where he is coming from -- the perspective he brings is not the plain vanilla journalism you might expect from the paper of record, but journalism colored with a heavy tinge of yellow.

Gillis explains to CJR how he came to work on the climate beat while on a fellowship at MIT and Harvard:
I started taking classes and the more I learned, the more I thought to myself, “This is the biggest problem we have—bigger than global poverty. Why am I not working on it?” From there, the question was, how do I get myself into a position to work on the problem?
The notion of "working on the problem" is a fine ambition, but is clearly much more aligned with advocacy for action rather than reporting a beat. Rather than informing his readers Gillis is in the business of making an argument.

On the East Anglia emails that were released in 2009 Gillis makes a strong statement:
One was forced to read the e-mails and ask, “Do they suggest any sort of scientific misconduct?” As we studied them, it became clear to me that they didn’t, so we asked ourselves, “How do we respond in this situation when the evidence is all pointing in the same direction?”
Good guys, bad guys. All the evidence. One direction. That explains the lack of nuance in the NYT reporting of climate change science and politics.

How does he handle the perspectives of so-called "skeptics" in his reporting (emphasis added)?
Even when, in the context of a 4,000-word story, I quote skeptics for three or four paragraphs and then drop them and move on, I can reliably count on some sort of attack from somebody saying I shouldn’t have done that. I think these people are just being a little—what’s the right word—ditzy.
If one is covering evolution these days, one can afford to ignore the anti-evolutionists most of the time because they are completely scientifically discredited and, more importantly, sort of spent as a social force. Unfortunately, we just are not at that point with climate science.

However discredited the scientific case questioning climate science may be, it is influencing half the Congress and a substantial fraction of the population. So this is almost like if you’d been in Tennessee in 1925 getting ready to cover the Scopes Monkey Trial. The anti-evolutionists were already scientifically discredited by then, but as a journalist, you could not have avoided quoting them in order to put the whole thing in its political context. I’m sad to say that in 2012, that’s still where we are with climate science.
I'm not surprised to hear such a dismissive perspective from Gillis on points of view that he disagrees with.

Last year, when we were still engaging by email, I sent him a link to this post on food prices and their impact. That post was based on a paper from a scholar at the International Food Policy Research Institute that discussed the many complexities of assessing "food insecurity." The paper and other analyses raised a host of questions ignored in Gillis' article on food prices, which emphasized climate change. I concluded in that post with:
One of the frustrations of practicing policy analysis is that you are reminded on a daily basis that we are never as smart as we think we are, and things that we thought we knew for sure, sometimes just ain't so.
Gillis responded to my post by outright dismissing the research that I had discussed, suggesting that the experts that he had chosen to report on were the ones who were correct (actually, several of the experts that he cited said the opposite of what he reported, but I digress).

I responded to Gillis by encouraging him to consider that there are complexities that he might not have considered:
It is not implausible that there are other causes for the riots, e.g., see Bahrain.  It is also not implausible that people have enough food to eat but don't like high prices anyway ... the point in his paper [discussed on my blog] is not that prices did not get higher, it is that food insecurity did not grow as conventional wisdom suggests.

I try not to dismiss detailed and rigorous quantitative policy analyses out of hand because they do not conform to my preconceptions ;-)
After another curt and dismissive email from Gillis (he has not granted me permission to publish them, unfortunately), I responded a final time:
I admire the speed and certainty that you bring to deciding what academic perspectives have merit and which do not, I can only surmise where my own 20 years of work falls out ;-)
Where is Gillis going next? Why, extreme events, of course:
One thing I’m seeing—and I see it in our own paper as well as many other news outlets—is that people are covering the crazy weather we’re having and, more often than not, dodging the subject of whether there’s any relationship to climate change. TV weathermen are dodging that subject. Print reporters are dodging the subject. And it’s not so easy to cover because science does not have particularly good answers for us. The concept that I wrote about last week—that we’re in the middle of a sort of weather “weirding”—isn’t really a scientific concept for which you can build a weird index and figure out where we are on that index, but there are some things that scientists can say about weather extremes. Some of the extremes are very consistent with what is expected and what has long been predicted, and we’re seeing very clear trends in certain extremes like heat waves and heavy precipitation events. Reporters are not going to be able to be definitive, in real time, about whether this particular event was or wasn’t connected to climate change, but it’s a bit of a scandal that there’s not enough connecting the dots for people.
All this dodging of extremes -- it is too bad that the IPCC hasn't taken up the issue of extreme events, especially high-impact events like floods, tornadoes and hurricanes around the world. If it had, I'm sure that the NYT would be all over it. ;-)

Both advocacy and journalism are fundamental to a healthy democracy, but when they are mixed together, especially on the news pages of the NYT, neither is served particularly well. Please count me among those who prefer to get their news from plain vanilla journalism, not the yellow kind.