One thing I noticed when panels of scientists came to talk to us about our bark beetle response (from CU particularly) is that they kept talking about our “going into the backcountry and doing fuel treatments” and why this was a bad idea. We would tell them we weren’t actually doing that, but I don’t think they believed us. I have found, in general, that people at universities tend to think that a great deal more management is possible on the landscape than actually ever happens. The fact that Colorado has few sawmills means we aren’t cutting many trees for wood.
Anyway, in my efforts to convey this “there isn’t much we really can/can afford to do in these places” message, I ran across this story. It just seems so common-sensical and drama-free. Perhaps that is the culture of the San Luis Valley, reflected in its press coverage.
Note: Dan Dallas, the Forest Supervisor of the Rio Grande National Forest (and former Manager of the San Luis Valley Public Lands Center, a joint FS/BLM operation that was ended for unclear reasons), is a fire guy, so he has practitioner knowledge of fires, fire behavior, and suppression.
A beetle epidemic in the forest will have ramifications for generations to come.
Addressing the Rio Grande Roundtable on Tuesday, Rio Grande National Forest staff including Forest Supervisor Dan Dallas talked about how the current spruce beetle epidemic is affecting the forest presently and how it could potentially affect the landscape and watershed in the future. They also talked about what the Forest Service and other agencies are doing about the problem.
“We’ve got a large-scale outbreak that we haven’t seen at this scale ever,” Dallas said.
SLV Interagency Fire Management Officer Jim Jaminet added that the infestation and disease outbreak in the entire forest is pretty significant, with at least 70 percent of the spruce either dead or dying: “just oceans of dead standing naked canopy, just skeletons standing out there.”
Dallas said unless something changes, and he and his staff do not think it will, all the spruce will be dead in a few years.
As far as effects on wildlife, Dallas said the elk and deer would probably do fine, but this would have a huge impact on the lynx habitat.
He also expected impacts on the Rio Grande watershed all the way down to the New Mexico line. For example, the snowpack runoff would peak earlier.
However, Dallas added, “All that said, it is a natural event.”
He said the beetle epidemic destroying the Rio Grande National Forest spread significantly in just a few years. He attributed the epidemic to a combination of factors, including “blow down” of trees, where the beetles concentrated on the downed trees, as well as drought stressing the trees so they were more susceptible to the bugs, which are always present in the forest but, because of triggering factors like drought, have really taken over in recent years.
“There’s places up there now where every tree across the board is gone, dead,” Dallas said. “It’s gone clear up to timberline.”
He said the beetle infestation could be seen all the way up the Rocky Mountain range into Canada.
To date, the U.S. Forest Service’s response has focused on health and safety both of the public and staff, Dallas explained. Trees have been taken out of areas like Big Meadows and Trujillo Meadows campgrounds where they could pose a danger to visitors, for example.
“Everybody hiking or whatever needs to be aware of this. All your old habitats, camping out underneath dead trees, that’s bad business,” Dallas said.
He said trail crews can hardly keep up with the debris, and by the time they have cleaned up a trail, they have to clear it again on their way back out.
Another way the Forest Service is responding to the beetle epidemic is through large-scale planning, Dallas added.
For example, the Forest Service has 10 years’ worth of timber sales ready to go at any point in time, which was unheard of a few years ago.
Dallas said a group of researchers from the Forest Service will be looking at different scenarios for the forest such as what might happen if the Forest Service does nothing and lets nature take its course or what might happen if some intervention occurs like starting a fire in the heat of summer on purpose.
The researchers are expected to visit the upper Rio Grande on June 17. They are compiling a synthesis before their trip. They will then undertake some modeling exercises to look at what might happen in the forest and what it will look like under different scenarios.
“We have the opportunity now to do some things to change the trajectory of the forest that comes back,” Dallas said. “We want to understand that, not to say that’s something we really want to do.”
He added, “We would have to involve the public, because we are talking about what the forest is going to look like when we are long dead and gone and our kids are long dead and gone.”
If the Forest Service is going to do something, however, now is the time, he added.
Jaminet talked with the roundtable members about fire risks in the forest.
Fire danger depends on the weather and the environment, he said.
If the conditions were such that the weather was hot, dry and windy, “We could have a large fire event in the San Luis Valley,” Jaminet said.
He added that, fortunately, the Valley does not have many human-caused fires in the forests, and it is also fortunate not to have many lightning-caused fires.
“Will there be an increase in fires?” he asked. “Probably not. Will there be an increase in severity? Probably not now but probably later. The fire events are going to be largely weather driven.”
He said some fire could be good for an ecosystem as long as it does not threaten structures and people.
One has to wonder whether the reviewers of the NSF studies (in this post) knew that the FS was doing what appears to be addressing the same problem, only with different tools. It seems to me that some folks who study the past assume that the past is somehow relevant to the best way forward today. I am not against the study of history, but, to use a farming analogy, we don’t need to review the history of the Great Plains before every planting season.
Maybe there should be financial incentives for those who find duplicative research, with a percentage of the savings targeted for National Forest and BLM recreation programs?
Here’s a post from David Bruggeman about a proposed bill.
The High Quality Research Act is a draft bill from Representative Lamar Smith, Chair of the House Science, Space and Technology Committee. Still not officially introduced, it has prompted a fair amount of teeth gnashing and garment rending over what it might mean. The bill would require the Director of the National Science Foundation (NSF) to certify that the research it funds would serve the national interests, be of the highest quality, and not be duplicative of other research projects being funded by the federal government. The bill would also prompt a study to see how such requirements could be implemented in other federal science agencies.
There’s a lot there to explore, including how the bill fits into recent inquiries about specific research grants made by the National Institutes of Health (NIH) and the NSF. (One nice place to check on this is the AmericanScience team blog.)
But what this bill has brought to my mind is that it brings the alleged tradeoff between research autonomy and research accountability into stronger relief (at least for those of us who research and analyze these things; the advocates are in combat mode). The goals of the bill – certifying that the research serves the national interests – could be interpreted as being contrary to the notions of blue-sky or basic research. If the research must be linked to a national interest, how can it be done without concern for eventual applications?
My opinion: just the non-duplicative aspect would be powerful. Maybe there could be a small incentive for those who identify duplication? Because right now the only check seems to be the research panels, whose members often have not read the literature relevant to a specific proposal, and there is no mechanism for them to be aware of other government-funded research in the area.
Just one example. You can check the NSF database here… just type in the topic you are interested in. I typed in “spruce beetle” and got a list.
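For anyone who wants to script this kind of duplication check rather than use the web form, NSF also exposes a public award-search web service. Here is a minimal sketch of building such a query; the endpoint and the `keyword`/`printFields` parameter names are assumptions based on NSF’s public API documentation, so verify them before relying on this.

```python
# Sketch: construct a query URL for the public NSF Award Search API
# (api.nsf.gov). Endpoint and parameter names are assumptions from
# NSF's published API docs -- check them before use.
from urllib.parse import urlencode

BASE_URL = "https://api.nsf.gov/services/v1/awards.json"

def build_award_query(topic, fields=("id", "title", "agency")):
    """Return a URL that searches NSF awards for a free-text topic."""
    params = {
        "keyword": topic,                  # free-text search term
        "printFields": ",".join(fields),   # which fields to return
    }
    return BASE_URL + "?" + urlencode(params)

# The same search described in the text, as a URL:
url = build_award_query("spruce beetle")
```

Fetching that URL (e.g. with `urllib.request.urlopen`) would return JSON that could be diffed against other agencies’ project listings, which is exactly the kind of mechanical duplication check the panels currently lack.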
This is the information for one study, and below is its abstract:
This collaborative research project will address the following questions about interactions between wildfire and spruce beetle outbreaks under varying climate and their consequences for ecosystem services: (1) How does climatic variation affect the initiation and spread of spruce beetle outbreaks across complex landscapes? (2) How does prior disturbance by windstorm, logging, and fire affect the subsequent occurrence and severity of spruce beetle outbreak? (3) In the context of a recently warmed climate, how do spruce beetle outbreaks affect forest structure and composition? (4) How do spruce beetle outbreaks affect fuels and potential wildfire activity under varying climatic conditions? (5) How will climate change and the climate-sensitive disturbances of wildfire and spruce beetle activity affect future ecosystem services in the subalpine zone of the southern Rocky Mountains under varying scenarios of adaptive forest management? The first four questions will be addressed through empirical research, including extensive tree-ring reconstructions of past disturbances, re-measurement of permanent forest plots, field measurements of effects of spruce beetle outbreaks on fuels, fire behavior modeling, and spatiotemporal analyses of the spread of recent spruce beetle outbreaks. The fifth question will be examined through simulation modeling of future forest conditions and their consequences for key selected ecosystem services, including biodiversity, wildlife habitat, and resilience to environmental change.
Not to pick on Kulakowski at Worcester, or even on NSF (which studies everything, regardless of what other agencies study, except perhaps NIH topics), but it makes me think that folks at the Forest Service and USGS around here are probably also studying some of these same topics.
It would be interesting to FOIA the peer review documents and see what the reviewers had to say about how this research fits into ongoing federal research on the topic and how useful it will be. Because, after all, there are not a lot of management choices…
Earlier this week I gave a 60-minute talk to a meeting of the Alsea Watershed Council, my “home group,” where I have been giving presentations every few years since they first formed in the 1980s. The audience was a little smaller than usual, but all of the old-timers were there and Elmer Ostling’s wife had baked delicious cinnamon rolls for everyone.
The theme of my talk was to discuss scientific and political “transparency” in this age of Internet communications – and to use the recently completed website report, Oregon Websites and Watershed Project’s (ORWW) “Coquelle Trails,” as a model and framework for the discussion. The Coquelle Trails project covered more than 1,400,000 acres in southwest Oregon, including sizable portions of BLM and USFS lands and hundreds of thousands of acres of marbled murrelet, spotted owl, coho, California condor, wolf, and elk habitat. PowerPoint and PDF versions of the presentation have been put online here:
The original 2-page Press Release for Coquelle Trails was used as a handout. The online version of the handout can be found here:
The discussion was arranged in four parts: 1) a proposed definition of “scientific and political transparency” — at least as it should apply to taxpayer-funded research — for the 21st century; 2) a demonstration of how inexpensive and easy it is to produce baseline data in modern digital formats, by using the Coquelle Trails’ predictive map construction and field verification methodology as an illustration; 3) a brief overview of how the Coquelle Trails’ historical datasets and current findings were formatted for Internet access by using the same standards developed by ORWW with Siletz School 2nd-Grade students 15 years ago; and 4) basic conclusions regarding current opportunities and needs to create better trust and transparency between federal land management agencies and local communities via enhanced research methods and internet communications.
After a brief introduction and background regarding the focus of my talk and the reference materials we would be using, we began with the proposed definition for “Scientific (& Political) Transparency: 2013,” which was also outlined in four parts:
1. Plain English
Acronyms + Jargon + Latin + Metrics x Statistics = Total Obfuscation
Doug Fir vs. Doug-fir vs. PsMe
TMDL vs. turbidity vs. muddy water
2. Research Methodology
A. All taxpayer-funded work is documented.
B. All documentation is made readily available via public websites.
C. Most work is subject to Independent Peer Review.
D. All peer reviews and resulting discussions are made publicly available.
3. Direct Access to all taxpayer-funded research, meetings, reports, correspondence, political decisions, etc.
4. Stable, well-designed (dependable, comprehensive & “easy to use”) Websites: ORWW Coquelle Trails as a model.
The opening discussion of Plain English was illustrated with a philosophical approach as to how Latin had been used to create distance between the Messengers of God and the illiterate masses in the Middle Ages, and how that process was still being used today – via government acronyms, professional jargon, metrics, and obscure statistics (and Latin) – to create distance between government agencies and the public; between the agencies themselves; and even between different generations of scientists within the same disciplines.
I used personal examples of the “evolution” of Douglas Fir (Pseudotsuga taxifolia) to Douglas-fir (Pseudotsuga menziesii) to PsMe (“Piz-Me”) in the agencies and classrooms during the past 60 years – while everyone in town and at the sawmills continued to call it “Doug Fir.” The similar history of TMDL – and why that acronym is not a good fit to discuss with current grade school and high school students – was another example. Same with metrics: the USFS and BLM are US agencies. Our standard of measure, used by all taxpayers, is the English system (chains, links, feet, miles, and acres) — why, then, do agency personnel try to talk and write in terms of hectares and kilometers in official reports and public presentations (rhetorical question)?
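For readers who want the arithmetic behind those land-survey units, the defining relationships are fixed (1 chain = 66 feet = 100 links; 80 chains = 1 mile; 10 square chains = 1 acre), and the bridge to the metric units the agencies favor follows from the exact definition 1 foot = 0.3048 meters. A few lines of Python encode the whole thing:

```python
# Defining relationships among English land-survey units, plus the
# exact conversion to hectares (1 foot = 0.3048 m by definition).
FEET_PER_CHAIN = 66
LINKS_PER_CHAIN = 100
CHAINS_PER_MILE = 80
SQUARE_CHAINS_PER_ACRE = 10
METERS_PER_FOOT = 0.3048

def chains_to_feet(chains):
    return chains * FEET_PER_CHAIN

def acres_to_square_feet(acres):
    # 1 acre = 10 square chains = 10 * 66 * 66 sq ft = 43,560 sq ft
    return acres * SQUARE_CHAINS_PER_ACRE * FEET_PER_CHAIN ** 2

def acres_to_hectares(acres):
    # square feet -> square meters -> hectares (10,000 sq m each)
    return acres_to_square_feet(acres) * METERS_PER_FOOT ** 2 / 10_000

# 80 chains make one mile (5,280 feet); 1 acre is about 0.4047 ha.
```

The point stands either way: the conversion is trivial for a computer, so reports could easily carry both systems rather than forcing the public to do the translating.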
The second part of the discussion involved a series of slides showing how traditional archival research methods and modern technology were used during the Coquelle Trails project to achieve desired results. This was, essentially, a summary of the methodology as described and illustrated by the online report:
Part three of the discussion used a series of slides showing how ORWW has continued to use the same methods and formats developed with Siletz 2nd-Graders in 1998 to present Coquelle Trails research datasets, findings, and conclusions to the present day:
The point was made – pointedly – that government websites to the present time continue to be far less stable, far less comprehensive, and much more difficult to navigate than methods developed by grade-schoolers during the past century – during the very infancy of the Internet. Also, that the more accessible and reliable design was developed and has been expanded and maintained by a tiny non-profit in Philomath, Oregon, entirely funded by local residents, businesses, and organizations – and no federal dollars. And that those works have been continuously available and online for more than 16 years (compare to the life of an average government link or URL).
Which brought us to the Conclusions, also listed in four parts:
Conclusions: How Transparency Saves Money & Improves Decision Making
1. The 1980 Paperwork Reduction Act and the 2010 Plain Writing Act already require the use of Plain English by federal agencies. These acts simply need to be enforced.
2. Modern technology makes automated scanning of documents and GPS-referenced digital photography increasingly cheap and easy. Citizens should insist on such documentation and direct access to all taxpayer-funded research, meetings, etc., affecting local regulations.
3. High-speed Internet communications and the recent proliferation of iPads and smart phones have made universal access to technical information possible, with few limitations of time and location.
4. Increased access to better information is believed to result in improved research, discussion, and decision-making. Stable, well-designed websites make such access possible for almost all citizens, including: students, teachers, scientists, politicians and public resource managers.
So that was my presentation. I would be very interested in other thoughts on this. I think the current lack of transparency in government and in science (and maybe particularly in government-funded science) is doing a great disservice to taxpaying citizens, our voters, and our students and teachers, all of whom deserve clear and complete answers to their questions and requests.
Modern technology and Internet communications have made sharing information more possible, cheaper, and easier than at any other time in history – so why does the government (and its scientists) continue to hide behind secret meetings, foreign languages and measurements, unavailable “findings,” clunky and outdated communications, never-ending acronyms, and other forms of deliberate obfuscation? That’s a rhetorical question with lots of answers, but the bottom line is that there is really no excuse for allowing this type of behavior to continue. It’s way too expensive, totally unnecessary, probably unethical, and counterproductive to most legitimate workings of government and of science. In my opinion. I’m interested in the thoughts of others.
People disagree; scientists disagree, and yet people have to manage them, and data, and interpersonal, supervisor and inter-institutional bad chemistry, dislikes, vendettas and power struggles. It’s not easy to do that.
In my view, real “scientific integrity” involves QA/QC, and data and review that are open and transparent. Anyway, I just saw the Nature article on the Klamath fish fight in Roger Pielke Jr.’s Twitter feed, though I had seen the PEER document earlier.
Here’s a link to the Nature blog post and below is an excerpt:
Seven US fisheries scientists have raised a formal complaint claiming that a supervisor threatened to eliminate their research division after the team produced controversial model predictions of survival and recovery of the threatened coho salmon (Oncorhynchus kisutch) in the Klamath River Basin in Oregon.
“This falls into the basket of obstruction of science for policy or political ends,” says Jeff Ruch, executive director of Public Employees for Environmental Responsibility (PEER), based in Washington, DC. The watchdog group filed the complaint of scientific misconduct on 7 January to the Department of Interior on behalf of the scientists who work at the US Bureau of Reclamation office in Klamath Falls, Oregon.
For years, federal research on Klamath Basin fish and wildlife has been caught in an intense debate about whether to tear down a series of hydroelectric dams on the Klamath River. Many environmentalists have blamed the dams for salmon die-offs and ecological decline, but some researchers have questioned the magnitude of expected benefits from dam removal.
The letter alleges that Klamath Basin Area Office manager Jason Phillips violated the agency’s scientific integrity policy adopted in 2011 as part of President Barack Obama’s nation-wide initiative to protect science from political interference. According to the letter, the scientists believe Phillips intended to shut down the research group — known as the Fisheries Resources Branch — after perceiving that the team’s work on salmon and other fish contradicted the plans and findings of the US Fish and Wildlife Service (FWS) and the National Oceanic and Atmospheric Administration (NOAA).
The documents are linked in the Nature article. My point is that the scientists disagree. That’s OK, in fact that’s what makes science go! We just don’t have good scientific conflict identification and resolution mechanisms, IMHO.
Once upon a time, so long ago that I doubt many people who are still working remember it, I went to an all-Region 6 meeting on biodiversity. I remember that Jerry Franklin gave a talk and said something about genetics that the geneticists there disagreed with. So we went up to Jerry afterwards and said, “but Jerry, geneticists don’t agree with that,” and he said something along the lines of, “well, I spoke with geneticist X. Why don’t you all get your stuff together?” (This may sound a bit abrupt, but he didn’t actually say it abruptly, more with a tone of exasperation, as in “how is anyone else supposed to know?”) Which is really, when you think about it, a darn good question.
At the time I remember thinking, “well, of course, there is actually no mechanism for ‘getting your stuff together.’”
What do we have?
1) Meetings where each scientist does a presentation with 5 minutes for questions. Not much can happen in terms of dialogue on deep subjects in 5 minutes.
2) Journals… well, you can’t really have back-and-forth.
3) Blogs… you can, but many folks don’t feel that it’s their job to discuss with other scientists who disagree (and everyone is busy, so you understand). These can also degenerate.
4) “Science” panels. Usually they only pick one of each discipline, so you don’t get to hear within-discipline disagreements.
5) Regulator vs. regulated agency science disputes. The goal of managers is to put them to bed and move on… not to understand what is really true scientifically. No patience or public transparency. This is probably also true for private firms with in-house scientists that are regulated by agencies, but I don’t have direct experience with those.
Anyway, I think a well-structured, disciplined, and public discussion of the points of view of NOAA, FWS, BOR, and anyone else involved could move our mutual knowledge forward. I think “scientific integrity” is a total red herring (so to speak) in this case.
One more thing… if I were Congress, I would only let one agency be funded to study any one topic. We could save enormous amounts of money if several agencies were not allowed to pursue the same research topics without requirements, management, and accountability for coordination (at least, and preferably some kind of utility in minds other than those of the people to be funded). I personally have been in climate change research meetings where it appeared that BOR, USGS, and FWS were studying exactly the same thing, and were obviously neither required to coordinate nor actually doing any coordinating. Add NOAA and USDA to the mix, and you basically have a research pigpile at the public trough. At some of those research stakeholder meetings, I was embarrassed to be a fed. As in the Klamath case, to have a dispute, three agencies must have been studying the same thing.
For some mysterious reason, my previous post here on the GAO report on the Forest Service has received a great many hits on the blog over the past few weeks. I am interpreting that to mean that at least some readers are interested in the science/policy interface, but who knows? Anyway, this has led to writing a series of posts on my observations on the scientific enterprise. Those who are not interested should feel free to flip past.
So here is something I wrote recently, thanks to Bob Zybach and the folks at ESIPRI, when asked what I think a good process would look like for vetting research studies to be used in policy.
Eight Steps to Vet Scientific Information for Policy Fitness
Peer review as currently practiced is probably fine for deciding among proposals or which articles should be published in journals. However, I think that when public funding is at stake for major investments, and people’s lives and property are on the line, we should up our game in terms of review to a more professional level. I was recently asked my thoughts on how to do that, and here they are. It takes all eight steps, outlined below.
1. Is the research structured to answer the policy question? Often the policy question is nuanced… say, “what should we do to protect homes from wildland fires and protect public and firefighter safety?” This is often where research goes off the rails. Say, historic vegetation ecologists study the past and claim that there was plenty of fire in the past… but that information is actually not particularly relevant to the policy question. To get funding to do the study (and make the claim that it’s relevant), all they need to do is pass a panel of scientists who basically use the “it sounds plausible to a person unfamiliar with the policy issues” criterion.
It seems obvious, but for scientific information to be policy relevant, policy folks have to be involved in framing the question. Most, if not all, research production systems that I am aware of do not have this step.
2. Did they choose the right disciplines and/or methods to answer the policy question? Clearly a variety of disciplines could have some useful contribution, as well as an inherent conflict of interest if you rely on them to tell you whether they are relevant or not.
3. Statistical review by a statistician. If you use statistics, this needs to be reviewed, not by a peer, but by a statistician. You can’t depend on journals to do this. The Forest Service used to have station statisticians (and still does?) to review proposals so people worked out their differences in thinking and experimental design before the project was too far down the road.
4. The Quality Assurance/Quality Control (QA/QC) procedures for the equipment used and the data need to be documented (and also reviewed by others). For someone who is unfamiliar with QA/QC applications, you might start with the recent paper attached (lockhart_2009_forest-policy-and-economics), which has a number of citations and also discusses the implications of the Data Quality Act. What is odd is that the NAPAP program led the way for QA/QC, but it’s not clear how that has been carried forward to today. It might be interesting to take the top-cited papers in forest ecology or management or whatever policy-relevant field you choose and review their QA/QC procedures.
5. Traditional within-discipline peer review.
6. Careful review of the logic path from facts found to conclusions drawn. It is natural for universities or other institutions to hype the importance of research findings. Since people will also use the findings to promote their own policy agenda, and because a paper can be misused even if the scientist is careful (e.g. 4 Mile Fire), it is more important to be specific and careful about your interpretation and conclusions. It is also best, if the findings lead to conclusions that are outside the current general consensus, that the authors forthrightly discuss different hypotheses for why their findings are different. Don’t just let the press hype “new findings show” as if the previous studies were irrelevant. The authors know more than anyone else about it, so they should be willing to share what they think about the differences in an upfront way. That’s how “science” is supposed to progress: by building on previous work.
7. An important part of professional review for studies involving models should be “what background work did you do?” Did you use sensitivity analysis for your assumptions? Did you compare model projections to the real world? If not, why not? In fact, the relationship of empirical data to your work should be clearly described, since scientific information derives its legitimacy from its predictive value in the real world, not from being a group hug of scientists within a discipline.
8. Post publication requirements: access by the public to data and open online review. This should be absolutely required for use in important policy discussions.
Do you agree? Do you have additions or deletions or other comments? Why do you think the bar is currently so low in terms of review?
In my endless, and some may say quixotic, quest for “Things We Can All Agree On” I offer a link to David Bruggeman’s blog (Pasco Phronesis) post on the Open Access Petition. Below is a quote, and here is a link to David’s post. I recommend David’s blog to all interested in science policy.
I’m still mildly bemused that the expansion of open access seems to have found some traction, or at least many more vocal proponents, over the last few months. Such enthusiasm has been met by actions in the U.K., internationally, and by many universities and funding groups to increase the incentives to publish scientific research under various forms of open access publishing.
Now we have a petition on the We The People portion of the White House website. The full text (there’s a limit of 800 characters, vagueness of goals and realism of promises is not necessarily an indication of intent):
“Require free access over the Internet to scientific journal articles arising from taxpayer-funded research.
“We believe in the power of the Internet to foster innovation, research, and education. Requiring the published results of taxpayer-funded research to be posted on the Internet in human and machine readable form would provide access to patients and caregivers, students and their teachers, researchers, entrepreneurs, and other taxpayers who paid for the research. Expanding access would speed the research process and increase the return on our investment in scientific research.
“The highly successful Public Access Policy of the National Institutes of Health proves that this can be done without disrupting the research process, and we urge President Obama to act now to implement open access policies for all federal agencies that fund scientific research.”
Uploaded to the petition site on May 13, the petition hit the publicly searchable threshold this past weekend, and thanks to a concerted effort to publicize the petition, there are now over 17,000 signatures as of late May 25. The petition will need 25,000 signatures by June 19 in order to get a response from the Administration. If the publicity keeps up, I suspect the goal will be met.
The petition was started by Access2Research, a personal campaign of a few open access advocates that has the support of many organizations sympathetic to, if not outright supportive of, the cause. The publicity campaign has been global, and there is no requirement that signers of We The People petitions be U.S. citizens (they do have to set up an account – no fair signing twice).
Interesting things about blogs… I decided to look at how people got to this blog and noticed that some were linking from Judith Curry’s climate blog here. It turns out that they are having a discussion of some of the points in the post I wrote for Roger Pielke Jr.’s blog, with many more comments there (318) than at Roger’s blog (2, so far) or when I posted it here (0). So if you are interested in this discussion, check it out.
Here’s a quote from her post.
Sharon… makes the following four recommendations:
Here are my four principles for improving the use of information in policy: (1) joint framing and design of research with policymakers, (2) explicit consideration of the relevance of practitioner and other forms of knowledge, (3) quality measures for scientific information (including QA/QC, data integrity, and peer and practitioner review), and (4) transparency and openness of review of any information considered and its application to policy.
The bolded statement is of particular relevance to this topic. In the politics of climate expertise, which experts should be paid attention to?
Steve Schneider had very clear views on this, as evidenced in this interview with Rick Piltz shortly before his death, about the PNAS paper. It is the elite climate scientists (a group that includes geophysical scientists, ecologists, and economists) as judged by their number of publications and citations. Many reputable scientists, such as Syun Akasofu (a solar physicist and climate skeptic), were not included in the statistics because they had not published more than 20 papers judged to be on the topic of climate. Seems to me that Akasofu has more knowledge about detection and attribution than nearly all of the biologists and economists included in the “list”?
Given the breadth of the topic of climate change, its impacts, and policy options, it seems that considerable breadth of expertise is needed, i.e. “all hands needed on deck.” But there seems to be a turf battle over “which experts,” as evidenced by the PNAS paper and the continued appeal to the IPCC consensus.
Here’s a link to another of my posts on this topic on Roger Pielke, Jr.’s blog.
My concern is that it is not clear what problem the memo is intended to solve. I am not sure that the authors are aware of the dailiness of using science in a variety of government decisions at different spatial and temporal scales. In clumsily attempting to go after the misbehaving, they are likely to target the innocent for unnecessary work. In this economic climate, one would think that people would be more careful about requiring hordes of federal employees to develop and follow unclear and unnecessary policies.
Here’s my summary of the memo (more on the guidelines later).
1. What if we were to apply the ideas espoused in the memo to the promulgation of the memo (as the memo is policy) itself? We might expect a section describing how the work of noted science policy experts was used in the development of the memo, with peer-reviewed citations. I’d expect to see Jasanoff, Sarewitz and Pielke, Jr., at least, cited.
2. Here are my four principles for improving the use of information in policy: (1) joint framing and design of research with policymakers, (2) explicit consideration of the relevance of practitioner and other forms of knowledge, (3) quality measures for scientific information (including QA/QC, data integrity, and peer and practitioner review), and (4) transparency and openness of review of any information considered and its application to policy.
3. If the DQA (Data Quality Act) and the “Integrity” work are seen to be the result of inchoate longings by many for an improved “science to policy” process; and if they seem each to have become, instead, weapons to slime the opposing political party, then why not establish a bipartisan commission on improving the use of scientific and technical information in policy? Science policy experts would advise the commission, and the deliberations would be transparent and open to public comment. The terrain to be explored would include my four principles above, and add considerations of involving citizens more directly in working with the relevant Congressional committees in developing federal research budgets and priorities.
Some of you may be interested in this piece I wrote for Roger Pielke, Jr.'s blog on the GAO report on FS R&D, and the comments. Roger also put in links to some key work, including this article by Sarewitz and Pielke, which is well worth a read. If you are not familiar with this literature, which examines how scientific processes work, I recommend this piece as an introduction. In the FS, one of our leaders once described self-awareness as a key component of her selection of individuals for leadership positions. In my view, science policy studies is a field that is key to the self-awareness of the scientific community.
Here’s a quote from the Sarewitz/Pielke paper:
The idea that the creation of scientific knowledge is a process largely independent from the application of that knowledge within society has had enormous political value for scientists, because it allows them to make the dual claims that (1) fundamental research divorced from any consideration of application is the most important type of research (Weinberg, 1971) and (2) such research can best contribute to society if it is insulated from such practical considerations, thus ensuring that scientists not only have putative freedom of inquiry, but also that they have control over public resources devoted to science. The continued influence of this perspective was recently asserted by Leshner (2005), Chief Executive Officer of the American Association for the Advancement of Science: ". . . historically science and technology have changed society, society now is likely to want to change science and technology, or at least to help shape their course. For many scientists, any such overlay of values on the conduct of science is anathema to our core principles and our historic success."
I know that many in the natural resource/forestry/public lands community are not aware of, or do not have time to keep up with, the science policy literature. I am curious as to what readers think of my post on the conveyor belt model and the Sarewitz/Pielke paper.