I am just back from the annual PSP conference in Washington, where I had the opportunity to attend a number of stimulating sessions. You can find the full program here. Of particular interest to me was “Plenary #1: A Tale of Two Continents — Open Access in Europe and the US.” There were three outstanding presentations — by Rachel Burley of BMC/SpringerNature, Amanda Click of American University, and Richard Wilder, Associate General Counsel at the Bill and Melinda Gates Foundation — but toward the end I felt a question creeping into consciousness; and that question was,
How do you evaluate the effectiveness of your programs; and when those evaluations are completed, will you make them openly available to the public?
The panelists did not have an answer to this question, and they freely admitted so: their candor was admirable and engaging. But still, I would have liked to know not just what they are doing but why.
Before digging in any further, let’s be clear that this question is not about the panelists. They did a good job. They are obviously people of high accomplishment who communicated with clarity and precision. The panel is not the subject here; rather, the panel is the occasion of my own reflections. Let’s not indict them when all that they are guilty of is intelligence, education, discipline, and attainment. We should leave the ad hominem arguments in Washington where they belong.
It seems to me that Burley had a good response to the question at hand, but she did not reach for it. As the only representative of a for-profit organization, she could have said that the evaluation of her program lay in the financial results: profitability, return on capital, trailing and forecast growth. I think she would have gotten very good grades by these measures, but she noted instead that BMC tracks such things as the number of downloads and evidence of engagement among users. But to what end? What does the number of downloads tell us, and what is the meaning of a tweet? (Please send your answers to @josephjesposito.) Wherefore open access?
Click is responsible for an APC program at American University. It is a trial program, which is having the intriguing problem of not being able to use all the money allotted to it, as not all Gold OA venues subscribe to the protocols that Click’s fund requires. This is a tactical program; the rationale for the program lies elsewhere in the university. And so I continue to ask, wherefore open access?
With Wilder we move fully upstream to a major source of grant funding. In response to my question, he said that he did not have an answer, but that his area, like all units of the Gates Foundation, attempts to align its activities with the overarching mission of the Foundation. I wonder if I was the only person in the room to note that the Microsoft fortune that is behind the Gates Foundation derived from a philosophical position that is the diametric opposite of that of open access. It’s worth reading about the young Bill Gates’s “Open Letter to Hobbyists.” This does not mean that people can’t or should not change their minds, but it would be nice to know why. In any event, it seems odd that the foundation set up by the most results-oriented businessman of our age would not have developed a way to measure its activities.
If the panelists — and, more generally, the open access (OA) movement — were to offer a basis for evaluation, I think it would sound something like this:
Knowledge proceeds through communication, as one researcher builds upon (stands on the shoulders of) the work of others. Paywalls interfere with this sharing principle and thus slow down the pace of scientific discovery. OA will increase scientific communication and thereby accelerate scientific discovery.
Unfortunately, there is no way to do a controlled experiment to test this. What would the world of cancer therapies look like in ten years if everything were made OA today, and what would it look like if nothing were OA? Can we agree on a proxy for this experiment? Or are we simply going to take for granted an outcome that cannot be tested and for which there appears to be little curiosity about what the appropriate measures would look like?
There are some interesting corollaries to the proposition that OA will accelerate discovery. For example, some fields are moving to OA faster than others, which raises the prospect that in ten years we may have made enormous progress in (say) artificial intelligence, but not in cognitive psychology. I would like to know what the world will look like if the horses all leap from their gates at different times.
Which brings me to the question that has nagged me since I first encountered Stevan Harnad’s “subversive proposal” in the 1990s: Why does anybody think this is true? It is not surprising that our panelists could not say how their OA programs are to be evaluated. OA has sat outside the realm of accountability since its inception.
I have remarked before that OA is a bad idea whose time has come. As a DNA-level pragmatist, I focus more on its inevitability than on its ostensible virtues or limitations. But I continue to ponder why anyone thinks it will work — what the evidence is for that — and why an entire ecosystem based on reputation and demand is being overturned in the absence of a method of evaluation. I am still waiting for that evaluation to be made freely and openly available.