Volunteers from COPE Council often join workshops, where we listen to the challenges faced by members of the communities COPE serves, and where we share insights from COPE and the resources we make available (like our cases, our flowcharts, our guidelines).
The Symposium der Ombudspersonen für Gute Wissenschaftliche Praxis (Symposium of Ombudspersons for Good Scientific Practice) was held last month in Berlin. COPE was there, leading a workshop titled “Towards better research authorship”. Chris Graf, Co-Chair, with Sabine Kleinert, Senior Executive Editor at The Lancet (and former Vice Chair, COPE), unpacked authorship standards, practices, and problems with 40+ vocal and engaged research ombudspeople.
We asked (and discussed) three questions. Why does good authorship practice matter? What symptoms identify poor practice? And how do we manage problems when they arise? We debated whether authorship is a sharp enough tool for the job some people use it for: recognising and rewarding research efforts. We explored the not-yet-widely-adopted contributorship model (where authors provide a short explanation of who contributed what), alongside narrative “soft” approaches to enabling this (like those used by The Lancet) and more rigid “controlled vocabulary” approaches (like those enabled by the CRediT taxonomy). We talked about the challenges of using author order to assign credit for research.
Together, we played out three case-based scenarios (from COPE’s discussion document “What constitutes authorship”):
- I’m a junior researcher and did a lot of the basic work. My supervisor/department head wrote up the work and hasn’t included me as an author.
- My department head insists on being included as an author on any research paper that comes out of his/her department. But he/she only obtained the grant money. Is this fair?
- In what order should we list the authors to demonstrate the relative contribution of each?
This month’s theme at COPE is our second Core Practice, “Authorship and contributorship”, and the resources we share to support good practice in this area.
But authorship itself can be a blunt tool, particularly when people try to use it for research assessment. Let’s see what we can do together to change that. Perhaps by implementing contributorship models at journals (softer, like The Lancet’s, or more rigid, like CRediT). Perhaps by contributing to discussions about how performance is assessed by university reward committees. Perhaps by doing what we can to encourage positive change in how research funders assess research and award funds. All this means it’s critical that we continue and broaden the conversation about research authorship. So let’s do that, together.