Open Process academic publishing
Free software and ‘Open Process: why Open Access, or Open Source is not enough’
Publishing and peer review processes in academia are outdated and closed models. Key flaws are a lack of transparency in the pre-publication process, a lack of dialogue in both the pre- and post-publication phases, and a linear use of digital media that only scratches the surface of the possibilities for greater reflexivity and dialogue in the service of more powerful, effective and responsive knowledge production (Cope and Kalantzis, 2009). The history of peer review is closely tied to state and royal censorship: academics take turns in disciplining each other and providing a sense of order and an assurance that good science is produced, so that the contract between the state and science is preserved (Biagioli, 2002: 12-13). ‘Black box’ seems an accurate description:
You submit a study to a journal. It enters a system that is effectively a black box, and then a more or less sensible answer comes out at the other end. The black box is like the roulette wheel, and the prizes and the losses can be big. For an academic, publication in a major journal like Nature or Cell is to win the jackpot. (Smith, 2006)
At least in the areas I operate in (social sciences and humanities), these processes should be far more, if not entirely, open, with a provision for privacy in special cases. I call this model ‘Open-process academic publishing’. The name deliberately distinguishes itself from Open Access (Suber, 2007), which refers only to the outcome of academic knowledge production being open. The suggestion is not to open the process in random ways, but in ways in which this openness – fundamentally based on volunteer participation – brings and enables more structure, more internalized working discipline, more commitment, and more ability to improve cooperation with deliberate precision, all with the goal of improving the outcomes. Since the ‘culture of open processes was essential in enabling the Internet to grow and evolve as spectacularly as it has’ (Crocker, 2009), we could call it ‘The Internet Model’ (software + networking). Its potential screams to be reused and hacked for other areas of production. Academia, especially its publishing side, seems to me capable of embracing such volunteer-core, open-process cooperation.
The overall model presented here is new, though some of its components have been used by journals for a while. The most notable early examples are the British Medical Journal (BMJ) and the Journal of Interactive Media in Education (Buckingham Shum and Sumner, 2001), whose peer reviewing process contains discussion-based reviewing, first private and then publicly open, in several stages. Recent examples include Papers in Physics and Geoscientific Model Development. Atmospheric Chemistry and Physics implements a multi-stage reviewing model, which includes an eight-week period for public comments and another author revision prior to peer review (Atmospheric Chemistry, 2009a). It brings with it the following advantages: rapid publication and free dissemination, traceable peer review, immediate feedback through interactive discussion within the scientific community, and efficient new ways of publishing special issues (no ‘waiting for the last paper’) (Atmospheric Chemistry, 2009b). Several journals use this model, enabled by a proprietary online publishing platform (Copernicus Publications).
In medicine, the journal PLoS One started from scratch in 2006 (PLoS One, 2009). Today, it is one of the largest journals in the world by volume: peer reviewed, open access, and making rich use of commenting tools and automatically generated article metrics. Its primary publishing criteria are the validity of data and methodology, while originality and importance are left for readers to judge. Its downside is the highly problematic principle that authors pay substantial publishing costs, although this is somewhat balanced by a fee-waiver system and by reviewers not knowing whether the authors pay or not. In physics, Papers in Physics publishes the article, the reviewers’ comments and the author’s reply alongside the names of all involved – if the paper is considered original and technically sound: ‘This way, it promotes the open discussion of controversies among specialists that are of help to the reader and to the transparency of the editorial process. [...] reviewers receive their due recognition by publishing a recorded citable report’ (Papers in Physics, 2010).
The difference between the above examples and the model proposed here is threefold. First, we propose a component-based, highly configurable model in which journals select what type of production workflow they wish to use, including closed ones with no external open-process components, if they so choose. Most importantly, Open Process does not stop at being an intellectual exercise: gComm(o)ns is our software, whose early users include Historical Materialism and the Cultural Studies Association (U.S.). Second, Open Process is not primarily concerned with technology (the field its inspiration comes from) nor with knowledge production (the field of its first application) alone. The central point of Open Process is the organization of labour and decision making in general, and their relation to society. Since Open Process provides us with a trace, production can be made entirely open to inspection by anyone. In other words, open-process cooperation is a widely applicable paradigm. Third, Open Process aims to follow the example of Free Software’s ethical axioms and its building of a software commons, extending it further. In contrast to Free Software, Open Source was a self-declared capitalist movement whose sole aim was to attract investments from large IT corporations by removing from Free Software and hacker ethics what capitalists did not like:
Our success after would depend on replacing the negative Free Software Foundation stereotypes with positive stereotypes of our own pragmatic tales, sweet to managers’ and investors’ ears, of higher reliability and lower cost and better features. [...] our job was to rebrand the product, and build its reputation into one the corporate world would hasten to buy. (Raymond, 2001: 176)
Now contrast that with the commitments of Free Software and the GNU manifesto: axioms that mandate an ethics of sharing; the creation of a software commons (Stallman, 1999); a post-scarcity world where increased productivity will translate into more leisure and less work to make a living for everyone; a reward system whereby only what can be freely shared will deserve reward (Stallman, 2009b). In short, there is a fundamental political difference between Free Software and Open Source. Since Free Software existed fifteen years prior to the creation of Open Source, losing Free Software from the theoretical debate, an omission typical of the vast majority of academics writing on this topic, means losing the entire political field of battle, ceding it to the capitalist model called Open Source. Open Process restores this battleground to its coordinates, adding the work of networking communities, represented by the Internet Engineering Task Force (IETF), to the mix.
Critics may see Open Process as too alien to academia. A counter-argument, and a strong one I believe, is that open processes, a key component of hacker ethics, first developed and thrived in academia amongst software and networking communities. Himanen draws many comparisons: in academia, the point of departure for researchers is the problem they personally find interesting. The academic ethic demands that analyses and solutions to problems be published so that everyone may use, criticize and develop them further. Fulfilment of this is required not by law, but by the internal rules of the scientific community (2001: 63-79). In business, almost all aspects of cooperation – goals, teams, time frames, plans, methods, distribution of results – are typically set by the management hierarchy. In academia, teams are largely self-selected and self-managed, with high levels of autonomy. The hacker ethic, widely shared amongst the software and networking communities discussed here, shares all these features of cooperation. That is why I see Open Process in many ways as a good fit for academic knowledge production, and as a logical way to progress.
The model proposed here brings only some new aspects, related to the work done in the Open Organizations project (Geer, Malter and Prug, 2005a). It is an abstraction, a theoretical distillation of decades of development in software and networking and in related concepts and practices, especially in their open-process aspects, which have already been partly reused in news production (Arnison, 2003).
What are my motives, you might ask? I am a PhD student dreading the idea of being drawn into the existing closed and opaque model. In the social sciences and humanities (judging by dozens of journals that I checked), authors mostly have very little idea how long it will take for their submission to be processed, what the stages in the process are, and how to engage with it, other than to wait for an unknown length of time. Quite a few journals do state some of these elements on their web pages, but processing still takes many months and sometimes years, and it does not embrace open processes for better cooperation.
Given what is possible and what we can observe in the production of software and networking, current practice makes very little sense to me. Geared against innovation, seemingly ‘most appropriate for papers that contain little that is new’, with, on average, less capable researchers often judging the work of the best ones (Armstrong, 1997: 6) – I find the current state of academic publishing depressing and unacceptable. The most unacceptable element is that we are supposed to produce new knowledge. And yet, with all the existing tools and processes for communication and cooperation, processes that gave us the Internet, the Web, and most of what is good about them, in academia we still mostly operate, in terms of our working processes and ways of cooperation, as if very little of this open, volunteer-based cooperation had actually happened – we mostly ignore it.
In many ways opposite to this explosion of open-process cooperation, which largely originated and developed in academia through hackers, the culture of doing safe work is widespread in academia today. Information Systems is not isolated in ‘leaders explicitly advising new faculty not to innovate if they want a career’ (Whitworth and Friedman, 2009a); the anti-innovation culture starts earlier. In 2008, I was part of a class of twenty first-year PhD students at the Sociology department of the London School of Economics given the same advice. To increase our chances of being published, we were advised not to innovate, but instead to stick to what is familiar, in order to make it easy for editors to accept our work. Avoidance of innovation and risk taking, and conformity to a publishing system which discourages them, are now part of academic training in some disciplines. It seems that changing the publishing system might be the first key step towards changing knowledge production.
Instead of enabling better cooperation, which is the key to knowledge production, the Internet and electronic tools are increasingly used in academic institutions to enlarge and multiply bureaucratic procedures, regulations and managerial control, radically changing the university in the process (Dyer-Witheford, 2005). That seems to be the trend (Sievers, 2008: 242-3). While managers are imposing more control in many aspects (Bousquet, 2008: 12-13, 59-70), we need to ask why academics are so slow in adopting these new tools and processes. One aspect, which this paper does not deal with and which requires a separate study, is their possible use for the improvement of internal processes within university departments: self-governance, labour relations, and the organization of work in all its aspects. The other aspect is the production of knowledge, most of which revolves around writing and publishing journal papers. Is the situation as dreadful as this recent paper boldly states?
Academics are now gate–keepers of feudal knowledge castles, not humble knowledge gardeners. They have for over a century successfully organized, specialized and built walls against error. [...] As research grows, knowledge feudalism, like its physical counterpart, is a social advance that has had its day. (Whitworth and Friedman, 2009a)
The Open Access movement and academic blogging are examples of positive adoption. However, blogging is limited to individuals working on their own, linking and holding discussions through comments. It does not amount to the full software-networking Internet model, which is not a surprise: it is not meant to be about collective, organised, prolonged production work. Still, I am tempted to argue that blogs, pingbacks (Langridge and Hickson, 2002), discussions in comments (Adio et al., 2009), and the intense circulation of new posts and comments via RSS (RSS Advisory Board, 2009) amongst clusters of inter-linked blogs are all elements of an early form of open-process cooperation developing in academia. It is developing not in an institutional setting but, for now, in a self-administered way, outside institutions. This is a good thing: it carries the volunteer-core spirit, an essential part of the open-process aspect of the Internet Model.
That said, I would not fully agree that ‘science is already a wiki [...] just a really, really inefficient one – the incremental edits are made in papers instead of wikispace’ (Wilbanks, 2009). However, there are several aspects of wikis, blogs and comments that could lend themselves well to the creation of new forms of scientific production, which could be a step forward from the current journal model.
While I fully agree with the goals of the Open Access (OA) movement, OA falls much too short of what, given the models and tools at our disposal, could and should be done in academia. The primary limitation of OA is its focus on the Open Source paradigm and its central attribute: the openness of the final product. This is not a surprise, given that Open Source was the dominant concept signifying the success of software and networking communities at the time the OA ideas were created.
Today, I claim, we need a paradigm shift. Open Source is a very limited subset of the methodology that made software and networking communities so successful. To recapture what was lost in Open Source, to go back to Free Software and the early hacker ethic, we need Open Process and The Internet Model to replace it, and thus to expose the world to the potential of reusing these models in many spheres of society, starting with knowledge production.
Although it seems that most academics on journal editorial boards are already employed in universities, the labour of editing and peer reviewing is unaccounted for in their university jobs. This in turn leads to a lack of financial support from universities and to mixed feelings about journal work, which makes journals easy prey for corporate publishers seeking profit alone. Open Process makes this work visible and traceable, and hence makes it easier to argue for the necessity of institutional support and recognition. This could lead to more university funds for labour in journals, with the result that journals would be able to offer articles for free, taking them out of expensive corporate subscriptions.
According to Iain Pirie, there is no doubt that the current model is based on the private appropriation of public labour (2009). Corporations are a significant source of funding, but hold no qualitative role in the production of journals. In the process they make significant profits, raising subscription rates for libraries year after year, while almost the entire production is based on the volunteer labour of academics. This conflict of interest between the academic community and corporations – who fund a fraction of the total cost, yet almost entirely control the price and the profits – came to prominence with the clash of titans between the University of California and the Nature Publishing Group (Howard, 2010). The solution Pirie proposes is sensible: since all other parts of the system are largely state-funded already, why not have those funds contributed by the state and bring publishing in-house? With Open Process and the gComm(o)ns platform, we are addressing the lack of organizational models and tools needed to make such a move possible.
Open-process publishing and reviewing advantages
My claim about how to use the Open Process cooperative model, and how to develop it further, is partly a speculation much inspired by the practice and theoretical work of the Open Organizations project, coming out of over two years of intense work in a distributed networked organization (Indymedia), largely modelled on hacker (and partly academic, as demonstrated by Himanen et al., 2001) organizational elements: radical openness to participation, sharing of results, peer review, working groups, extensive use of email lists and online chat, rough consensus as a decision-making principle, and intense documentation.
Based on that experience and my reading of software and networking communities, i.e. based on The Internet Model and the Open Process that came out of it, I speculate that journals implementing these processes could benefit in the following ways. The structure and visibility of tasks, processes, and the work done to complete them would become clearer, making it easier to recognize the workers who contribute the work that matters most to the organization. As a result of this visibility, the focus on implementation work and continuously carried-out processes would increase, which keeps the organization alive and developing. Project management would become easier, while decision making would be placed into the hands of those who matter most: those who contribute most to the implementation work, the work whose progress defines the organization and ensures its continued existence. All of this would attract new volunteers and reduce the impact of counter-productive internal participants. Today, given the structure of organizations across society, given our time-based obligations to the workplace and our waged labour, it is no surprise that it is difficult to see how these new processes of work, this hacker culture, especially its volunteer aspect, could be applicable. I will try to argue here how it might impact academic journal publishing.
The following benefits could be gained with open-process publishing and peer reviewing:
1) The quality of submissions would increase over time – because new authors would see the history of the entire process and learn from it (from the archive of all submissions, peer reviews, editorial board comments, etc.). In addition, quality would increase because new authors would be less likely to submit badly written texts that make no adjustments to publicly stated journal guidelines – a big problem for editors, I am told repeatedly, is the large amount of low-quality initial submissions. In the current system, with externally invisible submissions, the reputation cost of submissions for authors is too low: they can submit any rubbish without adjusting it to the journal’s guidelines and without caring about the quality of what they submit. The only people who see these disrespectful acts (towards the work of editors, especially volunteer work), and who associate them with the author’s name, are the editors. If submissions were openly visible, the cost of submitting random, unadjusted, low-quality, undeveloped papers would be far higher, since such disrespectful behaviour would be publicly linked to the author. Furthermore, in the case of highly ranked journals, their editorial decisions – which directly and strongly influence the chances of authors in the academic job market – would become visible and a possible matter of external scrutiny. The journal Atmospheric Chemistry and Physics has been operating an open, two-stage peer review process for years, and the results confirm the logic of my hypothesis:
Public peer review and interactive discussion deter authors from submitting low-quality manuscripts, and thus relieve editors and reviewers from spending too much time on deficient submissions. [...] The deterrent is particularly important, because reviewing capacities are the most limited resource in the publication process. (Koop, 2006)
As both referees of this text claimed, and as proponents of anonymous and blind peer reviewing claim, open-process peer reviewing might be a significant deterrent to many, if not the vast majority, of referees. The logic is the following: younger academics, those with a lower career profile, or simply those whose work might be affected by any reaction from the author being reviewed – none of these academics are likely to be willing to review the paper of a big academic star if their names are revealed. According to this logic, anonymity protects the referee and gives her the freedom to respond without any possible retaliation by the author. Following the advice of referees and editors to put myself in the shoes of reviewers, and rethinking the issues once more, I cannot identify with this logic. I tried. It does not work for me. If anything comes to mind in such role playing, it is the desire to always have my name associated with the work I do. Anything less feels wrong to me, and rather unethical (Godlee, 2002). Critique from behind the veil of anonymity, the key purported positive feature of the current system, also seems entirely at odds with how new writing is produced. In writing, everything has to be referenced: the more, the better. Ideas are critiqued, improved or abolished, and this works not only because we see rational arguments in relation to each other, but because by knowing the name, the history, the previous work and the intellectual, sometimes even business and political, associations of the author, we can put those ideas, both the originals and their critiques, in context.
Especially in the social sciences, the name and biography of the author are essential ingredients without which it is impossible to evaluate the ideas. This core logic of academic production is dropped by the current reviewing system. Having given it a second thought, I hold that the current system is flawed and destructive to the open battle and cooperation of ideas that academia relies on through sharing, referencing and quoting. These key features could, and therefore should, be upheld by a new peer reviewing and publishing system replacing the current one. Open Process gives us an option to consider.
Peer review has a long history (Spier, 2002), but it was not much researched until the 1990s. A large-scale randomized controlled trial with 420 reviewers by the British Medical Journal (BMJ) – introducing eight areas of weakness into a paper accepted for publication and giving it to five separate groups of reviewers under different conditions of anonymity and blinding – found that ‘neither blinding reviewers to the authors and origin of the paper, nor requiring them to sign their report had any effect on rate of detection of errors’ (Godlee, Gale and Martyn, 1998). A follow-up BMJ randomized trial, examining the effects of revealing reviewers’ names to the authors of the paper, found that although identified reviewers produced slightly, but not significantly, better quality reviews, there was a significant difference in the number of reviewers refusing to review, with 12% more refusals (35% v 23%) for reviewers with a revealed identity. They concluded:
open peer review is feasible in a large medical journal and would not be detrimental to the quality of the reviews. It would seem that ethical arguments in favour of open peer review outweigh any practical concerns against it. The results of our questionnaire survey of authors also suggest that authors would support a move towards open peer review. (van Rooyen et al., 1999: 26)
A randomized trial by the British Journal of Psychiatry a year later, with the goal to ‘evaluate the feasibility of an open peer review system’ through 498 reviews, found that the quality of reviews in the signed group was significantly higher, with a more courteous tone, but that they took significantly longer to complete. A surprisingly high number of reviewers, 76%, decided to sign their reviews (Walsh et al., 2000).
Finally, Open Process will challenge current notions of the quality of the peer reviewing system. Only when we can see the full process – from the initial submissions, through the reviews and revisions of the peer reviewing process, to the published version – will we be in a position to compare the two models. As it stands today, stating that peer review contributes to quality seems a wild guess that lacks a proper argument and any scientific basis. Moreover, as I suggest here, only by making all submissions visible at the time of submission do we make editorial boards and reviewers accountable and fully rewarded for their work:
a number of important questions about peer review can only be answered, however, by studying rejected manuscripts as well as those that are accepted. Until such research is undertaken, peer review should be regarded as an untested process with uncertain outcomes. (Jefferson et al., 2002: 2786)
Richard Smith, editor of the BMJ and chief executive of the BMJ Publishing Group for thirteen years, wrote one of the most damning articles on peer review, which is worth quoting at length:
People have a great many fantasies about peer review, and one of the most powerful is that it is a highly objective, reliable, and consistent process. [...] it is little better than tossing a coin [...] ‘it is based on faith in its effects, rather than on facts’ [...] it is not a reliable method for detecting fraud because it works on trust [...] it is slow, expensive, profligate of academic time, highly subjective, something of a lottery, prone to bias, and easily abused [...] it is probably unreasonable to expect it to be objective and consistent [...] Sometimes the inconsistency can be laughable. Here is an example of two reviewers commenting on the same papers. Reviewer A: ‘I found this paper an extremely muddled paper with a large number of deficits’; Reviewer B: ‘It is written in a clear style and would be understood by any reader’. [...] (Smith, 2006: 179)
During his time at the BMJ, several international conferences on peer review were organized, eventually leading to the BMJ’s switch to open peer reviewing as its default policy. Together with The Journal of the American Medical Association (JAMA) and The New England Journal of Medicine, the BMJ was a leader in the introduction of open peer reviewing (Smith, 2005). As a reading of peer review history through to the latest experiments suggests, ‘the core assumptions inherent in the process must be evaluated and adapted to the changing environment’ (Benos et al., 2007: 151).
2) The quality of and innovation in published texts would increase – because of point one above, and because opening up the whole, or most, of the publishing process would improve the quality of peer and editorial board reviews. Low-quality, superficial peer or editorial reviews would be publicly exposed, and vice versa: the possibility of a lost, or gained, reputation as an editor or peer reviewer would be a motivating factor. In the current model, all of that work is visible only to the few who participate. One of the most comprehensive studies, a review of 68 papers concerning peer review, paints a rather depressing picture. At the time of writing it, Scott Armstrong had been a professor for over thirty years, had founded two journals and had served on fourteen editorial boards. He draws attention to the anonymity aspect of reviewing and the lack of reward, thus confirming what I concluded speculatively: ‘reviewers generally work without extrinsic rewards. Their names are not revealed, so their reputations do not depend on their doing high quality reviews’. Although ‘reviewers typically have less experience with the problem than do the authors’, they contribute no new data or analyses, and they spend between two and six hours on a review, often after months of delay. Overall, reviewers set their opinions against the scientific work of authors, often differing from other reviewers (Armstrong, 1997: 5). To complicate things further, academics are impressed by and prefer ‘complex procedures’ and ‘obscure writing’.
Amongst the several suggestions Armstrong makes is to have authors nominate one of the reviewers. This is especially important for innovative work, the type of work that provides ‘useful and important new findings that advance scientific knowledge [...] which typically conflicts with prior beliefs’ and requires a paradigm shift (Armstrong, 1997: 2). Another suggestion is open peer reviewing, since ‘disclosure of reviewer identity allows for a deeper dialogue among interested parties [...] while once the article is pronounced “peer reviewed” and published, there is little record of the process and no means of further development’ (Phillips, Bergen, and Heavner, 2009). Such an open process would create lasting relationships and build reputations for good reviewers. The logic of reputation works well in life in general, and it can work well via online tools too – eBay is a good example of quite a successful model of closely attaching behaviour to a name. Peer reviewers could still easily stay anonymous if they so choose: they could send their review to editors, who could forward it to the open-process system. In that case, they lose the reputation they could have gained for a signed, well-done review. Neylon argued that reviewers should be ‘held accountable for the quality of their work. If we value this work we should also value and publicly laud good examples. And conversely poor work should be criticised.’ Recognizing that most of us do bad reviews at times, he states clearly why this is the case: ‘After all, why should we work hard at it? No credit, no consequences, why would you bother?’ Regarding the reciprocity argument, that we can only expect good quality peer reviews if we do the same ourselves, he concludes that this may be true ‘only in the long run, and only if there are active and public pressures to raise quality. None of which I have seen.’
Two more key points Neylon makes are the portability of reviews between journals and the lost opportunity for journals, which could ‘demonstrate the high standards they apply in terms of quality and rigor – and indeed the high expectations they have of their referees’ only if reviews were open. Finally, ‘virtually none track the changes made in response to referee’s comments enabling a reader to make their own judgement as to whether a paper was improved or made worse’, thus setting the editorial boards of journals up as judges who can pass an infallible judgement on every aspect of publishing on behalf of their readers (Neylon, 2010).
3) Journals that implement this process well would attract more agile and risk-taking authors – because with open-process publishing it makes more sense for authors to take more risks (which might sound counter-intuitive at first) and to situate themselves less within known and accepted knowledge boundaries, since they can rely on peer and editorial assessments of their work being done in public. This in turn can lead to less politically correct, career-opportunist position taking from both authors and reviewers, and to an opportunity for bolder leaps from both sides. In short, openness would steer the reviewing assessment towards a focus on the merit of the work assessed. Hence authors could be more confident in submitting riskier, less compromise-driven works. This would lead us away from ‘the modern academic system that has become almost a training ground for conformity’ (Whitworth and Friedman, 2009a), and away from the ‘publish or perish’ devaluing model whose low-risk but well-referenced style of writing has made overall research difficult to assess. It would encourage ground-breaking authors to publish their new research early, and discourage mediocre authors who often prosper in the current play-it-safe system through the sheer number of their low-risk publications. Armstrong’s research again confirms this: as a wide variety of research points out, it is common for reviewers to reject ground-breaking papers, since ‘it is more rewarding (for researchers) to focus on their own advancement rather than the advancement of science. Why invest time working on an important problem if it might lead to controversial results that are difficult to publish?’ (Armstrong, 1997: 15)
If open-process publishing were widespread, re-writing the same paper for different journals – again for the sake of careerism, to get research points and an extra publication – would be far easier to spot and expose. The current opaque system makes life easy for low-risk careerists. Whereas Open Access is contributing to changing this for the better, Open Process would reduce it drastically. We could use any good web search engine to check for key paragraphs or concepts together with the author’s name, and it would soon be clear whether the author has already published on the topic, where, and exactly what. Simultaneously, the participation of a wider community of reviewers would increase the chance of innovative, risk-taking work being spotted, and would help to develop and publish it (Beel and Gipp, 2008).
4) Journals that implement this process well would significantly raise the pace of research – because some of the most in-depth debates that now happen on academic blogs could develop thanks to faster, open-process peer reviewing and commenting being integrated into journals in some form. The form could be shorter, still referenced as academic papers are, with arguments even more focused than those in an average 8,000-word paper. My impression is that most journal papers revolve around a few core ideas (often a single one), not necessarily connected closely enough to require a single paper. Today, I believe that some of these ideas originate in blog posts. We could enable those high-quality 700-800 word blog posts to be submitted, first as rough drafts, and then in a fully referenced short, burst-like form of 1,500-2,000 words. Since the argument would be shorter and more focused, it would be easier to evaluate, which would mean shorter turnaround in peer reviewing and publishing, and hence an earlier possibility for those whose work relates to it to respond. Let us call this ‘early screening’. The cycle of publishing would thus follow more closely the way we research, especially for senior academics for whom ‘research is often done when a few precious hours can be salvaged from a deluge of other responsibilities’ (Weber, 1999). It would also help avoid the situation in which ‘many journal papers are out of date before they are even published’; with the rather frustrating truth that many experience personally, namely that ‘in the glacial world of academic publishing one rejection can delay publication by two–four years’ (Whitworth and Friedman, 2009a). In addition, there are situations when a rapid response from scientists could be immensely beneficial (Varmus, 2009).
PLoS Currents is a recently started project providing a platform for the fast publishing of scientific papers on specific issues (the worldwide H1N1 influenza A virus outbreak is the first (Public Library of Science, 2009)), using a board of expert moderators instead of in-depth peer review in order to get papers shared as rapidly as possible.
5) Journals would gain readership and reputation – because of all the above, and because of the internal benefits and their public visibility. That is, provided they remain in a form which still justifies calling them journals. Several authors consider that the future of academic publishing will be focused on articles, possibly moving towards ‘public research environments’ (Mietchen, 2009b) that will displace the notion of journals. One thing is more certain: journals do not have a single future (Nielsen, 2009c). Different platforms are already emerging, and we will see more of them in the near future. Scientific blogs are places where emerging models are discussed. There are big obstacles to a more collaborative model emerging. Academic journal publishing is a hugely profitable industry (Cope and Kalantzis, 2009), achieving its profits through a paradox: the privatization of work done by communities funded mostly by the state, and the selling of access to it back to those who produce it, via library subscriptions. In the health sciences, and within most established institutions, ‘the current publication and review process is controlled and fiercely defended by those who benefit from it’ (Phillips, Bergen, and Heavner, 2009). For Nielsen, science lacks both the tools (infrastructure) and the incentives for radically open collaboration: why would one write and comment on blogs if that does not count when grants and jobs are given out (Nielsen, 2009a)? Perhaps that is true in physics, where he works, although I doubt it. I believe cooperation on blogs and comments and the existing journal system can and do co-exist, to the benefit of participants both in producing better work and in enhancing their careers. For example, early exposure of this piece on my blog resulted in the text being significantly improved (in dialogue with Benjamin Geer and others), a presentation at a conference accepted, and an invitation to give a lecture.
Clearly, I benefited greatly from early exposure and from developing the text in the open. Nor did it limit my publishing options; quite the opposite, it increased them.
It is important to note that this type of open work and early releasing is not always possible, as I realized immediately while simultaneously writing another, political text. This confirms that there will be different platforms, writing and cooperation scenarios, and methodologies for different situations, scientific fields and communities. Our thinking has to be open if we are to increase the possibility of benefiting from the rupture of the centuries-old model of scientific collaboration and publishing.
Internal benefits for journals
In addition, there could be enormous internal benefits for journals, all of which would contribute to their increased organizational health and development:
1) A clearer structure and visibility of tasks and processes contributes to recognizing the most important workers – because breaking up a large task (publishing a new issue) into a set of defined and openly recorded smaller steps, with a more precise and transparent allocation of tasks and responsibilities, exposes who does what, how and when. This is crucial, since such a practice, system and structure of work rewards those who do more, better and timelier work. In organizations, especially voluntary ones (most editorial boards/collectives in the social sciences and humanities), recognizing contribution, and the lack of it, is one of the keys to the survival and improvement of the project. In projects where a structure of openly defined, recorded and visible smaller tasks does not exist, it often happens that the majority of the recognition for collectively done work falls to the wrong people, i.e. to those who have better social connections and who occupy a more visible position within the communities in which the journal/project operates. This default mode of disorganization is a source of constant damage to the project. It rightly kills the spirit of the harder working, more if not most important, participants. In addition, it frequently makes them either imitate the behaviour of those who collect the recognition (contribute less, collect more reputation towards your career progress), or leave the project. This in turn requires the constant recruitment of new project members who will either be blind to the unjust distribution of rewards (reputation), or accept it as it is. If we can take it as relevant, given the differences in fields of operation, a recent study has shown that contributors to popular websites (Youtube.com, Digg.com) are motivated by the attention they get. The attention comes from the volume of contributions. Users who get no attention tend to stop contributing (Wu, Wilkinson and Huberman, 2009).
Although the work of a contributor to Youtube.com differs significantly from that of a volunteer on a collectively produced journal, there are parallels. Translated into our context, it suggests that making the work on tasks visible (open-process publishing’s key point) is likely to direct most attention to those who do most of it – a positive outcome for any project that relies on retaining its most productive members.
2) An increased focus on implementation work and continuously carried out processes – because defining the workflow steps and stages exposes the implementation work that necessarily has to be carried out continuously. It puts the emphasis on organizations, groups and collectives as sets of ongoing processes. It also exposes other kinds of work as less important, and hence those who do it as less essential to the existence of the project and the group. Many loosely structured voluntary groups suffer from participants who talk and communicate a lot, often object a lot as well, but contribute little to the implementation of work tasks. Frequently, these participants hinder other key group members – on whose contribution the project and group rely – from getting on with their tasks. Reducing the influence of talk- and communication-intensive participants who do not contribute much to the implementation work is highly positive for the survival, development and quality of the work produced.
In other words, structured open processes make it possible for an organization, collective or group not to be open and welcoming to any kind of participation, internally or externally, but to be selective instead. More of this kind of openness means more structure, more internalised working discipline, more commitment, and more ability to improve cooperation with precision. In slightly more abstract terms, the more a whole is exposed and defined, and its workings and operations known and visible, the more likely it is that we can adjust and reshuffle it to make it do what the participants in the whole want it to do. Open processes enable this. Closed processes allow more corruption of organizational goals: the less we know about the processes, components and their relations, the more individuals can utilise the results of collective work, or of the work of others, for their own goals and benefits (in academia, for their careers first).
In Free Software terms, long-term freedoms to act and produce collectively do not come cheaply; they have to be defined, developed and defended. The key prerequisite for the four Free Software freedoms (defined as ethical demands) to cooperate and share is universal free access to software source code. What is missing from the Free Software definition to give us an accurate picture of the cooperative model discussed here – although it was present in Richard Stallman’s work as a hacker developing software cooperatively, and in the work of software and networking communities – is visible in the Internet Engineering Task Force principles. In short, to explain the success of the Internet Model, having the source code is not sufficient. Other key components must be present: defined goals, open participation (anyone can join) and work processes, respect for and focus on competence, a volunteering core, the rough-consensus-and-running-code decision-making principle (voting used only in extreme circumstances), and defined responsibilities. This is precisely why the Open Access concept and movement are not enough; nor was it their goal to implement a successful open cooperation on the trail of the Internet software-networking model. A specific organizational model is necessary too. Using the Open Source paradigm, a business-friendly and self-declared ethics-free version of Free Software, is even more misleading, because of its emphasis on the source code alone. Open Source is the least useful model and concept here, since it lacks both explicitly defined ethics – which is what makes it possible in the first place to define, develop and defend sharing and cooperation in Free Software – and a defined organizational model. To explain the successful model of software and networking communities, I propose the following formula: The Internet Model = Free Software + IETF. In other words: software + networking. Or, even better: ethics + organization.
To the existing Internet Model, I would add the following attributes as highly beneficial: first, a mapped workflow of all working groups, components and their relations; and second, a defined exclusion process. The first can be achieved by splitting the work into stages (recognizable, definable points in collaboration), designating working groups with known tasks and participants, and mapping their relations and inter-processes, so that the dependencies between the stages, working groups and other components of the total group activity are known and visible. All of this is geared towards enabling and focusing on the openness of processes and on the contributions of those who carry out most of the implementation work, since such work is the bloodstream of collective work: without its movement, groups, collectives and organizations cannot produce. With open processes at each stage of work, the possibility opens up for new workers to join and participate in only selected parts of the overall production.
3) Easier project management – because increased task modularity and real-time visibility of the status and history of an article (anyone can at any time check the state, comments, versions and reviews of any submission on the web system used) allow for better project management, easier allocation and delegation of tasks, and a more precise sense of progress and problems. All of this benefits the general work spirit and time and resource assessments, and keeps authors who submit papers, and all other parties involved, correctly informed at all times about the full status of a submission.
4) Decision-making in the hands of the people who matter most – because who does what, when and how becomes visible, and because those who continuously carry out the implementation work matter most to the organization, decision making can be placed more in their hands. For example, the Marxists Internet Archive (MIA) addresses this by defining a volunteer, and hence defining the decision makers, through work contributions: ‘MIA volunteers are people who have, in the most recent six-month period, made at least three separate contributions over a period of three weeks to six months’ (Marxist Internet Archive Admin Committee, 2009) – not far from the above definition of implementation work.
5) Attracting new volunteers and reducing the impact of counter-productive internal participants – utilizing the above openness and visibility of tasks and processes, journal editorial boards could use decision-making rules similar to the MIA’s to attract volunteers. By linking decision-making rights to defined implementation work, it would be recognized that a certain type of work, which could be done by external participants, matters more than the mere presence of existing internal talk- and communication-intensive participants. To reduce risk, only certain decision-making rights could be given to new participants to start with, until the existing board is assured that they are fit to carry out editorial work in line with the journal's long-term goals and strategies. This opens up groups and projects to new participants who would from the beginning adopt the culture (habits) of doing the implementation work, while simultaneously reducing detrimental influences. It could also lead to the justified exclusion, or sidelining, of existing internal talk- and communication-intensive participants. In the context of volunteer self-managed groups, this is a positive culture to develop.
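The MIA rule quoted above is concrete enough to be expressed in a few lines of code. The following is a sketch of my own formalization, not MIA’s actual system: the function name and the 182-day approximation of ‘six months’ are my assumptions.

```python
# A sketch formalizing the MIA volunteer rule: at least three separate
# contributions, spread over three weeks to six months, all falling
# within the most recent six-month period.
from datetime import date, timedelta

SIX_MONTHS = timedelta(days=182)  # rough approximation of six months

def is_volunteer(contribution_dates, today):
    # keep only contributions from the most recent six-month period
    recent = sorted(d for d in contribution_dates if today - d <= SIX_MONTHS)
    if len(recent) < 3:
        return False
    # the contributions must span at least three weeks (and at most six months)
    span = recent[-1] - recent[0]
    return timedelta(weeks=3) <= span <= SIX_MONTHS

# three contributions spread over roughly four months: qualifies
print(is_volunteer([date(2010, 1, 10), date(2010, 3, 1), date(2010, 5, 20)],
                   today=date(2010, 6, 1)))
```

The point of such a formal rule is that decision-making rights can then be granted or withdrawn mechanically from the visible record of work, rather than through social standing.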
Modular process: workflows, states, actions and transitions
To summarise, fully open-process academic publishing would amount to the following being open: initial submissions, editorial collective and individual comments, peer reviews, further peer comments, authors’ comments back to reviewers, all subsequent drafts, and the final published or rejected text. One objection is that authors would want only their final version to be clearly marked, used and quoted. A way to ensure this is to map and implement the entire production in software: to modularise and define the workflows, roles, states, and their actions and transitions. This gives us good control over what is exposed publicly, when and how, thus enabling us to address the wishes and concerns of the parties involved: authors, editorial boards, reviewers. A workflow diagram (see figure 1) explains this best.
As the submission moves through the states of the publishing process, its status changes accordingly. The system has different roles (Author, Editor, Reviewer, Copyeditor), each required to take actions at a specific article state. Comments can be left at the time of a state change. Once a state change is made, additional automated actions (called transitions) can be configured and programmed to run.
Figure 1: gComm(o)ns article workflow with peer review (Grigera and Prug, 2010).
Logged-in editors can see queues with the latest articles in a given stage (imagine it like an RSS feed in the sidebar of a website). Editors can also assemble a personal dashboard, creating an individual view of the production process by selecting article queues and other available components. This provides a complete view of all the latest articles in the publishing process. Here is one such dashboard showing six state queues (Submissions contains submitted Drafts) with the latest articles (figure 2):
Figure 2: dashboard.
This is a highly flexible system: with relatively small changes, available mostly through configuration files and a small amount of additional programming, each journal can configure its own workflow. It would be interesting to see differences in editorial models becoming visible as different journals start adjusting the platform to their needs.
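To make the mechanics concrete, here is a minimal sketch of such a configurable workflow. This is not the actual gComm(o)ns/Plone code; the state names, role names and hook function are illustrative assumptions. A table of role-gated actions drives the state changes, and transitions run automated follow-ups such as notifying the author:

```python
# Minimal sketch of a configurable article workflow: states, role-gated
# actions, a recorded history, and automated transition hooks.

class Workflow:
    def __init__(self, actions, hooks=None):
        # actions: {(state, action): (required_role, next_state)}
        self.actions = actions
        self.hooks = hooks or {}  # next_state -> callable(article)

    def perform(self, article, action, role, comment=""):
        key = (article["state"], action)
        if key not in self.actions:
            raise ValueError(f"{action!r} not allowed in state {article['state']!r}")
        required_role, next_state = self.actions[key]
        if role != required_role:
            raise PermissionError(f"{action!r} requires role {required_role!r}")
        # record who did what, when in the process, and any comment
        article["history"].append((article["state"], action, role, comment))
        article["state"] = next_state
        # run the automated transition hook, if one is configured
        if next_state in self.hooks:
            self.hooks[next_state](article)

def notify_author(article):
    # stand-in for an automated email transition
    print(f"email to {article['author']}: article is now {article['state']}")

# A journal-specific configuration: each journal edits this table, not the engine.
workflow = Workflow(
    actions={
        ("draft", "submit"):        ("Author",   "submitted"),
        ("submitted", "to_review"): ("Editor",   "in_review"),
        ("in_review", "accept"):    ("Reviewer", "accepted"),
        ("in_review", "reject"):    ("Reviewer", "rejected"),
    },
    hooks={"accepted": notify_author, "rejected": notify_author},
)

article = {"author": "a@example.org", "state": "draft", "history": []}
workflow.perform(article, "submit", role="Author")
workflow.perform(article, "to_review", role="Editor", comment="sent to two referees")
workflow.perform(article, "accept", role="Reviewer")  # triggers notify_author
```

On this design, changing a journal’s editorial model amounts to editing the actions table rather than the engine, which is what makes per-journal workflow configuration cheap.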
The picture I present here is of quite a developed system. However, existing tools as simple as blog software and freely available wikis, blogs and content management systems (Drupal, Joomla, Wordpress) can be customized well enough to enable us to start working with open-process collaborative practices now, with a significant degree of labour-saving automation and other benefits. Some of these are available in commercial hosting packages with customizable point-and-click installation, backup and administration for less than a few hundred pounds per year. It is the human element – seeing the potential benefits, seeing that they are larger than both the risks associated with these changes and the risk of remaining with the current closed models, and changing the habits of editorial boards – that is the biggest obstacle.
A simple transition model, issues and research threads
The above elaboration is perhaps too complex to implement straight away as the next step in a move from a closed-access journal to an open-process one. Ideally, we need simpler transition models. One option is to borrow from the software development model of working through email lists. Authors could be asked to provide a summary to start with (up to 1,000 words), and editors would comment on the prospect of the central idea developing into a full article of acceptable quality. They could do this much more quickly than traditional peer review, because they would not have to read an 8,000-word article to find out that there is a serious problem: it is well known that it is much cheaper and quicker to fix bugs at the design stage. This is what I proposed we call ‘early screening’. Indeed, some of Armstrong’s suggestions for improving peer reviewing are very close to ours:
With an early acceptance procedure, researchers could find out whether it was worthwhile to do research on a controversial topic before they invested much time. An additional benefit of such a review is that they receive suggestions from reviewers before doing the work and can then improve the design. (Armstrong, 1997: 17)
The author would also get a good idea of how receptive the reviewers are to the article, and thus how likely it is to be published. This helps everyone avoid wasting time on submissions that have no chance of being accepted, and yet, most importantly, the quality-control role of the peer reviewing process is maintained. The network of peer reviewers used by the journal could also be invited into this process (another email list could be used for early screenings by the editors and the peer reviewer network). It also fosters the development of a community of peer reviewers whose interest lies in increasing the reputation of the journal in which they publish. Instead of publishing issues on a regular basis, the journal can publish each article electronically whenever it is ready. Articles get published when the community consensus is that they are good enough to publish. At any given time, if there are no finished articles, the journal does not have to publish anything; thus there is no pressure to lower standards or to rush the process in order to meet a deadline. A print issue can be treated as a ‘Best of’, or a special/themed issue, containing only a selection of what has been published on-line.
In parallel with issues and special issues, a new form of bundling articles together could be even more suitable. Let us call it research threads (Prug, 2010). I find calls for special issues to be often a frustrating experience. I frequently find ones I like, but cannot interrupt what I am working on at that moment to switch to writing for the special issue. This has happened on many occasions, and every academic colleague I asked confirmed that it frequently happens to them too. Here is the reasoning: a call for a special issue often goes out 6-12 months in advance. By the time an author hears about it, she or he can be faced with only a few months left until the deadline – this frequently seems to be the case – unless the author is already part of the circles through which they will be informed directly (this also seems to be the pattern). Researching and writing a good academic article requires several months, often much longer. Given the current dynamics of special issues, many authors who do think they have something to contribute, and are willing to write on the topic, end up missing the opportunity. This is why special issues are collections of articles from existing clusters/mini-networks of academics, rather than collections of the best and most suitable work being done on the given topic at a given moment in time. I do not believe that special issues, as a form, are suitable any more for the best possible production of knowledge. Instead, I propose a model of research threads, with two distinct features: 1) long deadlines – a minimum of two, perhaps even three years ahead; 2) ongoing publishing – submitted articles are published within a research thread as soon as they are accepted and peer reviewed, in order to present new research as soon as possible, to keep the research thread alive (with possible responses to published articles), and to spare authors a long wait before their accepted article gets published.
Several research threads running in parallel would give the journal a distinct identity and a sense of constant development, since papers could be published closer to a form of dialogue, responding closely to each other. In the simplest and most open form, all of this could be done on a publicly visible email list, enabling journal readers to get engaged through this open process. There are significant problems with processes as open as those suggested here. Although authors might like the more extensive peer reviewing that is likely to happen on an open mailing list, most of them are likely not to want their work cited, or used anywhere, before the final version accepted by the journal is ready. A way to alleviate this (also suggested by Armstrong) is to have the right cultural safeguards. There would have to be some principle like ‘respect for peer review’, which would mean that citing journal-mailing-list messages and preliminary drafts in academic articles would be considered a huge taboo. Academic ethics would have to include the idea that you can criticise preliminary drafts as much as you want, but only on the journal mailing list. If you want to criticise them anywhere else, you have to wait until the final version, or a draft version approved by the author, is published. Another problem is competition. In academia, ideas are essential starting elements in the chain of valorization. Authors might avoid early screening or draft peer reviewing on an open mailing list for fear of their ideas being stolen. A closed email list might be more appealing to authors. However, what is lost with closed email lists are several aspects that make the software model successful. First, with no visibility of the reviewing process, readers cannot judge the quality of the work done by reviewers, and hence reviewers do not gain reputation (assigned to them by the readers) from doing it.
Second, new reviewers cannot join the project, nor can authors submit, on the basis of first reading and judging how its reviewing processes operate. Third, knowledge produced in the process of reviewing cannot be reused and applied elsewhere. Fourth, the process of reviewing itself cannot be studied freely, reducing the possibilities of it being improved. The software model thrives on all four of these features. With the Free Software model, even a tiny amount of code can always be traced and attributed to its author. A single character changed in the code can fix an important bug. With ideas, early rough versions cannot be easily traced and attributed: they can be stolen and developed in different directions.
When I started writing this article, I thought there were multiple risks, drawbacks, significant additional labour investments, transition plans, and other reasonably raised issues to be addressed in order for this proposal to make sense to the editorial boards who will be deciding whether or not to try adopting elements of open-process academic publishing and peer reviewing. What I found through research surprised me. I have become convinced that successful journals that do not take risks and change towards open-process participatory publishing in some way risk losing the most. They risk losing relevance to new journals that could capture the attention of the academic community in a given field by embracing elements of open-process possibilities as their competitive advantage.
Implementing open processes widely would also present an opportunity to challenge the logic behind journal ranking tables and other existing metrics. A demand could be formulated to open the processes of all ranked journals. Seeing editorial work – what gets rejected, what gets accepted, and on the basis of which arguments and reviews – could provide us with the arguments to open up debates and pose a challenge to the ranking tables, rendering them less authoritative.
Furthermore, open-process publishing would make the labour in journals visible, enabling the demand that it be accounted for in academic employment contracts. This would reduce the rate of unaccounted free labour. Most importantly for universities, which have to buy back expensive access to journals edited, peer reviewed and written by their own staff and other academics on university salaries: it would enable us to quantify the financial investment of universities in the production of journals and academic books, forming a clearer argument for re-negotiating, or perhaps abolishing, the control of corporate publishers over access to journals and books. In other words, Open Process would provide a strong financial argument for more open access journals and books.
Finally, implementing open processes would also open up the biggest paradox of academic knowledge production. Namely, both the labour and the processes through which the works of academics are selected for publishing are mostly opaque, erratic, unreliable (BMJ trials evidence) and neither accounted for nor directly paid. Since the allocation of academic jobs is to a large degree closely related to this labour and these processes (authors who publish in highly regarded journals get the jobs and the best positions), it follows that the allocation of jobs in academia rests to a large extent on an opaque, erratic and unreliable basis, i.e. on journal publishing processes. This is the bitter truth of academic knowledge production that Open Process aims to disrupt.
 All submissions to the Historical Materialism 2010 and Cultural Studies Association (U.S.) 2011 conferences have been handled through the gComm(o)ns Conference system.
 Sharing, studying and changing the source (access to source is a precondition) and mandatory distribution of changes (Stallman, 2009a).
 IETF’s ‘cardinal principle’ is called open process: ‘any interested person can participate in the work, know what is being decided, and make his or her voice heard on the issue. Part of this principle is our commitment to making our documents, our WG mailing lists, our attendance lists, and our meeting minutes publicly available on the Internet’ (Alverstrand, 2004). See Kolkman (2010) for an excellent introduction to IETF’s work.
 The best account of hackers’ early MIT days (later University of California and Stanford) is in Levy’s recently reissued book (2010).
 There are reputable journals that already allow comments directly in texts – blue squares in the text are user-made comments.
 By successful I mean inspiring hundreds of thousands of international volunteers to cooperatively create software and sets of ground-breaking network protocols, and further inspiring an even larger number of people in other spheres to reuse and adopt some of their methods and ethics.
 In the Open Organizations project we defined implementation work: ‘anyone doing implementation work in the group, or has done such work in the recent past (e.g. within the past two months), can participate in its decision making’ (Geer, Malter, and Prug, 2005b).
 Richard Smith’s argument that ‘some readers, particularly researchers, will want to follow the scientific debate that goes on in the peer review process’ (1997) is the essential feature through which software and networking communities improve their work: decisions and changes debated and commented on email lists, blogs, and even in the source code (Kotula, 2000).
 A political elephant in the room is here the question of quality. Open-process production of knowledge would provide additional ways to open up the often unspoken political aspects of quality assessments.
 See Kaplan (2005) as an example of a proposal to make reviewers account for their comments.
 See Fitzpatrick (2010) book draft for an extensive analysis of the problems of anonymity in peer reviewing.
 See Scienstein, a new system showing how peer review functions could be developed and improved with a cooperative approach. For a more technical explanation of Scienstein, see Gipp, Beel and Hentschel.
 See Nielsen (2009b), especially the part where he discusses how the New York Times cannot compete in providing scientific writing with the many top scientists writing on their blogs.
 Armstrong (1997: 22-23) suggests alternative forms of articles, including publishing peer reviews electronically.
 See Gura (2002: 258-260) for an open peer-reviewing model which starts with fully finished articles.
 In the spirit of open process, Mietchen (2009a) provided several excellent comments and references soon after I posted the first version of this text on my blog, some of which I incorporated.
 CSA 2010, 18th-20th March 2010, Berkeley, USA
 Protocol ownership, in the IETF case (Alverstrand, 2004), maintainer in the FS case, or package maintainer in the case of GNU/Linux distributions (Michlmayr and Hill 2003).
 Elsewhere I have argued that the ethics-free claim is false: the ethics of Open Source is a capitalist one (Prug, 2007).
 The Critical Studies in Peer Production journal, following an alternative evaluation model (democratic knowledge exchange system design) developed by Whitworth and Friedman (2009b), implements the concept of signals (2010): reviewers signal what they think of an article in eleven categories, instead of rejecting or accepting it.
 Sending an email to inform the author that an article has been accepted is an example. Transitions can also be time-defined (Ohtamaa, 2009), providing a mechanism to send emails about overdue tasks.
 One of the reasons we chose the Plone content management system to implement gComm(o)ns is its inbuilt support for workflows and states (Stahl, 2008).
 Although Open Journal Systems (OJS), a widely used journal publishing system, has the concept of workflow (Public Knowledge Project, 2008: 12), we found it rigid and too difficult to adjust to the needs of different journals and to the optional open-process aspects we required.
 The idea for this section comes from Benjamin Geer, who commented substantially on an early version of this text on my hackthestate.org blog. Unexpectedly, developing open process in academia ended up being done in the open.
 See Atmospheric Chemistry (2009a), where the first draft is posted on a website for an eight-week open discussion, after which it is edited before finally entering the peer reviewing process.
 Lateral, a new online publishing platform by the Cultural Studies Association (U.S.) embraced research threads (Burgett and Martin, 2010).