1. Introduction

Since the beginning of the Open Access movement there has been a huge increase in the number of Open Access journals. The increase is most visible at the article level, as Open Access publishers such as PLOS ONE publish thousands of articles (Ware & Mabe, 2012, pp. 25–26). This growth in Open Access publishing has prompted government policy makers to explore the options for a transition from subscription-based publishing to Open Access (Ware & Mabe, 2012, pp. 72–74). However, surveys such as the Study of Open Access Publishing (Dallmeier-Tiessen et al., 2011, p. 7) and the Taylor and Francis Open Access Survey (Frass et al., 2014, pp. 7, 11) show that scholars are concerned about the quality of Open Access journals and their peer review. Most of the scholars with these concerns had no experience with Open Access publishing, and many were not planning to publish their work this way in the future. One reason for these concerns may simply be unfamiliarity: many scholars do not know where or how to find Open Access journals that meet their standards.

For this reason there was a need for a service that ranks journals based on quality and value for money. According to the Quality Open Access Market (QOAM)1 the best way to do this is via crowd-sourcing: let the academic community rank the journals. QOAM lists more than 17,000 journals that can be ranked, including hybrid Open Access journals. QOAM is a service that helps authors of scientific articles to get a clear overview of the available journals and the value for money that these journals offer. The idea is that librarians, editors, peer reviewers and authors fill out score cards that together result in a Base Score and a Valuation Score. These scores show how highly the journals are rated by the scorers and serve as an indicator of quality and value for money. Currently, QOAM is a project of the Radboud University in Nijmegen; it is supported by three major Dutch organisations for scientific research: the Netherlands Organisation for Scientific Research (NWO), the Centre for Science and Technology Studies (CWTS) and the Royal Netherlands Academy of Arts and Sciences (KNAW).2 QOAM is not the only website that aims to give scholars an easy overview of trustworthy journals, compiled by scholars through crowd-sourcing. In this essay seven other services will be compared to QOAM.

2. Other Initiatives

2.1. Directory of Open Access Journals (DOAJ), www.doaj.org/

Lars Bjørnshauge founded the Directory in 2003 and is now its managing director; the website is operated by Sonja Brage (editor), Dominic Mitchell (community manager) and Rikard Zeylon (editor). The Directory is sponsored by, among others, Lund University, Springer, Wiley, Taylor and Francis, and BioMedCentral. Mission statement:

The aim of the DOAJ is to increase the visibility and ease of use of Open Access scientific and scholarly journals, thereby promoting their increased usage and impact. The DOAJ aims to be comprehensive and cover all Open Access scientific and scholarly journals that use a quality control system to guarantee the content. In short, the DOAJ aims to be the one-stop shop for users of Open Access journals.3

2.2. JournalReviewer, www.journalreviewer.org/index.php

JournalReviewer is an independent website operated by two academics, Malte Elson and James D. Ivory, who want to provide this information; it is not funded by or otherwise affiliated with academic organisations. Mission statement:

JournalReviewer is an independent site that aggregates information users provide about their experience with academic journals’ review processes so that others can be as informed as possible as they consider journal submissions. Our goal is to provide users with detailed information to help them choose the best journal for the specific details of their unique submission situation.4

2.3. SciRev, www.scirev.sc/

SciRev is an initiative of Jeroen Smits, an associate professor in the department of economics at the Radboud University in Nijmegen, and Janine Huisman, an associated researcher at the Centre of International Development Issues Nijmegen who obtained her PhD in the same department. Together they form the team that operates SciRev. Mission statement:

The idea for this website was born from our own experience with the scientific review process. This experience is similar to that of many colleagues: endlessly waiting for an uncertain outcome. Through SciRev we aim to improve this situation by offering researchers the opportunity to share their experiences and select an efficient journal to submit their work.5

2.4. Journalysis, www.journalysis.org/

Journalysis was created by Dr. Neal Haddaway, a researcher in conservation biology. It was funded by a Higher Education Funding Council for Wales (HEFCW) grant from July 2013 to July 2014. Mission statement:

Our hope is that by reporting positive and negative publishing experiences in an open way we will increase transparency and accountability in the industry, ultimately improving the publication process and value-for-money.6

2.5. JournalGuide, www.journalguide.com/

JournalGuide is a division of Research Square. It was originally developed as an internal tool; it is now free for all to use, but remains part of an independent for-benefit organisation. Mission statement:

Our goal at JournalGuide is to help researchers publish faster by helping them to choose the right journal. We want to arm researchers with the best information to make data-driven decisions about which journal to choose. Unlike other search tools, JournalGuide also allows researchers to share their own experiences with colleagues and to learn from others’ experiences.7

2.6. PRE-val, www.pre-val.org/

PRE-val (Peer Review Evaluation) is part of STRIATUS, a company that produces print and online journals, learning products and data services. It is a service that publishers can subscribe to, and it is managed by Adam Etkin and Eric Hall. Mission statement:

PRE (Peer Review Evaluation) is a suite of services designed to support and strengthen the peer-review process – the cornerstone of scholarly communication – on behalf of researchers, publishers, and libraries. PRE’s flagship service, PRE-val, verifies for the end user that content has gone through the peer review process and provides information that is vital to assessing the quality of that process.8

2.7. Eigenfactor.org, www.eigenfactor.org/openaccess/

Eigenfactor is a research project co-founded by Jevin West and Carl Bergstrom and sponsored by the Bergstrom Lab in the Department of Biology at the University of Washington. The word ‘eigen’ in Eigenfactor combines, in this context, the meanings of the English words ‘own’, ‘particular’ and ‘appropriate’: Eigenfactor looks, so to speak, for the factor in a journal that makes it unique and sets it apart from others. Mission statement:

We aim to use recent advances in network analysis and information theory to develop novel methods for evaluating the influence of scholarly periodicals and for mapping the structure of academic research.9

3. How does QOAM Work?

It is impossible to compare the other services to QOAM without first introducing it and explaining how it works. To start with a quotation from the website:

When scientific and scholarly publishing is no longer seen as copyright exploitation but as a service, as is the case in the OA paradigm, there is a need for a market where quality of the service can be matched against price. Quality Open Access Market – QOAM – aims to be that place.10

To find out whether the quality of the service matches the price, QOAM needs scholars to share their experience. In this way the academic community provides the information and ranks the journals by itself; QOAM only provides the means, and scholars have to do the rest. They can log on to QOAM with their institutional accounts and score a journal under their own name; it is not possible to do this anonymously.

QOAM offers Journal Score Cards (JSCs) that contain all the questions needed to determine whether a journal is trustworthy and offers good value for money. A JSC consists of two parts, a Base Score Card and a Valuation Score Card. Base Score Cards are usually filled out by library staff, who thereby resume their traditional role in journal quality control, a role that was marginalised in the era of the big deals. Valuation Score Cards can only be filled out by authors, reviewers and editors who have actual experience with the journal’s publishing process. Together these result in a Base Score and a Valuation Score, on the basis of which the journals are ranked in QOAM.

The Base Score Card asks four questions about each of the following aspects: Editorial Info, Peer Review, Governance and Process. Most questions concern information that should be presented on the journal’s website, such as aims and scope, expected readership and the names and affiliations of the members of the editorial board. Other questions concern more practical information, for example whether the editorial information will be published alongside the article, or whether post-publication commenting and rating is possible. The peer review section asks about the criteria used during peer review, whether all submitted articles are sent out for peer review, and whether authors have a say in who will review their article. To give a clear overview of how a manuscript is handled, the score card also contains questions about the time it took from submission to publication. Finally there is a question about the publication charges; this information should be presented on a journal’s website, and the amount of the fee is needed to judge whether a journal gives value for money.11

The Valuation Score Card is much less extensive; only four questions need to be answered. The first question asks the scorer to choose between ‘I have published an article in this journal less than a year ago’ and ‘I am an editor of this journal’. The other questions ask about the transparency of the peer review process, the value for money and whether the scorer would recommend that colleagues submit their work to this journal.12

The SWOT matrix above shows how the Base Score and the Valuation Score can be interpreted. The colour a journal receives on the basis of these scores gives an indication of its quality and value for money at a glance. This makes it easier for scholars to find a journal that gives them value for money. Publishers can also look up their own journals in the list and see which aspects of their journal should be improved to attract more authors.
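To make the structure of the two score cards more concrete, the sketch below models them as simple data structures and combines the two scores into a quadrant, roughly in the spirit of the SWOT matrix. The field names, the 1–5 answer scale, the equal-weight averaging and the threshold are all illustrative assumptions; QOAM's actual questions, scoring formula and colour rules are defined on the site itself.

```python
from dataclasses import dataclass, field
from statistics import mean

# Hypothetical model of a QOAM Journal Score Card (JSC).
# The four Base Score categories come from the text above; the 1-5 answer
# scale, the averaging and the quadrant threshold are assumptions only.

BASE_CATEGORIES = ("Editorial Info", "Peer Review", "Governance", "Process")

@dataclass
class BaseScoreCard:
    # four answers (1-5) per category, usually filled out by library staff
    answers: dict = field(default_factory=dict)

    def score(self) -> float:
        # average over all answered questions in all categories
        all_answers = [a for cat in self.answers.values() for a in cat]
        return mean(all_answers)

@dataclass
class ValuationScoreCard:
    # filled out by authors, reviewers or editors with first-hand experience
    role: str                 # e.g. "author" or "editor"
    transparency: int         # transparency of the peer review process
    value_for_money: int
    would_recommend: int

    def score(self) -> float:
        return mean([self.transparency, self.value_for_money, self.would_recommend])

def quadrant(base: float, valuation: float, threshold: float = 3.0) -> str:
    """Map the two scores onto a SWOT-style quadrant (threshold and labels are hypothetical)."""
    if base >= threshold and valuation >= threshold:
        return "strong journal"
    if base >= threshold:
        return "strong presentation, weak valuation"
    if valuation >= threshold:
        return "valued by authors, weak presentation"
    return "weaker journal"

# Example usage with made-up scores
base = BaseScoreCard(answers={cat: [4, 4, 3, 5] for cat in BASE_CATEGORIES})
valuation = ValuationScoreCard(role="author", transparency=4, value_for_money=3, would_recommend=4)
print(quadrant(base.score(), valuation.score()))   # -> "strong journal"
```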

4. Comparison of Services

The table below gives a schematic comparison of the eight Open Access journal ranking services discussed in this essay: Directory of Open Access Journals (DOAJ), JournalReviewer (JR), SciRev, Journalysis, JournalGuide (Jg), PRE-val (PRE), Eigenfactor (Ef) and Quality Open Access Market (QOAM). Under ‘general features’ some basic information is compared on how each service works and how much effort it requires from scholars. The sections that follow concern the specific information that a service uses or, in the case of a crowd-sourced service, asks its users to provide.

4.1. Services that do not Use Crowd-sourcing

The most important distinction visible in this table is that between crowd-sourced services and the others. Services that do not use crowd-sourcing as a means to gather information require the least effort from scholars themselves.

The Directory of Open Access Journals (DOAJ) is the best-known service discussed in this essay. The DOAJ aims to index all trustworthy Open Access journals and to make them more visible and accessible through the directory. The DOAJ is planning a housecleaning of its database that requires all journals already listed to re-apply for a place in the directory. The DOAJ now has new criteria that journals have to meet, so that trustworthiness can be ensured. To apply for inclusion, editors or publishers have to fill out a form with extensive information about the journal. The form contains over fifty questions, divided into the following categories: basic journal information, quality and transparency of the editorial process, the openness of the journal, content licensing, copyright and permissions issues, and personal details of the publisher or editor who fills out the form. Some of the questions are very basic, such as the title of the journal, URL, publisher, ISSN and contact person. Other questions ask for more extensive information, accompanied by the URL of the page on the journal’s website where the information can be found, for example the names and affiliations of the editorial board.13 This information is then checked by the DOAJ staff, and if everything checks out the journal earns a place in the directory.

Table 1. Schematic comparison of the eight services.
DOAJ JR SciRev Journalysis Jg PRE Ef QOAM
General features
Crowd-sourced X X X X X
Anonymous X X X X
Registration required X X X X
Open Access only (incl. hybrid) X X X (X)
Possibility to add comments X X X ? X
Little effort for scholars X X X X ? X X
Works together with libraries X X X
Review process
Criteria were available on journal’s website X
Quality of feedback X X ?
Quality of comments X X ?
Duration of review process X X X X X X
Rounds in review process X X X X X
Peer reviewers and editors involved X X
Information about the journal
Aims and scope X X X X
Editorial board X X
Digital archiving policy X X X
Content license terms X X
Instructions to authors X
Authors’ preferences
Author recommends journal to colleagues X X X

Another service that does not use crowd-sourcing to gather information is PRE-val. This organisation is mainly focussed on quality and transparency in the peer review process. Publishers can subscribe to this service. PRE-val gives information about the way peer review has been conducted, using publishers’ metadata that contains all the information about the peer review process. Publishers can decide how much of this metadata they want to share with the scholarly community.14 Information that PRE-val extracts from the publishers’ metadata includes, for example, the number of review rounds prior to acceptance, screening for plagiarism, the date of submission and the date of acceptance. If the publisher allows it, PRE-val can also show who reviewed the article, how many reviewers were involved, and whether the associate editor or the editor-in-chief looked at it.15 This service makes it easy for authors to see factual information about the peer review process, and it helps publishers to demonstrate their trustworthiness.
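As an illustration of the kind of peer-review metadata involved, the record below is a made-up example; PRE-val's actual metadata schema and field names are not specified here, so every field in this sketch is an assumption.

```python
import json
from datetime import date

# Hypothetical example of the peer-review metadata a publisher might expose
# through a service like PRE-val; the field names, the journal and the DOI
# are illustrative only, not PRE-val's actual schema.
review_record = {
    "journal": "Example Journal of Open Science",     # made-up journal
    "article_doi": "10.1234/example.2015.001",        # made-up DOI
    "review_rounds_before_acceptance": 2,
    "plagiarism_screening": True,
    "date_submitted": date(2015, 1, 12).isoformat(),
    "date_accepted": date(2015, 4, 3).isoformat(),
    # optional fields the publisher may choose to disclose
    "number_of_reviewers": 3,
    "handled_by": ["associate editor", "editor-in-chief"],
}

print(json.dumps(review_record, indent=2))
```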

The third service that does not use crowd-sourcing is Eigenfactor.org. The Eigenfactor Index of Open Access Fees shows the value for money that authors can receive, in the form of prestige and readership, when they publish in an Open Access journal. At Eigenfactor.org the expected Article Influence (citation rate) of a journal is measured and set against the price using the following formula: Cost Effectiveness = 1000 × Article Influence / publication charges. This results in a plot with Article Influence on one axis and Publication Fees on the other, and an index of the 761 Open Access journals represented in the Thomson-Reuters Journal Citation Reports with their Cost Effectiveness scores.16
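The calculation itself is simple; the sketch below applies the formula to made-up numbers (the journal names, Article Influence values and fees are illustrative, not taken from the Eigenfactor index).

```python
# Cost Effectiveness as defined by Eigenfactor.org's Index of Open Access Fees:
# 1000 * Article Influence / publication charges.
def cost_effectiveness(article_influence: float, publication_charge_usd: float) -> float:
    return 1000 * article_influence / publication_charge_usd

# Made-up example journals for illustration
journals = {
    "Journal A": (1.2, 1500),   # (Article Influence, publication charge in USD)
    "Journal B": (0.6, 2500),
}
for name, (influence, charge) in journals.items():
    print(f"{name}: cost effectiveness = {cost_effectiveness(influence, charge):.2f}")
```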

All of these services have something in common with QOAM. The DOAJ asks for extensive information about the journal and its website, as does QOAM. However, the DOAJ gets this information from publishers and editors themselves, whereas in QOAM the users of the journal have to search the web for this information. PRE-val shares QOAM’s concern for transparency, but PRE-val is entirely focussed on this and does not rely on the experience of authors; it goes to the source, namely the publishers’ metadata. Eigenfactor.org shares the value-for-money aspect that is also very important in QOAM, but Eigenfactor.org does not include all Open Access journals that currently exist, and the value for money in QOAM is not based on metrics as it is at Eigenfactor.org.

4.2. Services that do Use Crowd-sourcing

The other services, namely JournalReviewer, SciRev and Journalysis, do use crowd-sourcing to gather information that can help authors choose where to publish. JournalGuide partially uses crowd-sourcing and partially receives information from publishers.17 It is not clear which information comes from which source, so it is not possible to compare JournalGuide to QOAM or the other services in this respect. JournalReviewer, SciRev and Journalysis all have the same goal: to inform authors about the speed and quality of the peer review process and, at the same time, to show publishers what they can change to improve. JournalReviewer and Journalysis ask authors to send in a review with data about the duration and the number of rounds of the review process, including their opinion on the quality of the comments they received from the reviewers. SciRev is mainly focussed on the time it takes for an article to be accepted and published; this results in a list and in journal pages that contain this information and can be compared with each other.18 On JournalReviewer the data is only presented per journal,19 and Journalysis is still so new that it was not yet possible to see a published review.

Just like QOAM, these services aim to make Open Access publishing more transparent and try to give information that they consider relevant for authors. But these three services all focus primarily on the time the review process takes and on the comments that authors may have about this process. JournalReviewer and Journalysis are not as systematic as QOAM: they do not list or rank the journals that are reviewed. SciRev is very systematic and produces a clear list, but is almost exclusively focussed on the time the publishing process takes. QOAM includes time as one of the aspects that matter in Open Access publishing, but in QOAM quality and value for money are more important.

5. Conclusion

QOAM sits between the crowd-sourced and the non-crowd-sourced services in the amount of information it seeks. On the one hand it wants to weigh as much information as possible, as much as the non-crowd-sourced services have to offer, which gives QOAM a unique position among the crowd-sourced services. On the other hand, this extensive information has to come from members of the academic community, on top of their own work, which potentially means that QOAM falls behind the other crowd-sourced services, as those are more concise and specific. The Valuation Score Card in QOAM is concise but offers only very general information; it needs to be complemented with the Base Score Card to make the information valuable. So in theory QOAM has a unique model that could offer a very complete overview of the quality of a journal, but time will have to show whether this is practically feasible for the academic community.