Editorial

Published in volume 28, issue 7, October 2018

This issue contains two terrific papers on testing. A hybrid approach to testing for nonfunctional faults in embedded systems using genetic algorithms, by Tingting Yu, Witawas Srisa-an, Myra B. Cohen, and Gregg Rothermel, investigates challenging aspects of testing for nonfunctional faults in embedded software. (Recommended by Gordon Fraser.) Approaches for computing test-case-aware covering arrays, by Ugur Koc and Cemal Yilmaz, presents several novel approaches for correctly computing test-case-aware covering arrays. (Recommended by Mauro Pezzè.)

How can we recognize facade journals?

This editorial continues a series about what I call “facade journals.” In the 28(5) issue, I pointed out that peer reviews are the most important mechanism research journals use to ensure the scientific quality of published papers [1]. Then in the 28(6) issue, I defined “facade journals” as journals that publish papers that look like research but do not truly advance human knowledge [2]. In this editorial, I ask how we can recognize them. Not only is it harder than we might think, it has gotten harder over time.

Like facades on physical buildings, facade journals are hard to recognize. Facade journals used to be published online while scientific journals were published on paper, but we lost that discriminator when excellent scientific journals started publishing online. Facade journals also used to make decisions through the editor-in-chief alone, without reviews or an editorial board. Now facade journals have editorial boards and reviewers.

The most important distinction now is the quality of the reviews [3]. But reviews are anonymous and confidential, thus not subject to public scrutiny. This is true in all journals, including scientific journals such as STVR and IEEE Transactions, making it hard to recognize facade journals from the outside.

But reviews for facade journals differ from those for scientific journals. Reviewers are expected to accept with few or no comments. Most facade journals have only two grounds for rejection. First, the paper is not written in English (bad English is okay, by the way). Second, facade journal editors have realized that particularly high acceptance rates are red flags, so they recruit people to submit nonsense papers specifically so they can be rejected, lowering the acceptance rate.

Another measure of journals is the impact factor, which counts citations. The journal impact factor is based on citations to papers published in the previous two years, so long-term impact is ignored. Even though the impact factor is problematic at best (and nonsense in my opinion [4]), it is widely used. It is reasonable to assume that the impact factor for facade journals would be extremely low. However, this is now being faked as well. One of the more common minor-revision suggestions is to add citations to papers published in that journal, artificially inflating the journal’s “impact” factor. (As a side benefit, it also inflates the authors’ h-index values.)

Some helpful scientists have put together lists of journals that do not publish real research. Perhaps the best known is Jeffrey Beall [5], who published a blog criticizing “predatory open access journals.”1 He focused largely on the “pay to publish” model, which he described as inherently corrupting. Whether the authors paid publication fees was a good discriminator at one time, but that line is being blurred as the economics of publishing changes. Some journals are trying to create a legitimate pay-to-publish model that avoids the corruption of the past. Other journals keep papers behind a paywall unless the authors pay a fee to make them open. And most conferences now refuse to publish papers unless an author pays a full registration fee. Both of these are forms of pay-to-publish, although they differ from Beall’s model.

After years of harassment from publishers of facade journals, Beall scrubbed his blog and list. You can find an updated version of his list at https://predatoryjournals.com/journals/.

So ... back to my original question ... how can individual scientists recognize facade journals? I found a good list of suspicious characteristics, reminiscent of code smells in software engineering [6], but to me the definitive answer is the reviews. Unfortunately, the only way to get reviews is to submit a paper, which at best wastes our time and at worst damages our reputation (some journals will accept and publish before authors have a chance to withdraw). Without reviews, it’s a social process. Experienced and well-trained scientists know their field. They know the major publishers, the major journals, and many of the key people in the field. If I see a journal I don’t recognize from a publisher I don’t know, that sets off an alarm. I check the editor-in-chief to see whether I know her personally or by reputation, or can find a strong research record. If not, I look at the editorial board. If I do not know anyone on it, the journal is highly suspect. Next, I read some papers. Are they really advancing the state of knowledge? If not, the journal is almost certainly a facade journal and I will not submit.

Some of you may be thinking “Professor, that’s cheating! You’ve been doing this for 30 years, I’m still a student!” This is why we have mentors and advisors, so we can use their experience. Reading papers can help in the absence of good advice. But when all else fails, evaluate the review process. I suggest starting with a paper that you know is flawed: if the reviewers do not spot the flaws, the review process is broken and the journal is not worthy of your valuable scientific papers.

[1] Jeff Offutt. What is the value of peer review? (Editorial), Wiley’s journal of Software Testing, Verification and Reliability, 28(5), July 2018.

[2] Jeff Offutt. What is a facade journal? (Editorial), Wiley’s journal of Software Testing, Verification and Reliability, 28(6), August 2018.

[3] Publish and don’t be damned: Some science journals that claim to peer review papers do not do so, The Economist, June 2018.

[4] Jeff Offutt. The journal impact factor (Editorial), Wiley’s journal of Software Testing, Verification and Reliability, 18(1), March 2008.

[5] Jeffrey Beall, Wikipedia, https://en.wikipedia.org/wiki/Jeffrey_Beall (last accessed October 2018).

[6] How to Identify Predatory/Fake Journals, Enago Academy, May 2018, https://www.enago.com/academy/how-to-identify-predatory-fake-journals/ (last accessed October 2018).

1 I don’t use the term “predatory” because I don’t think it is descriptive enough. I’ll say more about that in my next editorial, on why facade journals exist.

Jeff Offutt
George Mason University
offutt@gmu.edu
8 October 2018