Academic journals and scientists are sometimes at odds with each other. This is understandable and can even be beneficial. But I think it also helps to know where those opposing viewpoints can cause problems.
Checking original sources is important
We consider three examples from our own teaching in which much was learned by critically examining examples from books. Even influential and well-regarded books can have examples where more can be learned with a small amount of additional effort.
One example Gelman lists concerns a schematic from a textbook 2. The schematic represented data on menstrual cycles and driving accidents, which can be seen below.
After going back to the data from the original paper 3, Gelman found the original figure. It tracked 84 women with regular 28-day menstrual cycles, recording where each woman was in her cycle when she was involved in an accident, and split them into women who had or hadn’t given birth (parous and nulliparous, respectively):
You can see that the schematic captures the gist of the original paper, but if you read too much into it, you’ll infer things that aren’t in the original data. This highlights the importance of going back to original data sources to verify the claims that others have made about them.
Original sources and citation counts in journal articles
While writing this manuscript, I made a habit of carefully reading the original sources of claims that I referenced in the introduction and discussion. This often led to 3–7 citations for a single claim. I used many review papers to find these sources, but I cited the original articles instead of the reviews, since it’s possible the conclusions from the original articles were misrepresented in some way. Before I submitted this manuscript, we had about 110 references in total. This is where journals come in, and how they can be at odds with researchers.
Journals often have limits on the number of references you can include in a manuscript. For example, at the time of writing, here are the guidelines listed for Nature and Science.
As a guideline, Articles allow up to 30 references in the main text, but can go up to 50 references if needed and within the allocated page budget. Only one publication can be listed for each number.
Research Articles include an abstract, an introduction, up to six figures or tables, sections with brief subheadings, and about 40 references.
Limitations on references require you to slim down what you cite. To cover a number of topics, it is often easier to cite a review paper, which contains many ideas and facts that you reference, instead of the original articles themselves. This is what my co-authors and I did to fit the submission criteria.
This limitation is understandable from the journal’s perspective. References are necessary, but can also take up valuable real estate in their articles. Printing physical pages with more and more references eventually reaches a point of diminishing returns for the journal. It’s also in their best interest not to overwhelm readers with too much information at once, lest people stop reading.
If journals let authors include as many references as they want, authors could easily pad their manuscripts with hundreds of references to make them look like they have a stronger foundation than they actually do. Or you could end up with authors pulling stunts where they cite their friends and colleagues to artificially inflate citation counts. So it makes sense to put some soft guidelines on how many references an article should include and to allow the authors more references in supplementary materials if needed.
An issue with limiting citations
My concern here is that limiting citations guides both readers and authors toward reading and referencing only the review papers instead of the original research articles themselves. By not going back to the originals, you run the risk of misinterpreting the actual data. And who knows, you may even pick up on something that wasn’t recognized at the time of the original article, or find something important that was glossed over.
If the journals you read are of high quality, the editors and peer reviewers should be able to ensure that original articles are represented accurately in review articles. But some mistakes slip through the cracks, so one should be careful. I don’t think this is an existential threat to scholarship, but it should be considered due diligence to check the original articles themselves rather than taking the easy way out by skimming the reviews and taking them at their word. Journals can’t enforce that, but you as a researcher should cultivate that mindset.
Review papers are a great source of knowledge, aggregated and curated by experts in the field, and a great place to start investigating a new area. The introductions and discussions of important papers are also a good place to see what the field around them looks like. But if they contain information that is crucial to your work, make sure you evaluate the original sources of those statements. Don’t rely on the secondhand information provided by other authors.
You will likely cite many review papers in your work itself, but put in the effort to make sure you’re not misrepresenting facts or building your work on a house of cards.