How to Read a Journal Article (Made Easy)

Education Team

Jul 3, 2025

Let's learn how to read a journal article the right way!

So YOU want to Read a Journal Article?

Have you ever wondered what makes a paper “good”? Does the thought of presenting at your next journal club make you sweat? Do you find yourself struggling to make sense of all the figures and confidence intervals, only to end up skipping straight to the discussion section? 

Worry not! By the end of this series, you will have a foolproof guide on how to read any journal article - and pretty soon you’ll be quoting papers at your next ward round without a second thought. 

There are dozens of tools out there to help you critically appraise a journal article, including the Centre for Evidence-Based Medicine (CEBM) checklists, JBI critical appraisal tools, and CASP checklists. These checklists are usually classified according to study design (e.g., systematic review, randomised controlled trial, cohort study, and so forth), so make sure you select the checklist that matches the design of the paper you’re appraising! The approach below draws on these tools; check them out and see which one works best for you.

STEP 1: What’s the Question, Doc?

A good paper should give you an outline and background, highlighting the context of the problem and the clinical question it is trying to answer. Within the first minute of reading, you should ideally have a grasp of the research question, framed according to PICOST - Population, Intervention, Comparison, Outcome, Setting, and Timing. For example (a purely hypothetical question): in adults admitted with community-acquired pneumonia (Population), does adjunctive corticosteroid therapy (Intervention) compared with standard care alone (Comparison) reduce mortality (Outcome) in hospital wards (Setting) at 30 days (Timing)?

Identifying the specific question allows you to read the rest of the paper with a clear frame of reference, and to judge whether the methodology used was justified and appropriate.

STEP 2: Is the Study Design Valid? 

When reading the methodology section of the paper, be on the lookout for any flaws in study design that may introduce bias. For instance, if you were evaluating an RCT, you would hope that at the very least participants were randomised (but you would be surprised!). Using the example of an RCT, ask:

  • How did they randomise participants, and was this truly random, or was there potential for selection bias? 

  • Was the randomisation sequence concealed from participants and investigators?

  • Who was blinded to the intervention given? 

Also, have a look at Table 1, which outlines the characteristics of the study population. Are the baseline characteristics of the study arms reasonably similar, or could there be inherent selection bias at play that could affect the outcome? For instance, if loss to follow-up is higher in one arm than in the other, could there be a reason for this that might bias the final results?

STEP 3: Time to Crunch the Numbers 

It’s easy to get overwhelmed by this section, with all its figures, percentages and p-values. Keep your research question in mind and check whether complete results are reported for all the outcomes outlined at the start. Look out for missing data, which may affect the final result, as well as any sources of bias that could have influenced the reported results.

Sample size

Size matters in clinical trials - is the sample size large enough? This is typically determined through a power calculation undertaken before recruiting study participants. This matters because it ensures there are enough participants in the trial to detect a true effect, and avoids wasting resources on a study too small to detect a statistically significant one. Most power calculations are set at 80% power, meaning an 80% chance of detecting a true effect, and a significance level of 0.05, which is the risk you are willing to accept of a type I error (a false positive result). Some studies report their power calculation; however, note that studies are typically powered only for the primary outcome and may not be adequately powered for secondary outcomes.
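
To make this concrete, here is a minimal sketch of a power calculation in Python (using statsmodels), assuming a two-arm trial analysed with a two-sided t-test; the effect size of 0.5 is an illustrative assumption, not a recommendation:

```python
# Minimal sketch of a pre-study power calculation, assuming a two-arm
# trial analysed with a two-sided t-test. All numbers are illustrative.
from statsmodels.stats.power import TTestIndPower

n_per_arm = TTestIndPower().solve_power(
    effect_size=0.5,  # assumed standardised mean difference (Cohen's d)
    alpha=0.05,       # accepted risk of a type I error (false positive)
    power=0.80,       # 80% chance of detecting a true effect of this size
)
print(f"Participants needed per arm: {n_per_arm:.0f}")  # ~64
```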

Effect size 

Ideally, you want confidence intervals and p-values reported for the important outcomes, to give you an idea of the precision of the estimate and whether the results were statistically significant. The wider the confidence interval, the more uncertainty. As you may have heard before, if a confidence interval crosses the null value - 0 for mean differences, or 1 for odds/risk ratios - the result is considered statistically non-significant. The p-value measures how likely it would be to observe results at least as extreme as those obtained if the null hypothesis were true (i.e. if there were no difference between study arms). For example, a p-value of 0.03 means a 3% chance of observing results at least this extreme if there were no real effect (i.e. due to random chance alone). Typically, a p-value <0.05 is taken as evidence against the null hypothesis, i.e. that the observed results reflect a true effect rather than just random chance. Don’t worry - we will be releasing a statistics crash course soon!
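
As a toy illustration (the 2x2 counts below are entirely made up), here is how an odds ratio and its 95% confidence interval can be computed with the standard log-odds-ratio formula:

```python
# Toy example: odds ratio with a 95% CI from made-up 2x2 trial counts,
# using the standard log-odds-ratio standard error formula.
import math

a, b = 30, 70  # treatment arm: events, non-events (hypothetical)
c, d = 45, 55  # control arm:   events, non-events (hypothetical)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
# Prints: OR = 0.52, 95% CI 0.29 to 0.94. The interval excludes 1, so
# this (made-up) result would be statistically significant; had it
# crossed 1, the result would be non-significant.
```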

Analysis

Consider what statistical analysis was done. A per-protocol analysis means only the participants who were treated according to the study protocol are analysed. This excludes anyone who dropped out of the study or did not receive the intervention correctly. In other words, you get an idea of the effect of the intervention under ideal conditions. The caveat is that this may overestimate the treatment effect, as real life does not always follow a planned course. It can also introduce bias, since you are only including participants who followed the protocol perfectly.

On the other hand, an intention-to-treat analysis includes all participants who were initially assigned to a study arm, even if they later drop out of the study or do not adhere to the protocol. This gives you a more realistic and generalisable estimate of treatment effect under ‘real-world’ conditions, where patients may not always be adherent to treatment for several reasons.
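
To see the difference concretely, here is a toy sketch (all records below are synthetic) that computes event rates both ways - note how dropping non-adherent participants flatters the treatment arm:

```python
# Toy sketch contrasting intention-to-treat (ITT) with per-protocol
# analysis on synthetic participant records. All data are made up.
participants = [
    # (assigned arm, followed protocol?, had the outcome event?)
    ("treatment", True,  False),
    ("treatment", True,  False),
    ("treatment", False, True),   # non-adherent; still counted in ITT
    ("control",   True,  True),
    ("control",   True,  False),
    ("control",   False, True),
]

def event_rate(arm, per_protocol=False):
    # Per-protocol keeps only adherent participants; ITT keeps everyone.
    rows = [p for p in participants
            if p[0] == arm and (p[1] or not per_protocol)]
    return sum(p[2] for p in rows) / len(rows)

print("ITT:         ", event_rate("treatment"), "vs", event_rate("control"))
print("Per-protocol:", event_rate("treatment", True), "vs", event_rate("control", True))
```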

Keep an eye out for post-hoc analyses as well - these are analyses done after data collection, without a prespecified hypothesis. Post-hoc analysis carries an increased risk of type I error (false positive findings), and a risk that researchers may have unintentionally (or intentionally) chosen analyses that lead to significant results, undermining the validity of those findings. While it can help explore unexpected associations and subgroups, it should be treated as exploratory rather than confirmatory - useful for generating new hypotheses for further research.
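
A quick simulation makes the danger concrete. Assuming pure-noise data (no real effect anywhere) and ten unplanned subgroup tests per trial, the chance of at least one "significant" finding is far above the nominal 5%:

```python
# Simulation: how unplanned multiple testing inflates the type I error.
# Both arms are drawn from the same distribution, so every "significant"
# result here is, by construction, a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_subgroups, n_per_arm = 2000, 10, 50
false_positive_trials = 0

for _ in range(n_trials):
    pvals = [stats.ttest_ind(rng.normal(size=n_per_arm),
                             rng.normal(size=n_per_arm)).pvalue
             for _ in range(n_subgroups)]
    false_positive_trials += min(pvals) < 0.05

print(f"Trials with >=1 'significant' subgroup: "
      f"{false_positive_trials / n_trials:.0%}")
# Roughly 1 - 0.95**10, i.e. about 40%, versus 5% for a single test.
```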

STEP 4: Will These Results Change Anything?

After sifting through the results like the maths genius you are, think about them in your own context. If you want to know whether one treatment is better than another for your patient, consider whether the study setting is generalisable to yours. Is the study population similar enough to your patient population? Are the outcomes investigated important to your clinical practice? Is there value in changing your current practice to align with the study’s results?

STEP 5: Go Forth and Conquer 

Now that you have a framework for reading journal articles, the best way to exercise this skill is to read articles frequently and critically appraise them as you go, until it becomes second nature. In an age where more and more publications are being churned out (and not always according to best practice), and misinformation is on the rise, being able to critique any piece of information you are presented with is an invaluable skill.

Check out the links below for a list of critical appraisal tools. If you want a deeper dive, we have also linked a few guidebooks on critical appraisal. 


Resources

  1. CEBM Critical Appraisal Tool

  2. JBI Critical Appraisal Tool

  3. CASP Critical Appraisal Tool

  4. A Doctor's Guide to Critical Appraisal 

  5. Pocket Guide to Critical Appraisal 
