Year : 2005 | Volume : 23 | Issue : 4 | Page : 210-213
Better reporting of studies of diagnostic accuracy
M Pai1, S Sharma2
1 Division of Epidemiology, School of Public Health, University of California, Berkeley, USA
2 Jhaveri Microbiology Centre, L.V. Prasad Eye Institute, Banjara Hills, Hyderabad-500 034, India
Source of Support: None, Conflict of Interest: None
How to cite this article:
Pai M, Sharma S. Better reporting of studies of diagnostic accuracy. Indian J Med Microbiol 2005;23:210-3
As a leading journal in medical microbiology published from India, the Indian Journal of Medical Microbiology (IJMM) receives several manuscripts that report on the evaluation of existing and novel diagnostic tests. Such reports are critically important for microbiologists and clinicians because they contribute to the evidence base of diagnostic test research, and enable clinicians and laboratory scientists to make informed decisions on whether a test should be used in medical practice or not.
Unfortunately, in our experience, a substantial proportion of these manuscripts fail to survive the peer review process. At least two major factors are responsible for this "manuscript mortality rate": poor study quality, and poor reporting of the study. It is important to appreciate the difference between the two. Poor study quality pertains to flaws in study design and conduct that lead to invalid (biased) results. Poor reporting, on the other hand, refers to incomplete or inadequate reporting of the design, conduct, analysis and results of a study. A poorly reported study may actually have been well designed and executed, but it is impossible to know that without contacting the authors for the information missing from the published paper.
In epidemiological terms, poor quality studies are those that yield biased estimates of test accuracy. For example, if the interpretation of the index test is influenced by knowledge of the results of the reference standard ("gold standard"), this bias, often called "review bias," can result in overestimation of both sensitivity and specificity. There are several other threats to validity; their importance and impact on research findings have been reviewed elsewhere. There is empiric evidence that certain biases are a greater threat to validity than others. Biased results from poorly designed studies can lead to premature adoption of diagnostic tests that offer little or no benefit, with adverse clinical consequences arising from misleading estimates of test accuracy.
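To make the effect of review bias concrete, the following is a minimal illustrative calculation; all counts in the hypothetical 2x2 tables are invented for demonstration and do not come from any of the studies cited here. Unblinded interpretation can shift equivocal index-test readings toward agreement with the already-known reference standard result, inflating both accuracy measures.

```python
# Sensitivity and specificity from a 2x2 diagnostic table.
# TP/FN/TN/FP counts below are hypothetical, for illustration only.

def sensitivity(tp, fn):
    """Proportion of reference-standard positives detected by the index test."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of reference-standard negatives correctly ruled out."""
    return tn / (tn + fp)

# Blinded interpretation (hypothetical counts):
se_blinded = sensitivity(tp=80, fn=20)   # 80 / (80 + 20) = 0.80
sp_blinded = specificity(tn=90, fp=10)   # 90 / (90 + 10) = 0.90

# Unblinded interpretation: suppose knowledge of the culture result nudges
# 5 equivocal false negatives to "positive" and 5 false positives to "negative".
se_unblinded = sensitivity(tp=85, fn=15)  # 85 / 100 = 0.85
sp_unblinded = specificity(tn=95, fp=5)   # 95 / 100 = 0.95

print(f"Blinded:   Se = {se_blinded:.2f}, Sp = {sp_blinded:.2f}")
print(f"Unblinded: Se = {se_unblinded:.2f}, Sp = {sp_unblinded:.2f}")
```

Even a modest reclassification of borderline results produces an apparent gain of five percentage points in each measure, which is why reporting whether the index test was interpreted blindly matters.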
What if a study has been designed and conducted well (i.e., with minimal bias), but poorly reported? Several empiric studies and reviews suggest that this is a major concern with published diagnostic studies. Authors often fail to explicitly report all the critical components of a diagnostic study, making it impossible for peer reviewers and readers to evaluate its scientific validity. For example, in a meta-analysis of nucleic acid amplification (NAA) tests for tuberculous meningitis, 74% of 49 studies did not report whether the NAA test results were interpreted blindly, that is, without knowledge of culture, the reference standard. When the authors of several of these studies were contacted, the proportion with missing information on blinding fell from 74% to 31%. Evidently, some authors had incorporated blinding in their study design but had failed to mention it in their manuscripts. As another example, in a meta-analysis of bacteriophage assays for the diagnosis of tuberculosis, only 2 of 13 (15%) studies reported a blinded comparison of phage assays with the reference standard. Authors often fail to report the kind of study design employed (e.g., cross-sectional versus case-control), whether the study was prospective or retrospective, whether patients were randomly or consecutively sampled, and whether all patients underwent the reference standard test irrespective of the results of the index test.
Poorly reported studies are frustrating for editors, peer reviewers, researchers, and, ultimately, readers and users of the medical literature. Lack of transparency in reporting makes it hard to judge the validity of a study, and this greatly diminishes its clinical impact and relevance. Poorly reported studies are likely to be rated as poor quality studies by peer reviewers and editors, increasing the likelihood of manuscript rejection. We suspect poor reporting may be one reason why many studies fail to make the cut with high impact international journals.
What can we do to improve reporting of diagnostic studies? A recent initiative in this regard is noteworthy. To improve the quality of reporting of diagnostic studies, the Standards for Reporting of Diagnostic Accuracy (STARD; pronounced "STAR-D") initiative was launched by an international consortium of investigators. The objective of this initiative is to improve the quality of reporting, and to encourage authors and editors to use a more standardized and transparent format for reporting diagnostic accuracy studies. The STARD statement has been published simultaneously in several journals, and a few journals (e.g., JAMA, Annals of Internal Medicine, Lancet, Clinical Chemistry) have already made it mandatory for authors to use the STARD checklist and flow diagram as a template when submitting diagnostic study manuscripts. The STARD initiative follows an earlier effort called CONSORT, designed to improve reporting of randomized controlled trials. Several journals now require reports of randomized controlled trials to follow the CONSORT guidelines. Similar efforts are also underway to improve the quality of reporting of meta-analyses and other types of publications (see http://www.consort-statement.org for an overview of such initiatives).
The table is a reproduction of the STARD checklist. This checklist has 25 items covering all the major sections of a well-written manuscript. The rationale and justification for each item can be found elsewhere. Authors could copy and paste these items as subheadings to create a well-structured manuscript template. The STARD flow diagram [Figure - 1] can be used by authors to provide information on patient recruitment, the order of test execution, the number of patients who underwent the various tests, and the number of patients who had indeterminate, invalid, or missing test results.
As members of the IJMM editorial team, we strongly encourage our contributors to read the STARD guidelines, and to use the STARD checklist and flow diagram in their submissions to our Journal. The use of the STARD guidelines, we anticipate, will facilitate the peer review process, potentially increase manuscript acceptance rates, and ultimately improve the readability and clinical impact of IJMM articles. In the long run, we also hope the STARD guidelines will encourage investigators to design better quality studies that are more likely to have a global impact on clinical and laboratory practice. Lastly, we invite feedback from our readers on their practical experiences with the STARD guidelines, and on how such guidelines can be adapted for IJMM contributors [Table - 1].
References
1. Lijmer JG, Mol BW, Heisterkamp S, Bonsel GJ, Prins MH, van der Meulen JH, et al. Empirical evidence of design-related bias in studies of diagnostic tests. JAMA 1999;282:1061-6.
2. Whiting P, Rutjes AW, Reitsma JB, Bossuyt PM, Kleijnen J. The development of QUADAS: a tool for the quality assessment of studies of diagnostic accuracy included in systematic reviews. BMC Med Res Methodol 2003;3:25.
3. Whiting P, Rutjes AW, Reitsma JB, Glas AS, Bossuyt PM, Kleijnen J. Sources of variation and bias in studies of diagnostic accuracy: a systematic review. Ann Intern Med 2004;140:189-202.
4. Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig LM, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Standards for Reporting of Diagnostic Accuracy. Clin Chem 2003;49:1-6.
5. Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig LM, et al. The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration. Clin Chem 2003;49:7-18.
6. Pai M, Flores LL, Hubbard A, Riley LW, Colford JM Jr. Quality assessment in meta-analyses of diagnostic studies: what difference does email contact with authors make? XI Cochrane Colloquium, Barcelona, Spain 2003.
7. Pai M, Flores LL, Hubbard A, Riley LW, Colford JM Jr. Nucleic acid amplification tests in the diagnosis of tuberculous pleuritis: a systematic review and meta-analysis. BMC Infect Dis 2004;4:6.
8. Pai M, Flores LL, Pai N, Hubbard A, Riley LW, Colford JM Jr. Diagnostic accuracy of nucleic acid amplification tests for tuberculous meningitis: a systematic review and meta-analysis. Lancet Infect Dis 2003;3:633-43.
9. Kalantri S, Pai M, Pascopella L, Riley L, Reingold A. Bacteriophage-based tests for the detection of Mycobacterium tuberculosis in clinical specimens: a systematic review and meta-analysis. BMC Infect Dis 2005;5:59.
10. Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet 2001;357:1191-4.
[Figure - 1]
[Table - 1]