Abstract
Background: Clinical research forms the cornerstone of efforts to improve the lives of patients. The African Journal of Paediatric Surgery has been a major vehicle for the dissemination of information to paediatric surgeons in Africa since 2004, and most studies in the paediatric surgical literature are observational. This study aims to assess the adequacy of clinical research reporting in the African Journal of Paediatric Surgery. Materials and Methods: The authors analyzed all observational studies published in the African Journal of Paediatric Surgery from 2006 to 2010 (n = 73). Studies were assessed using a validated tool comprising 16 baseline criteria essential for the non-biased reporting of clinical data (details regarding surgeons, cases, interventions, and statistical methods). Seven additional criteria pertaining to comparison methods were assessed in studies using controls. Results: Sixty-seven percent of all studies were retrospective, and only 5.5% utilized a control group. Seventy-two studies (98.6%) were case series. Most studies met fewer than half of the essential reporting criteria (mean, 7.3 of 16 baseline criteria). Reporting deficiencies were found in all major aspects of study design and statistical analysis. There was no statistically significant difference between prospective and retrospective studies. Conclusions: This study has identified deficiencies in the fundamental elements essential to non-biased reporting of clinical research in the African Journal of Paediatric Surgery. We recommend that the Journal adopt validated standard reporting criteria for these studies to improve the ability of its readers to interpret the relevance of clinical research findings to their own practice.
Keywords: African Journal of Paediatric Surgery, clinical research, observational studies, reporting
How to cite this article:
Nasir AA, Lakhoo K. Evaluation of clinical research reporting in African Journal of Paediatric Surgery. Afr J Paediatr Surg 2013;10:13-6
Introduction
Randomized clinical trials (RCTs) form the cornerstone of evidence-based practice, but these studies remain scarce in the paediatric surgical literature. The difficulty of conducting prospective studies in paediatric surgery has resulted in a dependence on non-evidence-based clinical decisions. Together, single-institution case-series reports and other retrospective data make up more than 97% of the published clinical evidence in support of practice standards for paediatric surgery. Observational retrospective clinical reporting, therefore, has long played an important role in the progress of contemporary clinical medicine. In many fields of medicine, these studies have permitted the discovery of new diseases and unexpected effects and have provided the preliminary evidence necessary to guide the direction of high-quality prospective trials. The significant role of observational studies in shaping clinical practice within the field of paediatric surgery spurred Rangel et al. to develop a standardized methodology for the critical appraisal of these data. The African Journal of Paediatric Surgery has been a major vehicle for the dissemination of information to paediatric surgeons in Africa since 2004, and most studies in the paediatric surgical literature are observational. This study aims to assess the adequacy of clinical research reporting in the African Journal of Paediatric Surgery as a basis for recommending the adoption of a standardized reporting guideline.
Materials and Methods
The authors analyzed all observational studies published in the African Journal of Paediatric Surgery from 2006 to 2010 (n = 73). Studies were assessed using validated tools comprising criteria essential for the non-biased reporting of clinical data. These included 16 "baseline" criteria believed essential to the reporting of all clinical studies, divided into four main categories: (1) description of participating surgeons and institutions (5 criteria), (2) description and definition of cases (3 criteria), (3) description of the intervention and related care (3 criteria), and (4) the use of non-comparison statistical methods relating to outcomes (5 criteria). Seven additional "comparison" criteria were examined for observational studies reporting the use of control groups: 4 criteria pertaining to the reporting of statistical methods used for comparing groups and 3 criteria pertaining to measures of baseline similarity. Studies with fewer than 5 patients or with outcomes attributable to non-operative therapy were excluded, as were editorials, reviews, tutorials, and duplicate clinical reports. Data were analyzed with SPSS 16.0. The chi-squared or Fisher's exact test was used for categorical variables, as appropriate, and continuous variables were compared with Student's t-test. P ≤ 0.05 was regarded as significant.
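The categorical comparisons described above (chi-squared when expected cell counts are adequate, Fisher's exact test otherwise) can be sketched in Python with SciPy. The 2×2 table below is purely illustrative, not the study's data: rows split studies by whether they met more than half of the baseline criteria, columns by retrospective versus prospective design.

```python
# Sketch of the categorical tests named in the Methods, using SciPy.
# All counts below are hypothetical, chosen only for demonstration.
from scipy import stats

# Rows: met > half of baseline criteria (yes / no);
# columns: retrospective vs. prospective studies (hypothetical counts).
table = [[16, 9], [33, 15]]

# Chi-squared test of independence (appropriate for larger expected counts)
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

# Fisher's exact test (preferred when expected counts are small)
odds_ratio, fisher_p = stats.fisher_exact(table)

print(f"chi-square p = {chi_p:.3f}, Fisher exact p = {fisher_p:.3f}")
```

In practice, SPSS (as used in the study) applies the same tests; SciPy is shown here only as a freely available equivalent.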
Results
Seventy-three clinical studies met the criteria for analysis. Ninety-three percent (n = 68) were single-institution reports. Case series constituted 98.6% (n = 72), only one study was a prospective randomized clinical trial, and 5.5% (n = 4) reported the use of control groups through a cohort design. Sixty-seven percent (n = 49) of all clinical studies were retrospective. Seventy percent (n = 50) of studies were conducted in Africa. Thirty-four percent (n = 25) of studies met more than half the number of baseline criteria (mean, 7.25 ± 2.5 [SD]; range, 1 to 13) [Figure 1]. There was no difference in the reporting of baseline criteria between retrospectively and prospectively designed studies (retrospective, 7.1 ± 2.6; prospective, 7.5 ± 2.4; P = 0.48; [Table 1]). Reporting deficiencies were found in all major aspects of study design and statistical analysis [Table 2]. Half of the studies with a control group met more than half the number of comparison criteria (median, 3.0; range, 0-7). There was also no difference in the reporting of baseline criteria between controlled and non-controlled studies (controlled, 8.25 ± 2.22; non-controlled, 7.19 ± 2.50; P = 0.41). We found no difference in the reporting of baseline criteria between studies originating from within or outside Africa (7.48 ± 2.51 vs. 6.74 ± 2.40; P = 0.24).
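The retrospective-versus-prospective comparison can be approximately re-derived from the published summary statistics alone. The group sizes below are partly an assumption (n = 24 prospective is inferred as 73 − 49), and because the reported means and SDs are rounded, the recomputed P value differs slightly from the published 0.48; the conclusion of no significant difference is unchanged.

```python
# Re-deriving the t-test from reported summary statistics with SciPy.
# ASSUMPTION: prospective n = 24 is inferred (73 total - 49 retrospective);
# the means/SDs are the rounded values printed in the Results.
from scipy import stats

t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=7.1, std1=2.6, nobs1=49,   # retrospective studies
    mean2=7.5, std2=2.4, nobs2=24,   # prospective studies (inferred n)
)

# Rounded inputs yield p close to, but not exactly, the published 0.48;
# either way, p > 0.05, i.e. no significant difference.
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
```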
Table 2: Adequacy of reporting for individual criteria deemed essential for accurate and non-biased clinical reporting
Figure 1: Distribution of baseline reporting criteria by study design, as published in the African Journal of Paediatric Surgery from 2006 to 2010
Discussion
The quality of scientific work is judged at several stages, from the initiation of research projects to the application of their results in clinical practice. Standardized instruments designed to measure the quality of published clinical evidence have had a significant impact on shaping evidence-based practice in many areas of medicine. These tools have served a multitude of functions, including peer review, systematic literature reviews, and tutorials for practicing clinicians.
From the perspective of sound epidemiologic principles, the "methodology" components of study design are believed to carry the greatest weight, given their importance for effectively interpreting retrospective data. The development of a standardized, reliable quality assessment instrument for retrospective data could improve evidence-based practice (EBP) within paediatric surgery. A good example is the universal reporting guideline dictated by the CONSORT (Consolidated Standards of Reporting Trials) statement for randomized trials, which has had a positive impact on the quality of published trials.
The influential role of observational studies in the practice of paediatric surgery, and the need to rigorously evaluate the effectiveness of new interventions before they become standards of care, encouraged Rangel et al. to develop a standardized methodology for the critical appraisal of these data. The 23 essential reporting criteria were used to assess 300 observational studies in the Journal of Pediatric Surgery (JPS) in 2003, and the practice of clinical reporting in paediatric surgery was found to be inadequate across all fundamental aspects of study design. As part of the Journal of Pediatric Surgery's commitment to providing high-quality scientific evidence to its readers, and based on the observation that the CONSORT guidelines currently have limited practical applicability to the core paediatric surgical literature, JPS adopted new submission and publication guidelines for clinical research reports in 2006.
In this study, case series constituted 98.6% of all clinical research evaluated, with only one study being a prospective randomized clinical trial. This is consistent with the previous observation that less than 1% of all clinical research data in paediatric surgery comes from randomized study designs. Observational studies have played, and will continue to play, a significant role as the first and often only line of available evidence on which to base clinical decisions. Until resources and greater collaboration allow for the broader application of high-quality prospective clinical research, a strategy to improve the quality and interpretation of case-series data may have the single greatest impact on improving the state of evidence-based practice within paediatric surgery.
The present review shows that only 34% (n = 25) of studies met more than half the number of baseline criteria for clinical research reporting. This is similar to the findings of Rangel et al. in a review of 300 observational studies published in JPS, where only 39% of the studies met more than half of the baseline criteria. Major deficiencies were found in the descriptions of cases, interventions, and outcome measures, as well as in critical details regarding the comparison of treatment groups. This is also consistent with previously published reports.
The results of this study are consistent with those of previous reports evaluating qualitative clinical evidence in paediatric surgery. In a review of complication reporting in 119 published clinical research reports by Martin et al., no study met all 10 evaluation criteria, and only 34% of studies met 5 or 6 of the criteria. In 2001, Moss et al. characterized substantial deficiencies in the design and conduct of the majority of clinical trials published for paediatric surgical conditions. Thakur et al. also documented qualitative deficiencies in a broad range of clinical study designs from a small cross-section of the paediatric surgery literature.
In this study, we found that the reporting of essential methodological details was not substantially different between retrospective and prospective study designs. Furthermore, we found no statistically significant difference in the reporting of baseline criteria between controlled and non-controlled studies, although the small sample size of controlled studies precluded any meaningful statistical inference. These findings corroborate the report by Martin et al. that study type has no influence on reporting standards. The location of the institution where a study was carried out also had no influence on the reporting of baseline criteria, as documented in other reports.
We focused on reporting in the African Journal of Paediatric Surgery because it is the journal of the Pan African Paediatric Surgical Association (PAPSA) and the Association of Paediatric Surgeons of Nigeria (APSON), and we believe that implementation of standardized reporting guidelines by this journal could have a substantial impact on improving evidence-based practice within paediatric surgery in Africa. PAPSA and other national paediatric surgical associations will need to follow the example of the American College of Surgeons in taking a major initiative to train surgeons to conduct trials, to support trials, and to develop a multicentre clinical research consortium. Standardized assessment of papers submitted for publication can improve the adherence of authors to these standards.
A limitation of the present study is that the scoring of the reporting criteria was not duplicated; inter-rater reliability could, therefore, not be evaluated.
Conclusions
This study has identified deficiencies in the fundamental elements essential to non-biased reporting of clinical research in the African Journal of Paediatric Surgery. We recommend that the Journal adopt validated standard reporting criteria for these studies to improve the ability of its readers to interpret the relevance of clinical research findings to their own practice.
References
1. Rangel SJ, Kelsey J, Henry MC, Moss RL. Critical analysis of clinical research reporting in pediatric surgery: Justifying the need for a new standard. J Pediatr Surg 2003;38:1739-43.
2. Hardin WD Jr, Stylianos S, Lally KP. Evidence-based practice in pediatric surgery. J Pediatr Surg 1999;34:908-12.
3. Rangel SJ, Kelsey J, Colby CE, Anderson J, Moss RL. Development of a quality assessment scale for retrospective clinical studies in pediatric surgery. J Pediatr Surg 2003;38:390-6.
4. Moss RL, Grosfeld JL. A new standard for reporting clinical research in the Journal of Pediatric Surgery. J Pediatr Surg 2006;41:4-6.
5. Junker CA. Adherence to published standards of reporting: A comparison of placebo-controlled trials published in English or German. JAMA 1998;280:247-9.
6. Moher D, Jadad AR, Nichol G, Penman M, Tugwell P, Walsh S. Assessing the quality of randomized controlled trials: An annotated bibliography of scales and checklists. Control Clin Trials 1995;16:62-73.
7. Begg C, Cho M, Eastwood S, Horton R, Moher D, Olkin I, et al. Improving the quality of reporting of randomized controlled trials: The CONSORT statement. JAMA 1996;276:637-9.
8. Moss RL, Henry MC, Dimmitt RA, Rangel S, Geraghty N, Skarsgard ED. The role of prospective, randomized clinical trials in pediatric surgery: State of the art? J Pediatr Surg 2001;36:1182-6.
9. Rangel SJ, Moss RL. The need for critical analysis of case-series reporting in pediatric surgery. Semin Pediatr Surg 2002;11:184-9.
10. Thakur A, Wang EC, Chiu TT, Chen W, Ko CY, Chang JT, et al. Methodology standards associated with quality reporting in clinical studies in pediatric surgery journals. J Pediatr Surg 2001;36:1160-4.
11. Schumm LP, Fisher JS, Thisted RA, Olak J. Clinical trials in general surgical journals: Are methods better reported? Surgery 1999;125:41-5.
12. Martin RC, Brennan MF, Jaques DP. Quality of complication reporting in the surgical literature. Ann Surg 2002;235:803-13.
13. Hall JC, Hall JC. Baseline comparison in surgical trials. ANZ J Surg 2002;72:567-9.
Abdulrasheed A Nasir
Department of Surgery, Division of Paediatric Surgery, University of Ilorin Teaching Hospital, PMB 1459, Ilorin
Source of Support: None, Conflict of Interest: None