

Systematic review finds that study data not published in full text articles have unclear impact on meta-analyses results in medical research

  • Christine M. Schmucker,
  • Anette BlĂŒmle,
  • Lisa K. Schell,
  • Guido Schwarzer,
  • Patrick Oeller,
  • Laura Cabrera,
  • Erik von Elm,
  • Matthias Briel,
  • Joerg J. Meerpohl,
  • on behalf of the OPEN consortium

PLOS ONE


  • Published: Apr 25, 2017
  • https://doi.org/10.1371/journal.pone.0176210

Abstract

Background

A meta-analysis as part of a systematic review aims to provide a thorough, comprehensive and unbiased statistical summary of data from the literature. However, relevant study results could be missing from a meta-analysis because of selective publication and inadequate dissemination. If missing study results differ systematically from published ones, a meta-analysis will be biased with an inaccurate assessment of the intervention effect. As part of the EU-funded OPEN project (www.open-project.eu) we conducted a systematic review that assessed whether the inclusion of data that were not published at all and/or published only in the grey literature influences pooled effect estimates in meta-analyses and leads to a different interpretation.

Methods and findings

Systematic review of published literature (methodological research projects). Four bibliographic databases were searched up to February 2016 without restriction of publication year or language. Methodological research projects were considered eligible for inclusion if they reviewed a cohort of meta-analyses which (i) compared pooled effect estimates of meta-analyses of health care interventions according to publication status of data or (ii) examined whether the inclusion of unpublished or grey literature data impacts the result of a meta-analysis.

Seven methodological research projects including 187 meta-analyses comparing pooled treatment effect estimates according to different publication status were identified. Two research projects showed that published data showed larger pooled treatment effects in favour of the intervention than unpublished or grey literature data (ratio of ORs 1.15, 95% CI 1.04–1.28 and 1.34, 95% CI 1.09–1.66). In the remaining research projects pooled effect estimates and/or overall findings were not significantly changed by the inclusion of unpublished and/or grey literature data. The precision of the pooled estimate increased, with a narrower 95% confidence interval.

Conclusions

Although we may expect that systematic reviews and meta-analyses not including unpublished or grey literature study results are likely to overestimate treatment effects, current empirical research shows that this is only the case in a minority of reviews. Therefore, currently, a meta-analyst should particularly consider time, effort and costs when adding such data to their analysis. Future research is needed to identify which reviews may benefit most from including unpublished or grey data.

Introduction

A meta-analysis as part of a systematic review aims to provide a thorough, comprehensive and unbiased statistical summary of data from the literature.[1] However, relevant study results could be missing from a meta-analysis because of selective publication and inadequate dissemination (non-dissemination or insufficient dissemination). Even the most comprehensive searches are likely to miss study data which are not published at all, such as supplemental unpublished data related to published trials, data obtained from the Food and Drug Administration (FDA) or other regulatory websites, or postmarketing analyses hidden from the public. In addition, study data that are not published in conventional journals and, therefore, are not indexed in electronic databases are also likely to remain unidentified. This so-called 'grey literature' is not controlled by commercial or academic publishers. It includes non-indexed conference abstracts often published in journal collections, dissertations, press releases, government reports, policy documents, book chapters or data obtained from trial registers (Table 1). If the results from missing study data (unpublished and/or study data published in the grey literature) differ systematically from the published data available, a meta-analysis may become biased with an inaccurate assessment of the intervention effect.[2–4]

There is some evidence that published randomized controlled trials tend to be larger and show an overall greater treatment effect in favor of the intervention than grey literature trials or unpublished data.[5–8] However, the identification of relevant unpublished study data or data published in the grey literature and their inclusion in meta-analyses can be particularly challenging in terms of excessive time, effort and costs. There is also some controversy regarding whether unpublished study data and data published in the grey literature should be included in meta-analyses at all, because they are generally not peer reviewed and their internal validity (risk of bias) may be difficult to assess due to poor reporting of the trials. On the other hand, conference proceedings in particular may have a separate role in the grey literature as they often provide preliminary results or results following intermediate follow-up. A publication by Cook and colleagues showed that 78% of authors of meta-analyses felt that unpublished studies should be included in meta-analyses compared to only 47% of journal editors.[9] Therefore, research is needed to assess the potential impact of the inclusion of 'grey literature' study data and unpublished data in meta-analyses of health care interventions.

We investigated the impact of study data that were not published in full text articles in scientific journals on pooled effect estimates and the overall interpretation of meta-analyses.

The current review was part of the EU-funded OPEN project (To Overcome failure to Publish nEgative fiNdings; www.open-project.eu) which aimed to investigate non-publication of study data and related dissemination bias through a series of systematic reviews[10–14] following a previously published protocol.[15]

Methods

Systematic literature search

We initially searched Medline (Ovid), Embase (Ovid), The Cochrane Library and Web of Science from inception until February 2012. An update search was performed in February 2016. The search strategy was based on combinations of medical subject headings (MeSH) and keywords and was not restricted to specific languages or years of publication. The search strategy used in Medline (Ovid) is presented in S1 Search Strategy. Search strategies for other databases were modified to meet the requirements of each database. The searches were supplemented by checking the bibliographies of all eligible articles for additional references.

Patient involvement

This research is based on empirical work. Therefore, there was no patient involvement in this methodological systematic review of reviews (so-called umbrella review).

Study selection

Titles and abstracts were reviewed using pre-defined inclusion criteria. Full papers of all methodological research projects which included a cohort of meta-analyses (i.e., more than one meta-analysis) and (i) compared pooled effect estimates of meta-analyses of health care interventions according to publication status (i.e., published vs. unpublished and/or grey study data) and/or (ii) examined whether the inclusion of unpublished and/or grey study data impacts the overall findings of a meta-analysis (i.e., from negatively significant to positively significant; from not clinically relevant to clinically relevant) were obtained for detailed evaluation.

All stages of study selection, data extraction and quality assessment were done independently by two reviewers (study selection and data extraction: PO and LC; quality assessment: CS and LKS). Any disagreements during the selection, extraction, and assessment process were resolved by discussion and consensus or with the help of a third reviewer (JJM).

We considered a study 'published' when it appeared in a peer-reviewed journal. The definition of unpublished and/or grey literature study data had to be in accordance with the definitions of 'unpublished studies' and 'grey literature' described above (see Introduction).

A meta-analysis was defined as the mathematical calculation of a weighted summary estimate of a treatment effect by pooling the results of two or more studies.
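The weighted pooling in this definition can be illustrated with a minimal inverse-variance (fixed-effect) sketch. The function name and example values below are illustrative assumptions, not taken from any of the reviewed projects:

```python
import math

def pool_fixed_effect(log_ors, ses):
    """Inverse-variance (fixed-effect) pooling of study estimates
    on the log odds ratio scale. Returns the pooled log OR and the
    standard error of the pooled estimate."""
    weights = [1.0 / se ** 2 for se in ses]          # weight = 1 / variance
    pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))        # SE shrinks as studies are added
    return pooled, se_pooled
```

Exponentiating the pooled log OR gives the summary odds ratio; adding further studies increases the total weight, which is why including unpublished data narrows the confidence interval even when it does not shift the estimate.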

Outcomes

First, we focused on the extent to which the pooled effect estimate in a meta-analysis changes with the inclusion of unpublished and/or study data published in the grey literature, in comparison to published study data. Where possible, we calculated as our primary study outcome a ratio of risk ratios (RRR) or odds ratios (ROR) between the results of published data and the results of unpublished and/or grey literature data and estimated the percentage change (pooled risk ratio from published data divided by pooled risk ratio from unpublished and/or grey literature data).[15] Thereby, a ratio greater than 1.0 would indicate that published study data showed a greater treatment effect; likewise, a ratio below 1.0 would indicate that published data showed a smaller treatment effect. We also intended to calculate a single weighted pooled RRR or ROR to combine ratios from the different methodological research projects and estimate an overall pooled effect, which also takes into account factors such as the number of studies, patients and events. For the intended analyses (calculating a ratio of risk or odds ratios (RRR, ROR) between the results of published study data and unpublished and/or grey study data), the single effect estimates (RR, OR) estimated by the included meta-analyses were the 'unit of analysis'.
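The ratio-of-odds-ratios comparison described above can be sketched as follows. This is a simplified illustration computed on the log scale; the function name and its inputs (two pooled ORs with the standard errors of their logs, assumed independent) are assumptions for the example, not the exact method of the cited projects:

```python
import math

def ratio_of_odds_ratios(or_pub, se_log_pub, or_unpub, se_log_unpub, z=1.96):
    """Ratio of odds ratios (published / unpublished) with an
    approximate 95% CI. A ratio > 1.0 means published data show a
    greater treatment effect than unpublished/grey literature data."""
    log_ror = math.log(or_pub) - math.log(or_unpub)
    se = math.sqrt(se_log_pub ** 2 + se_log_unpub ** 2)  # independent estimates
    ror = math.exp(log_ror)
    ci = (math.exp(log_ror - z * se), math.exp(log_ror + z * se))
    return ror, ci
```

For instance, a published pooled OR of 0.60 against an unpublished pooled OR of 0.69 yields an ROR of about 0.87, i.e., published trials showing a stronger protective effect.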

Second, we aimed to investigate the impact of the inclusion of unpublished or grey literature study data on the interpretation of meta-analyses. This impact can be estimated by calculating the proportion of meta-analyses which showed a change in their interpretation (e.g., from negatively significant to positively significant; from not clinically relevant to clinically relevant).[15]

Data extraction

We extracted main characteristics of (i) the methodological research projects (e.g., baseline data, area of health care, number of meta-analyses included); (ii) the meta-analyses (e.g., purpose and scope of meta-analyses, number of studies and participants included); and (iii) the studies included in the meta-analyses (e.g., number of studies depending on publication status). For more detail see our published protocol.[15]

Assessment of risk of bias and generalizability of results

No quality assessment tool exists for these types of methodological research projects. Risk of bias (internal validity) and generalizability (external validity) were therefore assessed according to pre-defined criteria which were developed considering empirical evidence on dissemination bias[10, 16] and internal discussion.[15] The assessment of risk of bias was based on (i) the selection procedure, i.e., whether and to what extent the search criteria used to identify unpublished and/or grey and published study data were reported; (ii) the definition of publication status, i.e., whether explicit criteria were reported for the definition of unpublished or grey literature and published data; and (iii) the role of confounding factors, i.e., whether the difference in results between unpublished/grey and published study data may be explained by differences in study designs, type of participants or intervention characteristics and not by a true difference in the results between unpublished/grey literature and published data; we therefore investigated whether analyses were stratified or results adjusted for possible confounders. In addition, we investigated the reliability of the data extraction process, i.e., whether data extraction was performed by two researchers independently. The generalizability assessment was based on (i) the status of the sample of meta-analyses included, i.e., whether a random, consecutive or selected sample was included, and (ii) whether the research project selected a broad-ranging sample of meta-analyses that represents the current literature in the field of interest (e.g., in terms of size or diversity of topic).

For data extraction and risk of bias assessment, we relied on the information provided in publications of the methodological research projects.

Statistical analysis and data synthesis

The sparse data did not allow us to apply the predefined statistical analyses, neither for the primary analysis nor for the subgroup analyses.[15] Instead, the results of this systematic review are presented descriptively using text and tables.

Results

Literature search and selection process

The searches identified 8464 citations, including 3301 duplicates (Fig 1). Among the 5163 unique references screened, 10 references[3, 17–25] corresponding to 7 methodological research projects[3, 19–24] were eligible for inclusion in this systematic review.

Characteristics of included research projects

Main characteristics of the 7 research projects are presented in Tables 2 and 3. In brief, 5 research projects included conventional intervention reviews,[3, 20–23] 1 research project was solely based on safety aspects,[24] while another research project included individual participant data meta-analyses.[19] Different medical specialties were represented in 4 research projects,[3, 22–24] while 3 focused on a single medical field.[19–21] In total, 187 meta-analyses with 1617 primary studies (373 unpublished/grey literature studies and 1244 published studies) enrolling a total of 428762 participants (58786 participants in unpublished/grey literature studies and 369976 in published studies) were included. It has to be taken into account that the given numbers of included studies and participants are underestimates because Hart et al[23] and Golder et al[24] did not provide these study characteristics in detail. The publication dates of the latest meta-analyses included in the research projects ranged between 1995[3] and 2014.[24]

Assessment of risk of bias and generalizability of results

Table 4 presents the assessment of risk of bias and generalizability of results for each research project. Regarding risk of bias, each research project reported how unpublished or grey literature study data were identified within meta-analyses. Unpublished or grey literature data (e.g., in terms of conference abstracts, dissertations or editorials) were sufficiently defined in all research projects. The main limitation of the research projects was that most of them (except for Golder et al[24]) did not allow us to judge whether grey literature or unpublished study data, in comparison to published data, were adequately matched (e.g., in terms of study aim or sample size) or adjusted for confounders.

Generalizability of results was low or unclear in 4 research projects.[3, 19, 20, 24] This means that the results of these research projects were either based on a selected sample of meta-analyses (e.g., meta-analyses from one research group only were used) or the medical field of interest was not sufficiently represented (e.g., only a few rare sorts of cancer or a small range of interventions were considered).

Effect of unpublished or grey literature study data on pooled estimates in meta-analyses

The effects of unpublished or grey literature studies on pooled estimates in meta-analyses are shown in Table 5. One research project (including 467 randomized controlled trials) showed that published studies had a larger pooled treatment effect in favor of the intervention than unpublished studies (ROR 1.15, 95% CI 1.04–1.28).[3] In the remaining research projects pooled effect estimates were not significantly changed by the inclusion of unpublished or grey literature data. However, Egger et al[22] presented the pooled effect estimate across different medical specialties (ROR 1.07, 95% CI 0.98–1.15) but also separate effect estimates for selected medical fields. In the field of obstetrics and gynaecology this pooled analysis showed that published results are more positive than unpublished results (ROR 1.34, 95% CI 1.09–1.66). In psychiatry there was a similar trend but pooled estimates did not reach statistical significance (ROR 1.61, 95% CI 0.9–2.9). Combining estimates across methodological research projects was not possible due to differences in the definitions of effect estimates (some research projects reported hazard ratios, others odds ratios or risk ratios, or even weighted mean differences) and clinical heterogeneity (different aims of the research projects regarding safety and efficacy outcomes).

Impact of unpublished or grey literature study data on the interpretation of meta-analyses

Five research projects provided additional information on the overall impact of unpublished or grey literature study data on the interpretation of the results. The results are descriptively summarized in Table 5. Hart and colleagues[23] reported that the addition of unpublished data to their sample of meta-analyses resulted in 46% lower, 7% identical and 46% greater effect estimates than published data. In the research project from Egger et al,[22] removal of grey literature data resulted in a change in pooled estimates ranging from a 28% decrease to a 24% increase in benefit. McAuley and colleagues[3] reported that removal of grey literature data changed the estimate by at least 10%. Thereby, the significance of the results was affected in 3 out of 41 meta-analyses.

On the other hand, Fergusson and colleagues[21] and Golder and colleagues[24] stated that 'effect estimates were not substantially changed'[21] or that 'the direction and magnitude of the difference varies and is not consistent'[24] when unpublished or grey literature data are added.

Discussion

Main findings

Although it has been shown that some example samples of meta-analyses not including grey literature or unpublished data clearly overestimate treatment effects,[6–8] quantifying this effect by considering all meta-epidemiological studies (so-called methodological research projects) reveals that this affects only a minority of reviews. In the majority of meta-analyses over a wide range of medical fields, excluding unpublished trials had no or only a small effect on the pooled estimates of treatment effects. However, in some instances more substantial, statistically significant changes were observed (overestimating the effect by between 9% and 60%).[22] There may be a trend in research areas involving new drugs or technologies to publish the most exciting and positive results more rapidly, and negative ones less quickly, if at all.[10] Also, sponsorship of drug and device studies by the manufacturer leads to more favourable results and conclusions than sponsorship by other sources.[26] Consequently, the problem of dissemination bias could be more pronounced in medical areas in which relevant innovations are being developed at a quick pace or when trials are published close to drug approval. This assumption, however, could not be proven with the available empirical data.

Our research and other reviews[5, 27] revealed that unpublished trials are often smaller (e.g., Table 3, differences in medians between unpublished or grey literature study data and published data: 11,[22] 534,[21] and 29[3] patients, respectively). Small sample sizes may be one of the reasons that unpublished or grey literature study data are less likely to produce statistically significant results than published data. However, if study size were the only factor affecting the likelihood of publication, this would not result in bias, only in a lack of precision with wider confidence intervals of effect estimates.

Methodological research projects included in this review used different statistical methods to determine the contribution of unpublished data in meta-analyses. For example, Egger and colleagues[22] used the statistic chosen by the original reviewers of the meta-analyses to calculate pooled effect estimates separately for unpublished and published trials. Thereafter, weighted averages for all these ratios were calculated using random effects models. McAuley and colleagues[3] chose a fixed effect logistic regression model which requires individual patient data from each trial. This approach ignores heterogeneity between trials and between meta-analyses. In general, too little consideration has been given to appropriate statistical methods for this type of meta-epidemiological research so far. This may lead to an underestimation of the uncertainty of effect estimates due to unpublished data in meta-analyses.[28]
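As a rough illustration of the random-effects weighting mentioned above, a minimal DerSimonian-Laird sketch might look as follows. This is a generic textbook version, assumed for illustration only, and not the exact implementation used by any of the cited projects:

```python
import math

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling of study effect
    estimates (e.g., log odds ratios). Estimates between-study
    variance tau^2 from Cochran's Q and re-weights the studies."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw   # fixed-effect mean
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    tau2 = max(0.0, (q - (k - 1)) / (sw - sum(wi ** 2 for wi in w) / sw))
    w_star = [1.0 / (v + tau2) for v in variances]            # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2
```

When the studies are homogeneous (tau^2 = 0) this reduces to fixed-effect pooling; with heterogeneity, the wider random-effects weights yield more conservative confidence intervals.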

None of the methodological research projects addressed the problem of multiple journal publications.[29] Unaccounted duplicate publication may inflate the number of participants and/or events, leading to spuriously increased precision and, evidently, causing dissemination bias.

Evidence from a Cochrane review has shown that only about half of all trials reported as abstracts and presented at conferences are subsequently published in full.[16] In addition, it takes, on average, three years for a trial reported as an abstract to eventually be published in a peer-reviewed journal. Therefore, excluding them seems an arbitrary act that may bias the results. On the other hand, McAuley and colleagues showed that the inclusion of abstracts had no relevant impact on pooled estimates of meta-analyses over different medical fields.[3] Moreover, concerns have been raised regarding the methodological and reporting quality of unpublished studies, because grey or unpublished literature is often not peer reviewed. We believe that abstracts may have a separate role in the grey literature as they often provide preliminary study results, results following intermediate follow-up, or unexpected findings. Consequently, when a researcher decides to include unpublished or grey literature study data in meta-analyses, it is important to run sensitivity analyses to identify possible differences between results from unpublished or grey literature studies and from fully published papers. While there is no doubt that studies with positive results are later published as full-length journal articles more often than studies with negative results,[10] lack of time on the part of the authors may be a major reason for non-publication of research, independent of the direction of results.[30]

Strengths and weaknesses of this review

This systematic review sought to comprehensively synthesize the body of research on the impact of including unpublished study data and data published in the grey literature in meta-analyses. Despite discussing multiple study characteristics and potential confounders related to unpublished studies and studies published in the grey literature, we could not identify sufficient evidence to conclude whether, or to what extent, inclusion of unpublished and grey study data has an impact on the pooled effect estimates and the conclusions from meta-analyses. Nevertheless, the available research projects demonstrate that availability of unpublished and grey literature data leads to more precise risk estimates with narrower 95% confidence intervals, thus representing higher evidence strength according to the GRADE evaluation (Grades of Recommendation, Assessment, Development and Evaluation).[31] In addition, we developed criteria to assess both risk of bias and generalizability for this specific type of empirical research, which may be of high value in future methodological research. Our strategy was not focused on the results of single meta-analyses including published and unpublished data, but on meta-epidemiological studies. We expected that these research projects would allow us to estimate the "average" overestimation of treatment effects due to dissemination bias.

Nevertheless, we are aware that our findings have several limitations. First, we could not identify sufficient research projects to conclude whether, or to what extent, inclusion of unpublished and grey study data has an impact on the conclusions from meta-analyses. Second, the risk of bias assessment revealed that internal validity may be hampered due to the lack of appropriate adjustment for potential confounders between published and unpublished or grey literature data in the identified methodological research projects. Third, our research is mainly limited to selected samples of medical literature (e.g., rare sorts of cancer or a small range of adverse effects), and hence the findings may not be generalizable to other medical fields. Nevertheless, most medical fields assessed were large and permitted evaluation of a large number of studies.[19, 21] Another weakness of our study relates to its retrospective nature and its reliance on what authors described as comprehensive literature searches. We did not appraise whether the sample of trials identified by these authors was in fact complete and whether searches were truly comprehensive. If searches were inadequate, so that many unpublished or grey literature studies with negative results were consciously or unconsciously omitted, then our review may underestimate the impact of dissemination bias. Roughly the same would be true if predominantly unpublished or grey literature studies with results similar to published studies were identified by inadequate searches. But we could not gauge how often this happened. On the other hand, we are concerned about the possibility of dissemination bias (in particular reporting bias), where investigators may have chosen not to write up their results (e.g., for a subgroup of patients) if they did not find any significant differences between published and unpublished study data.
We believe that the impact of unpublished or grey literature data on pooled estimates could be assessed more thoroughly if the intention to compare data sources according to publication status were built in at the protocol stage of these meta-analyses.

The time, effort, and cost involved in locating and retrieving unpublished data and grey literature make their inclusion in reviews challenging. The legal obligation in many countries (including the United States and Europe) to prospectively register trials and make results available after completion of the trial, different registries for clinical trials such as the International Clinical Trials Registry Platform (ICTRP) or the database ClinicalTrials.gov, internet-based grey literature resources, journals devoted to negative trials, and efforts taken by various groups, including Cochrane (through trial registries), may further ease the identification and inclusion of unpublished data in meta-analyses.

We acknowledge that more than half of all published systematic reviews do not include meta-analyses.[32] Despite our focus on the impact of unpublished and grey literature study data on pooled effect estimates in meta-analyses, we believe that our findings are also applicable to systematic reviews with qualitative/descriptive summaries.

Comparison with other systematic reviews

We are aware of one methodological Cochrane review which addressed the impact of unpublished and grey literature data in meta-analyses on the basis of meta-epidemiological studies.[5] This review was published in 2007 and concluded that grey literature trials show an overall greater treatment effect than published trials. The authors acknowledged that the evidence is sparse and that more efforts are needed to identify a complete and unbiased set of trials irrespective of whether they have been published or not. In contrast to our review, this methodological review is almost 10 years old and did not apply methods to address risk of bias and generalizability of the results of the included studies covering the given research question.

Our findings suggest that dissemination bias is a very serious threat to the results of meta-analyses, but does not always impact their results. This finding is supported by other studies (not meeting the inclusion criteria for this review) based on unpublished FDA data and published data, e.g., [6, 33]. One of these meta-analyses, investigating selective publication of antidepressant trials, found a bias toward the publication of positive results, resulting in an effect size about one third larger than the effect size derived from unpublished FDA data.[6] Conversely, MacLean and colleagues[33] reported that risk ratios for dyspepsia did not significantly or clinically differ using published or unpublished FDA data.

Implications for policy makers and further research

This piece of work has implications for researchers and those who use meta-analyses to assistance inform clinical and policy decisions. (i) Investigators should ensure a comprehensive systematic literature search to avoid or at least attenuate the effect of broadcasting bias. Such searches can be resources-intensive particularly when unpublished and grayness literature data need to exist identified. If the available resources do not permit comprehensive searches to identify unpublished or grey literature information, we strongly recommend (at least) a search in trial registries (such as the ICTRP and ClinicalTrials.gov) and websites of regulatory authorities which is less resource-intensive than searching for briefing proceedings or dissertations, contacting experts, the industry and authors. When including unpublished or greyness literature data sensitivity analyses should be carried out taking into account that this research may provide just preliminary results, is usually non peer reviewed and/or at college risk of bias. Information technology is obvious that fifty-fifty a thorough literature search cannot eliminate dissemination bias. Therefore, it is also of great importance to apply boosted methods for detecting, quantifying and adjusting for dissemination bias in meta-analyses.[14] Such methods include graphical methods based on funnel plot asymmetry, statistical methods, such as regression tests, selection models, and a corking number of more recent statistical approaches. 
[2] [34–36] However, the empirical research piece of work of Mueller et al 2016 concluded that it remains hard to advise which method should exist used equally they are all express and only few methods have been validated in empirical evaluations using unpublished studies obtained from regulators (e.g., FDA studies).[14] Selective outcome reporting in clinical studies is also an indicator for subconscious or missing data, especially when merely selective slices of the complete clinical trial are published or when studies testify huge drop-out rates without providing reasons for these patients who left the study.[vii, 37, 38] Overall, researchers should carefully consider the potential risk of dissemination bias when interpreting their findings. (ii) Those using meta-analyses to assist with clinical and policy decisions should also exist aware of broadcasting bias, considering dissemination bias may take directly impact for patient care.[39] (iii) Major improvements accept been made in the accessibility of data past initiatives such as the AllTrials entrada (world wide web.alltrials.net) calling for all trials to be registered and the methods and results to be reported, the European Medicines Agency (EMA) policy on publication of clinical information on request since 2015, the obligatory release of results in trial registries by the European police force (Clinical Trial Regulation), the FDA Amendment Deed in 2007 and advancement from the Cochrane Collaboration to fully implement such policies. Although progress has been made, there are withal major issues related to unrestricted data admission. Even when data are released, they can be incomplete, selective or non in compliance with the results reported in study registers such equally ClinicalTrials.gov.[xl] [41] [42] Therefore, further action is required to progress toward unrestricted data access. 
In particular, the full release of clinical study reports (CSRs), which may contain more data than other unpublished sources, may have the potential to overcome existing problems.[43] (iv) Our research indicates that it will not be possible for a meta-analyst to guess beforehand whether the addition of unpublished and grey literature study data impacts the pooled effect estimates and leads to a change in the overall conclusions. (v) Finally, even the most comprehensive search for grey and unpublished data will not allow a final judgment on whether the identified sample is in fact complete and representative of all hidden data.
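As an illustration of the regression tests for funnel plot asymmetry mentioned above, the following is a minimal sketch of Egger's test: the standardized effect (effect / SE) is regressed on precision (1 / SE), and an intercept far from zero suggests small-study effects such as publication bias. This is not the analysis performed in this review; the data below are simulated purely for illustration.

```python
import numpy as np

def egger_test(effects, ses):
    """Egger's regression test for funnel plot asymmetry.

    Regresses standardized effects (effect / SE) on precision (1 / SE).
    Returns (intercept, SE of intercept, t statistic); an intercept far
    from zero suggests small-study effects such as publication bias.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    y = effects / ses            # standardized effects
    x = 1.0 / ses                # precision
    # Ordinary least squares with intercept: y = b0 + b1 * x
    X = np.column_stack([np.ones_like(x), x])
    coef = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ coef
    dof = len(y) - 2
    sigma2 = resid @ resid / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    se_b0 = np.sqrt(cov[0, 0])
    return coef[0], se_b0, coef[0] / se_b0

# Simulated log-odds ratios: smaller studies (larger SE) show larger effects,
# the asymmetric pattern one might see under publication bias.
log_ors = [0.42, 0.35, 0.28, 0.15, 0.10]
ses = [0.40, 0.30, 0.20, 0.12, 0.08]
b0, se_b0, t = egger_test(log_ors, ses)
print(f"intercept = {b0:.2f} (SE {se_b0:.2f}), t = {t:.2f}")
```

Note that with only a handful of studies, as here, such tests have very low power, which is one reason the literature cited above cautions against relying on any single adjustment method.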

Supporting information

Acknowledgments

The authors thank the members of the Open consortium Gerd Antes, Vittorio Bertele, Xavier Bonfill, Marie-Charlotte Bouesseau, Isabelle Boutron, Silvano Gallus, Silvio Garattini, Karam Ghassan, Carlo La Vecchia, Britta Lang, Jasper Littmann, Jos Kleijnen, Michael Kulig, Mario Malicki, Ana Marusic, Katharina Felicitas Mueller, Hector Pardo, Matthias Perleth, Philippe Ravaud, Andreas Reis, Daniel Strech, Ludovic Trinquart, Gerard UrrĂștia, Elizabeth Wager, Alexandra Wieland, and Robert Wolff.

The authors also thank Edith Motschall for conducting the comprehensive systematic literature search.

Author Contributions

  1. Conceptualization: CS JM.
  2. Formal analysis: GS.
  3. Funding acquisition: JM.
  4. Investigation: CS LKS AB PO LC JM.
  5. Methodology: CS JM.
  6. Supervision: JM.
  7. Writing – original draft: CS.
  8. Writing – review & editing: CS EvE MB JM.

References

  1. Higgins JPT, Green S. Cochrane handbook for systematic reviews of interventions version 5.1.0 [updated March 2011]. The Cochrane Collaboration, 2011. www.cochrane-handbook.org.
  2. Sterne JA, Sutton AJ, Ioannidis JP, Terrin N, Jones DR, Lau J, et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ. 2011; 343: d4002. pmid:21784880
  3. McAuley L, Pham B, Tugwell P, Moher D. Does the inclusion of grey literature influence estimates of intervention effectiveness reported in meta-analyses? Lancet. 2000; 356: 1228–1231. pmid:11072941
  4. Song F, Eastwood AJ, Gilbody S, Duley L, Sutton AJ. Publication and related biases. Health Technol Assess. 2000; 4: 1–115.
  5. Hopewell S, McDonald S, Clarke MJ, Egger M. Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database Syst Rev. 2007; 2: MR000010.
  6. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med. 2008; 358: 252–260. pmid:18199864
  7. Eyding D, Lelgemann M, Grouven U, Harter M, Kromp M, Kaiser T, et al. Reboxetine for acute treatment of major depression: Systematic review and meta-analysis of published and unpublished placebo and selective serotonin reuptake inhibitor controlled trials. BMJ. 2010; 341: c4737. pmid:20940209
  8. Driessen E, Hollon SD, Bockting CL, Cuijpers P, Turner EH. Does publication bias inflate the apparent efficacy of psychological treatment for major depressive disorder? A systematic review and meta-analysis of US National Institutes of Health-funded trials. PLoS One. 2015; 10: e0137864. pmid:26422604
  9. Cook DJ, Guyatt GH, Ryan G, Clifton J, Buckingham L, Willan A, et al. Should unpublished data be included in meta-analyses? Current convictions and controversies. JAMA. 1993; 269: 2749–2753. pmid:8492400
  10. Schmucker C, Schell LK, Portalupi S, Oeller P, Cabrera L, Bassler D, et al. Extent of non-publication in cohorts of studies approved by research ethics committees or included in trial registries. PLoS One. 2014; 9: e114023. pmid:25536072
  11. Mueller KF, Briel M, Strech D, Meerpohl JJ, Lang B, Motschall E, et al. Dissemination bias in systematic reviews of animal research: A systematic review. PLoS One. 2014; 9: e116016. pmid:25541734
  12. Mueller KF, Meerpohl JJ, Briel M, Antes G, von Elm E, Lang B, et al. Detecting, quantifying and adjusting for publication bias in meta-analyses: Protocol of a systematic review on methods. Syst Rev. 2013; 2: 60. pmid:23885765
  13. Portalupi S, von Elm E, Schmucker C, Lang B, Motschall E, Schwarzer G, et al. Protocol for a systematic review on the extent of non-publication of research studies and associated study characteristics. Syst Rev. 2013; 2: 2. pmid:23302739
  14. Mueller KF, Meerpohl JJ, Briel M, Antes G, von Elm E, Lang B, et al. Methods for detecting, quantifying, and adjusting for dissemination bias in meta-analysis are described. J Clin Epidemiol. 2016; 80: 25–33. pmid:27502970
  15. Schmucker C, Bluemle A, Briel M, Portalupi S, Lang B, Motschall E, et al. A protocol for a systematic review on the impact of unpublished studies and studies published in the grey literature in meta-analyses. Syst Rev. 2013; 2: 24. pmid:23634657
  16. Scherer RW, Langenberg P, von Elm E. Full publication of results initially presented in abstracts. Cochrane Database Syst Rev. 2007; MR000005. pmid:17443628
  17. McAuley LM, Moher D, Tugwell P. The role of grey literature in meta-analysis [abstract]. Third International Congress on Biomedical Peer Review and Global Communications; 1997 Sept 18–20; Prague, Czech Republic. 1997.
  18. Burdett S, Stewart LA. Publication bias and meta-analysis: A practical example [abstract]. 8th Annual Cochrane Colloquium; 2000 Oct 25–29; Cape Town, South Africa. 2000; 12.
  19. Burdett S, Stewart LA, Tierney JF. Publication bias and meta-analyses: A practical example. Int J Technol Assess Health Care. 2003; 19: 129–134. pmid:12701945
  20. Martin JLR, Perez V, Sacristan M, Alvarez E. Is grey literature essential for a better control of publication bias in psychiatry? An example from three meta-analyses of schizophrenia. European Psychiatry. 2005; 20: 550–553. pmid:15994063
  21. Fergusson D, Laupacis A, Salmi LR, McAlister FA, Huet C. What should be included in meta-analyses? An exploration of methodological issues using the ISPOT meta-analyses. Int J Technol Assess Health Care. 2000; 16: 1109–1119. pmid:11155831
  22. Egger M, Juni P, Bartlett C, Holenstein F, Sterne J. How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Empirical study. Health Technol Assess. 2003; 7: 1–76.
  23. Hart B, Lundh A, Bero L. Effect of reporting bias on meta-analyses of drug trials: Reanalysis of meta-analyses. BMJ. 2012; 344: d7202. pmid:22214754
  24. Golder S, Loke YK, Wright K, Norman G. Reporting of adverse events in published and unpublished studies of health care interventions: A systematic review. PLoS Med. 2016; 13: e1002127. pmid:27649528
  25. Golder S, Loke YK, Bland M. Unpublished data can be of value in systematic reviews of adverse effects: Methodological overview. J Clin Epidemiol. 2010; 63: 1071–1081. pmid:20457510
  26. Lundh A, Sismondo S, Lexchin J, Busuioc OA, Bero L. Industry sponsorship and research outcome. Cochrane Database Syst Rev. 2012; 12: MR000033. pmid:23235689
  27. Hopewell S. Impact of grey literature on systematic reviews of randomized trials [PhD thesis]. Oxford: Wolfson College, University of Oxford. 2004.
  28. Sterne JA, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Stat Med. 2002; 21: 1513–1524. pmid:12111917
  29. Rennie D. Fair conduct and fair reporting of clinical trials. JAMA. 1999; 282: 1766–1768. pmid:10568651
  30. Scherer RW, Ugarte-Gil C, Schmucker C, Meerpohl JJ. Authors report lack of time as main reason for unpublished research presented at biomedical conferences: A systematic review. J Clin Epidemiol. 2015; 68: 803–810. pmid:25797837
  31. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, et al. GRADE: An emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008; 336: 924–926. pmid:18436948
  32. Clarke M, Hopewell S, Chalmers I. Clinical trials should begin and end with systematic reviews of relevant evidence: 12 years and waiting. Lancet. 2010; 376: 20–21. pmid:20609983
  33. MacLean CH, Morton SC, Ofman JJ, Roth EA, Shekelle PG. How useful are unpublished data from the Food and Drug Administration in meta-analysis? J Clin Epidemiol. 2003; 56: 44–51. pmid:12589869
  34. Kepes S, Banks G, McDaniel M, Whetzel D. Publication bias in the organizational sciences. Organizational Research Methods. 2012; 15: 624–662.
  35. Langhorne P. Bias in meta-analysis detected by a simple, graphical test. Prospectively identified trials could be used for comparison with meta-analyses. BMJ. 1998; 316: 471.
  36. David SP, Ware JJ, Chu IM, Loftus PD, Fusar-Poli P, Radua J, et al. Potential reporting bias in fMRI studies of the brain. PLoS One. 2013; 8: e70104. pmid:23936149
  37. Loder E, Tovey D, Godlee F. The Tamiflu trials. BMJ. 2014; 348: g2630. pmid:24811414
  38. Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, et al. Increasing value and reducing waste: Addressing inaccessible research. Lancet. 2014; 383: 257–266. pmid:24411650
  39. Meerpohl JJ, Schell LK, Bassler D, Gallus S, Kleijnen J, Kulig M, et al. Evidence-informed recommendations to reduce dissemination bias in clinical research: Conclusions from the OPEN (Overcome failure to Publish nEgative fiNdings) project based on an international consensus meeting. BMJ Open. 2015; 5: e006666. pmid:25943371
  40. Boutron I, Dechartres A, Baron G, Li J, Ravaud P. Sharing of data from industry-funded registered clinical trials. JAMA. 2016; 315: 2729–2730. pmid:27367768
  41. Miller JE, Korn D, Ross JS. Clinical trial registration, reporting, publication and FDAAA compliance: A cross-sectional analysis and ranking of new drugs approved by the FDA in 2012. BMJ Open. 2015; 5: e009758. pmid:26563214
  42. Cohen D. Dabigatran: How the drug company withheld important analyses. BMJ. 2014; 349: g4670. pmid:25055829
  43. Maund E, Tendal B, Hrobjartsson A, Jorgensen KJ, Lundh A, Schroll J, et al. Benefits and harms in clinical trials of duloxetine for treatment of major depressive disorder: Comparison of clinical study reports, trial registries, and publications. BMJ. 2014; 348: g3510. pmid:24899650

Source: https://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0176210
