In this study, Testing Reliability of Using Summaries to Inform Treatment decisions (TRUST IT), we aim to provide empirical evidence on the extent to which highly accessed EBM textbooks provide evidence-based recommendations that meet current standards for trustworthy guidelines (Institute of Medicine report, 2011). We plan to conduct the study in 2013, pending funding for a new Ph.D. student in MAGIC (application to Helse Sør-Øst Regional Hospital Trust, 2012).

In our MAGIC research program we have identified online EBM textbook-like resources (hereafter Summaries), such as UpToDate, DynaMed and BMJ Best Practice, as key information resources for clinicians, and we aim to facilitate the use of trustworthy guideline content (outputs from MAGIC) in these Summaries.

Advocates of evidence-based practice currently recommend such Summaries as the first place to look for answers to clinical questions, referring to the 6S pyramid, which delineates a hierarchy of information resources. Summaries typically draw on a range of guidelines of variable quality that use different systems for grading evidence and recommendations, in addition to developing recommendations "in house". There is limited evidence that Summaries conform to recent standards for trustworthy guidelines. We believe such evidence is of paramount importance, both to inform clinicians who rely on Summaries in their practice and to show developers of Summaries where improvement is needed. The abstract below outlines our approach in this methodological study.

Here is an abstract of the TRUST IT research protocol:


Busy clinicians need reliable, useful and updated answers to their clinical questions at the point of care. A plethora of evidence-based clinical practice summaries (hereafter Summaries) claim to meet these needs, but their methodological rigor remains to be established. New definitions of trustworthy guidelines set high standards for methodological rigor, of equal relevance to developers of Summaries and to guideline organisations.1


We aim to identify the 5 most used Summaries and evaluate their methodological rigor (i.e., the processes of collecting and summarizing the evidence and of moving from evidence to recommendations) for a set of clinical questions.


In this cross-sectional comparative survey we will select 5 Summaries according to predefined criteria and evaluate their methodological rigor (as defined above) against existing definitions of trustworthy guidelines. We will select a set of 100 structured clinical questions in the PICO format, considered highly relevant to clinicians, across topics where we anticipate variance in methodological rigor (e.g., internal medicine, psychiatry, ENT and drug abuse). We will develop criteria and a scoring system to assess the Summaries' methodological rigor in the following domains: identification, citation and availability of research evidence; rating of the quality of evidence; reporting of treatment effects; grading of the strength of recommendations; reporting of conflicts of interest; and updating of content. 5 pairs of clinicians with methodological expertise will screen the selected Summaries for answers to the individual questions and perform duplicate data extraction. Agreement analyses and other quantitative comparisons between Summaries will be performed on the scores within each domain.
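The duplicate data extraction by paired raters lends itself to a chance-corrected agreement statistic. As a minimal sketch, assuming the domain scores are categorical (the 0-2 scale and the ratings below are hypothetical, not from the protocol), Cohen's kappa for one rater pair could be computed as:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters (Cohen's kappa)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's marginal rates.
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical duplicate extractions for one domain, scored 0-2:
a = [2, 1, 0, 2, 2, 1, 0, 1, 2, 2]
b = [2, 1, 1, 2, 2, 1, 0, 0, 2, 2]
print(round(cohens_kappa(a, b), 3))  # → 0.677
```

In practice a weighted kappa may be preferable for ordinal scoring scales, since it credits near-misses; the unweighted form above treats any disagreement as total.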


We have performed extensive piloting of the extraction forms and are ready to conduct this study in 2013, headed by a new Ph.D. student recruited to our research group.