
© Springer-Verlag Berlin Heidelberg 2001

Decision support system developers and users agree on the need for rigorous evaluations of performance and impact. Evaluating simple reminder systems is relatively easy because there is usually a gold standard of decision quality. However, when a system generates complex output (such as a critique or graphical report), it is much less obvious how to evaluate it. We discuss some generic problems and how one might resolve them, using as a case study Design-a-Trial, a DSS to help clinicians write a lengthy trial protocol.


Conference paper

Pages 453–456