Improving evaluation utility

Evaluation utility and utilization

In this article we consider how to improve evaluation utility (usefulness) and utilization (use). While evaluators might disagree on how evaluation is defined, there is agreement on the importance of utility. For this reason, there has been considerable research on the use of evaluation findings.(2)

“The role of evaluation is to provide answers to questions about a program that will be useful and will actually be used.”

Rossi, Lipsey and Freeman (2004) (1)

In what ways can evaluation findings be used?

Evaluation is intended to be useful. Carol Weiss(3) describes four ways evaluation findings can be used in the decision-making process: as a warning that something is wrong; as guidance for improving a program; as a new way of looking at a problem; and as a means of mobilizing support for a program.

Do evaluations have utility?

Although some evaluators are pessimistic about the extent of evaluation utilization,(4) studies of evaluations conducted in the US public sector have found that many are influential in bringing about program changes.(5,6)

Types of evaluation utilization

Three broad types of utilization are documented in the evaluation literature: direct or instrumental, where findings directly inform decisions about the program; conceptual, where findings shape how users think about the program or the problem it addresses; and persuasive, where findings are used to support or challenge a position.(7,8,9,10)

What affects utilization?

Many evaluators have discussed the factors that limit the use of evaluation findings. The key factors that affect evaluation utilization are:(11,12)

  • Relevance
  • Communication between evaluators and users
  • Characteristics of the evaluator (commitment to utility)
  • Information processing by users
  • Trustworthiness and plausibility of findings
  • Involvement of potential users in the evaluation
  • Circumstances in which the evaluation is carried out

How can we improve evaluation utility?

From the social research and evaluation literature, we can summarize a number of steps to maximize both the utility and the utilization of evaluations.(13,14,15)

(1) Rossi, P.H., Lipsey, M.W. and Freeman, H.E. (2004) Evaluation: A Systematic Approach (7th ed.). Thousand Oaks, CA: Sage Publications.

(2) Clarke, A. (1999) Evaluation Research: An Introduction to Principles, Methods and Practice. Thousand Oaks, CA: Sage.

(3) Weiss, C.H. (1988) “Evaluation for decisions: Is anybody there? Does anybody care?”, Evaluation Practice, 9(1):5-19.

(4) Clarke, A. (1999) ibid.

(5) Leviton, L.C. and Boruch, R.F. (1983) “Contributions of Evaluations to Educational Programs”, Evaluation Review, 7(5):563-599.

(6) Chelimsky, E. (1991) “On the Social Science Contribution to Governmental Decision-Making”, Science, 254 (October):226-230.

(7) Leviton, L.C. and Hughes, E.F.X. (1981) “Research on the Utilization of Evaluations: A Review and Synthesis”, Evaluation Review, 5(4):525-548.

(8) Rich, R.F. (1977) “Uses of Social Science Information by Federal Bureaucrats”. In C.H. Weiss (ed.) Using Social Research for Public Policy Making (pp. 199-211). Lexington, MA: D.C. Heath.

(9) Weiss, C.H. (1988) ibid.

(10) Rossi, P.H., Lipsey, M.W. and Freeman, H.E. (2004) ibid.

(11) Leviton, L.C. and Hughes, E.F.X. (1981) “Research on the Utilization of Evaluations: A Review and Synthesis”, Evaluation Review, 5(4):525-548.

(12) Alkin, M. (1985) A Guide for Evaluation Decision Makers. Beverly Hills, CA: Sage.

(13) Rossi, P.H., Lipsey, M.W. and Freeman, H.E. (2004) ibid.

(14) Clarke, A. (1999) ibid.

(15) Alkin, M. (1985) ibid.

Kim Morral

Kim Morral is a freelance researcher, Credentialed Evaluator (CE), and owner of Qualitas Research Inc. Qualitas Research provides evaluation services to organizations across Canada.

Email: kim@qualitasresearch.ca
