WHY IMPACT ANALYSIS SHOULD NOT BE USED FOR RESEARCH EVALUATION AND WHAT THE ALTERNATIVES ARE

Abstract

Many impact studies relate changes in impact indicators to research investments. This is valid only under an implicit assumption: that the link between indicators and investments dominates all other relationships that influence the impact indicators. That assumption holds only for minor improvements along stable technological paths; in most cases, other factors, such as policies and markets, influence adoption and, consequently, impact. The problem is compounded because impacts often appear only after many years and usually cannot be measured. Since many factors influence adoption, research impacts should be analyzed as part of a complex adaptive system that depends on external forces (e.g., markets), on the direct and indirect interactions among agents (e.g., researchers, input suppliers and farmers), and on the technology’s nature and evolution. The complexity framework has broad consequences for agricultural and research policies. Since impacts result from the actions of the whole network, they cannot, in general, be attributed to individual agents. For networks, the relevant parameters to study are the rules for generating, collecting and sharing information, financing procedures, intellectual property rights regulations, and the availability of human and financial resources. For individual agents, the relevant indicators are their patterns of participation in particular networks, the benefits and costs of participation, evaluation criteria, financial arrangements and institutional cultures.
