Evaluation of STI policies over five decades

German and European evaluation practice through the decades

Since the early days of science, technology and innovation (STI) policies, the objectives, concepts and methods of evaluation in this field have evolved significantly. Developments within evaluation research and practice should always be seen in the context of research and innovation policy as a whole, since evaluation concepts and instruments tend to mirror the respective "fashions" and waves in STI policies.

Milestones include the establishment of competence centers at the end of the 1990s, the new mission orientation, and the introduction of New Public Management approaches in public research organizations.

Professionalization from the 1960s onward

The article in the anthology focuses on evaluations of funding measures but also addresses questions concerning the evaluation of research institutions. It concentrates mainly on German and European evaluation practice, while also drawing on the Anglo-American literature on evaluation theory.

In the 1960s, 1970s, and 1980s, the evaluation of public research and innovation activities and policies took shape as a field in its own right, encompassing a growing range of topics and gradually becoming more professionalized.

In the 1990s, the key approaches and work of the previous decades were developed further, but there were also important advances, such as the evaluation of multi-actor, multi-measure programs and the strengthening of national and international evaluation communities.

More evaluations and increased self-reflection

In the decade from 2000 to 2010, a growing number of evaluations sought to capture system-level dynamics and to explain the role of policy and policy portfolios in system development, combined with increased efforts to support policy making through formative approaches.

Conversely, the policy-making system continued to demand summative evaluation: a quantitative figure to legitimize STI policy spending. These divergent demands on STI evaluation led to increased self-reflection in the evaluation community, with major peer group events, reviews, and analyses.

The years from 2010 to the present have been dominated by discussions of how to measure the impact of research, with a focus on its non-academic dimensions. The evaluation community is now increasingly exploring the possibilities offered by Big Data analytics.
