SIPER (Science and Innovation Policy Evaluation Repository): Developing models for automated coding of evaluation studies

Aim

SIPER, the Science and Innovation Policy Evaluation Repository, currently comprises approximately 900 evaluation studies of science and innovation policy. Its aim is to categorize evaluation reports according to the major evaluation dimensions and funding features. So far, researchers have carried out this categorization manually.

The aim of the ISDEC-SIPER project is to test the possibilities and limitations of automated procedures for the content analysis of evaluation reports. The challenge lies in the specific nature of this type of document: evaluation reports differ strongly in structure, language, and content (especially in the evaluation dimensions they address), and there is no common reporting structure.
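One conceivable baseline for such automated content analysis is a supervised text classifier trained on the manually coded reports. The sketch below (Python, scikit-learn) is illustrative only: the coded dimension, label values, and report texts are hypothetical placeholders, not SIPER's actual schema or method.

    # Illustrative baseline: TF-IDF features plus a linear classifier,
    # trained on manually coded reports. Labels and texts are hypothetical.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    reports = [  # placeholder full texts of already-coded evaluation reports
        "This ex-post evaluation assesses the impact of the grant scheme ...",
        "An interim review of the innovation voucher programme ...",
    ]
    labels = ["ex-post", "interim"]  # hypothetical coding of evaluation timing

    model = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(reports, labels)

    # Predict the timing dimension for an unseen report.
    print(model.predict(["Final impact assessment of the R&D tax credit ..."]))

In practice, one model of this kind would be trained per coded dimension, with the roughly 900 manually coded studies serving as training and test data.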

Research questions

The following questions will guide the research:

  • Can evaluation reports be classified by machine learning procedures with regard to
    • a) the characteristics of the evaluated intervention (objectives; target groups; funding instruments) and
    • b) the characteristics of the evaluation (purpose and timing of the evaluation, evaluation criteria addressed, data collection and data analysis methods used)? (A sketch of this as a multi-label task follows the list.)
  • To what extent is it possible to assess the quality of the evaluation study (with regard to evaluation standards)?
  • To what extent is it possible to assess the performance of the evaluated policy interventions?
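The first question is naturally a multi-label problem, since a single report usually addresses several dimensions and criteria at once. Below is a minimal sketch of one possible setup using scikit-learn's one-vs-rest wrapper; the criteria names and report texts are invented for illustration and do not reflect SIPER's actual coding scheme.

    # Multi-label sketch: each report may carry several evaluation criteria.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import MultiLabelBinarizer

    reports = [  # hypothetical report texts
        "The study surveys beneficiaries and estimates additionality ...",
        "Interviews and a document review examine programme relevance ...",
        "Econometric analysis of funding effects on patenting and turnover ...",
    ]
    criteria = [  # hypothetical criteria coded for each report
        {"effectiveness", "additionality"},
        {"relevance"},
        {"effectiveness"},
    ]

    mlb = MultiLabelBinarizer()
    y = mlb.fit_transform(criteria)  # binary indicator matrix, one column per criterion

    model = Pipeline([
        ("tfidf", TfidfVectorizer(min_df=1)),
        ("clf", OneVsRestClassifier(LogisticRegression(max_iter=1000))),
    ])
    model.fit(reports, y)

    # Predict the set of criteria addressed by an unseen report.
    pred = model.predict(["A survey-based study of programme effectiveness ..."])
    print(mlb.inverse_transform(pred))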

[Figure: Workflow – Analysis of SIPER Evaluation Reports]