6. Output Analysis and Knowledge Synthesis

The inputs driving a simulation are often random variables. Because of this randomness, the output of a simulation is also random, and statistical techniques must therefore be used to analyse the results.


Introduction and definition

The inputs driving a simulation are often random variables. Because of this randomness, the output of a simulation is also random, so statistical techniques must be used to analyse the results. In particular, the output processes are often non-stationary and autocorrelated, so classical statistical techniques based on independent, identically distributed observations are not directly applicable. In addition, by observing simulation output it is possible to infer the general structure of a system, ultimately gaining insights into that system and synthesising knowledge about it.

There is also the possibility of reviewing the initial assumptions by observing the outcome and comparing it to the expected response of the system, i.e. feeding the comparison back into the initial model. Finally, one of the most important uses of simulation output analysis is the comparison of competing systems or alternative system configurations.

Visualisation tools are essential for the correct execution of this iterative step. The present research challenge deals with the output analysis of a policy model and, at the same time, with feedback analysis, in order to incrementally increase and synthesise knowledge of the system.

Why it matters in governance

Output analysis is a specific aspect of simulation. Given the general need for policy assessment and evaluation, some specific issues stemming from output analysis are strongly related to governance:
  1. Acceleration of the policy assessment process: automated output analysis tools would help policy makers analyse the impacts of a policy efficiently and effectively, even when large volumes of simulation data must be taken into account.
  2. Citizen engagement: user-friendly automated tools for output analysis can be offered to citizens in order to share simulation results and better engage them in the policy-making process.
Current practice and inspiring cases

In current practice, a large amount of time and financial resources is spent on model development and programming, but little effort is allocated to analysing the simulation output data in an appropriate manner. A very common way of operating is to make a single simulation run of somewhat arbitrary length and then treat the resulting estimates as the "true" characteristics of the model. Since random samples from probability distributions are typically used to drive a simulation model through time, these estimates are realisations of random variables that may have large variances. As a result, in a particular simulation run these estimates could differ greatly from the corresponding true answers for the model. The net effect is a significant probability of making erroneous inferences about the system under study.
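The danger of a single "arbitrary" run can be illustrated with a minimal sketch. The queueing model below (an M/M/1 queue simulated via the Lindley recursion) is a hypothetical example, not drawn from the roadmap itself: five independent runs of the same model, differing only in their random seed, produce five different estimates of the mean waiting time.

```python
import random

def mm1_mean_wait(lam, mu, n_customers, seed):
    """One terminating run of an M/M/1 queue via the Lindley recursion;
    returns the average waiting time over the first n_customers."""
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    for _ in range(n_customers):
        total += wait
        # Next customer's wait = max(0, current wait + service - interarrival).
        wait = max(0.0, wait + rng.expovariate(mu) - rng.expovariate(lam))
    return total / n_customers

# Five single runs of the same model give five different "answers";
# treating any one of them as the true characteristic of the model is risky.
estimates = [mm1_mean_wait(lam=0.5, mu=1.0, n_customers=2000, seed=s)
             for s in range(5)]
print(estimates)
```

Each estimate is itself a random variable; only by treating the runs as a statistical sample (and quantifying their spread) can valid inferences be drawn.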

Historically, there are several reasons why output data analysis was not conducted in an appropriate manner. First, users often have the unfortunate impression that simulation is just an exercise in computer programming. Consequently, many simulation studies begin with heuristic model building and computer coding, and end with a single run of the program to produce "the answers". In fact, a simulation is a computer-based statistical sampling experiment. Thus, if the results of a simulation study are to have any meaning, appropriate statistical techniques must be used to design and analyse the simulation experiments, and ICT tools must be developed to make the process more effective and efficient. In addition, some important issues of output analysis are not strictly statistical. In particular, there is an evident gap in the literature regarding the analysis and integration of feedback in the modelling and simulation process. At present, stakeholders are involved in a post-processing phase in order to analyse the results (more often only an elaboration of them) and understand something about the policy. Sometimes they are able to give feedback on the difference between their expectations and the result, but the process is not structured and effective tools are lacking.

The development of tools for analysing and integrating feedback should be explored in order to enlarge the number of stakeholders involved and, at the same time, to allow efficient and effective modification at each phase of the process, incrementally increasing knowledge of the model and, consequently, of the given policy.

Available tools

A review of the available tools is ongoing.

Key challenges and gaps

A fundamental issue for statistical analysis is that the output processes of virtually all simulations are non-stationary (the distributions of successive observations change over time) and autocorrelated (the observations in the process are correlated with each other). Thus, classical statistical techniques based on independent, identically distributed observations are not directly applicable. At present, there are still several output-analysis problems for which there is no commonly accepted solution, and the solutions that are available are often too complicated to apply. Another impediment to obtaining accurate estimates of a model's true parameters or characteristics is the cost of the computer time needed to collect the necessary amount of simulation output data. Indeed, there are situations where an appropriate statistical procedure is available, but the cost of collecting the amount of data the procedure dictates is prohibitive.
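One standard way around autocorrelation is the method of non-overlapping batch means: a long run is divided into batches, and the batch averages, which are approximately independent, are analysed with classical formulas. The sketch below is illustrative only; the AR(1) toy process and the `batch_means_ci` helper are assumptions of this example, not techniques prescribed by the roadmap.

```python
import math
import random
import statistics

def batch_means_ci(series, n_batches=10, z=1.96):
    """Approximate confidence interval for the mean of an autocorrelated
    output series using non-overlapping batch means: the batch averages are
    treated as roughly independent, so a classical interval applies to them."""
    b = len(series) // n_batches
    means = [statistics.mean(series[i * b:(i + 1) * b])
             for i in range(n_batches)]
    centre = statistics.mean(means)
    half = z * statistics.stdev(means) / math.sqrt(n_batches)
    return centre - half, centre + half

# Autocorrelated toy output: an AR(1) process with long-run mean 0.
rng = random.Random(1)
x, series = 0.0, []
for _ in range(10_000):
    x = 0.9 * x + rng.gauss(0, 1)
    series.append(x)

lo, hi = batch_means_ci(series)
```

Applying the naive i.i.d. formula to `series` directly would understate the variance badly, because neighbouring observations are strongly correlated; batching absorbs most of that correlation into the batch averages.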

Current research

In current research, the main references are Law (1983), Nakayama (2002), Alexopoulos & Kim (2002), Goldsman & Tokol (2000), Kelton (1997), Alexopoulos & Seila (1998), Goldsman & Nelson (1998) and Law (2006).

For output analysis, there are two types of simulations:

  1. Finite-horizon simulations. The simulation starts at a specific moment and runs until a terminating event occurs. The output process is not expected to achieve steady-state behaviour, and any parameter estimated from the output will be transient, in the sense that its value depends on the initial conditions (e.g. a simulation of a vehicle storage and distribution facility over one week).
  2. Steady-state simulations. The purpose of a steady-state simulation is to study the long-run behaviour of the system of interest. A performance measure of a system is called a steady-state parameter if it is a characteristic of the equilibrium distribution of an output stochastic process (e.g. a simulation of a continuously operating communication system where the objective is to compute the mean delay of a data packet).
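For the finite-horizon case, the standard analysis is by independent replications: the terminating model is run several times from the same initial conditions with independent random streams, and a classical interval is formed across the replication outputs. A minimal sketch follows; the `weekly_demand` model (total demand over a five-day horizon) is a hypothetical illustration, as is the normal-approximation interval in `finite_horizon_ci`.

```python
import math
import random
import statistics

def finite_horizon_ci(model, n_reps, base_seed=0, z=1.96):
    """Finite-horizon analysis by independent replications: each replication
    uses its own random stream, and an approximate confidence interval
    (normal approximation) is formed across the replication outputs."""
    outputs = [model(random.Random(base_seed + r)) for r in range(n_reps)]
    mean = statistics.mean(outputs)
    half = z * statistics.stdev(outputs) / math.sqrt(n_reps)
    return mean, (mean - half, mean + half)

# Hypothetical terminating model: total demand over a five-day horizon.
def weekly_demand(rng):
    return sum(rng.gauss(100, 15) for _ in range(5))

mean, (lo, hi) = finite_horizon_ci(weekly_demand, n_reps=30)
```

Steady-state simulations need extra care before the same machinery applies: an initial warm-up portion of each run is usually deleted so that the retained output reflects the equilibrium distribution rather than the arbitrary initial conditions.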
Future research

Referring to the previously cited works, and in particular to Goldsman (2010), future research should further explore the following issues:
  1. ICT tools for supporting or automating output/feedback analysis
  2. Allowing an incremental understanding of the model (knowledge synthesis)
  3. Adapting Design Of Experiment (DOE) for policy model simulation
  4. Use and integration of more-sophisticated variance estimators
  5. Better ranking and selection techniques.