Introduction and definition
Policy makers need and use information stemming from simulations in order to develop more effective policies. As citizens, public administration and other stakeholders are affected by decisions based on these models, the reliability of the applied models is crucial. Model validation can be defined as "substantiation that a computerised model within its domain of applicability possesses a satisfactory range of accuracy consistent with the intended application of the model" (Schlesinger, 1979). A policy model should therefore be developed for a specific purpose (or context) and its validity determined with respect to that purpose (or context). If the purpose of such a model is to answer a variety of questions, the validity of the model needs to be determined with respect to each question. A model is considered valid for a set of experimental conditions if the model's accuracy is within its acceptable range, i.e. the amount of accuracy required for the model's intended purpose. The substantiation that a model is valid is generally considered to be a process and is usually part of the (total) policy model development process (Sargent, 2008). For this purpose, specific and integrated techniques and ICT tools need to be developed for policy modelling.
Model validation is composed of two main phases:
- Conceptual model validation, i.e. determining that theories and assumptions underlying the conceptual model are correct and that the model's representation of the problem entity and the model's structure, logic, and mathematical and causal relationships are "reasonable" for the intended purpose of the model.
- Computerised model verification, i.e. ensuring that the computer programming and implementation of the conceptual model are correct and that the overall behaviour of the model is in line with the available historical data (see the sketch below).
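As a minimal illustration of the second phase, the sketch below compares model output with historical observations. The series, the 10% tolerance and the error measure (mean absolute percentage error) are assumptions chosen for illustration, not part of the definitions cited above.

```python
# Minimal sketch of checking model behaviour against historical data.
# The series, the tolerance and the error measure (MAPE) are illustrative assumptions.

def mape(model_output, historical):
    """Mean absolute percentage error between model output and observations."""
    return sum(abs(m - h) / abs(h) for m, h in zip(model_output, historical)) / len(historical)

historical = [102.0, 98.5, 110.3, 107.1]   # observed system data (hypothetical)
simulated  = [100.4, 99.0, 112.8, 105.6]   # corresponding model output (hypothetical)

TOLERANCE = 0.10  # accept the behaviour if the average relative error stays below 10%
error = mape(simulated, historical)
print(f"MAPE = {error:.3f} -> {'consistent with history' if error <= TOLERANCE else 'inconsistent'}")
```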
Why it matters in governance
Model validation is connected to both modelling and simulation. Beyond the general need for policy assessment and evaluation, several specific issues stemming from model validation are strongly related to governance:
- Reliability of models: policy makers use simulation results to develop effective policies that have an important impact on citizens, public administration and other stakeholders. Model validation is fundamental to guarantee that the output (simulation results) for policy makers is reliable.
- Acceleration of policy modelling process: policy models must be developed in a timely manner and at minimum cost in order to support policy makers efficiently and effectively. Model validation is both costly and time-consuming and should therefore be automated and accelerated.
- Composable and re-usable models: a policy model developer deciding to re-use existing models or compose them stumbles upon the issue of the models' reliability. Model validation can be used to certify this reliability and to create a database of validated models (see the sketch below).
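To make the last point concrete, such a database could record, for each model, the purpose and domain it was validated for and the validation outcome. The sketch below is a hypothetical structure, not an existing tool; all names, fields and thresholds are assumptions.

```python
# Hypothetical structure for a database (registry) of validated models supporting
# re-use and composition; names, fields and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ValidatedModel:
    name: str
    intended_purpose: str          # the purpose the model was validated against
    domain_of_applicability: str   # experimental conditions under which it is valid
    validation_method: str         # e.g. "independent third party", "scoring model"
    overall_score: float           # outcome of the validation process
    passing_score: float = 0.8     # threshold used to declare the model valid

    @property
    def is_valid(self) -> bool:
        return self.overall_score >= self.passing_score

registry: dict[str, ValidatedModel] = {}

def register(model: ValidatedModel) -> None:
    """Admit only validated models to the re-use database."""
    if not model.is_valid:
        raise ValueError(f"{model.name} did not reach the passing score")
    registry[model.name] = model

register(ValidatedModel("pension-reform-abm", "assess pension age scenarios",
                        "national level, 2010-2030", "scoring model", 0.86))
```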
Current Practice and Inspiring cases
In current practice, the most frequently used approach is a decision by the development team based on the results of the various tests and evaluations conducted as part of the model development process. Another approach is to engage users in the validation process. When developing large-scale simulation models, the validation of a model can be carried out by an independent third party. Needless to say, the third party needs to have a thorough understanding of the intended purpose of the simulation model. Finally, a scoring model can be used for testing the model's validity (e.g. see Balci, 1989; Gass, 1983; Gass & Joel, 1987). Scores (or weights) are determined subjectively when conducting various aspects of the validation process and are then combined to determine category scores and an overall score for the simulation model. A simulation model is considered valid if its overall and category scores are greater than some passing score.
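A minimal sketch of the scoring-model idea follows; the categories, aspects, weights and passing scores are invented for illustration and are not taken from Balci (1989) or Gass (1983).

```python
# Illustrative scoring-model check: subjective scores per validation aspect are
# combined into category scores and an overall score, then compared to passing scores.
# Categories, aspects, weights and thresholds are invented for illustration.

scores = {
    "conceptual":  {"theory soundness": 0.90, "assumptions": 0.80},
    "data":        {"input data quality": 0.70, "historical fit": 0.85},
    "operational": {"face validity": 0.90, "sensitivity analysis": 0.75},
}
category_weights = {"conceptual": 0.4, "data": 0.3, "operational": 0.3}

CATEGORY_PASSING = 0.70
OVERALL_PASSING = 0.75

category_scores = {cat: sum(a.values()) / len(a) for cat, a in scores.items()}
overall_score = sum(category_weights[cat] * s for cat, s in category_scores.items())

valid = overall_score >= OVERALL_PASSING and all(
    s >= CATEGORY_PASSING for s in category_scores.values()
)
print(category_scores, round(overall_score, 3), "valid" if valid else "not valid")
```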
Available Tools
A review of the available tools is ongoing.
Key challenges and gaps
Typically, all of the above-mentioned approaches are applied after the simulation model has been developed. Performing a complete validation effort after the simulation model has been finalised requires both time and money. Conducting model validation concurrently with the development of the simulation model, by contrast, enables the model development team to receive feedback earlier, at each stage of model development. Therefore, ICT tools for speeding up, automating and integrating the model validation process into the policy model development process are necessary to guarantee the validity of models with an effective use of resources.
Current research
In current research, a large number of subjective and objective validation techniques are used for verifying and validating the submodels and the overall model. Robert G. Sargent of Syracuse University provided a list of the relevant ones in 2010: Animation; Comparison to Other Models; Degenerate Tests; Event Validity; Extreme Condition Tests; Face Validity; Historical Data Validation; Historical Methods; Internal Validity; Multistage Validation; Operational Graphics; Parameter Variability / Sensitivity Analysis; Predictive Validation; Traces; and Turing Tests. Furthermore, he described a new statistical procedure for validating simulation and analytic stochastic models using hypothesis testing when the amount of model accuracy is specified. This procedure provides for the model to be accepted if the differences between the system and the model outputs are within the specified ranges of accuracy. The system must be observable to allow data to be collected for validation.
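The sketch below illustrates the flavour of such a procedure rather than Sargent's exact test: a confidence interval for the mean difference between paired system and model outputs is compared against a specified accuracy range. The data, the accuracy range and the confidence level are hypothetical.

```python
# Illustrative accuracy-range check in the spirit of the procedure described above
# (not Sargent's exact test): the model is accepted if a confidence interval for the
# mean difference between paired system and model outputs lies within the specified
# range of accuracy. Data, range and confidence level are hypothetical.
import math
import statistics
from scipy import stats

system_output = [12.1, 11.8, 12.6, 12.0, 11.9, 12.4]   # observed system data
model_output  = [11.9, 12.0, 12.3, 12.2, 11.7, 12.5]   # paired model outputs

ACCURACY_RANGE = (-0.5, 0.5)   # specified acceptable difference, in output units
CONFIDENCE = 0.95

diffs = [s - m for s, m in zip(system_output, model_output)]
mean_diff = statistics.mean(diffs)
sem = statistics.stdev(diffs) / math.sqrt(len(diffs))
t_crit = stats.t.ppf((1 + CONFIDENCE) / 2, df=len(diffs) - 1)
ci = (mean_diff - t_crit * sem, mean_diff + t_crit * sem)

accepted = ACCURACY_RANGE[0] <= ci[0] and ci[1] <= ACCURACY_RANGE[1]
print(f"mean difference = {mean_diff:.3f}, CI = ({ci[0]:.3f}, {ci[1]:.3f}), "
      f"model {'accepted' if accepted else 'rejected'}")
```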
Future research
Future research should explore the following issues:
- In order to speed up and reduce the cost of a model validation process, user-friendly and collaborative statistical software should be developed, possibly combined with expert systems and artificial intelligence.
- Due to the big gap between theory and practice, a considerable opportunity exists for the study and application of rigorous verification and validation techniques. In current practice, the comparison of model and system performance measures is typically carried out in an informal manner.
- Complex simulation models are usually either not validated at all or are only subjectively validated; for example, animated output is eyeballed for a short while. Therefore, complexity issues in model validation may be better addressed through the development of more suitable methodologies and tools.
- Model validation is not a discrete step in the simulation process. It needs to be applied continuously, from the formulation of the problem to the implementation of the study findings, as a completely validated and verified model does not exist. The validation and verification process of a model is never completed.
- As model developers are inevitably biased and may concentrate on the positive features of a given model, the third-party approach (a board of experts) seems to be a better solution in model validation.
- Considering the range that simulation studies cover (from small models to very large-scale simulation models), further research is needed to determine, with respect to the size and type of the simulation study,
- which model validation approach should be used,
- how should model validation be managed,
- what type of support system software for model validation is needed.
- Validating large-scale simulations that combine different simulation (sub-)models and use different types of computer hardware, such as is currently being done in HLA (High Level Architecture). A number of these VV&A issues need research, e.g. how one verifies that the simulation clocks and event (message) times (timestamps) have the same representation (floating point, word size, etc.) and how one validates that events having time ties are handled properly (see the sketch below).
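As an illustration of the time-tie issue (not of the HLA standard itself), the sketch below shows one common way to make the handling of events with equal timestamps deterministic: breaking ties by a secondary key. The queue structure, priorities and event names are assumptions for illustration.

```python
# Illustrative handling of event time ties (not the HLA standard itself): events with
# equal timestamps are ordered deterministically by (timestamp, priority, sequence),
# so coupled sub-models process tied events in the same order on every run.
import heapq
from itertools import count

_sequence = count()   # monotonically increasing tie-breaker
event_queue = []      # heap of (timestamp, priority, sequence, event)

def schedule(timestamp: float, event: str, priority: int = 0) -> None:
    heapq.heappush(event_queue, (timestamp, priority, next(_sequence), event))

schedule(10.0, "message from sub-model A")
schedule(10.0, "message from sub-model B")   # time tie with the event above
schedule(9.5, "local state update")

while event_queue:
    timestamp, _, _, event = heapq.heappop(event_queue)
    print(timestamp, event)
```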