Work Group III summary

Data bases and principles for model evaluation

The following text is the section 'Summary of discussions in Work Group III' from the proceedings of the workshop "Objectives for Next Generation of Practical Short-Range Atmospheric Dispersion Models".

The workshop was organized by DCAR (Danish Centre for Atmospheric Research).

The proceedings are now out of print, but certain sections of the proceedings are available via the World Wide Web.

Contents of this document

Executive summary

Additional discussion

Summary of discussions in Work Group III:

Data bases and principles for model evaluation

Rapporteurs: G. Graziani, Italy, and E. Sartori, France.

Chairman: W. Klug, Germany

This summary consists of a short executive summary of the workshop's conclusions, followed by an additional section giving more detail on the discussion of experiments.

Executive summary

In order to develop methodologies and verify their adequacy for comparing the performance of a new model against an older one, it is recommended to start with a pilot study. This approach would involve a small number of models and data sets and is considered to be the most realistic in view of the following constraints at the time of the discussion:

- no funds have been allocated specifically for this project,

- no official manpower is available, only the good will of participants,

- the time available for achieving something concrete before the next workshop (August 1993) is short.

In the meantime, all efforts should be undertaken to ensure that for the next phase, when the evaluation will involve many models and many data sets, the necessary funds are available.

At that time it is expected that the data sets and the protocol of evaluation will be generally available.

The rationale behind setting up this evaluation protocol is that the group believes that the old regulatory models in use today no longer represent the state of the art and do not allow satisfactory assessments. This evaluation protocol will be an effective means of showing that more advanced models provide better performance.

For this pilot study, the following steps are considered necessary:

a) Critical review of existing experimental data sets.

b) Recommendation of two or three data sets for the pilot study.

c) Acquisition of these experimental data sets and conversion into a suitable format.

d) Selection of the models for the pilot study (ISC and OML).

e) Definition of a protocol for model evaluation.

f) Running of the models by model owners on

- an extensive data set

- routine data.

g) Collection of results, analyses, and presentation of a summary with recommendations at the next workshop.

Based on experience with the ATMES project, Work Group III also identified the measures of quality control that were considered adequate; these are:

1) Scatter plots

2) Cumulative frequency distributions

3) Box plot

4) Correlation coefficients (log) (Pearson, Spearman)

5) Bias (normalized bias, bias in geometric mean), with t-test

6) Normalized mean square error (NMSE), with F-test

7) Figure of merit in space and time

8) Kolmogorov-Smirnov test.
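As an illustration only (the workshop did not prescribe an implementation), the bias, NMSE, and Pearson correlation measures in the list above can be computed from paired observed and predicted concentrations along the following lines; the sample values are hypothetical:

```python
import math

def bias(obs, pred):
    """Mean bias: average of (predicted - observed)."""
    return sum(p - o for o, p in zip(obs, pred)) / len(obs)

def nmse(obs, pred):
    """Normalized mean square error: mean((o - p)^2) / (mean(o) * mean(p))."""
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    mse = sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)
    return mse / (mo * mp)

def pearson(obs, pred):
    """Pearson correlation coefficient of the paired samples."""
    n = len(obs)
    mo = sum(obs) / n
    mp = sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)

# Hypothetical paired concentrations (arbitrary units), for illustration.
obs = [1.0, 2.0, 4.0, 8.0]
pred = [1.2, 1.8, 5.0, 7.0]

print(bias(obs, pred))
print(nmse(obs, pred))
print(pearson(obs, pred))
```

For the log-scale correlation coefficients mentioned above, the same functions can simply be applied to log-transformed concentrations.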

It was recommended that these quantities should be stratified according to the following criteria:

- stability

- receptor distance from the source

- wind speed

- boundary layer parameters
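Such stratification amounts to grouping the paired results by each criterion before computing the quality measures. A minimal sketch, assuming hypothetical evaluation records tagged with a stability class (the grouping key and sample values are illustrative only):

```python
from collections import defaultdict

# Hypothetical evaluation records: (stability class, observed, predicted).
records = [
    ("stable", 1.0, 1.5),
    ("stable", 2.0, 1.0),
    ("neutral", 3.0, 3.3),
    ("neutral", 4.0, 3.5),
    ("convective", 5.0, 6.0),
]

# Group paired values by the stratification key (here: stability),
# then compute a per-stratum mean bias.
groups = defaultdict(list)
for stability, obs, pred in records:
    groups[stability].append((obs, pred))

for stability, pairs in sorted(groups.items()):
    mean_bias = sum(p - o for o, p in pairs) / len(pairs)
    print(f"{stability}: mean bias = {mean_bias:+.2f}")
```

The same grouping step works for the other criteria (distance bands, wind-speed classes, boundary-layer parameter ranges) by changing the key.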

Additional discussion

In the work group, there were discussions on experimental databases for model evaluation. Some details of these discussions are given in this section.

The total data base of experimental releases available for model comparison should include the following categories:

Class A : short range, no buoyancy, uniform terrain

Class B : same as A, but with buoyancy

Class C : complex terrain.

From class A, about ten different experiments were discussed briefly, but finally only the following were retained:

Name        Range (km)  Height (m)  Stability     Tracer  No. of trials
CONDORS     3-4         100-300     convective    SF6     10
Copenhagen  7           115         conv/neutr    SF6     10
Karlsruhe   10          60/100      different             50
NILU        10                      stable        SF6     6
Siesta      10          ground      stable/neutr  SF6     6

Three experiments with buoyancy were selected, plus a number of operational data sets:

Name                    Range (km)  Height (m)  Comments     Stability
Kincaid                 50          187         SF6, 100 h   all conditions
Bull Run                20                      SF6
Indianapolis            15                      SF6
Power stations network                          operational

The data sets considered for complex terrain or for complex meteorological situations were:

Name            Distance (km)  Description
ASCOT-84        5-10           valley, nighttime and morning
Cinder Cone                    SF6, non-buoyant
Hogback Ridge                  SF6, non-buoyant
Tracy P. Plant                 20 h, buoyant
Transalp        50             PFC, non-buoyant, valley, 6 releases
Oeresund        80             85 m height, land-sea-land, 9 releases
Sophienhoehe    a few          SF6, non-buoyant, 20 experiments
Siesta          6              mountain ridge
Guardo          40             valley, buoyant, 14 releases
Sirhowy                        hills
Sostans                        SO2, NOx, operational, buoyant
Ensted                         SO2, SF6, land-sea breeze


No systematic work was ever done with these experiments to see whether all the possible conditions of terrain and meteorology are covered. There is certainly a need for such an effort, which could indicate whether further experimental releases are necessary to test the European regulatory models for air pollution. Further areas for experiments not covered in the past are those related to:

- the emissions of large particles (10 µm diameter),

- dispersion over water,

- dry and wet deposition.

Comparison with numerical experiments ought to be considered too.

As far as the comprehensiveness of the data is concerned, one should first evaluate the different models on the data sets identified. This will make it possible to determine what percentage of the possible cases is covered. Once this is assessed, it will be possible to say which further experiments are required.

Some of the existing data sets are in good shape and can easily be obtained; others would need a certain effort before being usable. As the quality of several of these data sets is not well known, a first priority task is to evaluate the quality of the data sets.

In what form should the data sets be? Ideally they should all be in an agreed format. This goal, however, is not achievable. The TRANSALP experiments, for instance, have demonstrated that it is impossible to impose a uniform format on different groups, even within a single experiment.

It was agreed that the exchange protocol would consist of providing data in flat ASCII files, including a separate format description (readme file) and tables with headings describing the data and units.
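A sketch of what such an exchange file might look like and how it could be read back; the column names and values below are hypothetical, not part of the agreed protocol:

```python
import io

# Hypothetical flat-ASCII data file in the agreed exchange style:
# a heading line naming the columns (with units), then whitespace-
# separated values. A separate readme would describe the format.
sample = """\
trial  arc_km  conc_ug_m3
1      3.0     12.4
1      7.0     4.1
2      3.0     9.8
"""

def read_flat_ascii(stream):
    """Parse a headed, whitespace-separated ASCII table into dicts."""
    lines = [ln for ln in stream if ln.strip()]
    header = lines[0].split()
    rows = []
    for ln in lines[1:]:
        rows.append(dict(zip(header, ln.split())))
    return rows

rows = read_flat_ascii(io.StringIO(sample))
print(rows[0])  # {'trial': '1', 'arc_km': '3.0', 'conc_ug_m3': '12.4'}
```

Keeping the heading line machine-readable means any group can load the file with a few lines of code, which is the point of agreeing on flat ASCII rather than a binary format.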

Quality control of improved regulatory atmospheric dispersion models can be achieved when it is possible to answer the following questions:

- how well the models perform with well-defined meteorology, i.e. using as input all the available information obtainable from the above experimental data sets;

- how well the models perform with degraded input, as compared to research-quality input.

In real situations, in fact, one cannot expect all this information to be available for every industrial site; some of the boundary-layer parameters will be obtainable only from the closest meteorological station.

As the average spacing between these stations in European countries is of the order of 100 km, models should be expected to produce reasonable results with approximate meteorological input.

An activity which could be pursued in the future is to compare the performance of mesoscale meteorological models, which could then assist the dispersion models in correctly interpolating the synoptic station data, even in complex situations.


