1. "Pollution Modelling by Using Monte Carlo Methods Based on Ray Tracing Principles"

Vassil Alexandrov

Computer Science Department
University of Liverpool
Liverpool, United Kingdom

2. "Modeling of Global and Regional Transport/Transformation of Air Pollutants"

Artash E. Aloyan

Institute for Numerical Mathematics
Russian Academy of Sciences
Gubkin str., 8
Moscow, 117333, Russia

Two problems are considered here. (1) Numerical modeling of gaseous pollutants and aerosols in the atmosphere, with allowance for photochemical transformations in the gas and aqueous phases as well as for aerosol formation processes due to condensation and coagulation. These mechanisms allow one to estimate the secondary pollution levels of the atmosphere along with disperse-phase formation (caused both by fluctuations and by interaction with atmospheric nuclei). Here the problem is solved in combination with a mesoscale hydrodynamics model. The results of numerical experiments are provided for specific problems. (2) Numerical modeling of the transport of persistent organic pollutants (POPs) in the Northern Hemisphere. A representative POP, lindane, was used in the calculations because European-source emission data are available for it. The transport of lindane in the atmosphere and soil, and its accumulation in water, is considered. A series of parametrization mechanisms for lindane exchange between the atmosphere and soil is used in the model. Numerical calculations were performed to obtain the spatial and temporal variability of lindane in the Northern Hemisphere over a one-year period. The amount of lindane leaving the EMEP modeling domain in the vertical and horizontal directions is estimated, along with the contribution of lindane re-emission from soil as a secondary pollution source for the atmosphere.

3. "Studying High Ozone Episodes and Their Effects on Human Health and Vegetation"

Annemarie Bastrup-Birk

National Environmental Research Institute
Department for Atmospheric Environment
Frederiksborgvej 399, P. O. Box 358
DK-4000 Roskilde, Denmark

4. "Mixed-layer Height in Coastal Areas - Experimental Results and Modelling"

Ekaterina Batchvarova

National Institute of Meteorology and Hydrology
66 Tzarigradsko chaussee
Sofia 1784, Bulgaria

5. "Parallel Computation of Air Pollution Models"

Claus Bendtsen

Danish Computing Centre for Research and Education
DTU, Bldg. 304
DK-2800 Lyngby, Denmark

Pollution models in which all physical and chemical processes are adequately described lead to huge computational tasks: in a typical simulation one has to perform several hundred runs, each consisting of several thousand time-steps for the numerical solution of large systems of ordinary differential equations containing up to a million equations. This presentation will show how parallel processing can easily be introduced into already existing codes with a minimum of work while still achieving adequate performance.

6. "Real Time Predictions of Transport and Dispersion from a Nuclear Accident"

Joergen Brandt

National Environmental Research Institute
Department for Atmospheric Environment
Frederiksborgvej 399, P. O. Box 358
DK-4000 Roskilde, Denmark

7. "Experiences with Parallel Programming for Scientific Applications"

Marian Bubak

Institute of Computer Science, AGH
and Academic Computer Center CYFRONET
al. Mickiewicza 30
30-059 Krakow, Poland

8. "Data-assimilation and HPCN-examples of the LOTOS-model"

Peter Builtjes

Department of Environmental Quality
P.O.Box 342
7300 AH Apeldoorn, The Netherlands

9. "Computational Challenges of Modeling the Interactions between Aerosol and Gas Phase Processes in Large Scale Air Pollution Models"

Gregory R. Carmichael

Center for Global and Regional Environmental Research
and Department of Chemical and Biochemical Engineering
The University of Iowa
Iowa City, Iowa 52242-1000, USA

10. "Parallel Numerical Simulation of Air Pollution in Southern Italy"

Guido Barone, Pasqua D'Ambra, Daniela di Serafino, Giulio Giunta,
Almerico Murli and Angelo Riccio
CPS (Centro di Ricerche per il Calcolo Parallelo
e i Supercalcolatori) - CNR
Via Cintia, Monte S. Angelo
80126 Napoli, Italy

In this talk we shall present a prototype of a parallel code for the numerical simulation of the transport and photochemical transformations of air pollutants in some areas of Southern Italy. The air quality model is based on an Eulerian approach and the numerical solution is performed using a time-splitting technique that decouples advection and horizontal diffusion from vertical diffusion and chemical reactions. The computational environment consists of MIMD distributed-memory machines and the parallelization is based on a domain decomposition technique.
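
The time-splitting idea described above can be sketched as follows. This is a minimal one-dimensional illustration, not the CPS code: the upwind transport operator and the first-order decay standing in for the chemistry are assumptions made only to show how the substeps are decoupled.

```python
import numpy as np

def advect_diffuse_h(c, dt, u=1.0, dx=1.0, kh=0.1):
    """Substep 1: upwind advection plus horizontal diffusion on a periodic grid."""
    adv = -u * (c - np.roll(c, 1)) / dx
    dif = kh * (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
    return c + dt * (adv + dif)

def chem_diffuse_v(c, dt, k_loss=0.05):
    """Substep 2: stand-in for vertical diffusion + chemistry (first-order decay)."""
    return c * np.exp(-k_loss * dt)

def split_step(c, dt):
    # Lie splitting: advance transport, then chemistry, each over the full dt
    c = advect_diffuse_h(c, dt)
    c = chem_diffuse_v(c, dt)
    return c

c = np.zeros(50)
c[10] = 1.0                      # initial puff of pollutant
for _ in range(100):
    c = split_step(c, dt=0.5)    # CFL = u*dt/dx = 0.5, stable
```

In the real model each substep uses a numerical method suited to its stiffness, which is the point of the decoupling.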

11. "Large-scale Problems of Transportation of Ecologically Dangerous Materials"

Vladimir F. Demyanov, Stanislav K. Myshkov and V.V.Chashnikova

Applied Mathematics Dept.
St. Petersburg State University
Staryi Peterhof
St. Petersburg, 198904, Russia

Let us consider a large system consisting of producers and consumers of ecologically dangerous products connected by a transportation network. A finite set of products circulates in the system, and the danger of the system is caused by the storage/production as well as the transportation of the products in the communication network. In the present report we do not specify the types of endangering factors (inflammability, toxicity, explosiveness). The efficiency of the functioning of the system is characterized by a large and contradictory system of criteria describing, on the one hand, production, storage and transportation costs and, on the other hand, the damage (ecological, economical, etc.) caused to the environment by the functioning of the system and by possible accidents.

The mathematical statement of the problem is as follows. Assume that we have production sites (points) where dangerous products are produced and/or stored.

Also given is a matrix defining the overall amount of products available/produced at all production sites. The cost of producing each product is known too. The cost matrix may include the investment cost, the production and storage cost, etc.

The production/storage of dangerous products at each point inevitably causes ecological losses, with the possibility of an accident and damage due to the accident.

In addition, there are points where the dangerous product is used. In the regular case such a point is the terminal consumer of the product. However, if it is a point of transshipment where the type of transportation is changed, then in the mathematical model an additional production point is added to the consumer one, along with the corresponding parameters of the functioning of the system. Every consumer is characterized by the consumption/usage level of the proper product; for the system as a whole this is a known matrix. The usage of dangerous products at a consumption point inevitably causes ecological losses, with the possibility of an accident and damage due to the accident.

There exists an alternative transportation network which links the producers and the consumers of the products. It is assumed that different types of transportation are available (railways, highways, airplanes, pipelines, etc.), and their characteristics are also taken into account. Mathematically, the transportation network may be described by different factors:
- transportation costs, described by matrices of the cost of delivering a unit of the product from a point of production to a point of consumption by the given type of transportation;
- inevitable ecological losses, described by the matrix of losses incurred when the product is delivered from the producer to the consumer by the considered type of transportation;
- possible damage caused by an accident, characterized by the probability matrix of an accident and by the damage (cost) matrix in the case of an accident during the transportation of the product by the given type of transportation.

We define a set of matrices which describe the volume of transportation of products over the transportation network. These matrices must satisfy some inequalities due to the constraints on the volume of production of dangerous products, the consumption level and the volume of transportation.

The price of the products dispatched from each consumer and the overall damage caused by the product remaining at the production point are local criteria of the functioning of the system. An analogous approach is used to describe the parameters of functioning for consumers and for the transportation network. Note in passing that the functions describing the constraints, as well as the local criterion functionals of the system, are nonsmooth.

The problem stated is a multicriteria nonsmooth optimization problem. It is characterized by a large volume of computation (it is large-scale), because many local optimization problems must be solved. An interactive system, "ASPID", and corresponding software are available at St. Petersburg State University for the solution of some of the above problems.

12. "Statistical Sensitivity Tests of Air Pollution Levels to Variations of Some Chemical Rate Constants"

Ivan Dimov and Zahari Zlatev

Central Laboratory for Parallel Processing
Bulgarian Academy of Sciences
Acad. G. Bonchev Str. 25 A
Sofia 1113, Bulgaria

It is well known that statistical simulation (the Monte Carlo method) is a powerful tool in the sensitivity analysis of large-scale systems. In this work we study and apply this statistical approach to an air-pollution transport problem. It is important to study pollution levels over large space domains (e.g. domains containing a given continent). Big mathematical models are indispensable tools in the attempts to determine the levels of concentrations and depositions of harmful air pollutants. Such models are often described by systems of partial differential equations (the number of equations being equal to the number of chemical species studied by the model). Discretization and splitting techniques lead to the solution, over thousands of time-steps, of several very large systems of ordinary differential equations. It is not uncommon for each of these systems to contain several million equations. This means that the computational tasks arising in the treatment of large-scale air pollution models are enormous, and great difficulties arise even when modern high-speed computers are used. Therefore, it is highly desirable to simplify the model as much as possible. A careful sensitivity analysis is needed in order to decide where and how simplifications can be made.

Here the first step of a procedure related to the sensitivity of the concentrations and depositions to variations of certain chemical rate constants is described. A simple box model has been used in these tests. After that, a big mathematical model for studying air pollution levels in Europe, the Danish Eulerian Model, has been used to study quantitatively the effect of varying the selected chemical rate constant on the concentrations of the two chemical species most involved, in different parts of Europe. A Monte Carlo technique has been used in these tests. The results show that the variations of the concentrations in different parts of Europe are different, although the variations of the rate constant were the same at all grid-points in the space domain.
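
The box-model step of such a sensitivity test can be sketched as follows. This is a hedged toy illustration, not the actual chemistry: a single reaction c1 -> c2 with rate constant k, where k is perturbed randomly around a nominal value and the spread of the final concentrations measures the sensitivity; the reaction, the nominal rate and the +/-10% perturbation range are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def box_model(k, c1_0=1.0, dt=0.01, steps=1000):
    """Integrate the toy reaction c1 -> c2 with explicit Euler steps."""
    c1, c2 = c1_0, 0.0
    for _ in range(steps):
        r = k * c1 * dt       # mass converted during this time-step
        c1, c2 = c1 - r, c2 + r
    return c1, c2

k_nominal = 0.3
# Monte Carlo sampling: perturb k uniformly by up to +/-10%
samples = [box_model(k_nominal * (1 + rng.uniform(-0.1, 0.1)))
           for _ in range(200)]
c1_final = np.array([s[0] for s in samples])
print(c1_final.mean(), c1_final.std())   # spread indicates sensitivity to k
```

In the full procedure the same sampling is applied to the rate constants of the Danish Eulerian Model's chemistry at every grid-point.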

Key words: air pollution levels, mathematical models, partial differential equations, ordinary differential equations, sensitivity tests, statistical simulations

13. "Enabling Technologies for High Performance Computing"

Jack Dongarra

University of Tennessee and
Oak Ridge National Laboratory
104 Ayres Hall
Knoxville TN, 37996, USA

14. "Long Range Dispersion Modeling in the Aftermath of the Chernobyl Accident: Conclusions of ETEX"

Han van Dop

Institute for Marine and Atmospheric Research Utrecht
P.O. Box 80.005, 3508 TA Utrecht, The Netherlands
Buijs Ballot Laboratorium
Princetonplein 5
Utrecht, De Uithof, The Netherlands

15. "Parallel 4D-variational Data Assimilation for a Eulerian Chemistry Transport Model"

Hendrik Elbern

Institute for Geophysics and Meteorology (EURAD)
University of Cologne
Albertus-Magnus-Platz
50923 Koeln, Germany

16. "Large Scale Air Pollution Control Problems"

Juri Ermoliev

International Institute for Applied Systems Analysis
A-2361 Laxenburg, Austria

The talk is concerned with the development of economic instruments for the air pollution control on a regional level with multiple sources, receptors and ambient standards. Optimization decomposition techniques are discussed which can be viewed as a decentralized search of emission charges, and emission permit trading procedures. Special attention is paid to the treatment of uncertainties and incomplete information.

17. "Approaches for Correcting the Numerical Solution of the Advection Equation"

Michael Galperin

Department of Biophysics, Radiation Physics and Ecology,
Moscow Physical - Engineering Institute,
Studencheskaya street,31-9,
Moscow, 121165, Russia

Advection schemes differ in the methods of approximating the concentration profile with respect to the masses in the cells. It seems that accuracy can be improved by applying a high-order approximation. In reality, however, many factors do not allow one to implement this approach. Some of the reasons are:

1. In order to use the concentrations at remote points in approximating the concentration at the calculation point, the existence of high-order derivatives must be assured. In practice, this condition does not hold. Moreover, factors like atmospheric fronts, precipitation and pollution sources produce discontinuities in the concentration fields. As a result, approximations of order higher than 3 or 4 accomplish nothing and can even impair the situation.

2. When we approximate the concentration, the mass in the cell in general differs from its predetermined value. This circumstance is not taken into account in many schemes, or it is overcome by introducing rather crude normalizations, limitations, etc.

In this study, proceeding from the above reasoning, some schemes meeting the conflicting requirements of accuracy and economy of computer resources are presented. In the first scheme, called the "self-normalising flux scheme", a low-order polynomial approximation of the mass distribution is applied. It is shown that mass-in-cell conservation is the necessary and sufficient condition for the restoration of the concentration profile and, further on, for the flux calculation. In the second scheme, the first-order moment of the mass-in-cell distribution is calculated. Initially, the mass is evenly distributed inside the cell over the maximum area consistent with the known mass centre. Further on, the transport and redistribution of this mass do not depend on the masses in the other cells, and the new values of the mass and its first moment are obtained by superposition.

These schemes are put to comparative testing together with the well-known schemes of Egan & Mahoney and Bott, with Holmgren's modification of MacCormack's scheme (4th order in space and 2nd order in time) and with the simplest pseudo-Lagrangian scheme. These tests show that the accuracy of the presented schemes is as good as that of the best known advection models, while the presented schemes are many times faster than the others.
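
The exact mass-in-cell conservation that these flux schemes rely on can be illustrated with the simplest member of the family, the first-order donor-cell (upwind) flux scheme. This is only a baseline sketch under the assumption of a constant Courant number on a periodic grid; it is not one of the new schemes described above.

```python
import numpy as np

def donor_cell_step(mass, courant):
    """One flux-form upwind step: mass leaving each cell enters its neighbour.

    Written in flux form, the update conserves total mass exactly,
    since every outgoing flux reappears as an incoming flux.
    """
    flux = courant * mass                  # mass leaving each cell to the right
    return mass - flux + np.roll(flux, 1)  # periodic domain

m = np.zeros(40)
m[5:10] = 1.0          # a rectangular pulse of mass
total0 = m.sum()
for _ in range(60):
    m = donor_cell_step(m, 0.4)   # Courant number 0.4 (stable, positive)
```

Higher-order schemes differ only in how the flux is reconstructed from the cell masses; the flux form itself is what guarantees conservation.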

18. "Application of Parallel Algorithms in an Air Pollution Model"

Krassimir Georgiev and Zahari Zlatev

Central Laboratory for Parallel Processing
Bulgarian Academy of Sciences
Acad. G. Bonchev Str. 25 A
Sofia 1113, Bulgaria

One of the most important tasks to be solved in modern society is to find reliable and robust control strategies for keeping pollution under certain safe levels and to use these strategies in a routine way. Large mathematical models, in which all physical and chemical processes are adequately described, in cooperation with modern high-performance computers, can successfully be used to solve this task. One such model is the Danish Eulerian Model, developed at the National Environmental Research Institute in Roskilde, Denmark.

The size of the computational task obtained after the appropriate splitting and discretization of the system of partial differential equations describing the phenomenon is enormous. Even when computers with a top performance of several GFlops are in use, it is difficult to solve this problem efficiently and, moreover, to prepare codes which may be used for operational purposes in estimating pollution levels in different parts of Europe.

A new algorithm for the implementation of the Danish Eulerian Model on parallel computers, both with distributed and with shared memory, is discussed in this paper. The algorithm is based on partitioning the computational domain into several subdomains, the number of subdomains being equal to the number of processors available. The splitting procedure used in the model is a splitting according to the different physical processes involved. As different numerical algorithms are used in the different submodels, two types of subdomains are used: overlapping subdomains in the advection-diffusion submodels and nonoverlapping subdomains in the chemistry-deposition submodels.
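
The two partitionings can be sketched in one dimension as follows. This is an illustrative assumption, not the model's actual decomposition: the overlapping subdomains carry halo cells needed by the advection-diffusion stencil, while the chemistry, being pointwise, needs no overlap.

```python
def partition(n, nproc, halo=0):
    """Split grid indices 0..n-1 into nproc chunks, each widened by `halo` cells."""
    size = n // nproc
    parts = []
    for p in range(nproc):
        lo = p * size
        hi = (p + 1) * size if p < nproc - 1 else n
        # clip the halo extension at the physical domain boundaries
        parts.append(range(max(0, lo - halo), min(n, hi + halo)))
    return parts

n = 96
chem_parts = partition(n, 4)          # nonoverlapping: chemistry-deposition
adv_parts = partition(n, 4, halo=2)   # overlapping: advection-diffusion
```

Each processor then works on its own chunk, exchanging only the halo cells between the transport substeps.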

Numerical results obtained for an important module of the model (a transport-reaction scheme) on a distributed-memory parallel computer, an IBM SP (up to 32 processors), and on a shared-memory parallel computer, an SGI POWER CHALLENGE (up to 16 processors), are presented, together with concluding remarks explaining how the performance could be further improved.

19. "Numerical Library Software for Large Scale Computations"

Sven Hammarling

Numerical Algorithms Group Ltd.
Wilkinson House, Jordan Hill Road
Oxford, OX2 8DR, United Kingdom

20. "Aerosol Modeling within the EURAD Model System:
Developments and Applications"

Heinz Hass

Ford Forschungszentrum Aachen
Dennewartstr. 25
52068 Aachen, Germany

21. "The Influence of Lateral Boundary Values
on the Calculation of Future Ozone and
Ozone Precursor Concentrations
in a Regional Photochemistry Model"

Jan Eiof Jonson

The Norwegian Meteorological Institute
P.O. Box 43 Blindern
N-0313 Oslo, Norway

22. "The Regional Weather Forecasting Models as Predictive Tools for Major Environmental Disasters and Natural Hazards"

George Kallos, S. Nickovic, A. Papadopoulos and O. Kakaliagou

Dept of Applied Physics
Meteorology Lab.
Panepistimioupolis, Bldg. PHYS-V
Athens 15784, Greece

Today, with the present status of computer technology, there is sufficient computer power for running regional weather forecasting models covering the regional and even the mesoscale portion of the spectrum of atmospheric disturbances. The cost of purchasing the necessary computer power is affordable for most weather services: a number of RISC nodes ranging from 4 to 16 is considered sufficient for regional/mesoscale forecasts for up to three days. Most of these models are able to describe accurately the initiation and evolution of extreme weather phenomena such as storms, heat waves, etc. The output of these models should be further utilized for a series of other applications, such as the prediction or follow-up of huge environmental catastrophes (e.g. the explosion of a nuclear reactor), the prediction of sea waves, the dispersion of oil spills, etc. Dispersion processes can also be included in real-time simulations (e.g. non-reactive pollutants, desert dust, radioactive material). Photochemical processes require more computer power and, of course, more memory. These requirements cannot easily be met at most operational sites today; the limitations are mainly due to the cost of purchasing the necessary computer power.

In this presentation, the Regional Weather Forecasting System SKIRON is presented and its capability to forecast extreme weather events or huge environmental catastrophes is discussed. The SKIRON system has been developed at the University of Athens and is based on the Eta/NCEP model. It has been used operationally at the Hellenic National Meteorological Service for regular 48-hour forecasts. The system is currently capable of accurately predicting extreme weather phenomena, the dispersion of any passive scalar from predefined sources, and dust uptake and transport. It runs on almost any parallel machine, utilizing MPI.

23. "Variable Scale Modelling of Air Pollution Transport (Telescopic Method)"

Vladimir K. Kouznetsov, V. B. Kisselev and V. B. Milyaev

Institute of Atmospheric Protection
7, Karbisheva str.
St Petersburg, Russia

At present, acidification of soils and lakes takes place in almost all areas of Europe due to large SO2 and NOx emissions and the transboundary transport of these substances.

The evaluation of this process (in the framework of the international EMEP programme) is made by the Meteorological Synthesizing Centres West (Oslo) and East (Moscow) on a coarse grid (150x150 km) and is considered official for most European countries.

These data are also used in other international projects (for example, HELCOM). On the basis of these data, ecologo-economic scenarios of emission reduction have been created. At the same time, the spatial distribution of emission sources is irregular, and therefore it is necessary to have more detailed information, that is, an evaluation of transboundary pollution at finer scale levels (for example, the evaluation of transboundary pollution from one region to others inside one country). Evidently, in this case coordination between the results of modelling at different scales is required.

The application of such an approach may be illustrated by a case study in Karelia, which is situated in the north-western part of Russia. The territory covers 15 EMEP grid cells (150x150 km). For the calculations with a finer resolution it was divided into 540 smaller cells (25x25 km), and for this area a one-level climatic mesoscale model of Lagrangian-Eulerian type was used, similar to the models used in MSC-W. The main feature of our model is the use of climatological characteristics of the considered region as input variables and the subsequent statistical generation of the fields of meteorological elements, simulating in this way the real conditions of pollutant transport. This allows one to avoid dealing with a huge amount of input meteorological data for the estimation of pollution levels.

With the help of this model, the SO2 and NOx depositions from the emission sources situated inside the study area of Karelia have been estimated with a spatial resolution of 25x25 km. This is the "internal deposition" of SO2 and NOx, and it varied from 12 to 1912 mgS/m2 and from 0 to 6 mgN/m2 in 1996. Consequently, the "external deposition" of SO2 and NOx is the difference between the total deposition (EMEP data, 1996) and the average internal deposition in Karelia. It varied from 140 to 711 mgS/m2 and from 97 to 252 mgN/m2 in 1996.

Thus, there is an opportunity to determine the share of the external and internal influences for different regions. This is very important because it enables one:

a) to determine the areas in the region where the external deposition exceeds the critical load. In this case, reducing the region's own emissions cannot eliminate acidification;

b) to determine the areas in the region where the exceedances of the total depositions above the critical loads are caused mostly by internal depositions. In this case, it is necessary to create emission reduction scenarios for specific sources in order to eliminate the exceedances.

Finally, the telescopic method allows one to determine the different levels of ecological responsibility for pollution for various territories: country, region, city, plant, and so on.

24. "Adjoint Equations and Application to Problems of Global Change"

Guri I. Marchuk

Institute for Numerical Mathematics
Russian Academy of Sciences
Gubkin str., 8
Moscow, 117333, Russia

The development of science and technology demands the solution of extremely complex problems and the construction of adequate mathematical models. First of all this concerns the problems of global climate change, environmental protection, human health, etc. The enormous diversity of input data and of regions of the planet requires the introduction of numerous phenomenological parametrizations whose justification is often insufficient, because the parametrized phenomena are very complex. Sometimes it is efficient to consider some functionals of the solutions of a problem instead of the solutions themselves. These functionals can filter out possible noise and characterize, to some extent, the solutions of the problem. The sensitivity of the functionals makes it possible to correlate the functionals of the processes being modelled with those actually observed in nature.

Here, using the principal and adjoint equations, we consider the global transport of pollutants in the atmosphere. Numerical experiments were performed for different regions of the Earth. Based on the adjoint functions, special functionals were constructed characterizing the total amount of pollutants in specific regions. In so doing, the adjoint function is a weighting function allowing one to determine the contribution of pollution sources in given regions.

In addition, the transboundary transport of sulfur-containing air pollutants in different European regions is simulated using adjoint functions. The pollution considered here results both from emissions in the region itself and from pollution transported from other countries or regions.

25. "Modelling of the Long-term Atmospheric Transport of Heavy Metals over Poland"

Andrzej Mazur
Meteorology Centre,
Institute of Meteorology and Water Management
Podlesna 61
PL-01-673 Warszawa, Poland

The theoretical basis, the evolution and the results of long-term (1991-1995) runs of a regional model for the atmospheric transport of four heavy metals (As, Cd, Pb and Zn) over Polish territory are described in this presentation. The model represents an Eulerian, three-dimensional approach to atmospheric transport described by an ordinary advection-diffusion scheme. It consists of three main modules: the first sets the parameters of the model (i.e. local deposition coefficient, washout ratios, dry deposition velocities, etc.), the second applies the advection-diffusion solver, and the last prepares output files for graphical presentation and statistical analysis. The equations are solved using the Area Flux Preserving method (advection) and a modified Gaussian elimination method (diffusion). The dry deposition velocity was assumed to be a function of roughness height, friction velocity and particle diameter, according to Sehmel's model; in turn, the washout ratio was assumed constant in this model. To create an appropriate emission database for Poland, the heavy metal emissions from Polish sources for 1991-95 were collected and compiled.

26. "A Parallel Iterative Scheme for Solving the Convection Diffusion Equation on Distributed Memory Processors"

L. A. Boukas and Nikolaos M. Missirlis

Department of Informatics
University of Athens
Panepistimiopolis 15710
Athens, Greece

In this paper we consider the numerical solution of the convection-diffusion equation. We propose the local Modified SOR (MSOR) method and apply Fourier analysis to study its convergence. Parallelism is introduced by decoupling the mesh points with the use of red-black ordering for the 5-point stencil. An optimal set of values for the parameters involved is determined. It is found that the proposed method is significantly more efficient than other iterative methods. Finally, the parallel implementation of the local MSOR method is discussed and results are presented for distributed-memory processors with a mesh topology.
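
The red-black decoupling that makes the method parallel can be sketched for the pure-diffusion case of the 5-point stencil. This is a minimal sketch with a fixed relaxation parameter; the local MSOR method of the talk, which varies the parameter per point, is not reproduced here, and the grid size and omega below are illustrative assumptions.

```python
import numpy as np

def redblack_sor(f, omega=1.5, iters=200):
    """Solve the 5-point discrete Poisson equation (nb - 4u = f) by red-black SOR."""
    n = f.shape[0]
    u = np.zeros_like(f)                      # zero Dirichlet boundary values
    ii, jj = np.meshgrid(range(n), range(n), indexing="ij")
    for _ in range(iters):
        for colour in (0, 1):                 # red sweep, then black sweep
            mask = ((ii + jj) % 2 == colour)
            mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = False
            # neighbours of a red point are all black (and vice versa),
            # so every point of one colour can be updated simultaneously
            nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                  np.roll(u, 1, 1) + np.roll(u, -1, 1))
            u[mask] = (1 - omega) * u[mask] + omega * (nb[mask] - f[mask]) / 4.0
    return u

n = 17
f = np.zeros((n, n))
f[n // 2, n // 2] = -1.0      # point source in the centre
u = redblack_sor(f)
```

On a distributed-memory mesh, each processor updates its own points of one colour in parallel and exchanges boundary values between the two half-sweeps.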

27. "Theory, Algorithms and Software Systems of Bayesian Heuristic Approach for Optimization of Large Scale Discrete or Continuous Models"

Jonas Mockus

Department of Optimization
Institute of Mathematics and Informatics
Akademijos 4
Vilnius 2600, Lithuania

When solving big optimization problems connected with the placement of new economic objects, efficient methods of global and discrete optimization are usually needed. Discrete optimization problems are often solved using "heuristics" (expert opinions defining how to solve a family of problems). The paper is about ways to speed up the search by combining several heuristics involving randomization. Using expert knowledge, an a priori distribution of the optimization results as functions of the heuristic decision rules is defined and is continuously updated while solving a particular problem. This approach (BHA, or the Bayesian Heuristic Approach) is different from the traditional Bayesian Approach (BA), where the a priori distribution is defined on a set of functions to be minimized.

The paper focuses on the main objective of BHA, which is improving any given heuristic by "mixing" it with other decision rules. In addition to providing almost sure convergence, such mixed decision rules often outperform (in terms of speed) even the best heuristics, as judged by the considered examples. However, the final results of BHA depend on the quality of the specific heuristic. This means that BHA should be regarded as a tool for enhancing the best heuristics, not for replacing them.
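
The "mixing" idea can be sketched on a toy knapsack problem: at each step the greedy decision rule is applied with probability p and a random feasible move otherwise. The instance, the value of p, and the restart loop are illustrative assumptions; the Bayesian layer that would tune p from accumulated results is not shown.

```python
import random

def mixed_knapsack(values, weights, capacity, p=0.8, rng=None):
    """Build a knapsack solution by mixing a greedy rule with random moves."""
    rng = rng or random.Random()
    chosen, load, value = [], 0, 0
    while True:
        feas = [i for i in range(len(values))
                if i not in chosen and load + weights[i] <= capacity]
        if not feas:
            break
        if rng.random() < p:
            # greedy move: best value-to-weight ratio among feasible items
            i = max(feas, key=lambda i: values[i] / weights[i])
        else:
            # random feasible move: the randomization that "mixes" the rule
            i = rng.choice(feas)
        chosen.append(i)
        load += weights[i]
        value += values[i]
    return value, chosen

# Repeated randomized restarts: the best run can beat the pure greedy result
best = max(mixed_knapsack([6, 10, 12], [1, 2, 3], 5, p=0.8,
                          rng=random.Random(s))[0] for s in range(20))
```

Pure greedy on this instance yields 16, while the optimum (items 2 and 3) is 22; the randomized mixture can escape the greedy trap across restarts.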

The paper is concluded by a short discussion of the Dynamic Visualization Approach (DVA). The goal of DVA is to exploit heuristics directly, bypassing any formal mathematical framework.

The purpose of the paper is to inform authors who invent and apply various heuristics about the possibilities and limitations of BHA, in the hope that they will improve their heuristics using this powerful tool.

28. "Parallel Algorithms for Large-Scale Location Problems in Environmental Modeling"

Panos M. Pardalos

Center for Applied Optimization
Industrial and Systems Engineering Department
303 Weil Hall, University of Florida
Gainesville, FL 32611-6595, USA

29. "On Air Pollution Monitoring (Case Study)"

Tamas Rapcsak
Laboratory of Operations Research and Decision Systems
Computer and Automation Research Institute,
Hungarian Academy of Sciences
H-1518 Budapest, P.O. Box 63, Hungary

30. "Neural, Fuzzy Modelling in Pollution, an Interesting Alternative to Other Procedures: An Overview and PC Illustrations of Some Experiments"

Mariana Bistran and Gheorghe M. Sandulescu

Department of Advanced Research
ASTEIDA University
Bucharest, Romania

31. "Advanced Operational Air Quality Forecasting Models for Urban and Regional Environments in Europe: Madrid application"

R. San Jose, M. A. Rodriguez, M. A. Arranz,
I. Moreno and R. M. Gonzalez
Environmental Software and Modelling Group
Computer Science School - Technical University of Madrid
Campus de Montegancedo - Boadilla del Monte-28660
Madrid, Spain

32. "Adaptive Approximation with Fractal Functions"

Blagovest Sendov

Central Laboratory for Parallel Processing
Bulgarian Academy of Sciences
Acad. G. Bonchev Str. 25 A
Sofia 1113, Bulgaria

The classical orthonormal systems of Walsh and Haar are generalized in a new direction, different from those already well known [1], [2], [3], [4], [5]. This generalization involves real parameters and allows adaptation of the orthonormal system to a particular function by an appropriate choice of these parameters. The generalized Walsh and Haar functions may have any given fractal dimension.

The motivation for this generalization is its application to signal and image compression, which is an approximation problem for nonsmooth functions resembling fractals.

The quality of the adaptive approximation of real signals and images by generalized Walsh and Haar functions is compared with that of some classical approximation methods.

1. Chrestenson, H. E. (1955): A Class of Generalized Walsh Functions. Pacific J. Math., 5, 17 - 31.

2. Fine, N. J. (1950): The Generalized Walsh Functions. Trans. Am. Math. Soc., 69, 66 - 77.

3. Levy, P. (1944): Sur une generalisation des fonctions orthogonales de M. Rademacher. Comm. Math. Helv., 16, 146 - 152.

4. Redinbo, G. R. (1971): A Note on the Construction of Generalized Walsh Functions. SIAM J. Math. Anal., 2 (3), 166 - 167.

5. Watari, C. (1958): On Generalized Walsh Fourier Series. Tohoku Math. J., 10 (2), 211 - 241.

33. "A Preliminary Investigation of Soils, Oceans and Atmosphere Enrichment by Persistent Volatile Pollutants"

Mikhail Sofiev

Institute of Program Systems
Russian Academy of Science
Moscow, Russia

One of the important features of many toxic pollutants, such as mercury and persistent organic compounds, is their high evaporation capability. Their lifetimes in the environment are much longer than those of sulphur and nitrogen compounds. Depending on the atmospheric conditions, these substances can be deposited, accumulated in ecosystems, and then returned to long-range transport. These cycles are accompanied by the slow degradation of some of the substances. As a result, the initial emission of such species has an effect not only one or two years after the release but also contributes significantly to long-term toxic pollution. The key role in such a cycle is played by the secondary emission of previously deposited masses (so-called re-emission). The investigation of this effect is rather complicated, because re-emission of the pollutants can easily be mistaken for natural emission during a measurement campaign; the analysis of the pollution history is possible only with mathematical models. The present work was aimed at the development of an adequate model, a preliminary study of the accumulation of persistent pollutants, and an analysis of the model's sensitivity to the chemical transformation rates and re-emission/fixation intensities applied in it. The model is based on an Eulerian multi-layer transport routine with the capability to consider soil and water layers below the surface. The available meteorological information and the model structure make it possible to carry out continuous calculations over the Northern Hemisphere for a period of more than 20 years. The first run, made for mercury, has shown that multi-annual accumulation of the pollutant in the environment may produce air concentrations comparable with those observed at monitoring stations. After several years of model simulation, a dynamic equilibrium was observed between the input of anthropogenic mercury and its removal from the transport cycle. Two main removal processes were detected: the fixation of the deposited substances in soils, and transport outside the model domain (a purely artificial effect connected with the limited area of the calculations).
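The dynamic equilibrium described above can be sketched with a toy two-reservoir (atmosphere/soil) box model. All rate constants here are invented for illustration, not taken from the abstract; the point is that constant anthropogenic input eventually balances fixation plus outflow:

```python
# Toy atmosphere/soil box model of a persistent, re-emitting pollutant.
# All rate constants are illustrative, not from the abstract.
E  = 1.0    # anthropogenic emission into the atmosphere (mass / year)
kd = 2.0    # deposition: atmosphere -> soil (1 / year)
kr = 0.5    # re-emission: soil -> atmosphere (1 / year)
kf = 0.1    # fixation in soil, i.e. permanent removal (1 / year)
ko = 0.2    # transport out of the model domain (1 / year)

def step(atm, soil, dt=0.01):
    """One explicit Euler step of the coupled mass balance."""
    d_atm  = E - (kd + ko) * atm + kr * soil
    d_soil = kd * atm - (kr + kf) * soil
    return atm + dt * d_atm, soil + dt * d_soil

atm, soil = 0.0, 0.0
for _ in range(6000):               # 60 model years with dt = 0.01
    atm, soil = step(atm, soil)

# At dynamic equilibrium the input E is balanced by the two removal
# processes named in the abstract: fixation in soil and domain outflow.
removal = kf * soil + ko * atm
```

With these rates the soil reservoir ends up several times larger than the atmospheric one, which is why re-emission can masquerade as a natural source long after the original release.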

34. "The Future of Chemical Mechanism Development for Air Pollution Models: Chemistry, Uncertainty and Sensitivity Analysis"

William R. Stockwell

Fraunhofer Institute for Atmospheric Environmental Research (IFU)
Kreuzeckbahnstr. 19
82467 Garmisch-Partenkirchen, Germany

35. "On Some Flux-type Advection Schemes for Dispersion Modelling Application"

Dimiter Syrakov

National Institute of Meteorology and Hydrology
66 Tzarigradsko chaussee
Sofia 1784, Bulgaria

The description of advection processes remains a real challenge in tracer dispersion calculations. At present, the most widely used numerical scheme is Bott's (Bott, 1989). The Bott scheme is a flux-type scheme: it is explicit, positive definite and conservative, with limited numerical dispersion and good transportability. The main feature of Bott's approach is the normalization of the mass fluxes when calculating the one-step increase (or decrease) of mass in a cell. The flux is presented as the product of the concentration and the ratio between the mass in the flux (a one-time-step passage) and the mass in the cell. Bott calculates the masses by integrating the polynomial that interpolates the concentration over the nearest points. The best results are obtained using a 4th-order polynomial with pattern points symmetric about the center of the cell. As a result, three points at the domain border are needed as boundary points.

The advection scheme TRAP (from TRAPezium) was developed especially for the Bulgarian dispersion model EMAP, a 3D, PC-oriented Eulerian multi-layer model. The scheme was presented at the first REMAPE Workshop (Syrakov, 1997). In the TRAP scheme the flux area is assumed to be trapezoidal. Instead of integrating, the flux is determined as the product of the Courant number and a single value of the approximating polynomial, taken at the middle of the distance passed. The same 4th-order polynomial is used in TRAP, and Bott's normalization is also applied. While displaying the same properties as Bott's scheme, the TRAP scheme turns out to be several times faster.

Some faster variants of the Bott and TRAP schemes are presented here, using interpolation polynomials of smaller order. They show almost the same quality of transport description as the Bott and TRAP schemes. Another important advantage of the new versions is that they need only two grid points at the borders of the model domain as boundary points.

Principally new schemes are also elaborated, performing self-normalization instead of Bott's normalization. Integrated-flux and TRAP-flux approaches are used to estimate the mass fluxes through the edges of the cell. These schemes are also presented and tested.
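The flux form shared by all these schemes can be sketched with the simplest member of the family. In the example below a first-order donor-cell flux stands in for the 4th-order interpolating polynomial and its normalization, so it demonstrates the conservative, positive-definite flux-form update, not Bott's or TRAP's accuracy:

```python
import numpy as np

def advect_donor_cell(c, courant):
    """One step of 1D flux-form advection on a periodic grid.

    Bott and TRAP compute the edge flux from a 4th-order interpolating
    polynomial (integrated, or evaluated at the midpoint of the swept
    distance) with flux normalization; here the first-order donor-cell
    flux illustrates the common update
        c_i(n+1) = c_i(n) - (F_{i+1/2} - F_{i-1/2}).
    """
    assert 0.0 <= courant <= 1.0          # stability condition
    flux = courant * c                    # mass leaving each cell rightward
    return c - flux + np.roll(flux, 1)    # conservative flux-form update

# Advect a square pulse around a periodic domain.
c = np.zeros(50)
c[20:25] = 1.0
total = c.sum()
for _ in range(100):
    c = advect_donor_cell(c, 0.4)
```

The donor-cell flux is strongly diffusive; replacing it with a higher-order polynomial estimate of the swept mass (and normalizing so no cell exports more mass than it holds) is exactly what distinguishes the Bott and TRAP schemes.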

Bott A. (1989), A positive definite advection scheme obtained by nonlinear renormalization of the advective fluxes, Mon. Wea. Rev., 117, 1006-1015.

Syrakov D. (1997), On the TRAP advection scheme - description, tests and applications, in Geernaert G., A. Walloe-Hansen and Z. Zlatev (Eds.), Regional Modelling of Air Pollution in Europe. Proceedings of the first REMAPE Workshop, Copenhagen, Denmark, September 1996, National Environmental Research Institute, Denmark, 141-152.

36. "The Use of 3-D Adaptive Unstructured Meshes in Air Pollution Modelling"

Alison Tomlin

Dept of Fuel and Energy
University of Leeds
Leeds LS2 9JT, United Kingdom

37. "Atmospheric Environmental Management Expert System for an Oil-fired Power Plant"

Eloy A. Unzalu

Environmental Laboratory
LABEIN, Technological Research Centre
Bilbao, Basque Country, Spain.

The Atmospheric Environmental Management Expert System, currently in operation at a 1,000 MW power plant, computes in real time the atmospheric impact caused by the plant on the affected urban area: an estuarine valley and industrial zone on the Atlantic Ocean, in the Bay of Biscay, located in complex terrain.

The expert system is based on the real-time reception of meteorological and atmospheric pollutant emission data, which are used to automatically select the most appropriate meteorological and dispersion models from the set implemented in the system, and to execute them in order to estimate the atmospheric impact caused by SO2.

It includes the automatic calculation of dispersion parameters, such as the atmospheric stability and the height of the mixed layer, by means of advanced methods implemented in the software. In addition, over the course of the project the meteorological and dispersion models of the software have been calibrated, and their results validated, through four multidisciplinary experimental campaigns covering both meteorological and pollutant measurements in the area.
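The abstract does not specify which dispersion models the system implements; as a sketch of the kind of calculation involved, the classical Gaussian plume formula below estimates ground-level SO2 concentration. All parameter values are invented for illustration:

```python
import numpy as np

def plume_ground_conc(Q, u, y, H, sigma_y, sigma_z):
    """Ground-level Gaussian plume concentration with ground reflection.

    Q: emission rate (g/s), u: wind speed (m/s), y: crosswind offset (m),
    H: effective stack height (m), sigma_y/sigma_z: dispersion parameters
    (m) at the downwind distance of interest (they grow with distance and
    depend on the atmospheric stability the system diagnoses).
    """
    lateral  = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = 2.0 * np.exp(-H**2 / (2.0 * sigma_z**2))  # reflection at z = 0
    return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative values: tall power-plant stack, neutral conditions,
# sigma values roughly corresponding to a ~2 km downwind distance.
conc = plume_ground_conc(Q=500.0, u=5.0, y=0.0,
                         H=250.0, sigma_y=160.0, sigma_z=80.0)
```

In an operational system such as the one described, the stability class and mixing height computed from the real-time data would select both the model and its dispersion parameters.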

38. "Collaborative Air Pollution Modeling"

Emanuel Vavalis

Purdue University, Computer Science Department,
West Lafayette, IN 47 907 USA
and Institute for Applied and Computational Mathematics
Foundation for Research and Development Hellas
711 10 Heraklion, Crete, Greece

39. "Applied SMP Parallel Computing to Air Pollution Models"

Bjarne S. Andersen, Krassimir Georgiev, Jerzy Wasniewski and Zahari Zlatev

Danish Computing Centre for Research and Education
DTU, Bldg. 304
DK-2800 Lyngby, Denmark

40. "Automatically from a Model Specification to Fortran Programs"

Lex Wolters

Computer and Software Systems Division
Dept. of Computer Science, Leiden University
P.O. Box 9512, 2300 RA Leiden, the Netherlands

41. "Including of Surface Source in SL Parameterization"

Dimiter Yordanov and Dimiter Syrakov

Geophysical Institute
Bulgarian Academy of Sciences
Sofia 1113, Bulgaria

Recently, a great number of dispersion models have been developed; they possess different features and require different computer resources. Among them, the PC-oriented Eulerian multi-layer model EMAP was developed and applied to various pollution problems. The vertical diffusion block of the model uses a 2nd-order implicit scheme, including dry deposition as a bottom boundary condition, realised on a non-homogeneous staggered grid. Experiments with EMAP show that, if the concentration at the first computational level is used to calculate the dry deposition flux, the deposited quantity changes when the height of that level is changed. Obviously, the concentration at the roughness level is needed for a proper calculation of the dry deposition. It is not possible to place a model level at this height, because the roughness usually changes from one grid point to another. On the other hand, because of the steep gradients in the surface layer (SL), many levels would have to be introduced to describe it well, and the memory and time requirements would then increase without any practical need. For this reason, the first computational level is usually placed at some height above the roughness, and a good estimate of the roughness-level concentration, determined from the calculated concentrations, is therefore necessary. The problem becomes more complex when a surface pollution source is treated. Similar processes are evaporation and re-emission of the tracer under consideration.

A proper parametrization of the diffusion processes in the surface layer can avoid these difficulties. A parametrization based on similarity theory and taking into account the presence of a continuous surface source is presented here. It is adjusted for the implicit diffusion scheme. The parametrization is tested under various conditions, both for a single source and for a combination of surface and elevated sources. The tests confirm its good quality: concentration profiles and dry-deposited mass are described adequately by a grid with practically no levels in the surface layer.
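The parametrization itself is not given in the abstract. Under the simplifying assumption of a neutral surface layer with a constant-flux logarithmic profile (no stability corrections, no surface source), the roughness-level concentration needed for the dry deposition flux can be sketched as follows:

```python
import math

# Estimate the roughness-level concentration C(z0) from the concentration
# C(z1) at the first model level, assuming a neutral constant-flux layer:
#   C(z1) - C(z0) = F / (kappa * ustar) * ln(z1 / z0),
# combined with the dry-deposition boundary condition F = vd * C(z0).
# This is a simplified illustration; the abstract's parametrization also
# handles stability effects and a continuous surface source.

KAPPA = 0.4  # von Karman constant

def roughness_conc(c1, z1, z0, ustar, vd):
    """C(z0) given C(z1); vd is the dry deposition velocity (m/s)."""
    r_a = math.log(z1 / z0) / (KAPPA * ustar)   # aerodynamic resistance (s/m)
    return c1 / (1.0 + vd * r_a)

def dry_deposition_flux(c1, z1, z0, ustar, vd):
    """Deposition flux consistent with the log profile above."""
    return vd * roughness_conc(c1, z1, z0, ustar, vd)

# Example: first level at 30 m, roughness length 0.1 m (values illustrative).
c0 = roughness_conc(c1=10.0, z1=30.0, z0=0.1, ustar=0.3, vd=0.008)
```

Because C(z0) rather than C(z1) multiplies the deposition velocity, the deposited quantity no longer depends on where the first computational level happens to be placed, which is precisely the problem the abstract identifies.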

42. "Long-term Calculations Performed by the Danish Eulerian Model"

Annemarie Bastrup-Birk, Joergen Brandt and Zahari Zlatev

National Environmental Research Institute
Department for Atmospheric Environment
Frederiksborgvej 399, P. O. Box 358
DK-4000 Roskilde, Denmark

The Danish Eulerian Model has been run over a period of seven years, from 1989 to 1995. Results concerning the distribution in Europe of sulphur pollutants, nitrogen pollutants, ozone and ammonia-ammonium pollutants will be reported. The main objective of this paper is to demonstrate the influence of the reductions of the European emissions in the seven-year period 1989-1995 on the concentrations and depositions of the studied pollutants. Some other issues, such as the need to use big supercomputers and to handle huge input and output files, will also be discussed.

This page is maintained by Annemarie Bastrup-Birk, Jørgen Brandt, Helge Rørdam Olesen and Zahari Zlatev

Document date: May 26, 1998
