
What NanoPart is

NanoPart is a Lagrangian particle atmospheric dispersion model of an entirely different kind from "conventional" particle models.

In conventional particle models, the concentration at a point is approximated by counting how many of the very many particles released at the emission points end up in close proximity to the receptor; particle movement from emission to the receptor's neighbourhood is simulated by means of a Langevin equation whose parameters are estimated from the current mean wind and turbulence.
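For reference, a standard textbook form of this equation in one dimension, for stationary homogeneous Gaussian turbulence (not necessarily the exact form any particular model uses), is

$$du = -\frac{u}{T_L}\,dt + \sqrt{\frac{2\sigma_u^2}{T_L}}\,dW_t, \qquad dx = (U + u)\,dt$$

where $u$ is the turbulent velocity fluctuation, $U$ the mean wind, $\sigma_u$ the velocity standard deviation, $T_L$ the Lagrangian decorrelation time, and $dW_t$ a Wiener increment.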

In existing particle models the Langevin equation is not entirely free: some simplifications are imposed. Among those most relevant to our case we have the following (both are illustrated in the sketch after this list):

  • The random part of the Langevin equation is a Wiener process. Were the mean wind exactly zero, particles would at any time follow a multivariate normal distribution centered at the emission point, with a standard deviation growing linearly with travel time at first and as the square root of time once travel time exceeds the Lagrangian decorrelation time.

  • Wind is assumed uncorrelated across the different spatial dimensions. As a consequence, the time step must be large enough to wash out any residual correlation between dimensions: the time frame must exceed the Lagrangian decorrelation time (in practice, a few tens of seconds; most particle model use cases assume an hourly time frame).
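As a concrete illustration of both simplifications, here is a minimal sketch of a conventional particle step, in Python with purely illustrative parameter values (this is not NanoPart code): each spatial dimension receives its own independent Wiener increment, and the concentration at a receptor is approximated by counting particles inside a small sampling volume.

```python
import numpy as np

# Illustrative parameters (hypothetical values, for demonstration only)
n_particles = 50_000
dt          = 0.5                           # time step [s]
t_total     = 250.0                         # simulated travel time [s]
U           = np.array([2.0, 0.0, 0.0])     # mean wind [m/s]
sigma_u     = np.array([0.6, 0.6, 0.3])     # velocity std devs [m/s]
T_L         = 50.0                          # Lagrangian decorrelation time [s]

rng = np.random.default_rng(42)
x = np.zeros((n_particles, 3))                    # all released at the origin
u = rng.normal(0.0, sigma_u, (n_particles, 3))    # initial fluctuations

for _ in range(int(t_total / dt)):
    # Langevin step: drift towards zero plus an independent Wiener
    # increment per dimension -- no cross-dimensional correlation.
    dW = rng.normal(0.0, np.sqrt(dt), (n_particles, 3))
    u += -u / T_L * dt + np.sqrt(2.0 * sigma_u**2 / T_L) * dW
    x += (U + u) * dt

# Concentration at a receptor is proportional to the particle count
# inside a small box centered on it.
receptor  = np.array([500.0, 0.0, 0.0])     # receptor position [m]
half_side = 10.0                            # sampling box half-width [m]
inside = np.all(np.abs(x - receptor) < half_side, axis=1)
print("particles near receptor:", int(inside.sum()))
```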

These assumptions are sensible in applications, like air quality, in which the effects of chemical concentrations become visible only over long times, on the order of months to years. There are fields, however, where effects occur on a much shorter time frame.

This is the case with odours, which may be detected any time an in-breath is taken (a resting adult breathes roughly 12 to 20 times per minute, that is, every three to five seconds). Another application concerns extremely high concentrations of highly toxic chemicals: some chemicals in common industrial use, and some chemical weapons, can cause irreversible damage within one or two in-breaths.

As we shift attention from long to short time frames, the wind correlation structure becomes more and more evident, to the point where it can no longer be neglected. In addition, significant deviations from normality in wind values begin to appear as the time frame shrinks.
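One way to see this on one's own data: take a fast sonic anemometer record and compare the same statistics over long and short windows. The sketch below is illustrative only; it assumes a hypothetical plain-text file with one u, v, w sample per row at 10 Hz.

```python
import numpy as np

# Hypothetical input: 10 Hz sonic anemometer record, columns u, v, w [m/s]
uvw = np.loadtxt("sonic_10hz.txt")          # shape (n_samples, 3)
fs  = 10.0                                  # sampling rate [Hz]

def window_stats(series, window_s):
    """Mean u-v correlation and excess kurtosis of u over fixed windows."""
    n = int(window_s * fs)
    corrs, kurts = [], []
    for i in range(0, len(series) - n + 1, n):
        w = series[i:i + n]
        u, v = w[:, 0], w[:, 1]
        corrs.append(np.corrcoef(u, v)[0, 1])
        z = (u - u.mean()) / u.std()
        kurts.append((z**4).mean() - 3.0)   # zero for a normal distribution
    return np.mean(corrs), np.mean(kurts)

# Statistics that look tame over an hour may be far from normal,
# and far from uncorrelated, over ten-second windows.
for window_s in (3600.0, 60.0, 10.0):
    c, k = window_stats(uvw, window_s)
    print(f"{window_s:6.0f} s windows: corr(u,v) = {c:+.3f}, "
          f"excess kurtosis(u) = {k:+.3f}")
```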

Conventional particle models might in principle be extended to time frames shorter than the Lagrangian decorrelation time by dropping some or all of the simplifying assumptions, but the results would be questionable at best: the high statistical regularity suggested by a Langevin-type equation fades out, and more and more "randomness" gets in.

The approach of NanoPart, and more generally of any data-driven particle model, is quite different. Instead of trying to guess what the statistical structure of the wind will be given, say, wind speed and direction plus a few turbulence parameters like friction velocity and Obukhov length, we let "the wind describe itself".

For this to be possible, wind measurements fast and accurate enough to capture both the mean wind and its turbulent fluctuations must be available. Current technology offers several suitable instruments, for example three-dimensional ultrasonic or hot-wire anemometers. Ultrasonic anemometers in particular, being of sturdy construction, allow both continuous use at a fixed location and "emergency deployment" at a crisis spot.

If enough fast-gathered wind samples are available, it becomes possible to estimate the actual, empirical distribution of the wind. Were the observation frame long enough, we would see something similar to what a conventional particle model assumes.

But if the time frame is short enough, say on the order of ten to sixty seconds, non-normality and correlation effects become fully visible.

Because particle dynamics is driven by actually measured wind (or rather, by random samples of it over a short time frame), the distributions produced by data-driven particles can be very different from those of a conventional model. Plumes begin to show. Wind meandering, too, becomes visible. Indeed, data-driven particle models make for an excellent wind visualization tool. Operating at a much longer time frame, a conventional particle model folds meandering and other plume changes into the random part of its Langevin equation: no individual plume is visible, only a "worst case" smooth distribution of large spatial extent.
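A minimal sketch of the data-driven idea follows (illustrative only, not NanoPart's actual implementation): keep a rolling window of the most recent measured wind vectors and, at each step, advance every particle by a whole (u, v, w) sample drawn at random from that window. Because entire vectors are resampled, any cross-component correlation and non-normality present in the measurements is inherited automatically.

```python
import numpy as np
from collections import deque

fs       = 10.0        # sonic sampling rate [Hz] (assumed)
window_s = 30.0        # length of the rolling wind window [s]
dt       = 1.0 / fs    # particle time step [s]

wind_window = deque(maxlen=int(window_s * fs))   # recent (u, v, w) samples
particles   = []                                 # particle positions [m]
rng = np.random.default_rng()

def step(new_sample, source=(0.0, 0.0, 0.0)):
    """Ingest one sonic sample, release one particle, move all particles."""
    wind_window.append(np.asarray(new_sample, dtype=float))
    particles.append(np.array(source, dtype=float))
    # Each particle moves with a whole measured wind vector drawn from
    # the recent window, preserving correlation between u, v and w.
    pool = np.array(wind_window)
    idx = rng.integers(len(pool), size=len(particles))
    for p, w in zip(particles, pool[idx]):
        p += w * dt

# Usage with a stream of measured samples (uvw: array of (u, v, w) rows):
# for s in uvw:
#     step(s)
```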

When are data-driven particle models ineffective?

Every dispersion model, and particle models are no exception, has a precise range of applicability. Data-driven particle models apply to fields like chemical warfare, industrial safety and odour management, where spatial scales rarely exceed 1 km. At these scales, and if the terrain is flat and uniform enough, the mean wind field can be approximated as a uniform field moving parallel to the ground surface. Because odour-carrying compounds and toxic gases are released at ambient temperature, little or no buoyancy occurs, so changes in wind direction with height have little or no effect.

Of course the atmosphere is not a rigid body, and some change in speed and direction may be observed as the observation position changes. This in turn may shift the plume position. But such effects become significant only over spatial domains of large size, from 10 km up, and especially over complex terrain.

It is often assumed that conventional particle models have an edge in these situations. One might say the claim is only as sound as the reconstruction of the three-dimensional wind field (something very difficult to assess). In any case, these large-scale situations are definitely not what NanoPart has been devised for.

Large-scale spatial effects could certainly be incorporated into NanoPart, were a sufficiently accurate description of the mean wind field and turbulence available, no differently than in conventional particle models. But, faithful to NanoPart's application range, the implementation of "transport equations" and the like has intentionally been left aside: in this exploratory phase they would grossly interfere with the model's current simplicity, for an uncertain gain.
