In a recent work, Kiessling et al. ^{[12]} analyzed a method for wind field reconstruction based on a machine learning approach and compared it with well-established interpolation techniques. Their approach was mainly devoted to the support of wind farm planning, for which measurements are used to estimate the expected aggregate energy output. The model approximates the data using a Fourier series, exploring the frequency domain with an adaptive Metropolis ^{[13]} algorithm: at each step, the Fourier coefficients are optimized with respect to a loss function. This method does not consider previous or future time steps, so only a subset of the physical hypotheses on the system is actually applied. The model includes the hypothesis of a divergence-free flux, which is always applied in mesoscale atmospheric models ^{[14]}:
$$\nabla \cdot \mathbf{u}=0$$
In order to perform benchmark activities, the following classic algorithms were considered as references: nearest neighbors, inverse distance weighting (IDW), kriging ^{[15]}, random forest, and neural network. The quality of the reconstruction, relative to these classical models, was measured using the Q(f) indicator, defined as the mean square deviation of the data provided by the model f(x) from the observational data u.
The indicator ε(f), obtained by normalizing Q(f) with respect to the expected squared velocity, was also used. The analysis of the results showed that the random Fourier features model provides the best results among the models tested.
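As an illustration of this family of models, the following Python sketch fits a scalar wind component with random Fourier features and evaluates the indicators Q(f) and ε(f) defined above. Unlike the adaptive Metropolis scheme of ref. ^{[12]}, the frequencies here are drawn once at random and only the amplitudes are fitted by least squares; all data and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_features(x, omegas):
    """Feature map phi(x) = [cos(x.w_j), sin(x.w_j)] for sampled frequencies w_j."""
    proj = x @ omegas.T
    return np.hstack([np.cos(proj), np.sin(proj)])

# Synthetic scalar "wind component" over a 2-D domain (illustrative only).
x = rng.uniform(0.0, 1.0, size=(200, 2))
u = np.sin(2 * np.pi * x[:, 0]) + 0.5 * np.cos(2 * np.pi * x[:, 1])

omegas = rng.normal(0.0, 2 * np.pi, size=(50, 2))  # fixed random frequencies
phi = rff_features(x, omegas)
coef, *_ = np.linalg.lstsq(phi, u, rcond=None)     # Fourier amplitudes

u_hat = phi @ coef
Q = np.mean((u_hat - u) ** 2)   # mean square deviation Q(f)
eps = Q / np.mean(u ** 2)       # normalized indicator eps(f)
print(Q, eps)
```

In ref. ^{[12]} the frequencies themselves are also explored stochastically, which this fixed-frequency sketch deliberately omits.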
4. The Meteo Particle Model (MPM)
This model was introduced by Sun et al. ^{[16]} with the aim of estimating atmospheric variables inside the airspace using a Monte Carlo approach based only on surveillance data from aircraft. The original method is applicable to both wind and temperature fields. The main idea is to use a stochastic process to obtain meteorological information over a short time range (from minutes to one hour) in areas where observations are lacking, starting from data collected along high-density flight trajectories. Wind and temperature states are reconstructed using virtual particles that are generated every time new observations (of wind and/or temperature) are available and that then propagate and decay over time. In this way, particle propagation allows the evaluation of atmospheric variables in areas where measurements are not available. On the basis of the MPM, it is possible to build a short-term wind predictor based on a time-dependent statistical model (Gaussian Process Regression, GPR). A GPR predictor can be built for each position of interest in order to provide short-term forecasts; however, this requires recording a short history of the states estimated by the MPM.
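The GPR step can be sketched as follows: a minimal NumPy implementation of GP regression with an RBF kernel over time, fed with a short, hypothetical history of MPM-estimated wind at one position. The kernel choice, hyperparameters, and data are illustrative assumptions, not the configuration of ref. ^{[16]}.

```python
import numpy as np

def gpr_predict(t_train, y_train, t_query, length=3.0, sf2=1.0, noise=1e-2):
    """GP regression mean with RBF kernel k(a,b) = sf2*exp(-(a-b)^2/(2*length^2))."""
    def k(a, b):
        return sf2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(t_train, t_train) + noise * np.eye(len(t_train))  # noisy Gram matrix
    alpha = np.linalg.solve(K, y_train)                     # K^-1 y
    return k(t_query, t_train) @ alpha                      # posterior mean

# Hypothetical 10-minute history of MPM-estimated u-wind (m/s) at one position.
t = np.arange(10.0)                  # minutes
u = 5.0 + 0.3 * np.sin(0.5 * t)     # illustrative wind values
u_next = gpr_predict(t, u, np.array([10.0, 11.0]))  # short-term forecast
print(u_next)
```

Note that a zero-mean GP reverts toward the prior far from the data, so in practice the forecast horizon must stay short relative to the kernel length scale, consistent with the minutes-to-one-hour range quoted above.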
4.1. Selection of Input Data
The ensemble of measurements performed by the different aircraft forms a measurement array [x, y, z, u, v, T] including spatial coordinates, wind components and temperature. Initially, a probabilistic selection process is used to remove erroneous measurements that could occur, specifically:
$$p=\mathrm{exp}\left[-\frac{1}{2}\frac{{\left(x-\mu \right)}^{2}}{{\left({k}_{1}\sigma \right)}^{2}}\right]$$
In which k_{1} is a control parameter defined as the acceptance probability factor.

New data selection: each new datum has a probability p of being accepted, so that data corresponding to extreme values have a low probability of being selected. The numerical value of k_{1} is set empirically by the user, and it can be increased to allow a larger tolerance (a larger number of accepted measurements). The value proposed by the authors of the method is 3.
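A minimal Python sketch of this selection step, assuming the Gaussian form of p given above with μ and σ taken as the mean and standard deviation of the current batch of data (function and variable names are hypothetical):

```python
import math
import random

def acceptance_probability(x, mu, sigma, k1=3.0):
    """Gaussian-shaped acceptance probability; larger k1 widens the tolerance."""
    return math.exp(-0.5 * (x - mu) ** 2 / (k1 * sigma) ** 2)

def select(measurements, k1=3.0, rng=random.Random(42)):
    """Keep each measurement with probability p; extreme values are rarely kept."""
    mu = sum(measurements) / len(measurements)
    var = sum((m - mu) ** 2 for m in measurements) / len(measurements)
    sigma = math.sqrt(var)
    return [m for m in measurements
            if rng.random() < acceptance_probability(m, mu, sigma, k1)]
```

A value near the mean is accepted almost surely, while a value several k₁σ away is almost surely discarded.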
4.2. Construction of Particles
A particle is defined as an object able to provide information on the state of wind and temperature. Particles are generated every time new wind measurements (u, v) are available: in particular, for each measurement, N particles are generated close to the position of the aircraft that performed the measurement. Each particle carries an age (set to zero at initialization and increased at each successive step), so that particles exceeding a fixed maximum age are removed. A small variance is then added to the state carried by each particle to account for measurement uncertainties. Subsequently, the particles are assumed to move according to a Gaussian random walk model, i.e., the coordinates x_{p}, y_{p}, z_{p} (m) of the N particles (x_{p,i}, y_{p,i}, z_{p,i}, with i = 1, …, N) at the new time step t + Δt are evaluated from the positions at the previous step t using the following expressions:
$${x}_{p,i,t+\Delta t}={x}_{p,i,t}+{\Delta P}_{x,i,t}\\{y}_{p,i,t+\Delta t}={y}_{p,i,t}+{\Delta P}_{y,i,t}\\{z}_{p,i,t+\Delta t}={z}_{p,i,t}+{\Delta P}_{z,i,t}$$
In which the ΔP factor is calculated as
$${\Delta P}_{x,i,t}={k}_{2}\,{\sigma}_{u,i}\,\Delta t\\{\Delta P}_{y,i,t}={k}_{2}\,{\sigma}_{v,i}\,\Delta t$$
Along the horizontal directions (x and y), particles move according to a random track characterized by a small bias (σ), controlled by the k_{2} factor (particle random walk factor). Along the vertical direction (z), the propagation follows a zero-average Gaussian track. All the particles are resampled at the end of each step. The time step Δt is chosen according to numerical stability criteria, considering the time frame of the specific application, e.g., the size of the geographical domain, the time interval between two successive measurements, and the time step of the NWP (if the MPM is coupled with an NWP). Particles whose motion takes them outside the domain (in either the horizontal or vertical direction) are removed, while the remaining ones are resampled according to their age, using the following probability function:
$$p(\alpha )=\mathrm{exp}\left[-\frac{{\alpha}^{2}}{2{\sigma}_{\alpha}^{2}}\right]$$
where α is a number that represents the age of the particle and σ_{α} is a control parameter (aging parameter). This resampling ensures a periodic particle renewal, in such a way that the oldest ones are removed.
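The propagation and age-based resampling steps can be sketched in Python as follows. The horizontal displacement is modeled here as a zero-mean Gaussian draw with standard deviation k₂σ per step, which is one reading of the expressions above; all names and parameter values are illustrative.

```python
import math
import random

rng = random.Random(0)

def propagate(particles, sigma_u, sigma_v, sigma_w, k2=1.0, dt=1.0):
    """One Gaussian-random-walk step for each particle (x, y, z, age)."""
    out = []
    for (x, y, z, age) in particles:
        x += k2 * rng.gauss(0.0, sigma_u) * dt  # horizontal random walk
        y += k2 * rng.gauss(0.0, sigma_v) * dt
        z += rng.gauss(0.0, sigma_w) * dt       # zero-average vertical track
        out.append((x, y, z, age + 1))
    return out

def resample_by_age(particles, sigma_alpha=10.0):
    """Keep each particle with probability p(alpha); old particles die out."""
    keep = []
    for p in particles:
        alpha = p[3]
        if rng.random() < math.exp(-alpha ** 2 / (2 * sigma_alpha ** 2)):
            keep.append(p)
    return keep
```

A full step would also drop particles that leave the domain before resampling, as described above.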
4.3. Evaluation of Variable Values at a Generic Point
Numerical values of wind and temperature at any position can be calculated using the information carried by the surrounding particles. In particular, the wind at a generic position is evaluated as a weighted average of the wind values carried by the particles belonging to an ensemble P, which includes all the particles whose coordinates x, y, z are within a predetermined maximum distance from the position being considered.
$$u\left(x,y,z\right)={\sum}_{i}{W}_{p,i}{u}_{p,i}$$
$$v\left(x,y,z\right)={\sum}_{i}{W}_{p,i}{v}_{p,i}$$
$$w\left(x,y,z\right)={\sum}_{i}{W}_{p,i}{w}_{p,i}$$
where the sums extend over all the particles of the ensemble P. The weight W_{p} assigned to each particle is calculated as the product of two exponential functions:
$${W}_{p}={f}_{d}\left(d\right)\xb7{f}_{0}\left({d}_{0}\right)$$
The first function establishes a relationship between the weight itself and the distance d (m) between the particle and the position considered, and the second one establishes a relation between the weight and the distance d_{0} of the particle from its origin:
$${f}_{d}(d)=\mathrm{exp}\left[-\frac{{d}^{2}}{2{C}_{d}^{2}}\right]$$
$${f}_{0}({d}_{0})=\mathrm{exp}\left[-\frac{{d}_{0}^{2}}{2{C}_{d}^{2}}\right]$$
in which C_{d} is a control parameter (weighting parameter). These formulae are based on the Inverse Distance Weighting (IDW) technique ^{[17]}.
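A Python sketch of this evaluation step is given below. The weights are normalized so that they sum to one, consistent with a weighted average; the particle layout and the values of C_d and of the maximum distance d_max are illustrative assumptions.

```python
import math

def evaluate_wind(x, y, z, particles, C_d=100.0, d_max=300.0):
    """Weighted average of particle wind states around position (x, y, z).

    Each particle is (xp, yp, zp, x0, y0, z0, u, v, w), with (x0, y0, z0)
    its origin.  Weights follow the two exponential kernels of the text.
    """
    num_u = num_v = num_w = wsum = 0.0
    for (xp, yp, zp, x0, y0, z0, u, v, w) in particles:
        d = math.dist((x, y, z), (xp, yp, zp))      # distance to query point
        if d > d_max:                               # outside the ensemble P
            continue
        d0 = math.dist((xp, yp, zp), (x0, y0, z0))  # distance from origin
        W = math.exp(-d**2 / (2 * C_d**2)) * math.exp(-d0**2 / (2 * C_d**2))
        num_u += W * u
        num_v += W * v
        num_w += W * w
        wsum += W
    if wsum == 0.0:
        return None                                 # no particles in range
    return num_u / wsum, num_v / wsum, num_w / wsum
```

A particle close to the query point and close to its own origin dominates the average, which matches the intent of the two kernels: trust nearby particles that have not drifted far from their measurement.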
5. NWP Models
Limitations in the application of wind field reconstruction methods are related to the fact that estimates can be produced only if a sufficient number of drones is already flying in the area considered; this limitation could be mitigated using measurements from ground anemometers or data provided by Numerical Weather Prediction (NWP) models. Scientific and technological developments have steadily increased weather forecasting capabilities over the past 40 years. In ref. ^{[18]}, Mazzarella et al. investigated whether NWP-based short-range, high-resolution weather forecasts (including data assimilation) can improve the predictive capability for extreme events, in order to understand whether such forecasts are suitable to support air traffic management. NWPs take current weather observations as input through a process called data assimilation, aimed at producing initial conditions for the meteorological variables (from the oceans to the top of the atmosphere). The derivation of the current state (the analysis) of the atmosphere is treated as a Bayesian inversion problem using observations (even from drones), prior information from forecasts and the related uncertainties as constraints. These calculations involve a global minimization and are performed in four dimensions to produce an analysis that is physically consistent in space and time and can handle observational data that are heterogeneously distributed in space and time, such as those provided by drones themselves. NWPs deliver weather forecasts by solving the full set of prognostic equations that describe the evolution of wind, pressure, density and temperature in the atmosphere
^{[19]}. These equations are solved numerically using temporal and spatial discretization, which introduces a distinction between resolved and unresolved scales of motion. Physical processes on unresolved scales must be parameterized in terms of their interaction with the resolved scales. NWPs fall into two categories: General Circulation Models (GCMs) and Limited Area Models (LAMs). GCMs simulate the global atmosphere and are characterized by a low resolution. LAMs are used to obtain detailed information over a specific area of interest and allow a higher resolution. GCMs are also important because they provide initial and boundary conditions to LAMs; in the present study, attention is focused only on LAMs because they are widely used to support civil aviation.
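The Bayesian inversion underlying the analysis step described above can be illustrated with a minimal, static (3D-Var-like, rather than the four-dimensional version used operationally) sketch: the analysis minimizes a cost balancing distance to the background forecast against distance to the observations, and for linear operators the minimizer has a closed form. All matrices and values below are toy assumptions.

```python
import numpy as np

def var_analysis(xb, B, y, H, R):
    """Minimize J(x) = (x-xb)' B^-1 (x-xb) + (y-Hx)' R^-1 (y-Hx).

    Closed-form minimizer: x_a = xb + K (y - H xb),
    with gain K = B H' (H B H' + R)^-1.
    """
    innovation = y - H @ xb
    S = H @ B @ H.T + R
    K = B @ H.T @ np.linalg.solve(S, np.eye(len(y)))
    return xb + K @ innovation

# Toy example: two-component state, one observation of the first component.
xb = np.array([10.0, 5.0])               # background (previous forecast)
B = np.array([[2.0, 0.5], [0.5, 1.0]])   # background error covariance
H = np.array([[1.0, 0.0]])               # observation operator
R = np.array([[1.0]])                    # observation error covariance
y = np.array([13.0])                     # observation (e.g., from a drone)
xa = var_analysis(xb, B, y=y, H=H, R=R)
print(xa)
```

The off-diagonal term of B spreads the observed correction into the unobserved component, which is how sparse drone observations can still constrain the wider analysis.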
6. Conclusions
The treatment of the retrieved wind field is still incomplete, and a major effort from the scientific community is needed to cope with the chaotic nature of wind, the movement of aircraft, and the non-uniformly distributed network of observations. In the framework of the EDUS project, CIRA is defining an operating platform demonstrator based on the existing CIRA Meteo Service Center, in order to integrate existing data and algorithms with newer ones aimed at treating the urban wind, in particular the evaluation of the three wind components from the ground up to 3 km of altitude. A promising approach could be based on the integration of “low-cost and mobile” monitoring data with other sources, such as those available from the COPERNICUS program (copernicus.eu), especially for urban areas. Given the current lack of meteorological data available in urban environments, the usage of existing measurement networks, such as those operated by universities, regional agencies, civil protection and small airports, will be increased. Of course, the installation of new sensors will also be required, in agreement with local stakeholders. These stations will represent the “ground truth” for remote sensing data and, together with all the available data sources, will allow high-resolution monitoring and nowcasting of wind and other variables.