As in the previous three interim reports, there is no change to report here with respect to the initial description in the contract. Since this is a joint interim and final report, however, that description is copied just below.
«The following is the methodology that will be employed (slight variations may occur):
to continuously gather potential input for the work from several sources: results of upstream research (either in-house or published) in all related fields (experimental, computational or analytical), theoretical and/or numerical analysis of the encountered NWP problems, and feedback from operational experience with the code one wants to upgrade;
to convert this input into the analysis of an approach that is feasible under the severe constraints of NWP: speed, reliability, generality, modularity, flexibility, ...;
to develop (usually under relaxed conditions, for a first quick look) a prototype problem and the coded application of the chosen idea to this problem;
to assess the results obtained and to decide whether or not the idea is worth pursuing (if «not», especially in the case of a blocking problem, one is sent back to stage zero of the described process);
in case of continuation of the effort, the likely interactions with all other aspects of the complex NWP software ensemble have to be carefully scrutinised, and the consequences for the envisaged full development taken into account at that stage;
to undertake the full development effort, with a view to operational implementation, or at a minimum in order to create a clean option that might become useful, even if only at a later stage;
to start a complex validation process that aims at verifying both the fulfilment of the basic aims of the work and the promises of the prototype tests, while moving towards real cases of interest for NWP; the danger of a wrong ratio of false alarms vs. missed events (a small verification sketch follows this quotation) and the need to work under several meteorological conditions (seasons, active or stable situations, highly predictable as well as very sensitive events, ...) have to be taken into account as far as technically feasible;
if possible at that stage, even for changes limited to the pure model part of the code, the data assimilation aspects should be part of the process, owing to their capacity to detect slowly accumulating effects: the combination of short forecasts, each leading to a new analysis step, creates a long model integration that is perturbed by the corrections towards observed values but is nevertheless able to exhibit some characteristics of the model's long-term behaviour (a toy cycling sketch is given after this quotation);
in case of success of this validation procedure (most of the time at the price of several new tuning efforts), one has to decide whether the created option is ready for a potential operational implementation or whether it should be kept waiting for other circumstances (more computing power, more central memory, other items needed to form a coherent implementation package, ...); in the second case, publicising the results of the study is of paramount importance, since all other new developments and/or basic research efforts should at least consider whether or not to activate this option; given the complexity and the distributed nature of the work associated with ALADIN, this is one of the most difficult tasks of the project;
in the first case, one reaches the stage of parallel testing, in which the chosen option (or a combination of several options, when their effects are believed to be either additive or non-interacting) is introduced as the sole modification with respect to the current operational application, run in real time, and verified comparatively against the weather that occurs a few hours or days afterwards; here success means introduction into the operational procedure, and failure a return to one of the intermediate steps of the procedure (which one depends on the gravity of the arguments that led to the decision not to implement).
This NWP-specific method is born of three facts. First, meteorologists do not have a controlled laboratory: the atmosphere is their only «truth», and it never passes twice through the same state at any place. Hence a model is not only a forecasting tool but also a proxy laboratory for experimenting with new ideas that otherwise could not be put to any acid test. While this concept leads, in upstream research, to studying the models in order to understand their behaviour nearly as much as that of the atmosphere, in the case of research for NWP it introduces the above-mentioned complex hierarchy of testing procedures. Second, the need to consider only applications that can run without ever diverging or failing, and at a speed characterised by the rule of thumb «one minute of elapsed time for one hour of simulation», introduces constraints that must be taken into account at a quite early stage of the process, something unknown in many other scientific activities. Third, atmospheric behaviour is chaotic, and its modelling reflects this fact through very non-linear characteristics. The truth about the validity of any assumption or tuning choice is thus very delicate to capture and may require an enormous amount of computing power (until statistical significance is reached; a paired-bootstrap sketch is given below), barring the design of rather sophisticated multi-stage validation protocols that aim at avoiding non-linearly induced pitfalls.»
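To make the verification trade-off in the validation step above concrete, the following is a minimal sketch, in Python, of the standard categorical scores computed from a 2x2 contingency table for a yes/no event (e.g. precipitation above a threshold). The counts and the function name are invented for illustration and do not represent the ALADIN verification suite.

    def categorical_scores(hits, false_alarms, misses, correct_negatives):
        """Standard 2x2 contingency-table scores used in NWP verification.
        correct_negatives is kept for completeness; it enters skill scores
        such as Heidke's, though not the three ratios computed here."""
        pod = hits / (hits + misses)                    # probability of detection
        far = false_alarms / (hits + false_alarms)      # false alarm ratio
        bias = (hits + false_alarms) / (hits + misses)  # frequency bias
        return pod, far, bias

    # A scheme that over-forecasts the event lowers the misses but inflates
    # the false alarms: exactly the "wrong ratio" the validation step warns about.
    pod, far, bias = categorical_scores(hits=42, false_alarms=31,
                                        misses=9, correct_negatives=918)
    print(f"POD={pod:.2f}  FAR={far:.2f}  bias={bias:.2f}")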
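The remark on data assimilation cycling can likewise be illustrated with a deliberately simplified scalar toy: repeated short forecasts, each corrected towards noisy observations, accumulate analysis increments whose mean exposes a slow model drift that any single short forecast would hide. The "model", its bias and the analysis weight below are all invented for the sketch.

    import random

    TRUTH = 280.0          # the atmospheric "truth" (e.g. a temperature, in K)
    MODEL_BIAS = 0.2       # small per-cycle drift the short forecasts accumulate
    OBS_ERROR = 0.5        # observation noise (standard deviation)
    WEIGHT = 0.3           # weight given to the observation in the analysis

    state = TRUTH
    increments = []
    for cycle in range(200):
        forecast = state + MODEL_BIAS                    # short, biased forecast
        obs = TRUTH + random.gauss(0.0, OBS_ERROR)       # noisy observation
        analysis = forecast + WEIGHT * (obs - forecast)  # correction towards obs
        increments.append(analysis - forecast)
        state = analysis

    # Individual increments are noisy, but their mean settles near -MODEL_BIAS,
    # revealing the slowly accumulating effect mentioned in the methodology.
    print(f"mean analysis increment: {sum(increments) / len(increments):+.3f}")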
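Finally, the point about statistical significance explains why score differences between two model versions are judged over many independent cases rather than a handful. A minimal paired-bootstrap sketch, with synthetic per-case RMSE values standing in for real verification data, could look as follows.

    import random

    random.seed(1)
    n_cases = 120
    # Synthetic per-case RMSE for the reference and the candidate version;
    # the candidate carries a tiny built-in improvement for illustration.
    rmse_old = [2.0 + random.gauss(0.0, 0.4) for _ in range(n_cases)]
    rmse_new = [old - 0.05 + random.gauss(0.0, 0.15) for old in rmse_old]

    diffs = [n - o for n, o in zip(rmse_new, rmse_old)]  # negative = improvement

    # Resample cases with replacement and record the mean difference each time.
    boot_means = []
    for _ in range(5000):
        sample = [random.choice(diffs) for _ in range(n_cases)]
        boot_means.append(sum(sample) / n_cases)

    boot_means.sort()
    lo, hi = boot_means[int(0.025 * 5000)], boot_means[int(0.975 * 5000)]
    print(f"mean diff {sum(diffs) / n_cases:+.3f}, 95% CI [{lo:+.3f}, {hi:+.3f}]")
    # Only if the whole interval lies below zero would one call the change
    # significant; with chaotic error growth, n_cases must often be large.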
In fact, one can say retrospectively that all the progress registered thanks to the ALATNET effort closely followed the above-described method, as anticipated at the start of the network. More importantly, perhaps, all young researchers were trained in awareness of this methodology and its tight constraints, a training that should serve them throughout their further scientific activity.