Research Article - (2018) Volume 6, Issue 2
Keywords: Weather forecasting; Prediction; Atmospheric dynamics; Computer modelling; Chaos
Weather forecasts are all around us: on average, we may encounter one four to five times a day, through news bulletins, newspapers, tweets, conversation, television, mobile phone apps, the internet and so on. Weather forecasting is one of the most prominent topics influencing people's lives and activities, and as a scientific activity it contributes to social and economic welfare in many sectors of society [1,2]. Weather forecasts are issued to protect life and property [3-5] and to tell us what changes to expect in the atmosphere. They provide vital information to a wide range of users: agriculture, aviation, commerce, marine services, advisories, etc. Forecasts can also significantly influence decision and policy making, construction planning, productivity and environmental risk management [6].
Weather forecasts have always played an important role in people's everyday activities [3]. For example, forecasts help people decide what to wear or what activities to do on a given day; whether the weekend will permit an outing, rally, school bash or an outdoor wedding reception [2,4]; and whether or not to put on a coat or carry an umbrella [7]. Thanks to forecasts, people are also less likely to be surprised by severe weather or other unprecedented phenomena such as tropical storms. People can also follow atmospheric changes through variables such as temperature, wind speed and direction, humidity, sunshine, cloudiness and precipitation. Variations in these parameters describe the weather, that is, the state of the atmosphere at a particular time. However, different terminologies are used to describe weather in terms of sets of fundamental quantities, and various characterizations have been proposed and employed in weather forecasting [8]. Forecasting refers to the prediction of weather through the application of the principles of physics, supplemented by a variety of statistical and empirical techniques. It can also be defined as a scientific estimate of the weather conditions at some future time [4], expressed in terms of temperature, precipitation, wind etc. ([9] and references therein). It is generally accomplished through three major steps: i) observation and data collection, ii) assimilation, processing and analysis, and iii) extrapolation to predict the future state of the atmosphere. The totality of observations, analysis, model and computer system constitutes a forecast system [10].
Weather forecasting has become an important field of research [2] and has been practised since ancient times, although the trends, methods and techniques have changed over time. It began with the early civilizations of humankind using personal experience to monitor seasonal changes and other recurring meteorological events in the atmosphere. Today, more advanced techniques and sophisticated tools are used to make weather forecasts. One of the most reliable and commonly used techniques is numerical weather prediction (NWP), which applies computer models capable of producing weather predictions of relatively high accuracy [3,11,12]. Since the 20th century, after the advent of computer technology, it has become easier to study and determine changes in the atmosphere. The development and widespread adoption of NWP techniques brought a whole range of players (e.g. computer specialists, numerical processing experts and mathematicians) on board to work with atmospheric scientists and meteorologists. Nevertheless, despite other disciplines' keen interest in NWP, weather forecasting still largely remains the preserve of meteorologists. It is one of the most important operational tasks carried out by meteorological services around the world [13]. Even so, weather forecasting still faces significant uncertainties [14,15], sometimes leading to erroneous or poor predictions despite the advances made in atmospheric studies. Much of this is attributed to the complex nature of the atmosphere, observational errors and imperfections or limitations in the tools used (e.g. computers, satellites etc.).
The purpose of this paper is to provide an overview of the current state of weather forecasting and to highlight the trends and factors influencing weather forecasts. One intriguing question is 'why do weather forecasts sometimes go bad?' Forecasting is often burdened with problems such as inadequate data, environmental degradation and/or limited computer knowledge [16], yet there is an ever-increasing demand for accurate weather forecasts, from forecasters through to their clients. The rest of this paper is arranged as follows: section 2 highlights the history and significance of weather forecasting, section 3 discusses the trend in forecasting over time, section 4 covers modern-day forecasting and NWP techniques, and chaos and uncertainties are presented in section 5. Lastly, section 6 is the summary.
As highlighted earlier, weather forecasting began with early humankind using recurring astronomical and meteorological events to monitor seasonal changes in the atmosphere. The early pioneers relied on personal experience, animal behaviour, vegetative fruiting and other folklore to gauge when rough weather was coming their way, or on historical events to predict how the weather would be in the foreseeable future. Until the end of the 19th century, weather forecasting was entirely subjective and based on empirical rules, with limited understanding of the physical mechanisms behind weather and related processes. Different cultures had their own ways of observing and making forecasts to meet their needs and experience. For example, the Babylonians predicted short-term weather changes based on cloud appearance and optical phenomena; the Chinese divided the year into 24 festivals, each associated with a certain type of weather; and the Greeks developed theories about rain, cloud, lightning and other observations. A notable example is Aristotle, whose Meteorologica, a philosophical treatise, included theories about the formation of rain, clouds, hail, wind, thunder, lightning and hurricanes, and contained some remarkable observations concerning weather. For almost two millennia, meteorological thought was based on the ideas set out by Aristotle [17]. His ideas dominated intellectual thought for centuries, although they were a product of observation rather than experimentation. Many of Aristotle's claims were erroneous, yet it was not until about the 17th century that they were overthrown.
Weather changes and atmospheric patterns were also associated with 'super beings' or gods, and a wide range of weather gods existed in various cultures [18]. Whatever the lifestyle of the ancient peoples, they developed beliefs about the world around them. These beliefs helped them explain the world and why and how things happened and could happen in the foreseeable future. Ancient people reacted to the weather in a rather fearful or superstitious manner, believing that mythological gods controlled and governed the weather elements: for example, the Egyptian sun god Ra, or Thor, the Norse god of thunder and lightning. Their livelihoods were at the mercy of the gods, for they believed that fortunes and disasters were linked with the moods and actions of their gods. Many ancient civilizations also developed practices such as rain dances and animal sacrifices in order to propitiate their weather gods [17]. Weather forecasting was not considered a science at the time; there was no experimentation or theoretical testing. It was more an art than a science [12,19]. Indeed, the ancient meteorologists appeared to be 'shamans' within their communities.
On the other hand, making claims or pronouncements about future weather was not something reputable observers were trusted to do. For example, François Arago (then director of the Paris Observatory and Permanent Secretary of the French Academy of Sciences) stated in 1846 that, "Whatever may be the progress of sciences, NEVER will observers who are trust-worthy, and careful of their reputation, venture to foretell the state of the weather." The discipline of meteorology struggled for respect for quite some time [17]. Meteorology had to become a science so that humans could understand the physical processes that make the weather [7]. Moore contends that, "For centuries, meteorology had been characterized by mystery and superstition. While many sciences (geology, botany, physics and chemistry) had flowered under enlightened analysis, meteorology barely progressed from its classical conception as a science of meteors." This clearly shows how forecasting was perceived at the time: it rested only on philosophical knowledge and on the monitoring of seasonal changes in the atmosphere, experience gathered over years of observation together with the background theory on weather [20].
Significance and importance of weather forecasting
Weather forecasting has always played a vital role in people's everyday activities. With forecast information, people can know and plan better for what is to come. For example, hunters, farmers, fishermen, warriors, shepherds and sailors needed to know how the weather might be the next day, because they worked in the open. They had to rely on 'weather wisdom' (e.g. the appearance of clouds or animal behaviour) to tell them what was coming and how it would unfold, and this experience was passed from one generation to another over time. However, it became increasingly evident that the 'speculations of the natural philosophers' were inadequate and therefore could not be relied upon to predict the atmosphere.
Understanding weather patterns and atmospheric changes has many applications across various sectors and is of broad societal importance: warnings about severe weather, agriculture (e.g. type of crops, planting and harvest times, livestock), transport services (e.g. shipping, aviation, roads), flood warning, commerce and industry, etc. It is also worth noting that weather is certainly the most important factor determining agricultural success or failure [21]. Knowing the type of weather expected at a particular time and location enables people to plan better [22] and to anticipate natural calamities such as floods and destructive winds, so that necessary precautions can be taken. For example, the growing vulnerability of densely populated areas to natural hazards increases the demand for reliable forecasts of the consequences for the safety of life and property [23]. Pilots need to plan their flights; sailors need to plan their marine activities; farmers need to plan watering, fertilizer application, harvesting, etc. Important parameters include temperature, wind speed and direction, cloud, precipitation, visibility, humidity and trends in all of the above. The timing of significant changes and the occurrence of extreme events are particularly important.
Understanding the connections between climate and weather is essential to meet societal needs and to improve early warning of potential disasters [16]. Such information also enhances our ability to predict future climate conditions; historical climate information, for example, is crucial for planning the year's production and informs long-term strategies [24]. With limited understanding and capability, early humankind found it difficult to monitor and predict atmospheric changes and to make informed decisions about the likely weather. Although attempts were made, safety against bad weather events, for example, was never guaranteed. An estimated 459 people drowned off Anglesey, an island off Wales, in 1859, when the gold-carrying ship Royal Charter was wrecked in a large storm [25]. This disaster prompted Robert Fitzroy (1805-1865: an English Royal Navy officer and scientist) to develop weather charts so that forecasts could be made to 'improve safety' at sea. He devised a code of meteorological telegraphy in cipher and instituted a regular service by means of which weather information was received from stations and issued to the public [26]. He also arranged for ship captains to be provided with weather instruments in order to collect data (e.g. barometric pressure, air and sea temperature, humidity, wind, cloud and soundings) and return it for computation [5]. Standard instruments were distributed to the navy and merchant marine, and he collected and analysed the records returned by observers.
Weather forecasting as a practice was pioneered by Robert Fitzroy in the mid-1800s. Fitzroy is famous for being the captain of HMS Beagle during Charles Darwin's voyage around the world (1831-1836) [4]; he is credited with coining the term 'forecasting the weather', and his methods were more scientific than the weather lore that had preceded them [27]. He developed fundamental techniques for weather forecasting and put a great deal of effort into studying the weather, using his own nautical charts. Several observing stations were set up across Europe, from which he recorded and collected data for analysis and for issuing daily forecasts. He applied a qualitative knowledge of the atmosphere's dynamics based on observations and the known physical explanations of the causes of circulations [28]. His main interest was to save lives, especially those of his fellow mariners [29]. For example, in a letter addressed to The Times, he wrote:
“Man cannot still the raging of the wind, but he can predict it. He cannot appease the storm, but he can escape its violence, and if all the appliances available for the salvation of life from shipwreck were but properly employed, the effects of these awful visitations might be wonderfully mitigated” [30].
However, Fitzroy's work was heavily criticized by the 'scientific gentlemen' of the Royal Society, including astro-meteorologists, and by the media, with the public relentlessly pointing out when forecasts were wrong [31]. His forecasting was regarded as inaccurate and haphazard, and of no true scientific value [32]. Yet it was through his weather records that interest in weather forecasting grew, and his many significant contributions promoted the practical use of meteorology [33]. Soon, meteorologists began to map data from various observing stations, such as temperature and humidity, onto weather charts. For example, observations of pressure and other variables were plotted in symbolic form on a weather map, produced through experience, memory and a variety of empirical rules [34]. The availability of weather maps enabled scientists to detect and study various weather phenomena and to compare the current meteorological situation with past ones, ultimately leading to more forecasts being produced. Since Fitzroy's time, weather forecasts have gone from strength to strength, with significant improvements in accuracy resulting from computer modelling in the latter part of the 20th century, along with the availability of data from satellites and automatic weather stations across the world [35].
Because it became evident that natural knowledge alone was inadequate, there was a need to understand the dynamics of the atmosphere further. For example, instruments were needed to measure properties of the atmosphere (e.g. moisture, wind speed, temperature) and how they vary with time. A great stride in monitoring the weather was made in the 1920s with the invention of the radiosonde. Technological improvement also led to a growing reliance on remote sensing techniques such as earth-orbiting satellites and other measuring instruments. While meteorological instruments and techniques were being refined, related observational, theoretical and technological developments also contributed to a better understanding of the atmosphere. For example, the invention of the telegraph in the mid-19th century allowed routine transmission of weather observations between observers and compilers. Profound developments in meteorological theory provided crucial understanding of atmospheric dynamics, compounded by advances in numerical analysis that enabled the design of stable algorithms. Also, the introduction of measuring instruments into a global network meant that timely observations of the atmosphere became available [36].
Prior to World War I, weather forecasters made little or no use of mathematical modelling. Predictions were based largely on personal experience of what would happen next and on a set of empirical rules of thumb, passed from one forecaster to another. By the end of the 19th century, a growing number of scientists were of the view that atmospheric behaviour could be modelled from first principles, using the laws of physics [37]. Forecasters needed to understand both the state of the atmosphere and the physical laws that govern how that state changes. Weather forecasting was particularly revolutionized in the 1920s by Norwegian scientists, the leading proponent being Vilhelm Bjerknes (1862-1951), who built on empirically observed conditions and described precipitation formation, cyclones and atmospheric circulation systems. He combined all the sciences needed to conduct numerical weather prediction: meteorology, physics and numerical mathematics, although it took several years before his ideas could be put into practice [38]. Bjerknes' approach was a more explicit analysis of the weather prediction problem from a scientific perspective [38]. He believed that the problem of predicting the future atmospheric evolution could be formulated mathematically in terms of seven variables (three components of air velocity, pressure, temperature, density and humidity), each being a function of time and space [39]. The Bergen School of Dynamic Meteorology was set up under Bjerknes' leadership and worked to develop systematic ways of understanding atmospheric dynamics based on mathematical modelling of its physical structure [9,28]. He worked toward a physically based way to forecast the weather, proposing that rational forecasting requires:
• a sufficiently accurate knowledge of the state of the atmosphere at the initial time
• a sufficiently accurate knowledge of the laws according to which one state of the atmosphere develops from another [10].
More data became available over time for observation-based weather forecasting; weather maps were drawn from the available crude data, and surface wind patterns and storm systems could be identified and studied. Thereafter, more weather-observing stations appeared across the globe, eventually giving rise to synoptic weather forecasting, based on the compilation and analysis of many simultaneous observations. Bjerknes' technique was later developed by Lewis Fry Richardson (1881-1953: an English mathematician and meteorologist). Richardson's scheme amounted to a precise and detailed implementation of the prognostic component of Bjerknes' programme, although it was highly intricate [37]. According to Richardson, the scheme was complicated because of the complex nature of the atmosphere. All computations had to be done manually, and he estimated that it would take about 64,000 human computers to keep pace with the weather as it actually happened; his own trial took several months of calculations of pressure changes to produce a 6-hour forecast. Its failure was later shown to be due to limited understanding of some atmospheric processes at the time. He concluded that:
“Perhaps someday in the dim future it will be possible to advance the computation faster than the weather advances and at a cost less than the saving to mankind due to information gained. But that is a dream” (Weather Prediction by Numerical Process) [40].
Although his attempt ended in spectacular failure, Richardson devised many of the principles of modern numerical weather forecasting. Today, meteorological observations are made all over the world and are used to compute the best estimate of the system's initial conditions [11], although there is great variability in the density of the observation network. Since the invention of the first weather instruments, weather observation has undergone tremendous improvement. Early forecasts based on astrology, the phenology of plant flowering and animal life cycles are no longer relied upon as they once were. Denser monitoring networks, more sophisticated instruments and communication systems, and better-trained personnel now produce increasingly detailed, reliable and representative weather records [32]. Some observations, such as those from air balloons and radiosondes, are taken at fixed locations and specific times, while others (e.g. from aircraft, ships or satellites) are made from moving platforms or from space. Increasing computing power and efficient numerical methods, as well as more sophisticated physical parameterisations, have led to a huge improvement in weather forecasts [19].
Weather forecasting can be either subjective or numerical. The former is based on describing the current daily observations of the atmosphere and what has happened in the past, from which the future is inferred. The latter uses mathematical equations and physical laws to simulate atmospheric conditions and predict the weather (see [41] and references therein). The best approaches to weather forecasting rely on generative approaches such as numerical methods or simulations [23], together with modern measuring instruments.
Numerical weather prediction and forecasting techniques
NWP is a method of weather prediction that uses computational techniques to integrate the fundamental equations forward in time on a computer. It is built around a set of partial differential equations (PDEs) and other formulations describing the dynamic and thermodynamic processes in the earth's atmosphere, comprising the equations themselves, numerical approximations, parameterizations, domain settings, and initial and boundary conditions [42]. The physical laws are programmed to model the atmosphere, written in a form the computer can process.
The earliest work on NWP can be traced back to the contribution of Cleveland Abbe (1838-1916: an American meteorologist) in the 1890s, who recognized that "meteorology is essentially the application of hydrodynamics and thermodynamics to the atmosphere" [37,38]; see also [43] and references therein. Abbe was optimistic that science would 'take the problems in earnest and devise either graphical, analytical or numerical methods' to solve the equations [1]. His goal was to make meteorology an exact science, a true physics of the atmosphere [22,37]. The idea was later taken up by Bjerknes, using diagnostic and prognostic methods and variables to predict the weather, despite the challenge of inadequate weather information in some areas. Bjerknes created a strategy for 'rational' forecasting that included a 'diagnostic step', in which the initial state of the atmosphere is determined, and a 'prognostic step', in which the laws of motion are used to calculate how that state changes over time [44]. He suggested that it would be possible to forecast the weather by solving a system of nonlinear PDEs, although the mathematics took far too long to work out by hand; the models he proposed were far too complex to be solved analytically [45].
However, the modern concept of numerical weather forecasting was initiated by Richardson during World War I, who developed the ideas of both Abbe and Bjerknes together with his own theories and reasoning. Richardson presented a set of equations describing the physical processes governing atmospheric behaviour, together with a method for their approximate solution, arguing that it should be possible to proceed from an initial to a final state of the atmosphere by a purely mathematical process [21]. Although Richardson's own example was a spectacular failure, his method underpins modern numerical weather forecasting [3]. The concept was developed further in the 1940s by Charney and colleagues, who provided a theoretical basis for overcoming the problems Richardson faced. The simplified equations they proposed, combined with the newly built electronic computer, marked a milestone in NWP: the first computer-generated prediction of the mid-tropospheric flow was produced by Charney, Fjørtoft and von Neumann in 1950.
Models comprise the fundamental laws together with parameterised physical and chemical components of the atmosphere. The state of the atmosphere is described at a series of grid points by a set of variables such as temperature, pressure, velocity and humidity. The laws are expressed as mathematical equations, averaged over time and grid volumes, describing the evolution of these variables. They are solved by replacing time derivatives with finite differences and discretising in space with either finite-difference schemes or spectral methods, giving the state of the atmosphere as a function of time. The equations are converted into a computer program, defining among other things the interactions between the variables and other formulations, and integrated forward in discrete time steps (making the model predictive) to simulate changes in the atmosphere. In this context, the model is a computer program that produces meteorological information at given locations [32]. All numerical models are based on the same set of governing laws used to predict the physics and dynamics of the atmosphere [46].
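To make the grid-point, finite-difference, time-stepping idea concrete, here is a minimal illustrative sketch in Python. It is not an operational NWP code: it advances a single temperature-like field under one-dimensional advection by a constant wind, standing in for the full three-dimensional primitive equations, and the grid spacing, time step, wind speed and initial field are arbitrary values chosen only for illustration.

```python
# Minimal illustrative sketch (not an operational NWP code): a temperature-like
# field on a one-dimensional periodic grid is advected by a constant wind and
# stepped forward in discrete time steps with an upwind finite-difference
# scheme. Real models do the analogous thing in 3-D for many coupled variables.
import numpy as np

nx, dx, dt = 100, 100_000.0, 600.0    # 100 grid points, 100 km spacing, 10-min step
u = 10.0                              # constant advecting wind in m/s (illustrative)

x = np.arange(nx) * dx
T = 280.0 + 10.0 * np.exp(-((x - 5.0e6) ** 2) / (2 * (5.0e5) ** 2))  # warm anomaly

def step(T, u, dx, dt):
    """One forward-in-time, upwind-in-space step of dT/dt = -u * dT/dx."""
    dTdx = (T - np.roll(T, 1)) / dx   # upwind difference (valid for u > 0)
    return T - u * dt * dTdx

for _ in range(144):                  # 144 steps of 10 min = a 24-hour "forecast"
    T = step(T, u, dx, dt)

print(f"maximum of the advected field after 24 h: {T.max():.2f} K")
```

The same pattern, discretise in space, evaluate tendencies from the governing equations and step forward in time, underlies real models, which do it for many coupled variables in three dimensions with far more sophisticated numerics.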
The mathematical formulation of atmospheric models used in weather forecasting is based on the equations of compressible fluid mechanics, stemming from three fundamental principles: conservation of momentum, conservation of mass and the thermodynamic (energy) equation [8]. The various physical quantities that characterize the state of the atmosphere are assumed to have unique values at each point in the atmospheric continuum, and these field variables (e.g. pressure, density, temperature, velocity) and their derivatives are assumed to be continuous functions of space and time. The governing relations include:
Ideal gas law [47-51], given by:

p = ρRT

where p is pressure, ρ is density, R is the gas constant for dry air (287 J kg⁻¹ K⁻¹), a constant that does not change with time, and T is temperature.
Hydrostatic equation [30,33], given by:

∂Φ/∂p = −RT/p

where Φ is the geopotential at a given height/altitude, T is temperature, and p and R are as defined above. This means that the thickness of an atmospheric layer bounded by two isobaric surfaces is proportional to the mean temperature of the layer: pressure decreases more rapidly with height in a cold layer than in a warm layer.
Rate of change of wind (momentum) equation [30,33], given by:

dV/dt = −2Ω×V − (1/ρ)∇p − gk + ν∇²V

where 2Ω×V is the Coriolis term (Ω is the angular velocity of planetary rotation and V = iu + jv is the horizontal velocity vector); the Coriolis parameter [34] is given by f = 2Ωsinφ. ∇p is the horizontal pressure gradient, g is the gravitational field strength, k is the vertical unit vector and ν∇²V is the viscous (friction) term. The Coriolis force describes the deflection of air masses and fluids due to the earth's rotation: air is deflected to the right in the northern hemisphere and to the left in the southern hemisphere.
Thermodynamic equation, given by:

dT/dt = QH + QD

where QH and QD represent the adiabatic and diabatic heating of the air, respectively. Heat can be added to a unit air mass by external sources (e.g. solar and thermal radiation, turbulent heat exchange or phase transformations of atmospheric moisture).
Conservation of water mass [30,63], given by:

dq/dt = QE − QC

where q is the mixing ratio, and QE and QC represent the rates at which water vapour evaporates into and condenses out of the air, respectively.
Conservation of air mass (continuity equation) [14,30], given by:

∂ρ/∂t + ∇·(ρV) = 0

where ∇ is the 3-dimensional gradient operator and V is the 3-dimensional wind.
Potential temperature [30,33,51], given by:

θ = T(p₀/p)^κ

where κ = R/cp, cp is the specific heat capacity at constant pressure and R is as defined earlier.
This relationship is referred to as Poisson's equation, and θ is called the potential temperature: the temperature that a parcel of dry air at pressure p and temperature T would have if it were expanded or compressed adiabatically to a standard pressure p₀ (1 atm = 1.01325 × 10⁵ Pa). For dry air, κ = 0.286. Potential temperature provides a useful label for an air parcel because it remains unchanged no matter how many adiabatic processes occur; the parcel conserves this property.
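As a simple numerical illustration of the diagnostic relations listed above (the ideal gas law, the Coriolis parameter and Poisson's equation for potential temperature), the short Python sketch below evaluates them for representative values; the chosen pressure, temperature and latitude are arbitrary and used only for illustration.

```python
# Worked example of the diagnostic relations above, using representative values
# chosen only for illustration (850 hPa, 280 K, 45 degrees latitude).
import math

R, cp = 287.0, 1004.0            # dry-air gas constant and specific heat (J kg^-1 K^-1)
kappa = R / cp                   # ~0.286, as quoted in the text
Omega = 7.292e-5                 # Earth's rotation rate (rad s^-1)
p0 = 1.01325e5                   # reference pressure used in the text (1 atm, Pa)

p, T, lat = 85000.0, 280.0, 45.0
rho = p / (R * T)                              # ideal gas law: p = rho R T
f = 2.0 * Omega * math.sin(math.radians(lat))  # Coriolis parameter f = 2 Omega sin(phi)
theta = T * (p0 / p) ** kappa                  # potential temperature (Poisson's equation)

print(f"density rho           = {rho:.3f} kg m^-3")
print(f"Coriolis parameter f  = {f:.2e} s^-1")
print(f"potential temperature = {theta:.1f} K")
```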
Numerical weather prediction has been viable since the 1960s. Before then, forecasters collected as much data as possible about the atmosphere and made predictions based on their own experience and intuition; before the computer era, the principles of theoretical physics played little role in practical forecasting [37]. Although much of the underlying physics was known, its application to predicting atmospheric conditions was impractical [38]. A key development in weather forecasting came in the immediate postwar period, when 'nascent' computer technology was introduced to solve the complex equations and put the theory of dynamic meteorology into practice [28]. Bjerknes had theorized that the atmosphere must obey the basic laws of physics, stating in 1904 that 'subsequent atmospheric states develop from the preceding ones according to physical law'. By expressing the laws as mathematical equations, real observations of the atmosphere could be used in a mathematical model to simulate the atmosphere.
Data describing the current state of the atmosphere are collected, and scientific understanding of atmospheric processes is used to infer how the atmosphere will evolve over time [32]. However, some physical processes are active at scales smaller than the horizontal grid size and therefore cannot be resolved explicitly by the model. In some cases they are accounted for explicitly, while in others they are neglected or approximated through corrections [8]: the effects of physical processes are included implicitly (or indirectly) when they cannot be included explicitly. The approximation of such unresolved processes is referred to as parameterisation. Parameterisation can be thought of as 'modelling the effects of a process rather than modelling the process itself'. Processes treated in this way include radiative transfer, microphysics (e.g. moist processes such as cloud and rain drops), turbulent mixing and atmospheric emissions (e.g. aerosols and gases, air quality). Figure 1 illustrates different parameterised processes approximated in the model. There are many types of parameterisation scheme used in different models; they strongly influence model forecasts [55] and interact with each other in the model atmosphere. The choice of parameterisation also has to account for computational cost (e.g. model running time, resolution used and other resource requirements), especially as the complexity of the scheme increases: there must be a balance between the scheme chosen and its computational cost, since one scheme can require considerably more simulation time than another.
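To illustrate the idea, the toy sketch below expresses the bulk effect of one unresolved process, condensation in a supersaturated grid box, as a tendency of the resolved grid-scale temperature and moisture. The relaxation timescale, the saturation formula and all constants are illustrative assumptions and do not correspond to any operational scheme.

```python
# Toy parameterisation sketch: the bulk effect of an unresolved process
# (condensation when a grid box is supersaturated) is written as a tendency of
# the resolved grid-scale variables. The relaxation timescale and constants are
# illustrative assumptions, not taken from any operational scheme.
import numpy as np

def saturation_mixing_ratio(T, p):
    """Approximate saturation mixing ratio (kg/kg) from a Tetens-type formula."""
    es = 611.2 * np.exp(17.67 * (T - 273.15) / (T - 29.65))  # sat. vapour pressure (Pa)
    return 0.622 * es / (p - es)

def condensation_tendency(T, q, p, tau=1800.0, L=2.5e6, cp=1004.0):
    """Relax moisture back to saturation over timescale tau; return (dT/dt, dq/dt)."""
    qs = saturation_mixing_ratio(T, p)
    excess = np.maximum(q - qs, 0.0)     # act only where the box is supersaturated
    dq_dt = -excess / tau                # moisture removed by condensation
    dT_dt = -(L / cp) * dq_dt            # corresponding latent heating of the grid box
    return dT_dt, dq_dt

# Example: a single grid box at 850 hPa that is slightly supersaturated.
dT_dt, dq_dt = condensation_tendency(T=280.0, q=0.0075, p=85000.0)
print(f"heating {dT_dt * 3600:.2f} K/h, drying {dq_dt * 3600 * 1000:.3f} g/kg per hour")
```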
According to Chattopadhyay [11], data are interpolated onto a global grid comprising many grid points (Figure 2). However, there are cases where data are insufficient or sparse, with measurements not covering all of the globe and/or not taken at the set points. Moreover, because observations cannot be used directly to start the model integration, they are modified in a dynamically consistent way to obtain a suitable data set [11]. Input data need to be interpolated, smoothed and filtered, a process usually referred to as data assimilation. Real observational data are combined with previously predicted conditions to give the best possible estimate of the state of the atmosphere.
Figure 2: A model system showing grid points, built from a set of differential equations based on the laws of physics, fluid mechanics and atmospheric chemistry. The planet is divided into a 3D grid; the basic equations are applied and the results evaluated to calculate variables such as the wind field, heat transfer, radiation, temperature and pressure within each grid cell.
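The essence of this blending step can be shown with a minimal scalar example: a model background value and an observation are combined with weights set by their assumed error variances, as in a single optimal-interpolation (Kalman-type) update. The numerical values are purely illustrative.

```python
# Minimal scalar illustration of data assimilation: a model background value and
# an observation are blended with weights given by their assumed error variances
# (a single optimal-interpolation / Kalman-type update). Numbers are illustrative.

def assimilate(background, observation, var_b, var_o):
    """Return the analysis value and its error variance."""
    gain = var_b / (var_b + var_o)            # weight given to the observation
    analysis = background + gain * (observation - background)
    var_a = (1.0 - gain) * var_b              # smaller than either input variance
    return analysis, var_a

# Example: the model background says 285.0 K, a nearby station reports 287.0 K.
analysis, var_a = assimilate(background=285.0, observation=287.0, var_b=1.0, var_o=0.5)
print(f"analysis temperature: {analysis:.2f} K (error variance {var_a:.2f})")
```

Operational assimilation systems (e.g. variational or ensemble methods) apply the same principle to millions of variables and observations at once, with spatially correlated error statistics.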
Many studies have been devoted to improving numerical modelling through more advanced numerical methods, better representation of atmospheric dynamics and improved parameterisation schemes. However, these developments can be hampered by inadequate observations (data voids), limited understanding of atmospheric physical processes and the chaotic nature of atmospheric flow. Uncertainties will therefore always exist in both the initial conditions and the numerical prediction results. Climate scientists and weather researchers strive to understand the behaviour of such processes through models better suited to investigating interactions that are otherwise difficult to observe [52], and many models are developed to improve precision and to accommodate more complex processes.
Weather variables are heavily influenced by the interaction of several factors [23]. Forecasters need to know the current state of the atmosphere; anything missed can result in controversy from the public or from forecast clients. There are a number of reasons why weather forecasts sometimes go bad or wrong. First, uncertainty comes from the initial conditions and from the models (tools) themselves; forecasting tools and methods also vary with the forecast period. The chaotic nature of the atmosphere, the computational power required, errors in measurements and incomplete understanding of the atmosphere mean that forecasts become less accurate as the gap between the current time and the time for which the forecast is made increases [2,32]. Measured or observed data (including the initial conditions) and the atmospheric equations are used to forecast the future state of the atmosphere, so errors arise when measurements are not sufficiently precise or detailed; when there is bias in the choice of weather stations (e.g. stations located near airports or other active hubs, where temperatures may be higher due to heat from nearby activities); when computer programs do not store data to infinite precision; and when actions by humans, animals or institutions that are not modelled in the atmospheric system nevertheless perturb it appreciably.
A further hindrance can arise from the fact that the available forecast may not sample the atmosphere on a grid fine enough to pick up local events or resolve small-scale phenomena. Uncertainty due to the parameterisation of sub-grid-scale processes also plays a crucial role in prediction quality ([31] and references therein). In general, weather forecasting is tedious work and can become a boundary-value problem requiring good information on all external factors that influence the weather or climate over time (e.g. variations in land use, emissions, volcanic eruptions, solar radiation). Similarly, numerical projections of future climate always involve extrapolation beyond the time range for which the models may have been developed or tested. Furthermore, models include equations accounting for the effects of small-scale processes that cannot be explicitly represented because the resolution is not high enough, and this can directly or indirectly affect the accuracy of the predictions. Forecasters therefore sometimes have to perform enormous computations (simulations) in order to obtain better forecasts; this can be resource intensive, depending on the type and duration of the simulation.
Consequently, the further one looks into the future, the more of the model's predictive skill is lost; useful prediction accuracy is limited to a few days. Edward Lorenz [35] asserted that even with a perfect model and near-perfect initial conditions, weather forecasting is limited to about two weeks. He argued that no matter how good the observational network or the forecasting procedures, there is almost certainly an insurmountable limit on how far into the future one can forecast. In a nutshell, no matter how good the forecast tools are, small errors will always exist and can grow into large errors within a certain amount of time. Below we highlight and describe some of the factors influencing numerical weather prediction and its accuracy, as alluded to by [52] and [53], and why weather forecasts sometimes go bad.
Imperfect knowledge about the current state of the atmosphere (data voids)
• Insufficient or missing data in between observing locations
• Instrument sensitivity and errors
• No observation of important quantities
• Errors in data assimilation (imperfect statistical and numerical forecast methods)
The overall impact of this is a limitation on the observations. Even without considering the accuracy of weather measurements, atmospheric features can go unobserved when measurements cannot be made or accessed everywhere. For example, satellites, radars and radiosondes may not provide all the information missed by surface observations. Similarly, data inconsistencies between different tools, and incomplete understanding of the complexities and interactions between atmospheric physics and chemistry, can be further causes of error in forecasting.
Imperfect computer model
• Need more powerful and faster computers for more grid points
• Need for better understanding of the physical processes going into the model grid
• Limited resolution (processes represented must be truncated spatially, temporally and physically)
• Systematic and/or random errors
Computer forecasts can be deficient in that they neglect small-scale effects and/or approximate complicated physical processes, for example processes occurring at scales smaller than the horizontal model grid size. The approximation of such processes in terms of model-resolved variables is done through parameterisation, one of the most difficult and controversial areas in weather modelling [11]; models must treat or parameterise the effects of the sub-grid scale on the resolved scale [25]. Parameterisation is a significant source of random error, for example at grid points over the tropical warm-pool area during a period of organised deep convection. According to Buizza [11], the actual contributions to the tendencies due to parameterisation are often associated with organised mesoscale convective systems whose spatial extent may be comparable with the model resolution.
Chaos
• Sensitive dependence on initial conditions (the atmosphere can react very differently to slightly different initial conditions), the 'butterfly effect'
• Small differences in initial conditions can amplify rapidly into large changes (e.g. forecast skill is lost with increasing lead time and is case specific)
Even small variations at a given time can 'balloon' into much bigger changes. The word 'chaos' was coined to describe this sensitivity to initial conditions, a property popularly known as the 'butterfly effect': the atmosphere exhibits a dynamical sensitivity such that predictions lose skill at some point in the future [54]. This was captured in the 'chaos theory' pioneered by Lorenz in the 1960s, who described how a butterfly flapping its wings could set up air currents that, under the right conditions, could alter the weather many miles away. Golestani and Gras [20] also note that chaotic behaviours are strongly dependent on initial conditions, and that small changes in initial conditions can lead to immense changes at subsequent time steps, making them particularly difficult to predict.
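This sensitivity can be demonstrated with the simple system Lorenz introduced in 1963. The sketch below uses his standard parameter values and a basic explicit integration (the step size and initial states are choices made here for illustration): two trajectories that start almost identically are integrated side by side, and the printed separation grows by many orders of magnitude before saturating, which is exactly why forecast skill is lost with increasing lead time.

```python
# Small sketch of sensitive dependence on initial conditions using the Lorenz
# (1963) system with his standard parameters (sigma=10, rho=28, beta=8/3).
# Two trajectories that start a tiny distance apart diverge rapidly.
import numpy as np

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system by one explicit Euler step."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

a = np.array([1.0, 1.0, 1.0])        # reference ("true") initial state
b = a + np.array([1e-8, 0.0, 0.0])   # almost identical initial state

for step in range(1, 30001):         # integrate 30 nondimensional time units
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 10000 == 0:
        print(f"t = {step * 0.001:5.1f}  separation = {np.linalg.norm(a - b):.3e}")
```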
Some of the problems encountered in weather forecasting also arise from the fact that we do not fully understand everything that happens in the atmosphere. A wide variety of factors influence the weather in many ways. The fact that many relevant processes occur at scales smaller than the model grid introduces uncertainties, because we do not always have a good understanding of the behaviour of such processes, particularly their response to feedbacks [26]. In the real world, a wide variety of forces are in play at the same time, so it can be difficult (and sometimes impossible) to identify the dominant factors in the system. Although observations, theories, models and other tools continue to improve, the earth-atmosphere system is immensely complex, and any attempt to predict the future state of the atmosphere or climate is bound to involve at least some uncertainty about the science involved. Figure 3 summarizes some of the factors associated with forecasting uncertainties in numerical weather prediction.
(Source: http://www.crh.noaa.gov/Image/lmk/pdf/WeatherForecastUncertainty.pdf)
Figure 3: Forecasting uncertainties in NWP. Several factors can render weather prediction unreliable and make forecasts go bad.
Weather forecasting has come a long way since the early civilizations and will continue to develop. Humans have always been interested in the weather and have sought methods to know the weather in advance [7]. Early weather forecasts and predictions were made on the basis of personal experience, without experimentation or testing, but it became increasingly evident that a deeper understanding of the atmosphere's dynamical behaviour was vital. Many scientists realized that the atmosphere could be modelled from the physical laws that govern its state, but it was the pioneering work of Bjerknes and Richardson in the 1920s that kicked off the development of modern weather forecasting [17]. Weather forecasting remains a complex and challenging science, depending on the efficient interplay of weather observation, data analysis, computing and rapid communication [32]. Generating weather information involves a vast infrastructure of space-based and earthbound observations, numerical weather models and scientific knowledge, requiring significant coordination [44]. Today, forecasts are prepared routinely on powerful computers running algorithms that produce vast amounts of data.
Modern-day skilful weather forecasts are based on the NWP approach, involving computer-based modelling from a set of initial atmospheric and environmental conditions. Advances in NWP represent a quiet revolution, because they have resulted from a steady accumulation of scientific knowledge and technological development over the years [6]. However, although computer models and other techniques have improved weather forecasting, significant challenges and uncertainties remain [40,57]. The problems arise from the attempt to observe, analyse and predict the many interrelations of the atmosphere, the physical features of the earth, and the properties and motions of the atmosphere; this is the basis on which weather forecasts go wrong (see [32] and references therein). Given the complexity of the atmospheric system, the tools cannot always successfully model important elements of the atmosphere [58,59], even the most sophisticated tools available, and advanced technology offers no guarantee that forecasts will always be spot on. Nevertheless, the importance of accurate weather forecasts cannot be overemphasized, since people rely heavily on sound knowledge of the atmosphere [32] for their everyday life and planning activities. Weather forecasts show that meteorology can at least make good estimates of the future state of the atmosphere.
This study discusses how weather forecasting evolved and developed from the early human civilizations to present-day weather prediction. It gives an insight into weather prediction from the old days, when people relied on personal experience and observations of weather phenomena and animal behaviour, through Fitzroy's fundamental forecasting techniques in the mid-1800s, to the work pioneered by Bjerknes and Richardson in the early 1900s around the First World War, and on to the advent of computer technology and NWP techniques. Although it covers rather dated methods, the scope is to highlight the historical trend and evolution of weather forecasting from ancient times to the present day. It is worth noting that even today, new methods, techniques and instruments are being developed while others are being improved.