Here as before, by operational management I'm talking about the real-time operations on a timeframe of minutes to hours, as opposed to tactical management (several days to weeks).
I'm going to explore this in reference to two case studies: the two largest hydroelectric dams on Earth.
Itaipu dam, on the border between Brazil and Paraguay, is the world's largest producer of hydroelectric power. In 2012 it produced 98,287 GWh of electricity. Villwock et al. (2007) report that it opened in 1984 and that in 1986 2,200 monitoring instruments (e.g. piezometers, extensometers, pendulums, and flow gauges) were installed. These have been periodically collecting information about how the dam is functioning, and the data now forms a vast store of useful information. In 2007, the Itaipu company converted 210 of these to automatic data monitors collecting information in real time around the clock.
Itaipu dam
The second case study is the Three Gorges Dam across the Yangtze River in China. Although conceived in the 1920s, the dam opened in 2008. It stretches more than 2 km across the river and is 200 meters high. The reservoir behind the dam extends back 600 km. In 2012, the dam produced 91,100 GWh of electricity. It is one of the most monumental pieces of infrastructure ever built.
Three Gorges Dam
Producing such vast amounts of electricity, both of these dams have undeniable benefits to society. However, both projects are controversial with regard to their effects on the environment and on society. In the past, dams and infrastructure like these have been operated largely manually, on the basis of periodically collected data. By integrating automatic data collection and modelling, they can be operated far more efficiently and these impacts can be reduced.
The first question to explore is how, in real and tangible terms, automating the observation and operation of these dams is beneficial. To start with, it provides the ability to accurately forecast system behavior and requirements. Information relating to social variables, such as electricity usage figures, allows the dam's turbines to be activated when required, avoiding blackouts and unnecessary use of the water stored in the reservoir. Usage can also be predicted with models whose output is fed back into the dam's operation system. For example, imagine the Chinese Meteorological Agency's weather model predicts an approaching snowstorm. This forecast can be fed into a model of the behavior of the Chinese population, which might (again, purely for example) imply that people will stay in, watch television, boil water for mugs of tea and turn on their heaters. With this information, the dam can open and close its turbines to produce just the right amount of electricity at just the right time.
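To make the forecasting idea concrete, here is a minimal sketch in Python: a demand model fitted to (temperature, demand) history with ordinary least squares, then used to decide how much generation to bring online for a forecast cold snap. Every number here is invented for illustration (none are real Itaipu or Three Gorges figures), and a real operation system would use far richer models than a single straight line.

```python
# A toy demand-forecasting sketch: fit demand as a linear function of
# temperature, then size generation to meet the predicted load.
# All figures are hypothetical illustrations.

def fit_linear(xs, ys):
    """Closed-form least squares for y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Historical (temperature degC, demand MW) observations -- invented.
history = [(30, 9000), (25, 9500), (20, 10200), (15, 11000), (10, 12100)]
temps, demand = zip(*history)
a, b = fit_linear(temps, demand)

forecast_temp = 5              # the weather model predicts a cold snap
predicted_mw = a + b * forecast_temp

TURBINE_MW = 700               # capacity per turbine -- hypothetical
turbines_needed = -(-predicted_mw // TURBINE_MW)   # ceiling division
print(f"predicted demand: {predicted_mw:.0f} MW, "
      f"open {turbines_needed:.0f} turbines")
```

The point of the sketch is the feedback path: an external forecast (weather) enters a demand model, and the model's output directly drives an operational decision (how many turbines to open).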
Secondly, these automated data monitors and models can notice faults and problems in the system much faster, on average, than human operators would. Costs and system closures are reduced because issues are fixed before they become major problems.
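A simple way to picture this automated fault-spotting is a rolling statistical check over a sensor stream: flag any reading that deviates sharply from the recent baseline. The sketch below uses a rolling z-score threshold over invented piezometer readings; real thresholds would be tuned per sensor.

```python
# Toy fault detection: flag readings more than `threshold` standard
# deviations from the rolling mean of the previous `window` readings.
# Readings and thresholds are invented for illustration.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=5, threshold=3.0):
    """Yield (index, value) pairs for out-of-baseline readings."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield i, value
        recent.append(value)

# Simulated piezometer pressure readings with one sudden spike.
pressures = [101.0, 100.8, 101.1, 100.9, 101.0, 101.2, 109.5, 101.1]
alerts = list(detect_anomalies(pressures))
print(alerts)   # the spike at index 6 is flagged
```

Because this check runs on every reading as it arrives, the spike is flagged the moment it occurs rather than at the next manual inspection.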
Thirdly, all this information is assembled in huge databases. These can then be mined, using machine learning algorithms and statistics, to identify the driving mechanisms behind system behavior. For example, it might be identified that one of the major variables driving electricity demand at the Itaipu dam is football match times.
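The football example can be sketched in miniature: compare average demand during match hours against other hours in a historical table. The rows below are invented, and a real mining pipeline would test many candidate variables with proper statistics rather than one hand-picked flag.

```python
# Toy data mining: measure the "lift" in demand associated with one
# candidate driver variable (football matches). Data is invented.
from statistics import mean

# (hour, demand_mw, match_on) rows from a hypothetical load database.
rows = [
    (18, 9800, False), (19, 10100, False), (20, 11900, True),
    (21, 12100, True), (22, 10200, False), (20, 11800, True),
    (21, 12000, True), (19, 10000, False),
]

match_load = mean(d for _, d, on in rows if on)
other_load = mean(d for _, d, on in rows if not on)
lift = match_load / other_load - 1
print(f"match hours: {match_load:.0f} MW, other hours: {other_load:.0f} MW, "
      f"lift: {lift:+.1%}")
```

A discovered driver like this then feeds straight back into the forecasting step: match fixtures become another input to the demand model.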
The type of system I'm describing is what engineers call Knowledge Discovery in Databases (KDD). Maimon and Rokach (2010) describe such systems as a constant cycle of data collection, processing, mining, and result interpretation and evaluation. The effects of applying this type of system are profound. Electricity costs can be drastically reduced. Considering that energy costs are one of the largest single expenses (10-50%) in industrial processing, chemical plants, agriculture, pharmaceuticals, plastics, pulp, metal, mining and manufacturing, changes in energy costs will have a profound societal effect.
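The cycle Maimon and Rokach describe can be laid out as one iteration of a feedback loop. In the skeleton below every function body is a stand-in of my own invention, just to show how collection, processing, mining and interpretation chain together.

```python
# A skeleton of one pass through the KDD cycle: collect -> process ->
# mine -> interpret. All function bodies are illustrative stubs.

def collect():
    """Pull the latest sensor batch (stubbed with fixed readings)."""
    return [101.0, 100.9, 101.2, 108.7]

def preprocess(batch):
    """Clean the data: here, just drop implausible readings."""
    return [x for x in batch if 50.0 < x < 200.0]

def mine(batch):
    """Extract a 'pattern': the largest deviation from the batch mean."""
    mu = sum(batch) / len(batch)
    return max(abs(x - mu) for x in batch)

def interpret(pattern, alarm_at=5.0):
    """Evaluate the mined result and decide on an action."""
    return "raise alarm" if pattern > alarm_at else "keep monitoring"

batch = preprocess(collect())
action = interpret(mine(batch))
print(action)
```

In a deployed system this loop runs continuously, with each interpretation step feeding decisions (and new data) back into the next cycle.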
It could result in much better service and resource allocation, and a huge reduction in the environmental impact of large-scale energy infrastructure projects. Both Itaipu and the Three Gorges dam have been the subject of much environmental criticism. Sun et al. (2012) describe the monumental consequences the Three Gorges has had on the downstream hydrological regime, wetlands and nearby lake basins. The Itaipu, for example, has been shown by Santos et al. (2008) to cause huge problems for fish migration and for aquatic habitats. Of course, healthy fish depend upon habitat connectivity and suitable habitat features. It is unavoidable that hydroelectric dams, especially on the scale of the Itaipu, will affect river flow regimes and thus the environmental systems they operate within. However, by utilizing computational technology, dam operation can be optimized to have the minimum environmental impact possible.
This sort of system can and should be integrated into all sorts of applications, from power stations to irrigation schemes to transport infrastructure. And it is! It is now in vogue to invoke the term 'big data', and this refers to optimizing operations in exactly the way I'm exploring in these posts. However, one of the major issues in implementing these types of schemes is being able to act on what the data is telling you. The 'big data' trope came out of technology and internet companies that provide products with short life-cycles (mobile phones) and services that are easily changed (a website). Making adjustments to infrastructure is MUCH MORE difficult, and working out how best to implement the lessons learnt from 'big data' is one of the major operational and managerial challenges of the coming decades.
I'm keenly aware that I have been proposing the benefits of no-holds-barred computational integration without really yet looking into potential adverse effects. In my next post, I will endeavor to address this.
****************
As always, please leave comments, criticisms and suggestions below.