Description

The Industrial Revolution pushed civilization forward dramatically. Its technological innovations allowed us to build bigger cities, grow richer, and construct a standard of living never before seen and hardly imagined. Subsequent political agendas and technological innovations have pushed civilization up above Nature, resulting in a disconnect. The environmental consequences, though, are leaving the Earth moribund. In this blog, I'm exploring the idea that integrating computational technology into environmental systems will be the answer to the aftermath of industry.

The drawing above is by Phung Hieu Minh Van, a student at the Architectural Association.

Friday 29 November 2013

Some statistics relating to global internet usage.

Following on from the last post, this post is about the availability of computational equipment for both organisational and individual purposes. Having already explored an example (of which there are many more) where a place's position on the globe and its environment inhibit the adoption of computational integration, in this post I'm going to explore other, more human or societal, barriers.

To look at the global (or at least international) scale, internet usage figures are useful. I found data from the International Telecommunications Union for the percentage of each country's population with internet access. With the intention of looking at how access has changed over time, I've plotted these data as 4-year periodic histograms:



Histograms of the percentage of the population with internet access, for all countries. Plotted with data from the International Telecommunications Union. Link here
So, on the horizontal axis are the bins for each 10 per cent category. For example, if 46% of country X's population has access to the internet, then country X is counted in the 40% bin. The vertical axis shows the total number of countries in each bin. If you add up all of the blue bars in each histogram you get the total number of countries in the dataset (which is constant throughout the time period).
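For anyone wanting to reproduce this kind of binning, here is a minimal sketch of the approach, assuming the ITU figures have been saved to a hypothetical CSV called itu_internet_access.csv with one row per country and one column per year (the file name and column names are placeholders of mine, not the actual dataset layout):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical layout: one row per country, one column per year,
# values = % of the population with internet access (ITU data).
df = pd.read_csv("itu_internet_access.csv", index_col="country")

bins = np.arange(0, 101, 10)                  # 0-10%, 10-20%, ..., 90-100%
years = ["2000", "2004", "2008", "2012"]      # the 4-year snapshots

fig, axes = plt.subplots(1, len(years), figsize=(16, 4), sharey=True)
for ax, year in zip(axes, years):
    ax.hist(df[year].dropna(), bins=bins)     # count countries per 10% bin
    ax.set_title(year)
    ax.set_xlabel("% of population online")
axes[0].set_ylabel("Number of countries")
plt.tight_layout()
plt.show()
```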

What strikes me as interesting is that the period 2008-2012 represents a monumental change of pace in people's exposure to the internet. However, the price of a laptop has been falling exponentially, as I found out here...



Consumer price index for personal computers and peripheral equipment from 2005 to the present. The global recession was experienced from 2007 to 2008. Plotted with data from the US Department of Labor. Link here.

... and this means that the rate of price reduction for the average processor was lower during 2008 to 2012 than it had been before. It is also interesting to note that a global recession occurred in 2007-2008. So the reason why the internet suddenly became much more widespread in this period remains a mystery.

It's also interesting, if somewhat sad, that despite this progress internet access remains out of reach for the majority of the world's population.

The UK's Office for National Statistics pulls out some interesting numbers for Great Britain. In 2012, 21 million households in Great Britain (80%) had internet access, compared with 19 million (77%) in 2011. So, 20% of households in Great Britain do not have internet access. The reasons for not having it are also interesting: of the 5.2 million households without internet access, the most common reason given for not having a connection was that they 'did not need it' (54%).



UK households with internet access. Before the green dashed line the data are for the UK; after it, for Great Britain. Replotted using data from the UK Office for National Statistics Internet Access Report 2012: link here.

I would like to break this data down further but I can't find any more information; the trail goes cold. So I have to speculate as to why people would say they do not think they need it. As a proponent (generally speaking) of increased connectivity, I am inclined to interpret this as effectively saying that they 'do not understand it' fully (although I recognise this is reductive), as the benefits to any demographic are myriad.

Additionally, only 67% of adults in Great Britain used a computer every day. This figure also strikes me as very low, given that Great Britain has among the highest computer usage of any country globally. This suggests there remains a lot of work to be done.

Directing attention back to the global scale, there is a huge disparity between the languages of the content on the internet and the native tongues of its users. In the first diagram below you can see that Chinese speakers represent almost a quarter of all internet users, yet in the second diagram you can see that only 4% of internet content is in Chinese.




Native languages of internet users. Plotted with data from the Web Technologies Survey. Link here.



Languages of internet content. Plotted with data from Internet World Stats. Link here.
This too strikes me as very strange. It implies that a lot of the content on the internet is not accessible to a vast number of people even if they have an internet connection. So really, this illusion of global connectivity is false, and what emerges is highly skewed and unequal accessibility: especially for something that the UN deems an essential human right.

A lot of work could be done with this data, far beyond the scope of this blog, but in future posts I may return to it to explore the spatial biases in these data in more depth.

For this post, perhaps more than any other, comments, interpretations etc. are hugely appreciated.
  

 

Wednesday 27 November 2013

Iqaluit

To date in this blog I've almost unreservedly proposed and expounded the benefits of integrating computational technology and automated systems into the natural and human world. In doing so, I've implied that this is universally possible and beneficial and that the utility will be felt worldwide. In reality, however, there are barriers preventing everyone, everywhere from getting these benefits. In fact, there are places that are falling, or are likely to fall, victim to computational expansion.

The reasons why somewhere might not benefit from these technologies can be divided into 2 groups: geographical (i.e. spatial) and human (i.e. economic, social, and political) constraints. Of course, in reality there is considerable overlap between these two groups.

In this post I'm going to talk about a place that is not 'computationally' thriving as others are, for predominantly geographical (i.e. simply where it is on the globe) reasons. In a paired post later in the week I will look at some examples of places where there are considerable 'barriers to entry' for human reasons.

The case study I want to talk about is Iqaluit, in Nunavut, Northern Canada. With a population of 6,700 people, Iqaluit is the largest settlement in Nunavut, itself the largest of Canada's provinces and territories. Nunavut is of an equivalent size to Mexico or Western Europe but has a population equivalent to Gibraltar's. It is also home to the world's most northerly permanently inhabited settlement.


Iqaluit, Nunavut, Canada.
Formalized as a settlement in 1942, when an American air base was built there as a trans-Atlantic refuelling station, Iqaluit was for a long time known as Frobisher Bay. Before that, it had been a traditional Inuit hunting ground for over 4,000 years. In 1940, the Hudson's Bay Company relocated there and the settlement grew. In 1964, a community council was elected and in 1979 its first mayor was installed. Its population has been rising steadily ever since, and it was chosen as the territory's capital in 1995. The population is around 85% Inuit; the remaining 15% are predominantly Caucasian Canadians who migrate there for 3-4 years to take advantage of the high wages and skill deficit. Despite being a monumental testament to the human ability to live in extreme environments, Iqaluit is in many ways lagging behind the rest of Canada.


Shooting hoops in front of Iqaluit's cathedral (left) and town hall (right)
Many of the technologies that underpin modernity, and much of the advancement that humanity is making, are themselves underpinned by access to the internet. Rapid communication of information results in profound and lasting changes. However, the position of Iqaluit, north of the 60th parallel and at the mouth of the Northwest Passage, means that laying fiber optics is simply unfeasible. As such, the only telecommunications option is satellite. This means that internet in Iqaluit is equivalent to that of the UK in the mid 1990s: people pay around $30 a month for 45 hours of dial-up service. The effects of this are severely limiting.

Whilst personal access could easily be shrugged off as an unnecessary indulgence (after all, haven't we lived without the internet for over 8,000 years?), the effect on services is severe, especially when you consider how much Nunavut depends upon Canada's greater infrastructure and resources situated 3,000 miles to the south.

Additionally, mobile phone service is poor. It is often discussed how beneficial mobile telecommunications are to remote and developing parts of the world, for example in Africa, but in Iqaluit this has simply not been feasible. In fact, the publishing of this post is very timely: on Saturday, Iqaluit will for the first time have a mobile phone service capable of supporting smartphones.

Iqaluit has huge problems surrounding food requirements and food security. A large proportion of these could be solved, or at least improved, with the use of computational technology. Wakegijig et al. (2013) highlight how community members, all sorts of organisations, public servants and academics alike have long been describing the 'desperate situation for food security' in Nunavut. Defining food insecurity as when food systems and networks are such that food is not accessible and/or of sufficient quality, 70% of Nunavut's Inuit pre-schoolers live in food-insecure homes. Wakegijig et al. highlight the absolute importance of strategic planning, advocacy, and public mobilisation in raising the profile of an issue and solving it. Community engagement and action are difficult to organize today without internet access, especially when large indoor public spaces are scarce and outdoor spaces, to protest in for example, are difficult to occupy for the roughly 80% of the year that Iqaluit is frozen: nobody wants to be hanging around outside in -40 Celsius.

Food in Iqaluit comes from two sources: it is either hunted or imported. Both are highly sensitive to climatic changes. Indeed, the recent warming trends have changed hunting and fishing grounds in ways that make regular hunting difficult. These changes are not necessarily easy to predict, either. For example, the Inuit have for many, many years hunted caribou (also called reindeer). With the recent warming trends the caribou gained access to more grazing land and populations flourished. However, this led to a relaxation in policy, over-hunting ensued, and populations have now been devastated.

Importing food is costly. Having walked around the supermarket in Iqaluit myself, I have seen first hand how much food goes to waste because it is not bought. Many vegetables and other perishable goods are sent north and stocked; however, eating many of these things has never been part of Inuit cooking, so they remain unbought and go off. If internet access were more widely available, people could order the food they actually wanted, so costs could go down and waste could be reduced.

Samuelson (1998) found that the rapid population growth of this coastal community has resulted in increased pressure on the (particularly sensitive) environment, and waste management issues have become increasingly complicated. There is severe contamination of fresh water, which is not as abundant a resource as one might expect. Computational technology would dramatically improve this situation, but it simply remains to be integrated because at the moment it is too expensive to get the technology up there and build it rugged enough for the local environment.

Iqaluit also experiences many health problems. Cole and Healey (2013) outline the on-going challenge of providing health care in the Canadian Arctic, specifically in the region's capital. The truth is that Inuit culture and the vast geography of the region make things difficult. The great distances people must travel to get any form of specialized health care or diagnosis lead to a number of ethical dilemmas. One third of Nunavut's health care budget is spent on moving people to a site that can provide them with the care they need. The sort of simple optimization that computers can provide, both in propagating information and in finding better solutions to existing problems, would dramatically improve this situation.
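Purely as an illustration of what I mean by 'simple optimization', here is a toy sketch that assigns patients to treatment slots at southern hospitals so as to minimise total travel cost, using a standard assignment algorithm. The numbers and the framing are entirely hypothetical and are not taken from Cole and Healey (2013):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical travel-cost matrix: rows = patients needing specialist care,
# columns = available appointment slots in the south. Values are illustrative.
cost = np.array([
    [2400, 3100, 2900],
    [1800, 2600, 3300],
    [2200, 2000, 2700],
])

# The assignment of patients to slots that minimises total travel cost.
patients, slots = linear_sum_assignment(cost)
for p, s in zip(patients, slots):
    print(f"Patient {p} -> slot {s} (cost {cost[p, s]})")
print("Total cost:", cost[patients, slots].sum())
```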


Iqaluit, and Nunavut more generally, has many social and health related problems.


Of course, this is not an exhaustive list of the problem areas in which integrating computational technology could help. Iqaluit has two prisons for its population of under 7,000 people; both are full, and a new one has just been commissioned. This is startling. Animals also play a huge role in the daily lives of people living here, and the sort of help that automated tracking, for example of polar bears, could bring is monumental.


Many of these problems won't be fixed any time soon by computers. In many cases it is impossible to bring the sort of technologies that would help this far north due to environmental constraints, as is the case for fiber optics. In other cases, the fact that the population is so small makes projects unlikely to be commercially viable despite the profound importance of Iqaluit as the region's capital. As such, creative ways to solve these problems need to be devised.


*****************************
 I've put the code to draw the map on the sister site. URL: http://herculescyborgcode.blogspot.co.uk/2013/11/map-drawing.html

The photos are my own. If you would like to use them please email me and I can give you access to many more.

Wednesday 20 November 2013

They know you're about to put the kettle on before you do

One of the major technological markers of how advanced a civilization is, is its ability to control the flow of water. For examples one need not look far: the birth of sedentary communities and the growth of agriculture came out of the innovation of irrigation, and the Roman Empire's aqueducts allowed for unprecedented urban growth and standards of living. Today, as renewable resources are being explored and instigated, hydroelectric power is an exciting component of modern infrastructure. In this post, I am going to outline another example of complete computational integration and operational management: from the 'upstream' [i.e. monitoring electricity requirements (current and potential) and environmental monitoring], to 'at the coal-face' [i.e. understanding how a river is running and a dam is working in real time], to 'downstream' [i.e. looking at the effects of a system of this type].

Here, as before, by operational management I mean real-time operations on a timeframe of minutes to hours, as opposed to tactical management (several days to weeks).

I'm going to explore this with reference to 2 case studies, the 2 largest hydroelectric dams on Earth:

The Itaipu dam, on the border between Brazil and Paraguay, is the world's largest producer of hydroelectric power: in 2012 it produced 98,287 GWh of electricity. Villwock et al. (2007) report that it opened in 1984 and that in 1986, 2,200 monitoring instruments (e.g. piezometers, extensometers, pendulums, and flow gauges) were installed. These have been periodically collecting information about how the dam is functioning, and the data is now a vast store of useful information. In 2007, the Itaipu company converted 210 of these to automatic data monitors collecting information in real time around the clock.



http://upload.wikimedia.org/wikipedia/commons/3/3a/Itaipu_Dam.jpg
Itaipu dam
 
The second case study is the Three Gorges Dam across the Yangtze River in China. Although conceived in the 1920s, the dam opened in 2008. It stretches more than 2 km across the river and is 200 meters high; the reservoir behind it extends back 600 km. In 2012, the dam produced 91,100 GWh of electricity. It is one of the most monumental pieces of infrastructure ever built.



http://blog.cleantech.com/wp-content/uploads/2011/05/800_ap_three_gorges_dam_china_110521.jpg
Three Gorges Dam

Producing such vast amounts of electricity, both of these dams have undeniable benefits to society. However, both projects are controversial with regard to their effects on the environment and on society. In the past, dams and infrastructure like these have been operated largely manually and on the basis of periodically collected data. By integrating automatic data collectors and modelling, they can be operated to a much higher degree of efficiency and these impacts can be reduced.

The first question to explore is how, in real and tangible terms, automating the observation and operation of these dams is beneficial. To start with, it provides the ability to accurately forecast system behavior and requirements. Information relating to social variables, such as electricity usage figures, allows the dam's turbines to be activated when required, avoiding blackouts or unnecessary use of water stored in the reservoir. Usage can also be predicted with models whose output can be fed back into the dam's operating system. For example, imagine the Chinese Meteorological Agency's weather model predicts a snow-storm is coming; this can be fed into a model of the behavior of the Chinese population, which might (again, for example only) imply that people will be staying in, watching television, boiling lots of water for mugs of tea and turning on their heaters. With this information, the dam can open and close its turbines to produce just the right amount of electricity at just the right time.
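To make this concrete, here is a very rough sketch of how a demand forecast might be turned into a turbine schedule. The turbine output, reserve margin and demand figures are invented for illustration and have nothing to do with how Itaipu or the Three Gorges are actually operated:

```python
# Hypothetical hourly demand forecast (MWh) from a weather-driven consumption
# model, and a fixed per-turbine output. All numbers are illustrative only.
TURBINE_OUTPUT_MWH = 700      # assumed output of one turbine-generator per hour
RESERVE_MARGIN = 1.05         # keep 5% headroom above the forecast demand

forecast_demand = [9800, 10400, 12100, 15600, 14200, 11000]   # next 6 hours

def turbines_required(demand_mwh, output=TURBINE_OUTPUT_MWH,
                      margin=RESERVE_MARGIN):
    """Smallest number of turbines that covers demand plus the reserve margin."""
    needed = demand_mwh * margin
    return int(-(-needed // output))          # ceiling division

schedule = [turbines_required(d) for d in forecast_demand]
print(schedule)                                # [15, 16, 19, 24, 22, 17]
```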

Secondly, these automated data monitors and models can notice faults and problems in the system much faster, on average, than human operators would. Costs and system closures are reduced because faults are fixed before they become major problems.
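As a hand-wavy illustration of this kind of automated fault detection, here is a minimal rolling z-score check over a simulated sensor trace. Real dam monitoring systems are far more sophisticated; the window, threshold and data here are all assumptions of mine:

```python
import numpy as np

def flag_anomalies(readings, window=50, threshold=4.0):
    """Flag readings more than `threshold` standard deviations away from the
    mean of the preceding `window` readings (a simple rolling z-score)."""
    readings = np.asarray(readings, dtype=float)
    flags = []
    for i in range(window, len(readings)):
        past = readings[i - window:i]
        mu, sigma = past.mean(), past.std()
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            flags.append(i)
    return flags

# Hypothetical piezometer trace with one injected fault at index 120.
rng = np.random.default_rng(0)
trace = rng.normal(10.0, 0.2, 200)
trace[120] += 3.0
print(flag_anomalies(trace))    # expected to flag index 120
```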

Thirdly, all this information is assembled in huge databases. These can then be mined, using machine learning algorithms and statistics, to identify the driving mechanisms behind system behavior. For example, it might be found that one of the major variables driving electricity demand at the Itaipu dam is football match times.
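A toy version of this kind of mining might look like the following: fit a model to synthetic demand records and read off which variables it finds most important. The variables, data and model choice are all illustrative, not a description of how Itaipu's systems actually work:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic hourly records: demand is driven by temperature and a
# "big match on TV" flag, plus one variable that does nothing at all.
rng = np.random.default_rng(1)
n = 2000
temperature = rng.normal(22, 6, n)
match_on_tv = rng.integers(0, 2, n)
unrelated = rng.normal(0, 1, n)
demand = 10_000 - 120 * temperature + 1_500 * match_on_tv + rng.normal(0, 300, n)

X = np.column_stack([temperature, match_on_tv, unrelated])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, demand)

# The importances should single out temperature and the match flag.
for name, importance in zip(["temperature", "match_on_tv", "unrelated"],
                            model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```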

The type of system I'm describing is called Knowledge Discovery in Databases (KDD) by engineers. Maimon and Rokach (2010) describe such systems as consisting of a constant cycle of data collection, processing, mining, and result interpretation and evaluation. The effects of applying this type of system are profound. Electricity costs can be drastically reduced; considering that energy costs are among the largest single expenses (10-50%) in industrial processing, chemical plants, agriculture, pharmaceuticals, plastics, pulp, metal, mining and manufacturing, changes in energy costs will have a profound societal effect.

It could also result in much better service and resource allocation, and a huge reduction in the environmental impact of large-scale energy infrastructure projects. Both Itaipu and the Three Gorges dam have been the subject of much environmental criticism. Sun et al. (2012) describe the monumental consequences the Three Gorges has had on the downstream hydrological regime, wetlands and nearby lake basins. The Itaipu, for example, has been shown by Santos et al. (2008) to cause a huge problem for the migration of fish and for aquatic habitats. Of course, healthy fish depend upon habitat connectivity and suitable habitat features. It is unavoidable that hydroelectric dams, especially on the scale of the Itaipu, will affect river flow regimes and thus the environmental system they operate within. However, by utilizing computational technology, dam operation can be optimized to have the minimum environmental impact possible.

This sort of system can and should be integrated into all sorts of applications, from power stations to irrigation schemes to transport infrastructure. And it is! It is now in vogue to invoke the term 'big data', and this refers to optimizing operations in exactly the way I'm exploring in these posts. However, one of the major issues in implementing these types of schemes is being able to act on what the data is telling you. The 'big data' trope came out of technology and internet companies that provide products with short life-cycles (mobile phones) and services that are easily changed (a website). Making adjustments to infrastructure is MUCH more difficult, and working out how best to implement the lessons learnt from 'big data' is one of the major operational and managerial challenges of the coming decades.

I'm keenly aware that I have been proposing the benefits of no-holds-barred computational integration without really looking into potential adverse effects. In my next post, I will endeavor to address this.

****************
As always, please leave comments, criticisms and suggestions below.

Tuesday 12 November 2013

The Transportable Array: difficult data observation

Last week marked an interesting point in the history of geology. It was announced in Nature that scientists had almost finished mapping the seismology of the whole of the United States in high resolution. 'The Transportable Array', a set of 400 seismometers (see below), has been moving across the country since 2004. Stations in the array are arranged at 70-kilometer intervals. Each one stays in the ground for 2 years before being dug up and moved to the array's easterly edge. In this way it has moved, over the last 9 years, from the Pacific to the Atlantic, recording observations 40 times per second. This data has provided the scientific community with an unparalleled opportunity to look at the Earth's crust. It has allowed scientists to construct images of the deep Earth and trace big earthquakes' movements across the globe.



Comparing this project to the real-time monitoring of infrastructure, or to the continuous monitoring of the planet's atmosphere by satellites, one could easily be underwhelmed. However, the point of this example is that not every useful variable can be measured easily, and that doesn't make the data any less useful. This might seem like a facile observation, but in reality financial resources tend to follow the path of least resistance, and projects like this can easily be overlooked. Observing tectonic activity is notoriously difficult. It is also very expensive (the Transportable Array cost US$90 million). However, despite these constraints, long-term data collection projects like this one can be conceived and implemented to huge benefit, for society at large and not just the Ivory Tower.
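To get a feel for the scale involved, here is a quick back-of-the-envelope calculation using the figures quoted above (400 stations, 40 observations per second):

```python
# Rough scale of the Transportable Array's data stream, from the figures above.
stations = 400
sample_rate_hz = 40                       # observations per second per station
seconds_per_day = 24 * 60 * 60

samples_per_day = stations * sample_rate_hz * seconds_per_day
print(f"{samples_per_day:,} samples per day")          # 1,382,400,000

# Over roughly 9 years of deployment (order of magnitude only):
print(f"{samples_per_day * 365 * 9:,} samples in total")
```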

Earthquakes continue to plague civilization. They are almost impossible to predict beyond vague recurrence statistics. Projects like this can help us drastically improve our knowledge of what's going on under our feet. Similar projects, recording high-resolution data on difficult-to-pin-down variables, should not be swept under the rug.

Thursday 7 November 2013

Thinking roads part 2. (..... in waiting .....)

I'm still playing around with using real-time data to look at the road network in the UK. As I wrote that I would put something of this sort up by last Friday, I thought I would direct those interested to an (albeit more technical) post doing the same thing while I fix the bugs in my code. I'm using Rich Wareham's code as a starting point for my own.

See this link here. 


******* Tuesday, 14th of January 2014 Note *********

I've just come back to this. I've been trying to get this code to work for ages now and have been emailing Rich Wareham, but unfortunately I haven't been able to do it because some of the code needed has been deprecated. I'm really sorry. The point I was trying to make is that you can plot the state of the road network in the UK at any given time, as the government puts all the data online. The network state is calculated by looking at how long it takes cars to travel certain distances, measured using cameras reading number plates! People cleverer than me can plot this data; here is what Rich Wareham's plot looked like when he did it last summer:



    
Again the link to his post is here.

And the link to where the data is put is here.
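Once I do get the code working I'll post it properly, but the core calculation is simple: given each road link's length and the camera-derived journey time, you can compute an average speed and flag congested links. Here is a minimal sketch, assuming a hypothetical CSV extract (journey_times.csv) with made-up column names rather than the real feed format:

```python
import pandas as pd

# Hypothetical extract of the published journey-time data: one row per road
# link, with the link length (km) and the measured travel time (seconds)
# derived from number-plate matching between cameras.
df = pd.read_csv("journey_times.csv")   # columns: link_id, length_km, travel_time_s

# Average speed on each link (km/h) as a simple measure of network state.
df["speed_kmh"] = df["length_km"] / (df["travel_time_s"] / 3600.0)

# Links running below, say, 40 km/h could be drawn as congested on a map.
congested = df[df["speed_kmh"] < 40]
print(congested[["link_id", "speed_kmh"]].head())
```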