Description

The Industrial Revolution pushed civilization forward dramatically. The technological innovations it brought allowed us to build bigger cities, grow richer and construct a standard of living never before seen and hardly imagined. Subsequent political agendas and technological innovations have pushed civilization up above Nature, resulting in a disconnect, and the environmental consequences are leaving the Earth moribund. In this blog, I'm exploring the idea that integrating computational technology into environmental systems will be the answer to the aftermath of industry.

The drawing above is by Phung Hieu Minh Van, a student at the Architectural Association.

Monday, 13 January 2014

Summing up, reviewing and thoughts for the future

This is the final post of the project. I've been thinking for quite a while now about how I was going to end it. My opinions changed over the four months of writing, and I want to reflect on how this happened in a useful way. The second thing I want to do in this final post is briefly cast an eye to the future and comment on how my current thoughts reflect back onto the topic.

So, I've decided to accomplish these two aims: first, by looking at how my word use changed over the course of writing, and second, by looking at some research in the journals and the news this week.

1) My words

To chart the themes of this blog I've looked at all 12,946 words of it. This shows some unexpected results.

To start with, I've counted up how many times I used each word and ranked them. Aside from the common pronouns, prepositions and conjunctions (I've written the word 'the' 751 times), the top words are:

Data ~ 105
Technology ~ 58
Information ~ 49
Environment, Environmental ~ 47
System(s) ~ 45
Internet ~ 40
Example ~ 34
Things ~ 34
Post ~ 28
Time ~ 26
Computational ~ 24


All these words would be expected, right? I don't think these results show anything particularly surprising. But as all of these words appear many more times than once per entry, I thought it would be interesting to look at how their use changed over time as a means of charting the progression of the blog. These results are more interesting.

The graph of their use looks like this:




So, to start with, it's interesting to see that while the themes of 'data' and 'technology' both follow the same trend, i.e. they both come and go every two posts or so, rising a little higher at each peak, they are exactly out of phase with each other the whole way through. Why would this be? It certainly means that I have confronted these two topics separately. They are distinct topics, but they are connected, which leads me to think that my approach could have been better. These data reflect one of the main conclusions reached in this blog: data is more important than the technology used to collect it. What I mean is that data comes first, or at least it should do. Before implementing any system, for example active control of natural or environmental systems or passive observation, what you're going to get out of it needs to be considered. It then needs to be considered again during technological development and again after implementation. More on this below.

Another observation is that it's surprising the internet is not mentioned more, as at the outset of writing this blog I would have imagined it would be a really central theme. Of course, this could be down to my negligence, but I think it reflects something else entirely. From doing this project, it has become increasingly clear to me that using computational technology does not, and in fact should not, go hand in hand with connecting all these systems up. I think these systems need to remain, to a certain degree, geographically specific.

These data also reflect my belief that I have neglected the environmental connection more than I should have. If I had sketched out, at the start of the project in October, what I hoped these lines would look like, I would have put 'Environment' up there level with 'Technology'. I would have expected both of these words to appear higher than 'Data', which is something of a surprise winner. Appearing almost twice as often as the second most common word, it is clear that knowledge (inclusive of both 'Data' and 'Information') is far and away the most important point to emerge here. I suggest this is for two reasons. Firstly, as important as the ability to put in place the systems I've been talking about is, the ability to USE them properly and responsibly is far more important, and this use is derived from data. I think this conclusion also belongs to the growing conversation that we shouldn't do things just because we can, particularly with respect to technology. Possibilities and effects, adverse and beneficial, need to be discussed. Secondly, I believe it reflects the possible uses of these technologies as well. Just as these computational systems are useful for active control over Nature, for example the flows of rivers, they are equally useful for observing Nature (and our interactions with it). This is perhaps something that is overlooked in often flippant pieces discussing civilisation's magnificent ability 'to do engineering'.

Finally, I'm really happy to notice that I have been able to consistently fill my posts with examples to illustrate my explorations and discussions. At the same time, I'm somewhat embarrassed that the word 'post' appeared so consistently throughout. It makes me think that I have been overly self-aware of the whole blog-writing experience, and that this might have held me back somewhat.

Having looked at what I've been writing about, I want to end by looking at what people are writing about right now and whether what I've been exploring has any relevance at all. Happily, it turns out it does! Simply looking at pieces published in Nature this week confirms this.


2) The Thoughts of Others

The first piece listed is an editorial entitled 'Data sharing will pay dividends'. In discussing data sharing in the pharmaceutical industry, it highlights some of the major issues: access, availability and ownership. This complements one of the conclusions of this blog: the more data the better, and furthermore, the more people who have access to that data the better. Ultimately this is one part of the much broader debate about how the internet actually exists and interacts with global capitalism and nation states. Another part of the same debate is the current discussion around privacy. The sharing of information is shaping the current intellectual and political landscape.

The second thing to catch the eye is a news piece on how, by using massive amounts of data to recognize photos and speech, deep-learning computers are taking a big step towards true artificial intelligence. What better follow-up to the aforementioned conclusion could one ask for? One need not say more!

Also reported is a swarm of satellites set to deliver close to real-time imagery of swaths of the planet. This signals a huge change in satellite technology: it is the first civilian project to do anything of this sort. It shows that computational technology beyond 'personal technology' like iPhones is coming to the civilian world, and this will change things again. A second coming of the technological revolution.

3) My final word

So, these two analyses are cause for celebration. I've been talking about things that really matter and that will only matter more. I've even managed to come to some of the same conclusions as people who have been thinking about this for decades!

I've really enjoyed this project; it's given me a great opportunity to look at research and writing in a huge number of different areas and link them all together. I've had fun toying with bits of code to supplement my reading. The final thing I want to say is this...

As I mentioned last time, after tackling this topic for four months I discovered a journal called Environmental Technology only last week. I've been reading some of its papers. Clearly 'computational technology' is a tiny subset of 'technology'. The title of the first paper to catch my eye was 'The effect of heterotrophic growth on autotrophic nitrogen removal in a granular sludge reactor', a somewhat different topic from the one we've been dealing with here.

***************************
The code I used to do the word counting etc. is up here ~ http://herculescyborgcode.blogspot.co.uk/2014/01/word-frequencies_3837.html. To repeat the same thing, just save all posts into separate .txt files named in numerical order in a directory, with the code saved as frequencies.py; go to this directory and type 'python frequencies.py' at the command line. Of course, you would want to change the key words to suit your own subject matter.
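For reference, here is a minimal sketch of the sort of script described above. It is not the original frequencies.py, just an illustration under the same assumptions: posts saved as 1.txt, 2.txt, ... in the working directory, and a hand-picked list of key words to track per post.

# frequencies_sketch.py ~ illustrative only, not the original frequencies.py
import glob
import re
from collections import Counter

# Key words to track across posts (change these to suit your subject matter).
KEYWORDS = ["data", "technology", "information", "environment",
            "system", "internet", "example", "post"]

def words_in(path):
    """Return a lowercased list of words from a text file."""
    with open(path) as f:
        return re.findall(r"[a-z']+", f.read().lower())

# Posts saved as 1.txt, 2.txt, ... sorted into chronological order.
posts = sorted(glob.glob("*.txt"), key=lambda p: int(p.split(".")[0]))

overall = Counter()
for path in posts:
    words = words_in(path)
    overall.update(words)
    # One row per post: the numbers behind the keyword time-series graph above.
    counts = {k: sum(w.startswith(k) for w in words) for k in KEYWORDS}
    print(path, counts)

# Overall ranking, as in the table of top words above.
print("Total words:", sum(overall.values()))
for word, n in overall.most_common(20):
    print(word, n)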

Sunday, 5 January 2014

'technology' has become so profoundly useless

I've come to the last two posts (maybe) in the blog. It seems only right to review what I set out to achieve. This was (is):

          1) To explore computational technology as a means to fix the environmental problems caused by the industrial revolution.

          2) To explore what these technologies are and how they work.


I should also explicitly state one of the things I didn't try to do, namely:

To explore the environmental problems that need fixing

Why didn't I do this? Because it is well past the day that human impact on environmental change became accepted as fact. Having written the posts I have, I now realise that perhaps I have not talked about the link between technology and nature as much as I could have. So, in these (potentially) last two posts I want to remedy my neglect of the environmental link and sum up my position.

In an attempt to understand why I found it difficult to make this link more often, I turned to the history of the discussion of technology.

In a long, essay-like entry in the Stanford Encyclopedia of Philosophy (SEP) on the 'Philosophy of Technology', the SEP notes that it wasn't until the late twentieth century that philosophers seriously began considering the 'ethics of technology', that is to say, taking and discussing perspectives on the phenomenon and its relation to other things. As someone who is far from an expert in philosophy, I found this surprising given the range and depth of topics considered by thinkers in the field. The SEP proposes that this late development is a result of a societal consensus that advancing technology was simply a positive thing: it gives us many possibilities and enables us to do more, and this is of course a good thing.

As someone who has spent more time studying the environment than philosophy, I can easily understand this. Although human technology has been having undesirable consequences for the environment (on spatial and temporal scales of measurable significance) for a long time -- perhaps as early as the Neolithic revolution, certainly as early as our first use of fossil fuels in the tenth-century Chinese iron-smelting industry (Steffen, Crutzen and McNeill, 2007) -- we're only just discussing and systematically studying it!

Additionally, although there was a profound absence of formal intellectual assessment of technology, an alternative existed in the form of tropes propagated in myth, art and literature. For example, it was clearly understood that technology can be put to bad use or lead humans to hubris - see here (kind of):




The SEP says that this way of understanding technology, i.e. as having no innate positive or negative quality, is known as the 'neutrality thesis'. A number of heavyweight twentieth-century philosophers disputed this thesis into disrepute. So, in the last 50 years or so, the scope and agenda of the ethics of technology have grown massively. People have begun to talk about technology variously as a social activity, a cultural phenomenon, a professional activity and a cognitive activity.

The coincidence of this increase in ivory-tower talk about the meaning of technology, the study of its environmental consequences and the rise of technology itself is at once amazing and unsurprising.


In search of a more detailed answer to the question of what held this debate back in philosophy for so long, my reading led me to Schummer's (2001) paper, 'Aristotle on Technology and Nature'. He inadvertently provides some satisfying answers. He says that any view philosophers historically had of technology came out of one of three theses (the paper discusses whether or not they can be attributed to Aristotle, hence 'inadvertently'):
1) Technology imitates nature, such that there is no place for authentic human creativity. 
2) Technology in supplementing and completing nature fulfills but the inherent aims of nature. 
3) There is an ontological hiatus between natural things and artifacts such that technology cannot reproduce or change natural things.
 
I would be lying if I said I understood what an 'ontological hiatus between natural things and artifacts' was, but for my purposes it's not important. Schummer nicely illustrates one of my closing points for this blog: that 'technology' as a term has become so profoundly useless. For the whole of this blog, as one always should, I've tried to avoid talking in meaningless generalities, but it is only really now that I am realizing how much of one the word 'technology' itself is. This is what has held Philosophy back for many years; even now it seems that philosophical discussions are dominated by what technology actually means rather than what it is doing. If, very early on, people had talked about specific practices, ideas, creations, or groups thereof, perhaps Philosophy as a discipline would be further ahead in this area and would have given Geography the impetus to investigate anthropogenic impacts on environmental change sooner.*

So, whilst I would never, of course, want to blame one of the inadequacies of this blog on the neglectfulness of over two thousand years' worth of philosophers and other thinkers besides, let alone on the inadequacies of language itself, this sure has made my task hard.

This is therefore one of my concluding thoughts for this blog: we all need to stop talking about 'technology' and be more specific. After all, can 3D printing really be discussed under the same term as environmental monitoring and smart roads?

------- 
In my post next week I'll try to pull everything together and draw some other meaningful conclusions.

Annoyingly, but at the same time luckily, while finishing this (penultimate) post I've discovered a journal called Environmental Technology. After writing this blog for over four months -- the countless internet searches I've done, the papers, articles and blog posts I've read and the videos I've watched -- only now do I learn about it. I'll certainly be using it to write my next post.

*
It's also more than probable that Geography wasn't ready for it anyway, being stuck in deep thoughts and studies of regionalisms and busy being the 'imperial science' that it was. These are also explanations of why Geography couldn't bring this about itself.

Saturday, 28 December 2013

The Internet of Things

To follow my previous post, which talked about the vast improvements in computing power over the last decades, here I would like to tackle the issue of connecting all these devices together. To do this I'm mainly going to draw on two sources.

The first is written by Khan et al. and entitled 'Future Internet: The Internet of Things Architecture, Possible Applications and Key Challenges'. It was published as part of the 2012 10th Institute of Electrical and Electronics Engineers (IEEE) International Conference on Frontiers of Information Technology. It provides a great overview of the IoT and its future. The second is a workshop paper by Ali and Abu-Elkheir entitled 'Data Management for the Internet of Things: Green Directions'. It comes from a 2012 IEEE conference, 'Green Internet of Things'. Despite being a fairly technical paper, I've chosen it because it highlights one of the major problems of ubiquitous computing, namely that it's going to demand a huge amount of energy! It stresses the need for (and proposes) efficient and sustainable solutions to this problem, so that we can have all the benefits the IoT will bring without bringing about crippling environmental change.

So, what is the Internet of Things?

It sounds like corporate jargon but really its definition is quite specific. Ali and Abu-Elkheir (2012) define it as:


a networking paradigm that exploits data capture and communication capabilities to link the volume growth of sensors, Radio Frequency Identification (RFID) tags, and smart objects directly to the internet. The ultimate purpose of IoT is to provide a standard platform for developing cooperative services and applications capable of harnessing the collective power of information and resources available through the individual "Things" and systems managing the aforementioned Things.

What this means is that it is the process of connecting all sorts of objects to the internet and letting them communicate with each other. Khan et al. (2012) write:


Thus, IoT provides connectivity for everyone and everything. The IoT embeds some intelligence in Internet connected objects to communicate, exchange information, take decisions, invoke actions and provide amazing services.

This is why it is also called ubiquitous computing. By now almost all desktop computers, laptops, tablets, smartphones etc. are connected to the internet. In a few years' time this will happen for cars, televisions, even smoke detectors. Khan et al. give a simple schematic as an example:
 
The generic IoT scenario (Source: Khan et al., 2012)

.... but of course this could and will include all sorts of other things I've been exploring in this blog, like dams, roads and observation satellites. So this is what the internet will look like in a few years' time; indeed, it is already beginning to take this form. But what are the problems inhibiting this sort of growth? There are many, but I want to focus here on one in particular, one of the largest, if not the largest: energy efficiency.

The IoT can be described as having three components: hardware, software and data. For the most part the hardware to do this already exists, so that's not the problem. The software may not yet be written, but writing it is certainly not a huge barrier: we have the technology, with the latest high-level programming languages, and the people (and money) to get it written. Ali and Abu-Elkheir (2012) stress that data is the issue.

IoT data is different from previously collected, stored and processed data in that it is ...


'a massive volume of heterogeneous, streaming and geographically-dispersed real-time data ... created by million [or billions] of diverse devices periodically sending observations about certain monitored phenomena or reporting the occurrence of certain or abnormal events of interest' (Ali and Abu-Elkheir, 2012).

So there is a huge problem of energy consumption. Storing data requires a lot of energy, and the explosion of data will result in an explosion in energy requirements. Just take a look at how keen Google are to stress how efficient their data centers are:



and take a look at their website - here.

So what are the issues here? Well, there needs to be (and there is!) research undertaken and conversations begun about what information should be recorded. Of course this also ties into the other huge problem of implementing the IoT: privacy. To read up on this debate, check out the Guardian's piece here. This is an art and not a science, and will evolve as sentiment and political powers evolve - but there is a clear environmental constraint here: we simply cannot record everything, as we just don't have the energy to do that!

On the more technical side, there are things we can do to improve the systems. There is intense research here too, and Ali and Abu-Elkheir (2012) outline four major trends: the development of efficient indexing schemes, scalable archiving, localized data-centric storage, and migrating data to the cloud. Whilst the technical aspects of these may be lost on you reading this (they are on me), they make some general sense in what they're trying to achieve:


Efficient Indexing ~ being able to find things in the data quickly by good labelling,

Scalable archiving ~ putting things in the right place, e.g. things that will be used a lot somewhere quick and easy to get to, and vice versa,

Localized data-centric storage ~ understanding the geographies of data and its usage,

Migrating data to the cloud ~ removing the geographies of data in a clever way,

One other thing that this paper highlights is the life cycle of data - not all things need to be kept forever - but some do!
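To make these ideas a little more concrete, here is a tiny sketch of my own (not taken from Khan et al. or Ali and Abu-Elkheir) of a sensor-data store that combines two of the trends above: an index keyed by location and phenomenon for quick lookup, and a simple data life cycle in which ordinary readings expire while flagged ones are archived.

# Illustrative toy only ~ not from Khan et al. (2012) or Ali and Abu-Elkheir (2012).
import time
from collections import defaultdict

RETENTION_SECONDS = 60 * 60 * 24      # ordinary readings kept for one day
archive = []                          # long-term store for readings worth keeping

class SensorStore:
    """Toy in-memory store indexing readings by (location, phenomenon)."""
    def __init__(self):
        self.index = defaultdict(list)        # 'efficient indexing': lookup by key

    def record(self, location, phenomenon, value, keep=False):
        reading = {"t": time.time(), "value": value, "keep": keep}
        self.index[(location, phenomenon)].append(reading)

    def query(self, location, phenomenon):
        return self.index[(location, phenomenon)]

    def expire(self):
        """Data life cycle: archive flagged readings, drop stale ordinary ones."""
        cutoff = time.time() - RETENTION_SECONDS
        for key, readings in self.index.items():
            kept = []
            for r in readings:
                if r["keep"]:
                    archive.append((key, r))  # e.g. a flood-level exceedance
                elif r["t"] >= cutoff:
                    kept.append(r)            # recent enough to stay in the live index
            self.index[key] = kept

store = SensorStore()
store.record("river_gauge_3", "water_level_m", 1.42)
store.record("river_gauge_3", "water_level_m", 2.95, keep=True)
store.expire()
print(store.query("river_gauge_3", "water_level_m"))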

The point to be made here is that there is a clear need to further analyse and address the issue of the energy efficiency of the IoT; otherwise it won't be possible, and we'll continue to put the planet's environmental systems under stress. But there are lots of routes to explore for improvements, so we have reason to be hopeful!

Conclusions

So in this post I've tried to introduce and flesh out the Internet of Things. Khan et al. (2012) sum things up nicely:


The IoT embeds intelligence in the sensor devices to autonomously communicate, exchange information and take intelligent decisions. Simply, IoT transitions human-human communication to human-human, human-device and device-device communication.
When this is taken alongside developments in computational power (see the previous post on Moore's law), and in the context of integrating these technologies with the natural world, I hope I have had even the smallest amount of success in conveying how exciting the coming decades promise to be!


Monday, 23 December 2013

Bringing technology away from the human and into the natural.

In writing this blog, one of the most considerable barriers I feel I have been up against is the lack of a coherent literature. Very few papers, articles, other blogs, etc. deal with the problem as I have framed it. This is, in part, a fault of my own. Perhaps it would be more useful to engage in existing debates and try to lead them in new directions than to start new ones. Nonetheless, I feel that I have had some success in formulating the boundaries of my discussion, even if it has resulted in me having to search harder for germane sources. Despite this, I do feel that the debate as to whether to allow unfettered integration of computational technology with the natural environment is an important one - indeed one that will affect the future of life on Earth as much as perhaps anything else. What the lack of coherent discussion around this topic suggests to me is that there is a problem; a problem that is undoubtedly larger and more systemic than this individual instance, but that regardless needs elaboration in this context.

The nonexistence of discussion here will result in two fundamentally damaging problems: inaction and the wrong action. The first is perhaps more self-explanatory but also less significant and less realistic: by not doing anything we miss out on huge potential benefits, but as there are plenty of examples of computational integration projects this is not really the problem here. The second, action of the 'wrong' sort, is more important (at least as I see things). By not communally and systematically analyzing what's happening on all spatial scales (right down from the microscale of individual rivers, for example, up to the planetary, e.g. monitoring via satellites) and temporal scales (e.g. from disposable, instantaneous action to projects implemented for longevity), we are opening ourselves up to damage. Whilst the implications of this are multitudinous, and therefore require a great deal more attention than I can direct toward them in this work, I will list a few examples to try to convey my point more clearly.

    a) projects that do get implemented are likely to suit particular interests, probably the interests of people with money.

    b) they will also probably be limited as a result of these interests and financial and other constraints.

    c) projects will likely be incompatible with each other, and therefore we will miss out on great opportunities for some hugely valuable work and action.

    d) there will be considerable overlap in projects and therefore considerable waste of resources.

I want to stress the immediacy of this debate, and to do this I am going to discuss a paper from 1965. Gordon Moore, who would go on to co-found Intel, wrote a paper called 'Cramming More Components onto Integrated Circuits'. It looks at trends in technology and makes some predictions about the future. With remarkable accuracy he saw how things would turn out. The paper is most famous for Moore's observation that the power of computers doubles roughly every 18 months. This is now referred to as Moore's Law.

In one particularly prophetic quote Moore says:


Integrated electronics will make electronic techniques more generally available throughout all of society, performing many functions that presently are done inadequately by other techniques or not done at all. 


This relates exactly to what I have been attempting to confront in this blog. In the 50 years since, we've gone from chips carrying only a few dozen components to chips carrying over 1.7 billion transistors! What Moore skims around but doesn't directly confront, even in a 2005 interview about the paper and the advances since, is the form these changes take. I can see two potential stages. The first, from 1965 to roughly 2010, is the advance of computational technology to bring it to individuals; for example, the widespread adoption of laptops and smartphones. The next step is a complete overhaul of infrastructure to incorporate this technology.

Making this even more interesting is the context of the implications of climate change. The energy efficiency of computers colours this debate nicely. In a 2011 paper, 'Implications of Historical Trends in the Electrical Efficiency of Computing', Koomey et al. make an empirical observation similar to Moore's Law: the electrical efficiency of computation has doubled roughly every year and a half for more than six decades. Why is this relevant? Industrial infrastructure has historically been hugely energy inefficient, with large losses in the form of heat, sound and kinetic energy, and this scales badly, such that huge structures (e.g. factories) are hugely inefficient. In contrast, computational infrastructure is becoming extremely efficient and scales up very well.
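To get a feel for what a fixed doubling time implies, here is a small back-of-the-envelope calculation (my own illustration, not from either paper) compounding a quantity that doubles every 18 months.

# Back-of-the-envelope illustration of a fixed doubling time (illustrative only).
def growth_factor(years, doubling_time_years=1.5):
    """How many times larger a quantity becomes if it doubles every doubling_time_years."""
    return 2 ** (years / doubling_time_years)

# Roughly 50 years of growth at one doubling every 18 months:
print(f"{growth_factor(50):.2e}")   # ~1e10, i.e. about a ten-billion-fold increase

# Koomey et al. report a similar doubling time for computations per kilowatt-hour,
# so the energy cost of a fixed computation falls by the same kind of factor.
print(f"{growth_factor(20):.0f}")   # over two decades: roughly a 10,000-fold change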

So what am I trying to get across in this post? Two things: 1) that there has been very little discussion directly focused on this topic, and this could have hugely damaging effects, and 2) that now is the time to act. Things are changing quickly and for the better. We are at the end of the first stage of the technological revolution - we should now, formally and together, enter the second: bringing technology away from the human and into the natural.

There is a lot of talk about whether Moore's law will continue into the future. The way computing power has increased in the past is by making components smaller and smaller. Soon chip features will be so small that they will reach physical limitations: when they are around five atoms thick (which they probably will be in about ten years' time), quantum effects such as tunnelling will prevent further advances. Thus silicon chips can only get so much better. Other kinds of chips are possible, such as those made out of DNA or using quantum mechanics, but these are still some way in the future. Whatever the case, we now have technology with huge unreleased potential for improving global environments.

Friday, 6 December 2013

Lessons learnt from high frequency trading

After all the posts and reading I have done over the last few months around this topic, I would say things have changed. Whilst I am still an absolute proponent of integrating computational technology into the natural world, my understanding of this field has undoubtedly become much more nuanced. Whilst thinking about how I was going to put this forward in a new post, I came across a useful analogous situation. In this post my aim is to illustrate the imperative of retaining human controls on automated systems. I'm going to do this by exploring the damage caused by algorithmic trading to the global financial system.

Whilst the financial system is entirely anthropogenic, a socio-political-economic construction, I believe lessons can emerge from this arena that are broadly applicable to natural systems.

I would also like to add the caveat at the outset that my understanding of economics is limited, so this post is written by a layman, not an expert. If I state, infer or imply anything incorrect, I would appreciate being corrected.

According to this article in Business Weekly, high-frequency trading (HFT) has its roots in 1989 with Jim Hawkes, a statistics professor at the University of Charleston, his recently graduated student Steve Swanson, and David Whitcomb, a finance professor at Rutgers University. They founded a company called Automated Trading Desk (ATD) that hard-wired algorithms for predicting future prices on the stock market. They picked up a data feed with a small satellite dish mounted on a garage, and soon they were predicting stock prices 30 to 60 seconds ahead. This was much faster than any of the humans doing the same job at the stock exchanges. Fast forward to 2006, and ATD was trading 700 million to 800 million shares a day - meaning that they represented upwards of 9% of all US stock market volume.

HFT became so successful because it is both faster and more accurate at predicting prices than humans. It can then act upon these predictions much faster too, pushing trades through at breakneck speed. In the 17 years between 1989 and 2006, HFT came to be practiced by a number of other firms, most notably Getco, Knight Capital Group and Citadel. By 2010 it had become something of a standard practice in the City, with greedy interested parties sweating to operate the most efficient algorithms possible and the fastest connections to the exchanges. Firms were paying big money to be centimetres closer to the exchanges' processors - saving them valuable milliseconds.

This graph shows the monumental rise of HFT in both the US (red) and Europe (blue). In 2009, HFT represented around 65% of shares traded in the US. World Exchanges (the source of these data) doesn't say whether 2005-2007 really saw no HFT in Europe or whether this is a lack of data. However, even in Europe, HFT represented 40% of the stock market in 2009. A phenomenal rise to fame.


A) HFT market share in the US (Number of shares traded). B) HFT market share in Europe (value traded). Replotted from World Exchanges.

Despite this huge success, what is the most notable feature of these graphs? What happened in 2009-2010 and after to quell the growth of HFT as a practice? When talking about HFT, there are two infamous examples of how it can go wrong.

The first is known as the 'flash crash'. On 6 May 2010, the Dow Jones Industrial Average, an index of the US stock market, dropped by 9% in the course of about 30 minutes. It was the largest intra-day point decline in the history of the index. Whilst the true causes of this crash are subject to a certain amount of speculation, it is concretely true that HFT was the significant driver, or mechanism, through which prices dropped so rapidly.

However, because there were so many conflicting theories explaining the flash crash, HFT remained relatively popular and widespread. It did, though, cease to grow. It was last year, 2012, that something happened to really bring HFT under the microscope.

In the course of 45 minutes, Knight Capital Group, one of the leading market makers practicing HFT, lost $465 million. There was a bug in their code, so their algorithm was buying at high prices and selling cheaply - i.e. the opposite of a sensible trading strategy. In isolation this would have been damaging but not fatal; however, it happened right in the middle of the most competitive of markets. Other firms' algorithms sniffed out the bug and then traded extensively to exploit it. The result? Knight Capital Group lost more money in 45 minutes than it had made the year before.
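To illustrate how small a bug like this can be, here is a toy example of my own (not Knight Capital's actual code or strategy): a single flipped comparison turns a 'buy low, sell high' rule into its exact opposite.

# Toy illustration only ~ not Knight Capital's actual code or strategy.
def decide(price, fair_value):
    """Intended rule: buy when the price is below fair value, sell when above."""
    if price < fair_value:
        return "BUY"
    return "SELL"

def decide_buggy(price, fair_value):
    """The same rule with one inverted comparison: it now buys dear and sells cheap."""
    if price > fair_value:      # the flipped condition
        return "BUY"
    return "SELL"

for price in (95, 105):
    print(price, decide(price, 100), decide_buggy(price, 100))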

So, what does all this have to do with computationally controlling and monitoring the environment? Fundamentally, it's a cautionary tale. The systems that we are implementing have huge power and influence over human lives. Just as the stock market taking a plunge can have tragic consequences for individuals, so could traffic control, or dam control, or power station control, or irrigation control, or pesticide control, and so on ad infinitum.

The clear imperative is to test these systems until they are secure beyond a shadow of a doubt, but is this enough? The programs run to many thousands or millions of lines of code, and they have the capacity to behave in a highly non-linear fashion. The clear message emerging, for me, is that beyond integrated, yet still automatic, safeguards, human operators still need to oversee these operations - and this is supported by empirical evidence.
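As a purely illustrative sketch of what I mean by an integrated safeguard (my own toy example, not drawn from any of the papers discussed), here is a simple 'circuit breaker': the automated system keeps acting, but its actions are blocked and a human operator is called in as soon as its cumulative losses cross a threshold.

# Toy 'circuit breaker' safeguard ~ illustrative only, not a real trading or control system.
class CircuitBreaker:
    def __init__(self, max_cumulative_loss):
        self.max_cumulative_loss = max_cumulative_loss
        self.cumulative_loss = 0.0
        self.halted = False

    def record_outcome(self, profit_or_loss):
        """Track running losses from the automated system's actions."""
        if profit_or_loss < 0:
            self.cumulative_loss += -profit_or_loss
        if self.cumulative_loss > self.max_cumulative_loss:
            self.halted = True

    def allow(self, action):
        """Block further automated actions once the breaker has tripped."""
        if self.halted:
            print(f"HALTED: '{action}' blocked - human operator must review and reset.")
            return False
        return True

breaker = CircuitBreaker(max_cumulative_loss=1000.0)
for outcome in (-300.0, -450.0, -400.0, -50.0):   # a run of bad automated decisions
    if breaker.allow("submit order"):
        breaker.record_outcome(outcome)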

In a 2012 paper, 'High Frequency Trading and Mini Flash Crashes', Golub et al. find, after looking at a range of financial data, that problems with HFT result from the regulatory frameworks it operates in. It should be noted, however, that this paper stands out from the rest of the literature in this conclusion (other work implies that there are inherent stability issues in HFT). Either way, the broader implications are clear: we must understand the behaviour of the systems we implement, and we must operate and regulate them properly.

So this means that we will never live in an idyllic utopia where farming, power production, transport infrastructure and natural hazards are all automatically dealt with by servers ticking over in cooled data centres whilst each and every human focuses on life's pleasures (whatever that means...). Is this a shame? I don't think so. These systems do provide us with a huge increase in efficiency, and they are very fast; however, past events have resulted in a loss of trust to such an extent that skepticism might always remain. We have seen how fast mistakes can propagate.

Friday, 29 November 2013

Some statistics relating to global internet usage.

Following on from the last post, this post is going to be about the availability of computational equipment for both organisational and individual purposes. Having already explored an example (of which there are many more) where a place's position on the globe and its environment inhibit the adoption of computational integration, in this post I'm going to explore other, more human or societal, barriers.

To look at the global (or at least international) scale, internet usage figures are useful. I found data from the International Telecommunication Union for the percentage of each country's population with internet access. With the intention of looking at how access has changed over time, I've plotted these data as histograms for four-year periods:



Histograms of the percentage of the population with internet access for all countries. Plotted with data from the International Telecommunication Union. Link here
So, on the horizontal axis are the bins for each 10-percent category. For example, if 46% of country X's population has access to the internet, then country X is counted in the 40% bin. The vertical axis shows the total number of countries in each bin. If you add up all the blue bars in each histogram you get the total number of countries in the dataset (which is constant throughout the time period).
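The binning itself is simple. Here is a minimal sketch of the idea (illustrative, not my original plotting script), assuming a dictionary mapping countries to their internet-access percentages for a single year; the values here are made up for demonstration.

# Minimal illustration of the 10%-wide binning described above (not the original script).
from collections import Counter

# Hypothetical values: percentage of population with internet access in one year.
access = {"Country A": 46.0, "Country B": 8.5, "Country C": 83.2, "Country D": 46.9}

bins = Counter()
for country, pct in access.items():
    bins[int(pct // 10) * 10] += 1        # 46% falls into the 40% bin, and so on

for lower in range(0, 100, 10):
    print(f"{lower:>2}-{lower + 9}%: {bins[lower]} countries")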

What strikes me as interesting is that the period 2008-2012 represents a monumental change of pace in people's exposure to the internet. However, the price of a laptop has been falling exponentially, as I found out here...



Consumer price index for personal computers and peripheral equipment from 2005 to the present. A global recession was experienced from 2007 to 2008. Plotted with data from the US Department of Labor. Link here.

... and this means the reduction in the price of the average computer was smaller during 2008 to 2012 than it had been before. It is also interesting to note that a global recession occurred in 2007-2008. So, the reason why the internet suddenly became so much more widespread in this period remains a mystery.

It's also interesting, if somewhat sad, that despite this progress, internet access remains out of reach for the majority of the world's population.

The UK's Office for National Statistics pulls out some interesting numbers for Great Britain. In 2012, 21 million households in Great Britain (80%) had internet access, compared with 19 million (77 per cent) in 2011. So, 20% of households do not have internet access. The reasons for not having it are also interesting: of the 5.2 million households without internet access, the most common reason for not having a connection was that they 'did not need it' (54 per cent).



UK households with internet access. Before the green dashed line the data are for the UK; after it they are for GB. Replotted using data from the UK Office for National Statistics Internet Access Report 2012: link here.

I would like to break this data down further, but I can't find any more information; the trail goes cold. So, I have to speculate as to why people would say they do not think they need it. As a proponent (generally speaking) of increased connectivity, I am inclined to interpret this as effectively saying that they 'do not understand it' fully (although I recognise this is reductive) - as the benefits to any demographic are myriad.

Additionally, only 67% of adults in Great Britain used a computer every day. This figure also strikes me as very low: GB is one of the countries where computer use is amongst the highest globally. This suggests there remains a lot of work to be done.

Directing attention back to the global scale, there is a huge disparity between the languages internet content is in and the native tongues of its users. In the first diagram below you can see that Chinese speakers represent almost a quarter of all internet users; however, in the second diagram you can see that only 4% of internet content is in Chinese.




Native languages of internet users. Plotted with data from the Web Technologies Survey. Link here.



Languages of internet content. Plotted with data from Internet World Stats. Link here.
This too strikes me as very strange. It implies that a lot of the content on the internet is not accessible to a vast number of people even if they have an internet connection. So really, the illusion of global connectivity is false, and what emerges is a highly skewed and unequal accessibility: especially for something that is deemed by the UN to be an essential human right.

A lot of work could be done with these data, far beyond the scope of this blog, but in future posts I may return to this to explore the spatial biases in these data in more depth.

For this post, perhaps more than any other, comments, interpretations etc. are hugely appreciated.
  

 

Wednesday, 27 November 2013

Iqaluit

To date in this blog I've almost unreservedly proposed and expounded the benefits of integrating computational technology and automated systems into the natural and human world. In doing so, I've implied that this is universally possible and beneficial, and that the utility will be felt worldwide. However, the reality is that there are barriers preventing everyone, everywhere, from getting these benefits. In fact, there are places that are falling, or are likely to fall, victim to computational expansion.

The reasons why somewhere might not benefit from these technologies can be divided into two groups: geographical (i.e. spatial) and human (i.e. economic, social and political) constraints. Of course, in reality there is considerable overlap between the two.

In this post I'm going to talk about a place that is not 'computationally' thriving as others are, for predominantly geographical reasons (i.e. simply where it is on the globe). In a paired post later in the week I will try to look at some examples of places where there are considerable 'barriers to entry' for human reasons.

The case study I want to talk about is Iqaluit, in Nunavut, northern Canada. With a population of around 6,700 people, Iqaluit is the largest settlement in Nunavut, itself the largest of Canada's provinces and territories. Nunavut is of an equivalent size to Mexico or Western Europe but has a population equivalent to Gibraltar's. It is also the location of the world's most northerly permanently inhabited settlement.


Iqaluit, Nunavut, Canada.
Formalized as a settlement in 1942 as an American air base functioning as a trans-Atlantic refuelling station, Iqaluit was for a long time known as Frobisher Bay. Before that, it had been a traditional Inuit hunting ground for over 4,000 years. In 1940, the Hudson's Bay Company relocated here and the settlement grew. In 1964 a community council was elected, and in 1979 its first mayor was installed. Ever since then its population has been steadily rising, and it was chosen as the territory's capital in 1995. Its population is around 85% Inuit; the remaining 15% are predominantly caucasian Canadians who migrate there for three to four years to take advantage of the high wages and the skills deficit. Despite being a monumental testament to the human ability to live in extreme environments, Iqaluit is in many ways lagging behind the rest of Canada.


Shooting hoops in front of Iqaluit's cathedral (left) and town hall (right)
Many of the technologies that underpin modernity, and many of the advances humanity is making, are themselves underpinned by access to the internet. Rapid communication of information results in profound and lasting changes. However, the position of Iqaluit, north of the 60th parallel and at the mouth of the Northwest Passage, means that laying fibre optics is simply unfeasible. As such, the only telecommunications option is satellite. This means that internet access in Iqaluit is equivalent to that of the UK in the mid-1990s: residents pay around $30 a month for 45 hours of dial-up service. The effect of this is severely hampering.

Whilst personal access could easily be shrugged off as an unnecessary indulgence (after all, haven't we lived without the internet for over 8,000 years?), the effect on services is severe - especially when you consider how dependent Nunavut is upon Canada's greater infrastructure and resources situated 3,000 miles to the south.

Additionally, mobile phone service is poor. It is often discussed how beneficial mobile telecommunications are to remote and developing parts of the world, for example in Africa, but here this has simply not been feasible. In fact, the publishing of this post is very timely, as on Saturday Iqaluit will for the first time have a mobile phone service capable of supporting smartphones.

Iqaluit has huge problems surrounding food requirements and food security. A large proportion of these could be solved, or at least improved, with the use of computational technology. Wakegijig et al. (2013) highlight how community members, organisations of all sorts, public servants and academics alike have long been describing the 'desperate situation for food security' in Nunavut. Defining food insecurity as the state in which food systems and networks are such that food is not accessible and/or of sufficient quality, 70% of Nunavut's Inuit pre-schoolers live in food-insecure homes. Wakegijig et al. highlight the absolute importance of strategic planning, advocacy and public mobilisation in raising the profile of an issue and solving it. Community engagement and action are difficult to organize today without internet access, especially when large indoor public spaces are scarce and outdoor spaces, to protest in for example, are difficult to occupy for 80% of the year, as Iqaluit is frozen and one doesn't want to be hanging around outside in -40 Celsius.

Food in Iqaluit comes from two sources: it is either hunted or imported. Both of these are highly sensitive to climatic changes. Indeed, the recent warming trends have changed hunting and fishing grounds in ways that make regular hunting difficult, and it is not necessarily easy to predict these changes. For example, Inuit have for many, many years hunted caribou (also called reindeer). With the recent warming trends the caribou gained access to more grazing land and populations flourished. However, this led to a relaxation in policy, over-hunting ensued, and populations have now been devastated.

Importing food is costly. Having myself walked around the supermarket in Iqaluit, I have seen first-hand how much food goes to waste because it is not bought. Many vegetables and other perishable goods are sent north and stocked; however, eating many of these things has never been part of Inuit cooking, so they remain unbought and go off. If internet access were more widely available, people could order the food they wanted, so costs could go down and waste could be reduced.

Samuelson (1998) found that the rapid population growth of this coastal community has resulted in increased pressure on the (particularly sensitive) environment, and waste management issues have become increasingly complicated. There is severe contamination of fresh water, which is not as abundant a resource as one might expect. Computational technology would dramatically improve this situation, but it simply remains to be integrated because, at the moment, it is too expensive to get the technology up there and to build it rugged enough for the local environment.

Iqaluit also experiences many health problems. Cole and Healey (2013) outline the ongoing challenge of providing healthcare in the Canadian Arctic, specifically in the region's capital. The truth is that Inuit culture and the vast geography of the region make things difficult. The great distances people must travel to get any form of specialized healthcare or diagnosis lead to a number of ethical dilemmas. One third of Nunavut's healthcare budget is spent on moving people to a site that can provide them with the care they need. The sort of simple optimization that computers can provide would dramatically improve this situation, both in propagating information and in finding better solutions to existing problems.


Iqaluit, and Nunavut more generally, has many social and health-related problems.


Of course, this is not an exhaustive list of the problem areas that integrating computational technology would be able to help with. Iqaluit has two prisons for its population of under 7,000 people; both are full, and a new one has just been commissioned. This is startling. Animals also play a huge role in the daily lives of individuals living here, and the sort of help that automated tracking, for example of polar bears, could bring is monumental.


Many of these problems won't be fixed anytime soon by computers. In many cases it is impossible to bring the sorts of technologies that would help this far north due to environmental constraints, as is the case for fibre optics. In other cases, the fact that the population is so small makes projects unlikely to be commercially viable, despite the profound importance of Iqaluit as the region's capital. As such, creative ways to solve these problems need to be devised.


*****************************
 I've put the code to draw the map on the sister site. URL: http://herculescyborgcode.blogspot.co.uk/2013/11/map-drawing.html

The photos are my own. If you would like to use them please email me and I can give you access to many more.