Description

The Industrial Revolution pushed civilization forward dramatically. Its technological innovations allowed us to build bigger cities, grow richer and construct a standard of living never before seen and hardly imagined. Subsequent political agendas and technological innovations have pushed civilization up above Nature, resulting in a disconnect, and the environmental consequences are leaving the Earth moribund. In this blog I'm exploring the idea that integrating computational technology into environmental systems will be the answer to the aftermath of industry.

The drawing above is by Phung Hieu Minh Van, a student at the Architectural Association.

Saturday 28 December 2013

The Internet of Things

To follow my previous post, which talked about the vast improvements in computer power over the last decades, here I would like to tackle the issue of connecting all these devices together. To do this I'm mainly going to draw on two sources.

The first, written by Khan et al. and entitled 'Future Internet: The Internet of Things Architecture, Possible Applications and Key Challenges', was published as part of the 2012 10th Institute of Electrical and Electronics Engineers (IEEE) International Conference on Frontiers of Information Technology. It provides a great overview of the IoT and its future. The second is a workshop paper by Ali and Abu-Elkheir entitled 'Data Management for the Internet of Things: Green Directions'. It comes from a 2012 IEEE workshop, 'Green Internet of Things'. Despite being a fairly technical paper I've chosen it because it highlights one of the major problems of ubiquitous computing, namely that it's going to demand a huge amount of energy! It stresses the need for (and proposes) efficient and sustainable solutions to this problem so that we can have all the benefits the IoT will bring without bringing about crippling environmental change.

So, what is the Internet of Things?

It sounds like corporate jargon, but its definition is actually quite specific. Ali and Abu-Elkheir (2012) define it as:


a networking paradigm that exploits data capture and communication capabilities to link the volume growth of sensors, Radio Frequency Identification (RFID) tags, and smart objects directly to the internet. The ultimate purpose of IoT is to provide a standard platform for developing cooperative services and applications capable of harnessing the collective power of information and resources available through the individual "Things" and systems managing the aforementioned Things.

What this means is that it is the process of connecting all sorts of objects to the internet and letting them communicate with each other. Khan et al. (2012) write:


Thus, IoT provides connectivity for everyone and everything. The IoT embeds some intelligence in Internet connected objects to communicate, exchange information, take decisions, invoke actions and provide amazing services.

This is why it is also called ubiquitous computing. By now almost all desktop computers, laptops, tablets, smartphones etc. are connected to the internet. In a few years' time the same will be true of cars, televisions, even smoke detectors. Khan et al. give a simple schematic as an example:
 
The generic IoT scenario (Source: Khan et al., 2012)

.... but of course this could and will include all sorts of other things, like those I've been exploring in this blog: dams, roads, and observation satellites. So this is what the internet will look like in a few years' time - indeed, it is already beginning to take this form. But what are the problems inhibiting the growth of this sort of thing? Well, there are many, but I want to focus here on one in particular, one of the largest, if not the largest: energy efficiency.
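
To make this concrete, here is a minimal sketch in Python of what one of these 'things' does: periodically package up an observation and publish it to the network. The device ID and endpoint are made up, and a real sensor node would of course talk to real hardware and a real broker.

    import json
    import random
    import time

    ENDPOINT = "http://example.com/iot/observations"  # hypothetical broker

    def read_sensor():
        """Stand-in for real hardware: returns a fake temperature reading."""
        return round(random.uniform(15.0, 25.0), 2)

    def publish(payload):
        """A real device would POST this over HTTP or publish it via MQTT;
        here we just print the JSON that would be sent to ENDPOINT."""
        print(f"-> {ENDPOINT}: {json.dumps(payload)}")

    for _ in range(3):  # a real device would loop indefinitely
        observation = {
            "device_id": "river-temp-01",  # hypothetical identifier
            "timestamp": time.time(),
            "temperature_c": read_sensor(),
        }
        publish(observation)
        time.sleep(1)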

The IoT can be described as having three components: hardware, software and data. For the most part the hardware already exists, so that's not the problem. The software may not yet be written, but it's certainly not a huge barrier to write it: we have the technology, with the latest high-level programming languages, and the people (and money) to get it written. Ali and Abu-Elkheir (2012) stress that data is the issue.

IoT data is different from previously collected, stored and processed data in that it is ...


'a massive volume of heterogeneous, streaming and geographically-dispersed real-time data ... created by millions [or billions] of diverse devices periodically sending observations about certain monitored phenomena or reporting the occurrence of certain or abnormal events of interest' (Ali and Abu-Elkheir, 2012).

So there is a huge problem of energy consumption. Storing data requires a lot of energy, and the explosion of data will result in an explosion in energy requirements. Just take a look at how keen Google are to stress how efficient their data centers are:



and take a look at their website - here.

So what are the issues here? Well, there needs to be (and there is!) research undertaken and conversations begun about what information should be recorded. Of course, this also ties into the other huge problem of implementing the IoT: privacy. To read up on this debate check out the Guardian's piece here. This is an art and not a science and will evolve as sentiment and political powers evolve - but there is a clear environmental constraint here: we simply cannot record everything, as we just don't have the energy to do that!
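
One small illustration of how that constraint might be handled in practice is 'report by exception': a node only transmits a reading when it differs meaningfully from the last value sent, saving both data and energy. This is just a sketch, with a made-up sensor and a made-up threshold.

    import random

    THRESHOLD = 0.5  # degrees C; a hypothetical significance threshold

    def readings():
        """Fake stream of sensor values standing in for real hardware."""
        value = 20.0
        for _ in range(20):
            value += random.uniform(-0.3, 0.3)
            yield value

    last_sent = None
    sent = 0
    for value in readings():
        # only transmit when the value has moved by more than the threshold
        if last_sent is None or abs(value - last_sent) >= THRESHOLD:
            print(f"transmit {value:.2f}")  # a real node would send upstream
            last_sent = value
            sent += 1
    print(f"sent {sent} of 20 readings")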

On a more technical side there are things we can do to improve the systems. There is also intense research here, and Ali and Abu-Elkheir (2012) outline four major research trends: the development of efficient indexing schemes, scalable archiving, localized data-centric storage, and migrating data to the cloud. Whilst the technical aspects of these may be lost on you reading this (they are on me), they make some general sense in what they're trying to achieve:


Efficient Indexing ~ being able to find things in the data quickly by good labelling,

Scalable archiving ~ putting things in the right place, e.g. data that will be used a lot somewhere quick and easy to get to, and vice versa,

Localized data-centric storage ~ understanding the geographies of data and its usage,

Migrating data to the cloud ~ removing the geographies of data in a clever way.

One other thing that this paper highlights is the life cycle of data: not all things need to be kept forever - but some do! (I've sketched these ideas in toy form below.)
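
For the curious, here is a toy Python sketch of three of these ideas: a simple index for fast lookup, a hot/cold split standing in for tiered archiving, and a time-to-live for the data life cycle. All the names, numbers and data structures are hypothetical simplifications.

    import time
    from collections import defaultdict

    TTL_SECONDS = 3600       # hypothetical retention period
    HOT_AFTER_READS = 3      # promote to the fast tier after this many reads

    index = defaultdict(list)  # sensor_id -> list of (timestamp, value)
    hot, cold = {}, {}         # stand-ins for fast and slow storage tiers
    reads = defaultdict(int)

    def store(sensor_id, value):
        record = (time.time(), value)
        index[sensor_id].append(record)  # efficient indexing: label on write
        cold[sensor_id] = record         # everything starts in the cheap tier

    def read(sensor_id):
        reads[sensor_id] += 1
        if reads[sensor_id] >= HOT_AFTER_READS and sensor_id in cold:
            hot[sensor_id] = cold[sensor_id]  # scalable archiving: promote
        return hot.get(sensor_id) or cold.get(sensor_id)

    def expire():
        """Data life cycle: drop indexed records older than their TTL."""
        cutoff = time.time() - TTL_SECONDS
        for sensor_id in list(index):
            index[sensor_id] = [r for r in index[sensor_id] if r[0] >= cutoff]

    store("dam-flow-07", 12.4)   # hypothetical sensor
    print(read("dam-flow-07"))
    expire()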

The point to be made here is that there is a clear need to further analyze and address the issue of the energy efficiency of the IoT, otherwise it won't be possible and we'll continue to put the planet's environmental systems under stress. But there are lots of routes to explore for improvements, so we have reason to be hopeful!

Conclusions

So in this post I've tried to introduce and flesh out the Internet of Things. Khan et al. (2012) sum things up nicely:


The IoT embeds intelligence in the sensor devices to autonomously communicate, exchange information and take intelligent decisions. Simply, IoT transitions human-human communication to human-human, human-device and device-device communication.

When this is taken alongside developments in computational power (see my previous post on Moore's law) and in the context of integrating these technologies with the natural world, I hope I have had even the smallest amount of success in conveying how exciting the coming decades promise to be!


Monday 23 December 2013

Bringing technology away from the human and into the natural.

In writing this blog, one of the most considerable barriers I feel I have been up against is a lack of coherent literature. Very few papers, articles, other blogs, etc. deal with the problem as I have framed it. This is, in part, a fault of my own. Perhaps it would be more useful to engage in existing debates and try to lead them in new directions than to start new ones. Nonetheless, I feel that I have had some success in formulating the boundaries of my discussion, even if it has resulted in me having to search harder for germane sources. Despite this, I do feel that the debate as to whether to allow unfettered integration of computational technology with the natural environment is an important one - indeed, one that will affect the future of life on Earth as much as perhaps anything else. What the lack of coherent discussion around this topic suggests to me is that there is a problem - a problem that is undoubtedly larger and more systemic than this individual instance, but that regardless needs elaboration in this context.

The nonexistence of discussion here will result in two fundamentally damaging problems: inaction and the wrong action. The first is perhaps more self-explanatory but also less significant and less realistic. By not doing anything we miss out on huge potential benefits, but as there are already lots of examples of projects of computational integration this is not really the problem here. The second, action of the 'wrong' sort, is more important (at least as I see things). By not communally and systematically analyzing what's happening on all spatial scales (right down from the microscale of individual rivers, for example, up to the planetary, e.g. monitoring via satellites) and temporal scales (e.g. from disposable, instantaneous action to projects implemented for longevity) we are opening ourselves up to damage. Whilst the implications of this are multitudinous and therefore require a great deal more attention than I can direct toward them in this work, I will list a few examples to try to convey my point more clearly:

    a) that projects that do get implemented are likely to suit particular interests, probably the interest of people with money.

    b) they will also probably be limited as a result of these interests and financial and other constraints.

    c) projects will likely be incompatible with each other and therefore we will miss out on great opportunities for some hugely valuable work and action.

    d) there will be considerable overlap in projects and therefore considerable waste of resources.

I want to stress the immediacy of this debate, and to do this I'm going to discuss a paper from 1965. Gordon Moore, who would go on to co-found Intel, wrote a paper called 'Cramming More Components onto Integrated Circuits'. It looks at trends in technology and makes some predictions about the future. With remarkable accuracy he saw how things would turn out. This paper is most famous for Moore's observation that the number of components on a chip doubles roughly every couple of years - often quoted as computing power doubling every 18 months. This is now referred to as Moore's Law.
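
To get a feel for what that kind of doubling means, here is a two-line worked example in Python. The two-year doubling period is the commonly quoted figure, and the dates are just illustrative.

    def doublings(start_year, end_year, period_years=2.0):
        """Number of doublings between two dates at a fixed doubling period."""
        return (end_year - start_year) / period_years

    n = doublings(1965, 2013)
    # 24 doublings -> a growth factor of about 16.8 million
    print(f"{n:.0f} doublings -> growth factor of {2 ** n:,.0f}x")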

In one particularly prophetic quote Moore says:


Integrated electronics will make electronic techniques more generally available throughout all of society, performing many functions that presently are done inadequately by other techniques or not done at all. 


This relates exactly to what I have been attempting to confront in this blog. Now, nearly 50 years on, we've gone from chips with a handful of components to ones with over 1.7 billion transistors! What Moore skims around but doesn't directly confront, even in a 2005 interview about the paper and its predictions, is the form these changes take. I can see two potential stages. The first, from 1965 to around 2010, was the advance of computational technology to bring it to individuals - for example, the widespread adoption of laptops and smartphones. The next step is a complete overhaul of infrastructure to incorporate this technology.

To make this even more interesting, consider the context of climate change. The energy efficiency of computers colours this debate nicely. In a 2011 paper, 'Implications of Historical Trends in the Electrical Efficiency of Computing', Koomey et al. make an empirical observation similar to Moore's Law: the electrical efficiency of computation has doubled roughly every year and a half for more than six decades. Why is this relevant? Well, industrial infrastructure has historically been hugely energy inefficient, with huge losses in the form of heat, sound and kinetic energy, and this scales badly, such that huge structures (e.g. factories) are hugely inefficient. In contrast, computational infrastructure is becoming hugely efficient and scales up very well.
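
The same back-of-envelope arithmetic applied to Koomey et al.'s observation gives a sense of the scale involved. The figures below are illustrative, not measurements.

    years = 2011 - 1965      # span covered by Koomey et al.'s trend
    n = years / 1.5          # one efficiency doubling every ~1.5 years
    # roughly 31 doublings -> on the order of a billion-fold efficiency gain
    print(f"efficiency gain since 1965: roughly {2 ** n:,.0f}x")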

So what am I trying to get across in this post? Well, two things: 1) that there has been very little discussion directly focused on this topic, and this could have hugely damaging effects, and 2) that now is the time to act. Things are changing so quickly, and for the better. We are at the end of the first stage of the technological revolution - we should now formally and together enter the second, bringing technology away from the human and into the natural.

There is a lot of talk about whether Moore's law will continue into the future. The way computer power has increased in the past is by making components smaller and smaller. Soon chips will be so small that they will reach physical limitations: when they are around 5 atoms thick (which they probably will be in about 10 years' time) quantum effects such as tunneling will prevent further advances. Thus silicon chips can only get so much better. There are other chips possible, such as those made out of DNA or using quantum mechanics, but these are still a long way in the future. Whatever the case, we now have technology with huge unreleased potential for improving global environments.
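
As a rough sanity check on that 'about 10 years' figure, here is the arithmetic under some assumed numbers: a ~22 nm process circa 2013, a limit of about 5 silicon atoms (~1 nm), and feature sizes halving every two years.

    import math

    start_nm = 22.0       # assumed process node around 2013
    limit_nm = 1.0        # ~5 atoms at roughly 0.2 nm per silicon atom
    halving_years = 2.0   # assumed rate of miniaturisation

    # number of halvings to go from 22 nm to 1 nm, times years per halving
    years = math.log2(start_nm / limit_nm) * halving_years
    print(f"atomic limit reached in roughly {years:.0f} years")  # ~9 years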

Friday 6 December 2013

Lessons learnt from high frequency trading

After all the posts and reading I have done over the last few months around this topic, I would say things have changed. Whilst I am still an absolute proponent of integrating computational technology into the natural world, my understanding of this field has undoubtedly become much more nuanced. Whilst thinking about how I was going to put this forward in a new post, I came across a useful analogous situation. In this post my aim is to illustrate the imperative of retaining human controls on automated systems. I'm going to do this by exploring the damage caused by algorithmic trading to the global financial system.

Whilst the financial system is in its entirety anthropogenic - a socio-political-economic construction - I believe lessons can emerge from this arena that are broadly applicable to natural systems.

I would also like to add the caveat at the outset that my understanding of economics is limited, such that this post is written by a layman, not an expert. If I say or imply anything incorrect, I would appreciate being corrected.

According to this article in Businessweek, high frequency trading (HFT) has its roots in 1989, with Jim Hawkes, a statistics professor at the College of Charleston, his recently graduated student Steve Swanson, and David Whitcomb, a finance professor at Rutgers University. They founded a company called Automated Trading Desk (ATD) that hardwired algorithms for predicting future prices on the stock market. They tapped a data beam with a small satellite dish on a garage roof, and soon they were predicting stock prices 30 to 60 seconds into the future - much faster than any of the humans doing the same job at the stock exchanges. Fast forward to 2006, and ATD was trading 700 million to 800 million shares a day, meaning that they represented upwards of 9% of all US stock market volume.

HFT became so successful because it is both faster and more accurate at predicting prices than humans, and it can then act upon these predictions much faster, pushing trades through at breakneck speed. In the 17 years between 1989 and 2006, HFT came to be practiced by a number of other firms, most notably Getco, Knight Capital Group and Citadel. By 2010 it had become somewhat of a standard practice, with greedy interested parties sweating to operate the most efficient algorithms possible and the fastest connections to the exchanges. Firms were paying big money to be centimeters closer to processors, saving them valuable milliseconds.
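
To give a flavour of what 'predicting prices' means here - and to be clear, this is nothing like ATD's proprietary models, just a toy - below is a simple moving-average momentum signal in Python.

    def signal(prices, window=5):
        """Buy if the latest price is above its recent average, else sell."""
        if len(prices) < window:
            return "hold"
        average = sum(prices[-window:]) / window
        return "buy" if prices[-1] > average else "sell"

    ticks = [100.0, 100.2, 100.1, 100.4, 100.6, 100.9]  # made-up prices
    print(signal(ticks))  # "buy": the latest tick sits above its average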

The graph below shows the monumental rise of HFT in both the US (red) and Europe (blue). In 2009, HFT represented around 65% of shares traded in the US. World Exchanges (the source of this data) doesn't say whether there was no HFT in Europe in 2005-2007 or whether the data is simply missing. Either way, even in Europe HFT represented 40% of the stock market by 2009 - a phenomenal rise to fame.


A) HFT market share in the US (number of shares traded). B) HFT market share in Europe (value traded). Replotted from World Exchanges.

Despite these huge successes, what is the most notable feature of these graphs? What happened in 2009-2010 and after to quell the growth of HFT as a practice? When talking about HFT there are two infamous examples of how it can go wrong.

The first is known as the 'flash crash'. On the 6th of May 2010, the Dow Jones Industrial Average, an index of the US stock market, dropped by 9% in the course of about 30 minutes - the largest intra-day point decline in the history of the index. Whilst the true causes of this crash are subject to a certain amount of speculation, it is clear that HFT was the significant driver, or mechanism, through which prices dropped so rapidly.

However, because there were so many conflicting theories explaining the flash crash, HFT remained relatively popular and widespread - though it did cease to grow. It was last year, in 2012, that something happened to really bring HFT under the microscope.

In the course of 45 minutes, Knight Capital Group, one of the leading market makers practicing HFT, lost $465 million. A bug in their code meant their algorithm was buying at high prices and selling cheaply - i.e. the opposite of a sensible trading strategy. Whilst in isolation this would have been damaging but not fatal, it was situated right in the middle of one of the most competitive markets. Other firms' algorithms sniffed out the bug and traded extensively to exploit it. The result? Knight Capital Group lost more money in 45 minutes than it had made the year before.
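
To show how small this class of error can be - this is a contrived sketch, not Knight's actual code - here is how a single flipped comparison turns a strategy into a money pump.

    def trade(price, fair_value, buggy=False):
        if buggy:
            # the comparison is inverted: buy when overpriced, sell when cheap
            return "buy" if price > fair_value else "sell"
        return "buy" if price < fair_value else "sell"

    pnl = 0.0
    for price in [101.0, 99.0, 102.0, 98.0]:  # made-up ticks, fair value 100
        action = trade(price, fair_value=100.0, buggy=True)
        # each trade loses the gap between price and fair value
        pnl += (100.0 - price) if action == "buy" else (price - 100.0)
    print(f"P&L with the flipped comparison: {pnl:+.2f}")  # always negative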

So, what does all this have to do with computationally controlling and monitoring the environment? Well, fundamentally it's a cautionary tale. The systems that we are implementing have huge power and influence over human lives. Just as the stock market taking a plunge can have tragic consequences for individuals, so could traffic control, or dam control, or power station control, or irrigation control, or pesticide control, etc. etc. ad infinitum.

The clear imperative is to test these systems for safety beyond a shadow of a doubt - but is this enough? The programmes are many thousands or millions of lines of code, and they have the capacity to behave in a highly non-linear fashion. The clear message emerging, to me, is that beyond integrated, yet still automatic, safeguards, human operators still need to oversee these operations - and this is supported by empirical evidence.
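
What might such a safeguard look like? Below is a hedged sketch of a 'circuit breaker' that halts the system and alerts a human when its behaviour drifts outside preset bounds. The thresholds and the alerting hook are hypothetical.

    MAX_LOSS = 1000.0          # halt if cumulative loss exceeds this
    MAX_ACTIONS_PER_MIN = 60   # halt if the system acts suspiciously often

    def alert_operator(reason):
        """Stand-in for a real pager or alerting integration."""
        print(f"HALTED - human review required: {reason}")

    def check_safeguards(cumulative_loss, actions_last_minute):
        """Return True to keep running, False to stop and call a human."""
        if cumulative_loss > MAX_LOSS:
            alert_operator(f"loss {cumulative_loss:.0f} exceeds {MAX_LOSS:.0f}")
            return False
        if actions_last_minute > MAX_ACTIONS_PER_MIN:
            alert_operator(f"{actions_last_minute} actions in the last minute")
            return False
        return True

    running = check_safeguards(cumulative_loss=1500.0, actions_last_minute=12)
    print("running" if running else "stopped and awaiting human sign-off")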

In a 2012 paper, 'High Frequency Trading and Mini Flash Crashes', Golub et al., after looking at a range of financial data, find that problems with HFT result from the regulatory frameworks it operates in. It should be noted, however, that this paper stands out from the rest of the literature in this conclusion (other work implies that there are inherent stability issues in HFT). Either way, the broader implications are clear: we must understand the behavior of the systems we implement, and we must operate and regulate them properly.

So this means that we will never live in an idyllic utopia where farming, power production, transport infrastructure and natural hazards are all automatically dealt with by servers ticking over in cooled data centers whilst each and every human can focus on life's pleasures (whatever that means...). Is this a shame? I don't think so. These systems do provide us with a huge increase in efficiency, and they are very fast, but past events have resulted in a loss of trust to such an extent that skepticism might always remain. We have seen how fast mistakes can propagate.