Description

The Industrial Revolution pushed civilization forward dramatically. The technological innovations it produced allowed us to build bigger cities, grow richer and construct a standard of living never before seen and hardly imagined. Subsequent political agendas and technological innovations have pushed civilization up above Nature, resulting in a disconnect. The environmental consequences, though, are leaving the Earth moribund. In this blog, I'm exploring the idea that integrating computational technology into environmental systems will be the answer to the aftermath of industry.

The drawing above is by Phung Hieu Minh Van, a student at the Architectural Association.

Monday 23 December 2013

Bringing technology away from the human and into the natural.

In writing this blog, one of the most considerable barriers I feel I have been up against is the lack of a coherent literature. Very few papers, articles, other blogs, etc. deal with the problem as I have framed it. This is, in part, a fault of my own. Perhaps it would be more useful to engage in existing debates and try to lead them in new directions than to start new ones. Nonetheless, I feel that I have had some success in formulating the boundaries of my discussion, even if it has resulted in me having to search harder for germane sources. Despite this, I do feel that the debate as to whether to allow unfettered integration of computational technology with the natural environment is an important one - indeed one that will affect the future of life on Earth as much as perhaps anything else. What the lack of coherent discussion around this topic suggests to me is that there is a problem. A problem that is undoubtedly larger and more systemic than this individual instance, but that regardless needs elaboration in this context.

The absence of discussion here will result in two fundamentally damaging problems: inaction and the wrong action. The first is perhaps more self-explanatory but also less significant and less realistic. By not doing anything we miss out on huge potential benefits, but as there are already plenty of examples of projects of computational integration, this is not really the problem here. The second, action of the 'wrong' sort, is more important (at least as I see things). By not communally and systematically analyzing what is happening at all spatial scales (right down from the microscale of individual rivers, for example, to the planetary, e.g. monitoring via satellites) and temporal scales (e.g. from disposable, instantaneous action to projects implemented for longevity), we are opening ourselves up to damage. Whilst the implications of this are multitudinous and therefore require a great deal more attention than I can direct toward them in this work, I will list a few examples to try to convey my point more clearly.

    a) that projects that do get implemented are likely to suit particular interests, probably the interests of people with money.

    b) they will also probably be limited as a result of these interests and financial and other constraints.

    c) projects will likely be incompatible with each other and therefore we will miss out on great opportunities for some hugely valuable work and action.

    d) there will be considerable overlap in projects and therefore considerable waste of resources.

I want to stress the immediacy of this debate, and to do this I am going to discuss a paper from 1965. Gordon Moore, who would go on to co-found Intel, wrote a paper called 'Cramming More Components onto Integrated Circuits'. It looks at trends in technology and makes some predictions about the future. With remarkable accuracy he foresaw how things would turn out. The paper is most famous for Moore's observation that the number of components that can be put on a chip - and with it, roughly, the power of computers - doubles every year or two (the popular figure is 18 months). This is now referred to as Moore's Law.
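Just to make that compounding concrete, here is a quick back-of-envelope sketch in Python. The ~64-component starting point and the exact doubling periods are illustrative numbers I have chosen, not figures taken from Moore's paper.

```python
# Minimal back-of-envelope sketch of how Moore's doubling rule compounds.
# The ~64-component starting point and the doubling periods tried below are
# illustrative assumptions, not figures taken from Moore's paper.

def components(year, start_year=1965, start_count=64, doubling_years=2.0):
    """Components per chip if the count doubles every `doubling_years` years."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

for period in (1.5, 2.0):  # the popular "18 months" versus a two-year doubling
    print(f"doubling every {period} years:")
    for year in (1965, 1985, 2005, 2015):
        print(f"  {year}: ~{components(year, doubling_years=period):,.0f} components")
```

Even with the slower two-year doubling, fifty years of compounding takes a chip from tens of components to billions.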

In one particularly prophetic quote Moore says:


Integrated electronics will make electronic techniques more generally available throughout all of society, performing many functions that presently are done inadequately by other techniques or not done at all. 


This relates exactly to what I have been attempting to confront in this blog. Now, after nearly 50 years, we've gone from chips holding only a handful of components to ones carrying over 1.7 billion transistors! What Moore skims around but doesn't directly confront, even in an interview in 2005 about the paper and the advances since, is the form these changes take. I can see two potential stages. The first, from roughly 1965 to 2010, is the advance of computational technology to the point where it reaches individuals - for example, the widespread adoption of laptops and smartphones. The next step is a complete overhaul of infrastructure etc. to incorporate this technology.

What makes this even more interesting is the context of climate change. The energy efficiency of computers colours this debate nicely. In a 2011 paper, 'Implications of Historical Trends in the Electrical Efficiency of Computing', Koomey et al. make an empirical observation similar to Moore's Law: the electrical efficiency of computation has doubled roughly every year and a half for more than six decades. Why is this relevant? Well, industrial infrastructure has historically been hugely energy inefficient, with huge losses in the form of heat, sound and kinetic energy, and it scales badly, such that huge structures (e.g. factories) are hugely inefficient. In contrast, computational infrastructure is becoming extremely efficient and scales up very well.
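To get a feel for how fast that efficiency doubling compounds, here is another small sketch. The 1.5-year doubling period is the figure quoted above, and the baseline is arbitrary; only the relative improvement matters.

```python
# Sketch of how a doubling in electrical efficiency every ~1.5 years compounds,
# in the spirit of Koomey et al. (2011). The baseline of 1.0 is arbitrary;
# only the relative improvement matters here.

def relative_efficiency(years_elapsed, doubling_years=1.5):
    """Computations per kWh relative to an arbitrary baseline year."""
    return 2 ** (years_elapsed / doubling_years)

for years in (10, 20, 30, 60):
    print(f"after {years} years: ~{relative_efficiency(years):,.0f}x more computation per kWh")
```

After a single decade the same kilowatt-hour buys roughly a hundred times more computation; over six decades the gain runs into the trillions.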

So what am I trying to get across in this post? Well, two things: 1) that there has been very little discussion directly focused on this topic, and this could have hugely damaging effects, and 2) that it is time to act. Things are changing quickly and for the better. We are at the end of the first stage of the technological revolution - we should now formally and together enter the second, bringing technology away from the human and into the natural.

There is a lot of talk about whether Moore's Law will continue into the future. The way computer power has increased in the past is by making components smaller and smaller. Soon chips will be so small that they reach physical limitations: when their features are around five atoms thick (which they probably will be in about 10 years' time), quantum effects such as tunnelling will prevent further advances. Thus silicon chips can only get so much better. Other kinds of chip are possible, such as those based on DNA or on quantum mechanics, but these are still a long way in the future. Whatever the case, we now have technology with huge untapped potential for improving global environments.
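As a rough sketch of why that atomic limit is not far off: the 22 nm starting feature size (a 2013-era process) and the ~0.24 nm silicon atom spacing below are approximate figures I am assuming purely for illustration.

```python
# Rough sketch of how few halvings of linear feature size remain before chip
# features reach the scale of a few silicon atoms. The 22 nm starting point
# (a 2013-era process) and the ~0.24 nm silicon atom spacing are approximate
# figures assumed for illustration only.
import math

start_nm = 22.0                  # assumed 2013-era feature size
atom_spacing_nm = 0.24           # approximate spacing between silicon atoms
limit_nm = 5 * atom_spacing_nm   # a layer roughly five atoms across

halvings = math.log2(start_nm / limit_nm)
print(f"five silicon atoms is roughly {limit_nm:.1f} nm across")
print(f"only ~{halvings:.1f} more halvings of feature size until that limit")
```

On those assumptions only around four more halvings of linear feature size separate today's chips from the atomic scale, which is why the end of conventional silicon scaling keeps coming up.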
