Description

The Industrial Revolution pushed civilization forward dramatically. The technological innovations it brought allowed us to build bigger cities, grow richer and construct a standard of living never before seen and hardly imagined. Subsequent political agendas and technological innovations have pushed civilization up above Nature, resulting in a disconnect, and the environmental consequences are leaving the Earth moribund. In this blog, I'm exploring the idea that integrating computational technology into environmental systems will be the answer to the aftermath of industry.

The drawing above is by Phung Hieu Minh Van, a student at the Architectural Association.

Friday 6 December 2013

Lessons learnt from high frequency trading

After all the posts and reading I have done over the last few months around this topic, I would say things have changed. Whilst I am still an absolute proponent of integrating computational technology into the natural world, my understanding of this field has undoubtedly become much more nuanced. Whilst thinking about how I was going to put this forward in a new post, I came across a useful analogy. In this post my aim is to illustrate the imperative to retain human controls on automated systems. I'm going to do this by exploring the damage caused by algorithmic trading to the global financial system.

Whilst the financial system is entirely anthropogenic, a socio-political-economic construction, I believe lessons can emerge from this arena that are broadly applicable to natural systems.

I would also like to add the caveat at the outset that my understanding of economics is limited; this post is written by a layman, not an expert. If I say or imply anything incorrect, I would appreciate being corrected.

According to this article in Businessweek, high frequency trading (HFT) has its roots in 1989 with Jim Hawkes, a statistics professor at the College of Charleston, his recently graduated student Steve Swanson and David Whitcomb, a finance professor at Rutgers University. They founded a company called Automated Trading Desk (ATD) that built algorithms for predicting future prices on the stock market. They picked up a data feed with a small satellite dish rigged to a garage, and soon they were predicting what stock prices would be 30 to 60 seconds ahead - far faster than any of the humans doing the same job at the exchanges. Fast forward to 2006, and ATD was trading 700 million to 800 million shares a day, meaning that it represented upwards of 9% of all US stock market volume.

HFT became so successful because it is both faster and more accurate than humans at predicting prices. It can then act upon these predictions much faster too, pushing trades through at breakneck speeds. Over the 17 years between 1989 and 2006, HFT came to be practiced by a number of other firms, most notably Getco, Knight Capital Group and Citadel. By 2010, it had become something of a standard practice in the City, with greedy interested parties sweating to run the most efficient algorithms possible over the fastest connections to the exchanges. Firms were paying big money to sit centimeters closer to the exchanges' processors, saving them valuable milliseconds.

This graph shows the monumental rise of HFT in both the US (red) and Europe (blue). In 2009, HFT represented around 65% of shares traded in the US. World Exchanges (the source of this data) doesn't say whether Europe genuinely had no HFT in 2005-2007 or whether the data are simply missing. However, even in Europe, HFT represented around 40% of the stock market in 2009. A phenomenal rise to fame.


A) HFT market share in the US (number of shares traded). B) HFT market share in Europe (value traded). Replotted from World Exchanges.

Despite this huge success, the most notable feature of these graphs is what happens next: what happened in and after 2009-2010 to quell the growth of HFT as a practice? When talking about HFT, there are two infamous examples of how it can go wrong.

The first is known as the 'flash crash'. On the 6th of May 2010, the Dow Jones Industrial Average, an index of the US stock market, dropped by around 9% in the course of about 30 minutes - the largest intra-day point decline in the history of the index. Whilst the true causes of this crash are still subject to a certain amount of speculation, it is well established that HFT was the significant driver, or mechanism, through which prices dropped so rapidly.

However, because there were so many conflicting theories about the flash crash, HFT remained relatively popular and widespread. It did, though, cease to grow. It was last year, in 2012, that something happened to really bring HFT under the microscope.

In the course of 45 minutes, Knight Capital Group, one of the leading market makers practicing HFT, lost $465 million. A bug in their code meant that their algorithm was buying at high prices and selling at low ones - i.e. the opposite of a sensible trading strategy. In isolation this would have been damaging but not fatal, but it was happening right in the middle of one of the most competitive markets around. Other firms' algorithms sniffed out the bug and traded extensively to exploit it. The result? Knight Capital Group lost more money in 45 minutes than it had made the year before.
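To make the nature of that failure concrete, here is a minimal, purely illustrative Python sketch of how a single inverted comparison can turn a buy-low-sell-high rule into its opposite. None of this is Knight's actual code; the quote values, `fair_value` estimate and order logic are invented for the example.

```python
# Hypothetical illustration of a sign-flipped trading rule.
# A sensible market maker buys when the quoted price is BELOW its
# estimate of fair value and sells when it is ABOVE.

def sensible_action(quote: float, fair_value: float) -> str:
    """Buy cheap, sell dear."""
    if quote < fair_value:
        return "BUY"
    elif quote > fair_value:
        return "SELL"
    return "HOLD"

def buggy_action(quote: float, fair_value: float) -> str:
    """The same rule with the comparison inverted: buy dear, sell cheap."""
    if quote > fair_value:   # <-- a single flipped operator
        return "BUY"
    elif quote < fair_value:
        return "SELL"
    return "HOLD"

if __name__ == "__main__":
    fair_value = 100.0
    cash = 0.0
    # The buggy rule leaks money on every round trip; run at HFT speeds
    # it does this thousands of times a minute.
    for quote in [101.0, 99.0] * 3:   # alternating dear/cheap quotes
        action = buggy_action(quote, fair_value)
        cash += -quote if action == "BUY" else quote if action == "SELL" else 0.0
        print(f"quote={quote:6.2f}  {action:4s}  running P&L={cash:7.2f}")
```

Run at human speeds, an error like this loses a little money and gets noticed; run automatically at tens of thousands of orders per minute, it loses a fortune before anyone can react.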

So, what does all this have to do with computationally controlling and monitoring the environment? Well, fundamentally it's a cautionary tale. The systems we are implementing have huge power and influence over human lives. Just as a plunging stock market can have tragic consequences for individuals, so could failures in traffic control, or dam control, or power station control, or irrigation control, or pesticide control, and so on ad infinitum.

The clear imperative is to test them until they are secure beyond a shadow of a doubt, but is this enough? The programmes run to many thousands or millions of lines of code, and they have the capacity to behave in a highly non-linear fashion. The message that is emerging for me is that, beyond integrated (yet still automatic) safeguards, human operators still need to oversee these operations, and this is supported by empirical evidence.
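To give a flavour of what "automatic safeguards plus a human in the loop" could look like, here is a hedged Python sketch. The limits, the `read_sensor` placeholder and the simulated readings are entirely hypothetical and not drawn from any real control system; the point is only the structure: act automatically inside a safety envelope, halt and hand over to an operator outside it.

```python
# Hypothetical sketch: an automated controller with a hard-coded safeguard
# that halts the system and waits for a human decision before resuming.

from dataclasses import dataclass

@dataclass
class Limits:
    max_flow: float = 500.0            # assumed safe ceiling (units arbitrary)
    max_change_per_step: float = 50.0  # assumed limit on step-to-step change

def read_sensor(step: int) -> float:
    """Placeholder for a real sensor feed; returns a made-up flow reading."""
    return 400.0 + 80.0 * (step >= 3)  # simulate a sudden jump at step 3

def within_envelope(flow: float, previous: float, limits: Limits) -> bool:
    """True if the reading is inside the automatic safety envelope."""
    return flow <= limits.max_flow and abs(flow - previous) <= limits.max_change_per_step

def run_controller(steps: int = 6) -> None:
    limits, previous = Limits(), 400.0
    for step in range(steps):
        flow = read_sensor(step)
        if within_envelope(flow, previous, limits):
            print(f"step {step}: flow={flow:.0f} OK, acting automatically")
        else:
            # Automatic safeguard: stop acting, hand control to a person.
            print(f"step {step}: flow={flow:.0f} outside envelope - HALTED, paging operator")
            decision = input("Operator: resume automatic control? [y/N] ")
            if decision.strip().lower() != "y":
                print("Remaining halted under manual control.")
                return
        previous = flow

if __name__ == "__main__":
    run_controller()
```

The design choice this sketch is gesturing at is that the safeguard itself is simple and automatic, but the decision to carry on after an anomaly is deliberately left to a human.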

In a 2012 paper, 'High Frequency Trading and Mini Flash Crashes', Golub et al. conclude, after looking at a range of financial data, that problems with HFT result from the regulatory frameworks within which it operates. It should be noted, however, that this paper stands out from the rest of the literature in this conclusion (other work implies that there are inherent stability issues in HFT). Either way, the broader implication is clear: we must understand the behavior of the systems we implement, and we must operate and regulate them properly.

So this means that we will never live in an idyllic utopia where farming, power production, transport infrastructure and natural hazards are all automatically dealt with by servers ticking over in cooled data centers whilst each and every human focuses on life's pleasures (whatever that means...). Is this a shame? I don't think so. These systems do provide us with a huge increase in efficiency and speed, but past events have resulted in a loss of trust to such an extent that skepticism might always remain. We have seen how fast mistakes can propagate.
