Digital transformation

Lessons from history on why data velocity and volume matter

Sep 13, 2018

When I speak with clients about data velocity, organisational resilience and agility, they often find it hard to envisage the circumstances in which this need would apply to them.

However, we need only look to history to see frequent high-profile reminders – such as the 2016 “#censusfail” at the ABS, DDoS attacks or the latest service outages at AWS – to understand why establishing strong infrastructure to manage data velocity and volume is critical.

The issue used to be framed around unexpected spikes in volume or seasonality; in the contemporary world, however, the desire to turn vast and constant streams of data into actionable information carries serious risks that need managing.

In my day-to-day interactions with clients, I see an emerging awareness of this issue, as they seek to integrate external information sources, sensors and IoT into their geospatial operations. These sensors, for example, can generate hundreds of millions of readings per second, and unless system design is optimised, organisations will face a signal-to-noise problem and be hamstrung by the very thing that is supposed to give them greater insight.
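As a purely illustrative sketch of what that optimisation can mean in practice – the sensor names, window size and deadband threshold below are assumptions for the example, not any particular product or API – aggregating and filtering a high-velocity feed at the edge is one way to keep the signal and discard the noise before it floods downstream systems:

```python
# Illustrative sketch only: aggregate and filter a high-velocity sensor stream
# before it reaches downstream storage or analytics. Sensor IDs, rates, window
# size and deadband values are hypothetical assumptions for this example.
import random
import time
from collections import defaultdict

WINDOW_SECONDS = 1.0   # aggregate raw readings into 1-second windows (assumed)
DEADBAND = 0.5         # suppress changes smaller than this (assumed units)

def raw_readings(n=50_000, rate_hz=10_000):
    """Simulate a burst of noisy readings: (sensor_id, timestamp, value)."""
    start = time.time()
    for i in range(n):
        ts = start + i / rate_hz  # timestamps advance at the simulated rate
        yield (f"sensor-{random.randint(1, 50)}", ts, 20.0 + random.gauss(0, 1))

def downsample(readings):
    """Emit one averaged value per sensor per window, and only when it moves
    outside the deadband - turning a torrent of raw points into usable signal."""
    window_sums = defaultdict(lambda: [0.0, 0])  # sensor -> [sum, count]
    last_emitted = {}                            # sensor -> last forwarded value
    window_start = None

    def flush(window_ts):
        for sid, (total, count) in window_sums.items():
            avg = total / count
            if abs(avg - last_emitted.get(sid, float("inf"))) >= DEADBAND:
                last_emitted[sid] = avg
                yield (sid, window_ts, avg)
        window_sums.clear()

    for sensor_id, ts, value in readings:
        if window_start is None:
            window_start = ts
        if ts - window_start >= WINDOW_SECONDS:
            yield from flush(window_start)  # close the window and forward changes
            window_start = ts
        acc = window_sums[sensor_id]
        acc[0] += value
        acc[1] += 1

    if window_start is not None:
        yield from flush(window_start)      # flush the final, partial window

if __name__ == "__main__":
    kept = list(downsample(raw_readings()))
    print(f"Forwarded {len(kept)} aggregated readings downstream")
```

The point of the sketch is the ratio: tens of thousands of raw readings become a handful of meaningful updates, which is what allows the downstream systems to scale when the feed grows by orders of magnitude.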

This failure to appreciate the risks of data velocity and volume is not a new phenomenon. History is a harsh teacher, and its lessons are not confined to the post-internet age.

If you were an investor in the 1920s, times were generally pretty good. The market was growing year-on-year and reasonable returns were the norm. The system that supported stock transactions leveraged technology such as the telegraph because, even back then, information – and especially the speed at which it arrived – was the key to successful trading.

Stockbrokers around the United States and beyond relied on the ticker-tape system, which reported trades and prices in as near to real time as could then be achieved. The system did, however, have a latency issue. From the time a trade was made on the exchange floor and transcribed by a typist/operator to the time it appeared on the ticker tape, about 20 minutes had elapsed. This meant that, from time to time, a buy/sell decision might be based not on the actual trading price but on a price up to 20 minutes old. In a generally rising market this was not a huge problem, as the advantages and disadvantages caused by latency largely evened out.

But fast forward to the market crash of October 1929, and this latency became a central issue – as the real, hidden story of events later emerged. Latency and the inability to scale to meet the demand for trades were not the cause of the crash; they were, however, key contributors to how the events impacted investors.

As the market 'corrected' on 21st October, trading was so heavy that instead of the usual 20-minute delays, the ticker now ran more than an hour behind. This lack of reliable, accurate information caused panic selling among traders, further exacerbating the situation. By “Black Thursday”, 24th October, roughly four times the normal volume – some 12.4 million shares – had been traded, creating a four-hour ticker delay, and events eventually spiralled towards the notorious crash of 29th October, which triggered a decade-long economic depression.

This depression was global, and a line can be drawn connecting these events in the US to the rise of nationalism in Europe in the 1930s as economies failed.

The story here is, of course, that the infrastructure could not scale – infrastructure in this case being the people and processes that transcribed the trades into the ticker-tape system.

As volume spiked unexpectedly, the system was overwhelmed and the information became worthless to its consumers.


The other real story here – one that still applies today – is that this was a known issue. A 1928 article declared that “Anything less than right now is slow”, and discussed efficiencies such as better queueing, the availability of a backup system and even using fewer characters to represent stock names, all in “a battle to gain…thousandths of a second”. Great advice, but not heeded.

In the 1920s, only two of the ‘Four Big V’s’ – Volume and Velocity – were pertinent. Today, organisations need a cogent response to how they manage all four: Volume, Velocity, Variety and Veracity. In the modern context, post-incident reviews of information holdings almost always reveal that the information which could have mitigated or prevented the incident already existed. The common problem is that the information is not accessible in a timely manner, or the systems (be they AI, ML or something else) do not detect the pattern.

This is where sound data governance and solid systems design come into their own. Scalable systems, backed by robust risk-management strategies, are key to ensuring you can effectively plan for and respond to unexpected crises.

To stay informed of new articles, subscribe to the Esri Australia blog