This week is a trading technology week, as I'm chairing panels at two trading technology events.
The first is focused upon trading architectures and I’m surrounded by engineers.
Forget strategists, technologists, programmers or developers.
Engineers.
The reason is that it's all about FPGAs.
What the hell?
FPGAs – Field Programmable Gate Arrays.
This is basically a chip that lets you place a program directly into the hardware for processing at lightspeed.
It links with low latency, high frequency trading, except that the low latency debate was all about speed of processing; FPGAs are now all about using massively parallel processing (MPP) to analyse what is being processed.
It’s a data flow analytic, rather than a process flow throughput service.
And yes, it’s the next evolution of debate about trading architecture, after high frequency trading (HFT).
Strangely enough, I’d given up on these things years ago as I thought MPP was done and dusted.
Instead we moved on to discussions around cloud, grid, virtualisation and the like.
The big conversation, in fact, was about colocated data centres and getting massive numbers of processors to crunch through high volumes of data at low latency.
Now it’s moved from processing to analysis, which is why parallelism is back on the agenda, or concurrency as some call it.
The reason it’s become important is that if you can run an analytic on the chip, then it can be done far quicker than through the CPU.
This means you can run risk analytics on high frequency data streams in real-time, rather than after the event.
You can also run complex simulations of massive amounts of data quickly, easily and cheaply.
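As a flavour of what risk analytics in real-time means at the level of a single book, here's a sketch in C++: position and notional are updated as each fill streams past, and a limit breach is flagged in the same step rather than in an overnight batch. The struct names and the limit are my own invention for illustration.

```cpp
#include <cstdint>
#include <cstdio>

// Per-tick risk check: state updates as each fill arrives, breach flagged
// immediately. On an FPGA this would synthesise to a small pipelined
// circuit that updates state in a fixed number of clock cycles per fill.

struct Fill {
    int32_t qty;    // signed: buys positive, sells negative
    int32_t price;  // in ticks
};

struct RiskState {
    int64_t position = 0;
    int64_t notional = 0;
    bool    breached = false;
};

constexpr int64_t POSITION_LIMIT = 1'000'000;  // hypothetical limit

void on_fill(RiskState& s, const Fill& f) {
    s.position += f.qty;
    s.notional += static_cast<int64_t>(f.qty) * f.price;
    s.breached  = s.position > POSITION_LIMIT || s.position < -POSITION_LIMIT;
}

int main() {
    RiskState s;
    const Fill fills[] = {{600'000, 101}, {500'000, 102}, {-50'000, 103}};
    for (const Fill& f : fills) {
        on_fill(s, f);
        std::printf("position=%lld breached=%d\n",
                    static_cast<long long>(s.position), s.breached);
    }
    return 0;
}
```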
FPGA is back on the agenda because it’s also far simpler than ten years ago.
Ten years ago, these systems needed hard coding and sat firmly in the telecommunications sector.
As an engineering technology it was incredibly complex, but the level of tool support from vendors is now much greater than ever before and has made FPGAs far easier to use.
In other words, the design issues are no longer part of the problem.
These things can now just be programmed into the system.
It is important as FPGAs effectively allow a compute cycle of data to be programmed onto a chip or, in this case, a processing board.
The result is savings in heat and power usage, but also a massive increase in raw compute power.
For example, one bank was talking about Monte Carlo simulations that showed performance levels 30 times better than doing this through a CPU and 175 times better in efficiency terms.
Bear in mind that Monte Carlo simulations can involve fifty-year or longer scenarios, with roll back, querying, resets and roll forward all built into the modelling, and now in real-time.
That’s complex and involves massive amounts of data analytics.
A little like taking petabytes of data and churning through it all in real-time.
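To picture why this workload suits massive parallelism, here's a toy Monte Carlo in C++: every simulated price path is independent, so an FPGA (or any MPP fabric) can evaluate thousands of them concurrently where a CPU grinds through them one by one. The model and parameters are illustrative, not the bank's.

```cpp
#include <cmath>
#include <cstdio>
#include <random>

// Toy Monte Carlo: simulate geometric Brownian motion price paths and
// average the terminal prices. Each path is independent of the others,
// which is exactly what makes the workload parallelise onto hardware.

double simulate_path(std::mt19937& rng, double s0, double mu,
                     double sigma, double dt, int steps) {
    std::normal_distribution<double> z(0.0, 1.0);
    double s = s0;
    for (int i = 0; i < steps; ++i) {
        s *= std::exp((mu - 0.5 * sigma * sigma) * dt +
                      sigma * std::sqrt(dt) * z(rng));
    }
    return s;
}

int main() {
    std::mt19937 rng(42);
    const int paths = 100000;
    double sum = 0.0;
    // On a CPU this loop runs sequentially; an FPGA would evaluate many
    // paths at once, which is where the quoted speed-ups come from.
    for (int p = 0; p < paths; ++p) {
        sum += simulate_path(rng, 100.0, 0.03, 0.2, 1.0 / 252, 252);
    }
    std::printf("mean terminal price: %.2f\n", sum / paths);
    return 0;
}
```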
Forget batch and overnight. It’s all real-time.
This is why FPGAs are being used extensively for scenario modelling of real-time risks, calculating positions in real-time for the traders of the world as they position against Greece, Italy and each other.
And all of this is achieved with far less power and far faster clock times than traditional CPU processing.
This is because FPGAs are, in effect, SoCs.
WTF?
SoC – System on a Chip.
What this means is that you can put a board into the server, and on that board are all the analytical system requirements to run fast cycle analytics.
So the software and hardware are married as one on the chip – a complete System on a Chip (SoC).
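The host-side pattern for such a board looks roughly like the sketch below, where FpgaCard and its methods are hypothetical stand-ins for a vendor SDK's DMA calls (real boards expose similar primitives, but this is not any real API): the analytic lives on the card, and the CPU just streams data in and results out.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical stand-in for a vendor SDK: the card exposes simple
// streaming reads and writes, and the analytic itself runs in hardware.
struct FpgaCard {
    void write_stream(const void* /*data*/, std::size_t /*bytes*/) {}
    void read_stream(void* /*out*/, std::size_t /*bytes*/) {}
};

int main() {
    FpgaCard card;
    std::vector<int32_t> ticks(4096);  // incoming market data batch
    std::vector<int32_t> risk(4096);   // per-tick analytic results

    // The host never runs the analytic: it ships raw ticks to the board
    // and reads computed risk figures back, leaving the CPU free.
    card.write_stream(ticks.data(), ticks.size() * sizeof(int32_t));
    card.read_stream(risk.data(), risk.size() * sizeof(int32_t));
    return 0;
}
```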
Imagine therefore that you have massive amounts of data flowing in high frequency and high volume across low latency engines throughout the world’s markets looking to trade.
Then you have lots of FPGAs deployed across those processes, looking at the data and tracking risk, opportunity, collateral, liquidity, credit and more.
Each FPGA is given a specific analytical function as a SoC, and it allows you to run huge volumes of trading without concern about the processing power limitations of old, or analytical delays due to hitting CPU limits.
This is why FPGA is big news in trading, allowing traders to take huge streaming data flows, analyse the data concurrently, and do it all with easy implementation and deployment.
So we’ve gone from the technical low latency discussions to how to analyse these streams of data – these massive volume, high speed systems – and working out how to analyse all that data in real-time using FPGAs.
We’ve gone from process and processing to analysis and data flow.
That’s the architectural discussion of today.