
Innovations in Market Surveillance and Monitoring

By Dr. John Bates, CTO, Intelligent Business Operations & Big Data, Software AG



Crises in capital markets are often hitting the headlines today. In 2010, a “Flash Crash” caused the Dow Jones Industrial Average to tumble almost a thousand points before recovering most losses within 30 minutes. In 2011, a rogue trader cost Swiss banking group UBS $2Bn by hiding risky trades using knowledge of front, middle and back office systems. In 2012, an automated trading algorithm at Knight Capital went haywire and fired inappropriate orders at high frequency into the market for 30 minutes before it was identified and stopped, costing Knight Capital $450Mn in trading losses and bringing the firm close to bankruptcy. Also in 2012, a new crisis arose from trader collusion fixing the LIBOR interbank rate. In 2013, a similar crisis emerged around the fixing of foreign exchange benchmark rates through market manipulation and trader collusion. These are just a few high-profile cases; many “mini crises” are happening with increasing frequency. A capital markets trading firm must prepare to deal with, and ideally prevent, these incidents.

Catching the Rogues Red-Handed

Market surveillance, the compliance function aimed at monitoring market abuse, market manipulation and improper behaviours, has been a laggard compared to its trading counterparts. Some liken this to traders driving Ferraris while the surveillance team chases them on bicycles! However, innovations in technology have seen the development of real-time market surveillance platforms that now equip compliance teams with Ferrari police cars. These platforms use similar technology for real-time data analysis and response as advanced trading systems. They look for patterns in data (trade data, social networks, news data, etc.) that indicate potential market manipulation, trader collusion, rogue trading or wild algorithms. Occurrences are flagged to compliance staff for investigation and cross-referencing.

"Innovations in technology has seen the development of real-time market surveillance platforms that now equip compliance teams with Ferrari police cars"

Technologies converging to form next-generation surveillance platforms include in-memory data management, analytics and decision tools. In-memory technologies, such as high-performance messaging and in-memory data grids, enable data to be collected and made memory-resident, ensuring it is highly available and can be processed rapidly. Complex Event Processing (CEP), sometimes called “streaming analytics”, enables data to be analyzed as it is collected and patterns representing relationships between data items (events) to be identified. Visual analytics allows discovered alerts to be represented as graphs, heatmaps and other intuitive, visually explorable metaphors. Humans can drill into complex alerts to investigate the root cause. Alerts can also trigger processes, such as automatically creating a case and launching a workflow for a compliance officer to investigate an incident.

One example is the correlation of news events, trade events and trader behaviour: an unusually large trade, placed by a trader who does not usually trade in that instrument, seconds before a news event that significantly moves the market. Perhaps it was luck or intuition; perhaps not. A visual representation of the alert can then be examined and an investigation workflow launched.
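To make this concrete, here is a minimal sketch in Python of the kind of check a streaming analytics engine might run over such events. The event fields, the 30-second window, the size multiple and the flag_suspicious helper are illustrative assumptions, not the workings of any particular surveillance product.

# A minimal sketch of the pattern described above, over simplified
# in-memory event streams; field names and thresholds are assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Trade:
    trader_id: str
    instrument: str
    size: float
    timestamp: datetime

@dataclass
class NewsEvent:
    instrument: str
    price_move_pct: float   # market move attributed to the story
    timestamp: datetime

LOOKBACK = timedelta(seconds=30)   # how close to the news a trade must be
SIZE_FACTOR = 5.0                  # "unusually large" vs. the trader's average
MOVE_THRESHOLD_PCT = 2.0           # what counts as a significant market move

def flag_suspicious(trades, news, avg_size_by_trader):
    """Return (trade, news) pairs where a trader placed an unusually large
    order in an instrument they rarely trade, just before a market-moving
    news event."""
    alerts = []
    for event in news:
        if abs(event.price_move_pct) < MOVE_THRESHOLD_PCT:
            continue
        for trade in trades:
            usual = avg_size_by_trader.get((trade.trader_id, trade.instrument), 0.0)
            placed_just_before = timedelta(0) <= event.timestamp - trade.timestamp <= LOOKBACK
            # no trading history in the instrument counts as unusual here (an assumption)
            unusually_large = usual == 0.0 or trade.size >= SIZE_FACTOR * usual
            if trade.instrument == event.instrument and placed_just_before and unusually_large:
                alerts.append((trade, event))   # hand off to a compliance workflow
    return alerts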

Tuning the Machine

If you have the right technology, and you know what you’re seeking, you can detect the rogues. However, one particular challenge that has kept surveillance analysts busy for years is figuring out how to calibrate the monitoring logic for best results.

Too few alerts raise concerns that improper behaviours are going undetected, i.e. false negatives. Too many alerts are a clear sign that perfectly reasonable behaviours are being flagged as improper, i.e. false positives.

The ‘calibration’ of behavioural monitoring is a non-trivial task, with the profiles of institutional vs. retail, electronic vs. voice, exchange-traded vs. OTC and other factors all playing a role in how monitoring should be configured. For example, ‘front running’ is commonly thought of as placing a proprietary order in the market ahead of a large client order that will likely move the market. How much time is allowed between the proprietary and agency orders? What is large? What defines movement in the market? These parameters vary widely based on the profile of business being transacted, so calibration is a specialised challenge that should be considered unique to each firm.
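To illustrate how much these thresholds can differ, the sketch below expresses front-running calibration per business profile as a simple lookup. The profile names and numeric values are assumptions chosen for illustration, not recommended settings.

# Illustrative only: one way a firm might express front-running
# calibration per business profile. Profiles and numbers are assumptions.
FRONT_RUNNING_PARAMS = {
    # profile:                   (max seconds between prop and client order,
    #                             "large" client order in notional USD,
    #                             market move in basis points that counts)
    "institutional_electronic": (5,   5_000_000,  10),
    "institutional_voice":      (60,  5_000_000,  10),
    "retail_exchange_traded":   (2,     250_000,  25),
    "otc_derivatives":          (300, 10_000_000, 15),
}

def is_potential_front_run(profile, gap_seconds, client_notional, move_bps):
    """Check an order pair against the thresholds calibrated for its profile."""
    max_gap, large_notional, min_move = FRONT_RUNNING_PARAMS[profile]
    return (gap_seconds <= max_gap
            and client_notional >= large_notional
            and move_bps >= min_move)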

Over the years, market surveillance vendors have provided clients with the means to help ‘calibrate’ their behaviour monitoring, with varying degrees of success. At one end of the spectrum, vendor platforms simply generate alerts (or not) and provide tools to fine-tune parameters over time. Surveillance analysts are expected to learn on the job and gradually improve their monitoring through trial and error.

Amazingly, this time-consuming, error-prone method was the norm for years. Not only is it an enormously wasteful approach, it exposes the firm to serious regulatory risk while its monitoring gradually improves to the point where it could be considered genuinely effective.

Further along the spectrum are tools to replay, or back-test, large quantities of historical data to conduct what-if scenarios, i.e. what if the time between the proprietary order and the agency order was three seconds? Again in this area, CEP technology has been used to allow surveillance analysts to fine-tune the parameters in a protected environment. Analysts are still learning through trial and error, but at a greatly accelerated rate and not with live orders and trades, hence reducing the firm’s exposure to regulatory risk. But this technique still requires a brute-force approach to understanding what is “normal” behaviour in an attempt to detect what is abnormal.
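The what-if idea can be pictured as a parameter sweep over recorded history. The sketch below, under assumed data shapes and thresholds, replays historical proprietary/agency order pairs against a range of candidate time windows and counts the alerts each setting would have produced, without touching live flow.

# A minimal sketch of what-if back-testing: the input tuples and the
# notional/move thresholds are assumptions for illustration.
def sweep_time_window(historical_pairs, candidate_windows_seconds):
    """historical_pairs: iterable of (gap_seconds, client_notional, move_bps)
    drawn from past order pairs; returns alert counts per candidate window."""
    results = {}
    for window in candidate_windows_seconds:
        alerts = sum(1 for gap, notional, move in historical_pairs
                     if gap <= window and notional >= 5_000_000 and move >= 10)
        results[window] = alerts
    return results

# e.g. sweep_time_window(pairs, [1, 3, 5, 10, 30]) lets an analyst see how
# alert volume changes as the window widens, before committing a setting.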

Charting the Unknown

Lately, I have seen the market demanding innovation to spot the potential emergence of new patterns. To achieve this, data is recorded and replayed over many months into analytics in a CEP/streaming analytics engine to calculate and benchmark what is “normal”, including trader behaviour, market behaviour, etc. This analytic process continues in real time to continually refine the model of “normal” and to monitor for unusual events. When anything beyond a certain tolerance or standard deviation from “normal” is detected, an alert is triggered, possibly indicating new abnormal behaviour. A member of the compliance team can investigate and categorize it accordingly. In this way, surveillance becomes a self-evolving system, combining machine pattern detection with human intuition and experience.
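One simple way to picture this continuously refined baseline is a running mean and standard deviation maintained incrementally, the way a streaming engine would, with anything beyond a chosen tolerance flagged for review. The sketch below uses Welford's online algorithm; the monitored metric and the three-sigma default are assumptions for illustration.

# A minimal sketch of a self-refining baseline for one metric
# (e.g. a trader's daily volume); tolerance is an assumed default.
import math

class RunningBaseline:
    def __init__(self, tolerance_sigmas=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0               # running sum of squared deviations
        self.tolerance = tolerance_sigmas

    def observe(self, value):
        """Report whether the new value is abnormal relative to the history
        seen so far, then fold it into the model of 'normal'."""
        abnormal = False
        if self.n >= 2:
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(value - self.mean) > self.tolerance * std:
                abnormal = True     # raise an alert for compliance review
        # update the baseline either way (Welford's online update)
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)
        return abnormal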
