The Options Clearing Corporation taps AI to stay ahead of hackers

OCC uses machine learning and AI to understand and model how attackers could enter and move around the network.


The world of finance is under constant attack. The financial information it hosts – along with the money it holds and moves – makes the industry an incredibly attractive target for hackers. According to a report from IBM X-Force, it is one of the top five commonly targeted industries.

Yet breaches in the financial sector are relatively rare. When they do occur, however, they are often very costly, due to the heavily regulated nature of the industry and the heavily publicized aftermath. The Equifax breach, for example, is estimated to have cost the company $439 million.

In an effort to harden their security posture, many organizations within the financial services sector are turning to artificial intelligence (AI) and machine learning. The Options Clearing Corporation (OCC), for example, is using AI to be more proactive in spotting criminal activity on its systems.

Financial organizations promote security within the company

OCC is a financial utility that facilitates the exchange of options, futures and securities lending transactions. The Chicago-based clearinghouse is involved in over 4 billion contracts a year totaling more than $120 billion.

Mark Morrison – who previously had stints at Boston-based financial services company State Street Corporation, MITRE and various government bodies including the Department of Defense and the Office of the Director of National Intelligence – joined OCC in May of 2017 as the company’s first CSO. His appointment was part of a new program by OCC to “redefine where security was going as a Counterparty Clearing House (CCP),” something the company has been putting significant effort into. Last year OCC president and COO John Davidson said security was the area of the business on which spending had increased the most because it was “the largest risk facing financial infrastructures everywhere in the world.”

“Previously they had a director of information security that worked for the CIO,” says Morrison. “They converted that role to CSO, elevated the position within the company from where it was before, and disassociated it from the information technology organization [within the company]. I work for the chief risk officer, and I have a direct reporting line to our president and to the board of directors. I think that seems to be the general direction where a lot of companies are going; to separate what used to be chief information security outside of the IT organization and make it more of a first and second line of defense.”

Since joining, Morrison has doubled the size of the security team to around 60, focusing on adding more experience and expertise. “We didn't have a security engineering team, so we've built that. We did not have a red and blue team, so we built that,” he explains.

“I'm a firm believer of 'you train as you fight, you fight as you train,' and so we're doing a lot of testing and putting a lot of emphasis on actually acting and adapting. The red team doing adversarial testing; the blue team looking for flaws in the process of technology.”

Clearing needs availability, looks to AI to help

Since joining OCC, Morrison has been involved in looking at ways to stay ahead of cybercriminals. While the company still faces and deals with some monetary threat and data exfiltration attempts, the main concern for OCC’s security team is ensuring availability in the face of attacks that are specific to financial market utilities, clearing and CCPs.

“As a utility, we don't have the same fraud-related cyber-attacks that a retail bank would encounter on a daily basis,” he says. “We look heavily at being able to counter availability-based attacks—data destruct, ransomware, data manipulation attacks. Not being able to clear options is a bad day for us.”

Given the very specific use of clearing within finance, targeted attacks against OCC are often more complicated than your average hack attempt. These attacks often come with false-flag diversionary tactics; a DDoS attempt or low-level exploit might be covering up a more complicated and serious attack elsewhere in the network. “Anytime we believe we may have a system incident or a business operational issue, my team is involved to see if there is a cyber element associated with that,” Morrison says.

To augment this, OCC is starting to look to AI and machine learning to predict and anticipate where an attacker plans to go, versus just being reactive to what the attacker has done. “The attacker has the advantage if you're chasing after them continuously. What we're trying to do is get in front of them and use these more advanced AI and machine learning techniques to try to anticipate how an attacker would attack us and, if and when they did achieve access, where they would go next, and what the more attractive targets for them are in our environment.”

Attackers often have preferred tools and techniques and replicate their methods across different attacks – likely toolsets, and protocols that could be exploited and used to exfiltrate data. By combining a variety of data sources – data gathered internally, public data, information from security companies – OCC is taking these attack profiles and laying them over the company’s critical business processes to anticipate potential attack methods and routes.
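The idea of anticipating where an attacker would go next can be thought of as ranking reachable assets from a presumed foothold. The sketch below is purely illustrative – the network topology, asset names and criticality scores are hypothetical, not OCC's, and a real system would drive this with threat intelligence and far richer models – but it shows the basic shape of scoring likely next targets by criticality and proximity:

```python
from collections import deque

# Hypothetical internal network: node -> reachable neighbors.
# Names and criticality scores are illustrative only.
NETWORK = {
    "vpn-gateway": ["jump-host"],
    "jump-host": ["hr-fileshare", "clearing-db"],
    "hr-fileshare": [],
    "clearing-db": ["settlement-engine"],
    "settlement-engine": [],
}

CRITICALITY = {
    "vpn-gateway": 1,
    "jump-host": 2,
    "hr-fileshare": 2,
    "clearing-db": 9,
    "settlement-engine": 10,
}

def likely_targets(entry_point, max_hops=3):
    """Rank assets an attacker could reach from a foothold,
    weighting criticality against how many hops away they are."""
    scores = {}
    queue = deque([(entry_point, 0)])
    seen = {entry_point}
    while queue:
        node, hops = queue.popleft()
        if node != entry_point:
            # Closer, more critical assets score higher.
            scores[node] = CRITICALITY[node] / (hops + 1)
        if hops < max_hops:
            for nxt in NETWORK[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, hops + 1))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

With this toy topology, a foothold on the VPN gateway ranks the clearing database and settlement engine as the most attractive next targets – which is where a defender would concentrate controls and monitoring.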

“You're not building the old WW2 Maginot Line of defense. You need to be able to be agile. You can either move in the direction that attackers are moving in or anticipate where they're going to go next and try to get in front of them,” Morrison says.

How to bring AI into security

OCC has what Morrison calls the AI shop, a “small but talented” team that sits under the IT organization and supports the company’s AI and machine learning efforts across the whole business. “They had some spare cycles, so we asked them if we could collaborate on how to apply some of the techniques that they've been using on the business side to address some of our security issues. We got together and we identified what we thought would be the most plausible use cases.”

The company identified two initial use cases: one very specific to OCC’s operations, looking at how an actor could potentially manipulate data within the clearing system, and another exploring how threat actors could manipulate user credentials.

On the user credentials use case, OCC took a set of privileged users the company believed would have the most impact if they fell into the wrong hands, modeled the potential impact those accounts could have, and then added additional controls on the appropriate accounts. “It turned out some of them had less impact than we thought they would, while others had more impact on our critical business operations, so we increased the threat characterization of those accounts,” says Morrison.

“A big part of identifying what abnormal is, is you've got to really thoroughly map what normal is,” he adds. “You've really got to understand what normal activity is, and that takes time. Doing that mapping was a significant part of the use case as well.”

Though the first use case needed rework after the initial trial, both proofs of concept have been considered a success within OCC, and the company is currently whiteboarding ideas for new use cases. One that Morrison is considering is applying AI to ensure its SWIFT payments are secure. The SWIFT financial messaging system, used to transfer financial transaction information, has been involved in several financial attacks including the loss of $81 million from the central bank of Bangladesh. “We don't move a lot of money, but we want to make sure we're secure given there's been SWIFT-related attacks on banks that use it.”

Morrison says the biggest challenge for his team was understanding AI and what it can and can’t do for OCC, and from there identifying the correct use cases. “It's a new technology, so we've spent a lot of time trying to understand it. My team didn't have a lot of experience in this, so we had to really learn. We talked to a hell of a lot of people out there, and that took us some time to make sure we weren't wasting the AI teams' time. Then we felt we could identify that we were going to get a return on our investment. We're still figuring out what we can do and what we can't do given our resources and our environment.”

Looking to the future, Morrison hopes the technology becomes more commoditized as it matures so that it’s more a case of taking commercial offerings and adapting them to OCC environments instead of developing in-house. “A lot of the work is taking the inputs from your security devices — your firewalls, your proxies, your SIEM tool — and being able to feed those into the system. Having the capability to create those APIs and create those feeds so we don't have to create those connectors ourselves is going to be a big lift. I think it's the direction security is going because of the need for agility and being able to work at network speed, rather than sitting in a SOC watching a light go from green to red and then reacting.”
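The connector work Morrison describes boils down to normalizing heterogeneous device feeds into one event schema before they reach the analytics. The sketch below is hypothetical – the log formats, field names and sample values are invented for illustration – but it shows why each device currently needs its own adapter:

```python
import json

# Hypothetical raw events from two different security devices;
# in practice these would arrive via firewall/proxy/SIEM APIs.
firewall_line = "2018-06-01T10:02:11Z DENY src=10.0.0.5 dst=203.0.113.9"
proxy_event = {"ts": "2018-06-01T10:02:12Z", "user": "svc-clearing",
               "url": "http://203.0.113.9/upload", "action": "blocked"}

def from_firewall(line):
    """Parse a space-delimited firewall log line into the common schema."""
    ts, action, src, dst = line.split()
    return {"time": ts, "source": "firewall", "action": action.lower(),
            "src_ip": src.split("=")[1], "dst_ip": dst.split("=")[1]}

def from_proxy(evt):
    """Map a proxy's JSON event into the same common schema."""
    return {"time": evt["ts"], "source": "proxy", "action": evt["action"],
            "user": evt["user"], "url": evt["url"]}

events = [from_firewall(firewall_line), from_proxy(proxy_event)]
print(json.dumps(events, indent=2))
```

Commoditized offerings with ready-made connectors would replace hand-written adapters like `from_firewall` and `from_proxy`, which is the shift Morrison is hoping for.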

Copyright © 2018 IDG Communications, Inc.
