by David Braue

Are Australian companies putting too much faith in security AI, too soon?

Nov 09, 2017 | 5 mins
Cloud Security | Data and Information Security | Social Engineering

Security analysts have long spruiked the benefits of automation, artificial intelligence (AI) and machine learning (ML) – but with Australia racing towards security automation at world-beating pace, one security expert has warned potential adopters to maintain healthy scepticism and a commitment to continuous process improvement.

Many organisations have tended to treat AI as a cure-all for a flood of security data that has been accumulating as better monitoring and reporting turned enterprise customers into “victims of our own success”, ServiceNow head of security strategy Myke Lyons told CSO Australia.

“They said that they wanted to know everything, but the vendors got too good with their sensors and there is so much information coming at them on a regular basis. Customers are realising very quickly that having a valid and understood process for prioritising and escalating as needed, is massively valuable.”

Australia is at the forefront of this trend, with 52 percent of respondents to the company’s recent CIO Global Survey reporting that security was the most valuable business area for automating decision-making and 30 percent saying this was already happening.

Australian CIOs were adopting security automation far more aggressively than those in other countries, with 89 percent expecting security decisions would be largely or completely automated within the next three years – well ahead of the global mean (69 percent) and the rates in the US (65 percent) and New Zealand (49 percent).

While AI offers a powerful mechanism for implementing that process, Lyons warned that the three-year timeframe “may be challenging” and reiterated the need for organisations to ensure they don’t implement it without consideration for underlying issues of risk management and process flow.

Given that AI can be “corrupted and manipulated” if wrong conclusions are repeatedly reinforced – or if it is trained using spurious data, as happened when AI-powered Microsoft chatbots variously began spewing racist and hate-filled tweets and proclaiming that Windows 10 is spyware – organisations must be sure that they maintain oversight of the process whereby AI learns about their environment.
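The poisoning dynamic Lyons describes can be illustrated with a deliberately simple sketch (not any vendor's actual system): a toy online classifier that tallies word counts per label. All names and training phrases below are hypothetical, but the failure mode is the real one — if an attacker can repeatedly feed the learner wrong labels, the model's conclusions flip without any code changing.

```python
from collections import Counter

class ToyFilter:
    """Minimal online text classifier: tallies how often each word
    appears in 'malicious' vs 'benign' training examples."""
    def __init__(self):
        self.counts = {"malicious": Counter(), "benign": Counter()}

    def learn(self, text, label):
        # Every training example directly updates the model's word tallies.
        self.counts[label].update(text.lower().split())

    def classify(self, text):
        words = text.lower().split()
        mal = sum(self.counts["malicious"][w] for w in words)
        ben = sum(self.counts["benign"][w] for w in words)
        return "malicious" if mal > ben else "benign"

f = ToyFilter()

# Legitimate training: the model learns this phishing phrasing is malicious.
for _ in range(5):
    f.learn("urgent password reset link", "malicious")
    f.learn("weekly project status update", "benign")
print(f.classify("urgent password reset"))   # -> malicious

# Poisoning: spurious "benign" labels for the same phrasing are repeatedly
# reinforced until the model's conclusion silently inverts.
for _ in range(20):
    f.learn("urgent password reset link", "benign")
print(f.classify("urgent password reset"))   # -> benign
```

Real security ML is far more sophisticated than a word tally, but the oversight lesson is the same: a model that keeps learning from production feedback needs humans reviewing what it is being taught.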

This oversight requires supervision and review by AI-trained humans. These may well not be IT practitioners who, Lyons said, are often “intimidated by automation – not because they are worried about losing their jobs, but because they are fearful of breaking something.”

Many organisations also risk being led down the wrong path by expecting too much from the AI they apply, Gartner warned in a recent analysis naming AI and intelligent apps as a key strategic driver in 2018.

“Although using AI correctly will result in a big digital business payoff, the promise (and pitfalls) of general AI – where systems magically perform any intellectual task that a human can do and dynamically learn much as humans do – is speculative at best,” Gartner vice president and fellow David Cearley warned.

Rather, narrow AI – consisting of “highly scoped machine-learning solutions that target a specific task with algorithms optimised for that task” – would help create “a new intelligent intermediary layer between people and systems” that could “transform the nature of work”.

Working around the human shortfall

That dynamic has turned AI into a new operational challenge, particularly for smaller organisations that are struggling to attract and retain experts with niche skills in areas like security and AI. Fully 41 percent of respondents to the ServiceNow CIO survey said a key barrier to ML adoption was the lack of human skills to manage ever-smarter machines.

Improving service-management procedures offers a way of streamlining that work, accelerating the identification of appropriately skilled people – whether inside the organisation or at a security service provider – to deal with particular types of security incidents.

Machine-learning systems “can learn how things are solved on the business side, then take those patterns and apply them on the IT side of the house and get them into the hands of the individuals who are going to be remediating them,” Lyons explained. “It cuts out that begging and shoulder tapping.”

Despite AI’s promise, many companies continued to struggle when deciding how much of their operations to hand over to it. By focusing early AI initiatives on “repeatable work”, even smaller organisations have been able to streamline their vulnerability response “and have had massive success very, very quickly,” Lyons said.

“We have seen that when we integrate the IT and security sides of the house together, they see a significant uptick in current data and in correct information – which collectively will help them remain more secure. And as you start with what you’re doing today, you may realise there are areas for automation that you hadn’t thought of before.”

That cycle of self-awareness was one of the key goals outlined by Gartner in recent advice on targeting AI investments on critical business priorities. The others included learning lessons that are unique to the organisation, while minimising those that are “more mainstream in nature”, and engaging with employees to identify mundane aspects of their roles that may offer the greatest benefit from automation using AI.
