Monday, December 9, 2024


Exploring the cutting edge of AI in cybersecurity

With the number of cybersecurity threats increasing daily, today’s cybersecurity tools and human security teams are being overwhelmed by an avalanche of malware.

According to Capgemini’s 2019 report, Reinventing Cybersecurity with Artificial Intelligence: The new frontier in digital security, 56% of survey respondents said their cybersecurity analysts cannot keep pace with the increasing number and sophistication of attacks; 23% said they cannot properly investigate all the incidents that impact their organization; and 42% said they are seeing an increase in attacks against “time-sensitive” applications like control systems for cars and airplanes.

“In the Internet Age, with hackers’ ability to commit theft or cause harm remotely, shielding assets and operations from those who intend harm has become more difficult than ever,” the report states. “The numbers are staggering — Cisco alone reported that, in 2018, they blocked seven trillion threats on behalf of their customers. With such ever-increasing threats, organizations need help. Some organizations are turning to AI [artificial intelligence], not so much to completely solve their problems (yet), but rather to shore up the defenses.”

Even though AI and machine learning (ML) have been used for years to reduce the noise from myriad cybersecurity tools and platforms, at first glance the cutting edge of AI has not progressed very far beyond this seemingly basic functionality. It is still focused on reducing false positives and filtering out unnecessary alerts and other distractions that hamper cybersecurity teams’ effectiveness.

“It’s a bit tongue in cheek for people to talk about AI and cybersecurity when we’re not there yet,” said Chase Cunningham, vice president and principal analyst, Security & Risk at Forrester. “AI is good at looking at large chunks of data and then figuring out what the anomalies are and then suggesting a remediation action to those anomalies. That’s the crux of it kind of in totality.”

What has changed over the past decade or so is the enormity of this deceptively simple undertaking, said Frank Dickson, IDC’s program vice president, Security & Trust.

“You’re under-appreciating the complexity of the task,” he said. “When you think of any particular infrastructure, I have 10 million end points. I have thousands of applications …I have a potpourri of environments, whether they be SaaS, PaaS, IaaS. I have IoT [internet of things] devices all over the place. I’ve got contractors coming in …working on my networks. And, in the middle of this, I’ve got business people launching new services that I don’t know about. The sea of complexity is just so extreme; that’s the task at hand.”

One million new malware samples generated per day

According to Eric Chien, a fellow at Symantec (now a division of Broadcom), roughly one million new malware samples are generated every day. There is no way a human could even begin to analyze this tsunami of malicious code to see if their organization is at risk. Fortunately, this is what today’s AI is particularly good at: it can spot, and help stop, 99.9% of these threats because they are often variations on existing malware.
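The variant-spotting described above can be pictured as a similarity check: if a new sample’s feature vector sits very close to that of a known malware sample, it is almost certainly a variation. This is a minimal sketch, not any vendor’s actual pipeline; the feature vectors and threshold are invented for illustration.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def is_likely_variant(sample_vec, known_malware_vecs, threshold=0.95):
    """Flag a sample whose features are close to any known malware sample."""
    return any(cosine(sample_vec, k) >= threshold for k in known_malware_vecs)

known = [[12, 0, 7, 3], [1, 9, 0, 4]]    # hypothetical known-malware features
variant = [13, 0, 7, 3]                  # slight tweak of the first sample
novel = [0, 0, 1, 9]                     # nothing like the existing corpus

print(is_likely_variant(variant, known))  # True  -> can be blocked automatically
print(is_likely_variant(novel, known))    # False -> escalate to a human analyst
```

Samples the check misses are exactly the net-new remainder discussed next, which is where human analysts still matter.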

This frees up human analysts to focus on the remaining 0.1% of malware, which can be some of the most damaging because it is net-new; part of a larger, multi-layered attack designed to obfuscate and confuse; a highly targeted attack; or, perhaps, a combination of all of these elements.

“The real advancements are, how do we go after that last remaining gap?” said Chien. “That last remaining gap tends to be some of the most impactful threats; the ones that are going to cause the most damage.”

Three AI cybersecurity use cases

There are three advanced use cases where cutting-edge AI products are making their way into the cyber risk management marketplace, Chien said.

The first is applying AI to machine and application behavior to spot patterns of suspicious activity over time and flag them for further analysis. This is more commonly known as user and entity behavioral analytics (UEBA).

“When you begin to bring in these other types of attributes — not just what does this file do, but what is this machine doing at 5pm versus what it’s doing at 9am versus what it’s doing at midnight?” he said. “You bring in these sort of more behavioral attributes — that, right now, is on the cutting edge.”
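The time-of-day behavior Chien describes can be approximated with a per-hour baseline: learn what a machine normally does at each hour, then flag activity that deviates sharply. Real UEBA products do far more; this is a sketch under assumed inputs (hour of day, traffic volume).

```python
from collections import defaultdict
from statistics import mean, pstdev

def build_baseline(events):
    """events: list of (hour, volume) pairs from a training window."""
    by_hour = defaultdict(list)
    for hour, volume in events:
        by_hour[hour].append(volume)
    # Per-hour mean and standard deviation of observed activity.
    return {h: (mean(v), pstdev(v)) for h, v in by_hour.items()}

def is_anomalous(baseline, hour, volume, z_cut=3.0):
    """Anomalous if the hour was never seen, or volume is a large outlier."""
    if hour not in baseline:
        return True
    mu, sigma = baseline[hour]
    return sigma > 0 and (volume - mu) / sigma > z_cut

history = [(9, 100), (9, 110), (9, 90), (17, 50), (17, 60), (17, 55)]
base = build_baseline(history)
print(is_anomalous(base, 9, 105))   # False: routine 9am activity
print(is_anomalous(base, 0, 500))   # True: midnight activity never seen before
```

The same idea extends to logins, file access, or process launches; the behavioral attribute, not the file itself, carries the signal.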

The second use case is using AI to look across the entire network for advanced persistent threats (APTs). Attackers often lurk on the network beforehand to understand users’ behaviors and to look for vulnerabilities. They move around, stealing credentials to escalate their privileges and, generally, doing things that form a pattern of activity that, when looked at holistically, indicates the presence of a bad actor.

“We have something called targeted attack analytics, which is basically using machine learning that is able to correlate across all of these control points,” said Chien. “It just tells you if something bad is happening in your environment. It can maybe pinpoint that there were some suspicious things on this machine and, over here, it saw this little behavior…but just broadly. It’s just going to tell you, you need to investigate because something’s happening in your whole environment.”
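One way to picture this kind of cross-control-point correlation: assign each weak signal a weight, sum the weights across the environment, and alert only when the total crosses a threshold. The signal names and weights below are invented for illustration; actual targeted attack analytics are far more sophisticated.

```python
from collections import defaultdict

# Hypothetical low-severity detections, none alert-worthy on its own.
SIGNAL_WEIGHTS = {
    "credential_dump": 0.5,
    "lateral_movement": 0.4,
    "odd_login_hour": 0.2,
}

def environment_alert(events, threshold=0.8):
    """events: list of (machine, signal). If the combined weight across the
    whole environment crosses the threshold, return the machines to
    investigate, most suspicious first."""
    per_machine = defaultdict(float)
    for machine, signal in events:
        per_machine[machine] += SIGNAL_WEIGHTS.get(signal, 0.1)
    if sum(per_machine.values()) >= threshold:
        return sorted(per_machine, key=per_machine.get, reverse=True)
    return []

events = [("hr-laptop", "odd_login_hour"),
          ("db-server", "credential_dump"),
          ("db-server", "lateral_movement")]
print(environment_alert(events))  # ['db-server', 'hr-laptop']
```

This mirrors what Chien describes: no single machine’s activity triggers the alarm, but the environment-wide pattern does.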

The third use case is what Symantec calls ‘adaptive security’. Because no two organizations are identical, their cybersecurity needs and strategies will also differ.

“If I want to get to 100 percent [security] I need to adapt each and every environment that I’m in,” said Chien. “What we’re starting to use machine learning for is actually to learn about the environment we’re in and understand it and then deploy self-learning models just for that environment or just for that machine or just for that user.”

A common example of this is using AI to flag and report on all files that contain the word ‘confidential’. That may be fine in an organization that does not handle much confidential information. In government agencies, however, that type of document is common, and having AI flag all of them as suspicious is useless. This type of AI would learn that the movement of these documents is not something that should raise alarms.

But the reverse may also be true. If an organization suddenly starts creating a lot of ‘confidential’ documents, the ML algorithms that support AI solutions and help them adapt will recognize that a change has occurred and stop flagging the movement of those documents around the network as suspicious.
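The adaptive behavior described in the last two paragraphs can be sketched as a rolling baseline: learn the normal daily volume of ‘confidential’ document movement for a given environment and flag only statistically unusual spikes. The window size and z-score cutoff here are assumptions, not Symantec’s actual design.

```python
from statistics import mean, pstdev

class AdaptiveFlagger:
    """Learns an environment's own normal before raising alarms."""

    def __init__(self, window=30, z_cut=3.0):
        self.history = []        # recent daily counts of flagged movements
        self.window = window
        self.z_cut = z_cut

    def observe(self, daily_count):
        """Record today's count; return True if it is anomalous here."""
        anomalous = False
        if len(self.history) >= 5:   # need some baseline before judging
            mu, sigma = mean(self.history), pstdev(self.history)
            anomalous = sigma > 0 and (daily_count - mu) / sigma > self.z_cut
        # Keep only the rolling window, so the baseline keeps adapting.
        self.history = (self.history + [daily_count])[-self.window:]
        return anomalous

gov = AdaptiveFlagger()
for day in [40, 45, 38, 42, 44, 41]:  # a government agency: high is normal
    gov.observe(day)
print(gov.observe(43))    # False: routine volume for this environment
print(gov.observe(400))   # True: a sudden spike stands out even here
```

Because new observations roll into the window, a sustained change in behavior gradually becomes the new baseline, which is the adaptation described above.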

For those hoping that AI will soon put the malware threat to rest, there is a long way to go. Bad actors also employ AI to develop their malware, so it is a bit of an arms race.

The good news is that AI is helping the good guys stay one step ahead. The bad news is the bad guys only need to find one unknown exploit, one unpatched server, one network endpoint among millions to infect, or one gullible employee to download an infected document or give up their credentials to a phishing scam, and even the best defenses can quickly be undone.

“A CISO [chief information security officer] looked at me one time,” said Dickson, “and he goes, ‘Machine learning is done in Python, AI is done in PowerPoint’. What it means is machine learning is a really mature science because it’s really good at pattern recognition. AI is really purely conceptual at this point. It’s about doing things based on known patterns, it’s based on implementing known policy. But self-learning, self-healing? That’s a lot to ask at this point.”
