Firms Embrace AI for Faster, Smarter Cybersecurity Solutions


Google CEO Sundar Pichai recently noted that artificial intelligence (AI) could boost online security, a sentiment echoed by many industry experts.

AI is transforming how security teams handle cyber threats, making their work faster and more efficient. By analyzing vast amounts of data and identifying complex patterns, AI can automate the initial stages of incident investigation, letting security professionals begin with a clear picture of the situation and respond sooner.

AI’s Defensive Advantage

“Tools like machine learning-based anomaly detection systems can flag unusual behavior, while AI-driven security platforms offer comprehensive threat intelligence and predictive analytics,” Timothy E. Bates, chief technology officer at Lenovo, told PYMNTS in an interview. “Then there’s deep learning, which can analyze malware to understand its structure and potentially reverse-engineer attacks. These AI operatives work in the shadows, continuously learning from each attack to not just defend but also to disarm future threats.”

Cybercrime is a growing problem as more of the world embraces the connected economy. Losses from cyberattacks totaled at least $10.3 billion in the U.S. in 2022, per an FBI report.

Increasing Threats

The tools used by attackers and defenders are constantly changing and increasingly complex, Marcus Fowler, CEO of cybersecurity firm Darktrace Federal, said in an interview with PYMNTS.

“AI represents the greatest advancement in truly augmenting the current cyber workforce, expanding situational awareness, and accelerating mean time to action to allow them to be more efficient, reduce fatigue, and prioritize cyber investigation workloads,” he said.

As cyberattacks continue to rise, improving defensive tools is becoming increasingly important. Britain’s GCHQ intelligence agency recently warned that new AI tools could lead to more cyberattacks by lowering the barrier to entry for novice hackers. The agency also said the latest technology could increase ransomware attacks, in which criminals encrypt victims’ files and demand payment, according to a report by GCHQ’s National Cyber Security Centre.

Google’s Pichai pointed out that AI is helping security teams spot and stop attacks more quickly. That speed matters because the odds are asymmetric: defenders must catch every attack to keep systems safe, while attackers need to succeed only once.

While AI may enhance the capabilities of cyberattackers, it equally empowers defenders against security breaches.

Vast Capabilities

Artificial intelligence has the potential to benefit the field of cybersecurity far beyond automating routine tasks, Piyush Pandey, CEO of cybersecurity firm Pathlock, noted in an interview with PYMNTS. As regulations and security requirements multiply, he said, the volume of governance, risk management and compliance (GRC) data is growing so quickly that it may soon exceed what manual processes can handle.

“Continuous, automated monitoring of compliance posture using AI can and will drastically reduce manual efforts and errors,” he said. “More granular, sophisticated risk assessments will be available via ML [machine learning] algorithms, which can process vast amounts of data to identify subtle risk patterns, offering a more predictive approach to reducing risk and financial losses.”

Detecting Patterns

Using AI to spot specific patterns is one way to catch hackers who are continually refining their techniques. Today’s attackers are adept at evading conventional security checks, so many organizations are turning to AI to catch them, Mike Britton, CISO at Abnormal Security, told PYMNTS in an interview. One way AI can be used in cyber defense, he said, is through behavioral analytics: instead of merely searching for known indicators of compromise, such as malicious links or suspicious senders, AI-based solutions can flag activity that deviates from normal patterns.

“By baselining normal behavior across the email environment — including typical user-specific communication patterns, styles, and relationships — AI could detect anomalous behavior that may indicate an attack, regardless of whether the content was authored by a human or by generative AI tools,” he added.
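The baselining idea Britton describes can be illustrated with a minimal statistical sketch. This is a toy example, not Abnormal Security’s actual system: real platforms learn many behavioral features per user, while this sketch flags only a single metric (daily outbound-email volume) that strays several standard deviations from a user’s historical norm.

```python
from statistics import mean, stdev

def build_baseline(counts):
    """Summarize 'normal' behavior from historical daily email counts."""
    return mean(counts), stdev(counts)

def is_anomalous(observed, baseline, threshold=3.0):
    """Flag activity deviating more than `threshold` standard deviations."""
    mu, sigma = baseline
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Hypothetical history: one user's outbound-email counts over seven days
history = [18, 22, 20, 19, 21, 23, 20]
baseline = build_baseline(history)

print(is_anomalous(21, baseline))   # typical volume -> False
print(is_anomalous(400, baseline))  # sudden burst, e.g. a hijacked account -> True
```

A production system would baseline many signals at once (recipients, send times, writing style) and combine them with learned models, but the core idea is the same: model normal, then alert on deviation.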

AI systems can distinguish fake attacks from real ones by recognizing ransomware behavior, swiftly flagging suspicious activity such as unauthorized key generation, Zack Moore, a product security manager at InterVision, said in an interview with PYMNTS.
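The behavioral signals Moore mentions can be sketched with a toy heuristic. Everything here is hypothetical (the event format, thresholds, and process names are invented for illustration); real products combine many such signals with learned models rather than a single rule.

```python
from collections import Counter

# Hypothetical endpoint event stream: (timestamp_seconds, process, action)
events = [
    (0.0, "backup.exe",  "read"),
    (0.1, "unknown.exe", "generate_key"),       # unsanctioned key generation
    (0.2, "unknown.exe", "encrypt_file"),
    (0.3, "unknown.exe", "encrypt_file"),
    (0.4, "unknown.exe", "encrypt_file"),
    (0.5, "unknown.exe", "delete_shadow_copy"),
]

def score_processes(events, encrypt_rate_threshold=0.2):
    """Flag processes matching a ransomware-like pattern: key generation
    combined with rapid file encryption over the process's active window."""
    by_proc = {}
    for ts, proc, action in events:
        by_proc.setdefault(proc, []).append((ts, action))

    flagged = set()
    for proc, acts in by_proc.items():
        actions = Counter(a for _, a in acts)
        # Avoid division by zero for processes seen only at one instant
        duration = (max(ts for ts, _ in acts) - min(ts for ts, _ in acts)) or 1.0
        encrypts_per_sec = actions["encrypt_file"] / duration
        if actions["generate_key"] and encrypts_per_sec > encrypt_rate_threshold:
            flagged.add(proc)
    return flagged

print(score_processes(events))  # {'unknown.exe'}
```

The legitimate backup process reads files but never generates keys or encrypts at speed, so only the ransomware-like process is flagged.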

Generative AI, especially large language models (LLMs), allows organizations to simulate potential attacks and identify their weaknesses. Moore said that the most effective use of AI in uncovering and dissecting attacks lies in ongoing penetration testing.

“Instead of simulating an attack once every year, organizations can rely on AI-empowered penetration testing to constantly verify their system’s fortitude,” he said. “Furthermore, technicians can review the tool’s logs to reverse-engineer a solution after identifying a vulnerability.”
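Moore’s continuous-verification loop might be structured like the sketch below. The probes, their names, and the JSON log format are all hypothetical placeholders; the point is the shape of the workflow: run automated checks repeatedly and log each finding so technicians can later work backward from the logs to a fix.

```python
import json
from datetime import datetime, timezone

# Hypothetical probes: each returns True when the defense holds.
def probe_default_credentials():
    return True   # placeholder: attempt login with known default credentials

def probe_open_debug_port():
    return False  # placeholder: simulated finding, for demonstration

PROBES = {
    "default_credentials": probe_default_credentials,
    "open_debug_port": probe_open_debug_port,
}

def run_probes(probes):
    """One continuous-testing pass: run every probe and record failures
    as structured log entries for later review."""
    findings = []
    for name, probe in probes.items():
        if not probe():
            findings.append({
                "probe": name,
                "time": datetime.now(timezone.utc).isoformat(),
                "status": "vulnerable",
            })
    return findings

for finding in run_probes(PROBES):
    print(json.dumps(finding))
```

In practice this pass would be scheduled (hourly or on every deployment) rather than run once a year, which is the contrast Moore draws with traditional annual penetration tests.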

The game of cat and mouse between attackers and defenders using AI is likely to continue indefinitely. Meanwhile, consumers remain worried about keeping their data safe. A recent PYMNTS Intelligence study found that the shoppers who most value online shopping features also care most about data security, with 40% of U.S. shoppers calling it either their top concern or highly important.