Gen AI Poised to Revolutionise Cybersecurity in the Next 3-5 Years

Harshil Doshi

AI’s ability to enhance every stage of the attack lifecycle makes it necessary for defenders to prepare for faster and more sophisticated threats. Security practitioners are also concerned about the increased volume and sophistication of malware, as well as the internal risks posed by shadow AI, where employees use generative AI tools without organizational oversight and potentially expose sensitive information, shared Harshil Doshi, Country Director, Securonix, in an exclusive interaction with Srajan Agarwal of Elets News Network (ENN).

How do you see generative AI evolving in cybersecurity in the next 3-5 years?

In my opinion, over the next 3-5 years, generative AI will transform cybersecurity by shifting it from reactive to proactive and predictive measures. Advanced machine learning models will enhance the ability to identify and predict complex threat patterns, allowing preemptive action against potential cyber-attacks. Gen AI will also automate threat investigation, reducing the need for manual intervention and enabling security teams to focus on strategic tasks. It will dynamically enrich incident context with data from various sources, facilitating faster and more accurate threat mitigation for organisations.

Furthermore, generative AI’s adaptive capabilities will improve anomaly detection by continuously refining its understanding of normal versus suspicious or malicious behavior. This will lead to fewer false positives and more precise threat identification. Additionally, Gen AI will enhance collaboration within security teams by curating and presenting relevant information, supporting informed decision-making and coordinated responses to cyber threats.
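To make the idea concrete, here is a minimal, hypothetical Python sketch of an anomaly detector with a continuously refined baseline. It is not any vendor’s implementation; the rolling window, warm-up period, and z-score threshold are illustrative assumptions, and a real UEBA engine would use far richer behavioral models.

```python
# A minimal sketch of an adaptive behavioral baseline (illustrative only):
# the definition of "normal" is refined continuously, so alerts track the
# user's evolving behavior and false positives shrink over time.
import random
from collections import deque
from statistics import mean, stdev

class AdaptiveBaseline:
    """Flags events that deviate strongly from a rolling per-user baseline."""

    def __init__(self, window: int = 200, threshold: float = 3.0):
        self.window = window        # recent observations that define "normal"
        self.threshold = threshold  # z-score beyond which an event is suspicious
        self.history: dict[str, deque] = {}

    def score(self, user: str, value: float) -> float:
        """Return the z-score of a new observation against the user's baseline."""
        hist = self.history.setdefault(user, deque(maxlen=self.window))
        if len(hist) < 30:          # not enough data yet: learn, don't alert
            hist.append(value)
            return 0.0
        mu, sigma = mean(hist), stdev(hist)
        z = 0.0 if sigma == 0 else (value - mu) / sigma
        hist.append(value)          # the baseline keeps adapting
        return z

    def is_anomalous(self, user: str, value: float) -> bool:
        return abs(self.score(user, value)) > self.threshold

# Example: bytes uploaded per hour by one employee (synthetic numbers)
detector = AdaptiveBaseline()
for _ in range(100):
    detector.score("alice", random.gauss(100, 10))   # build the baseline
print(detector.is_anomalous("alice", 105))   # typical volume -> False
print(detector.is_anomalous("alice", 5000))  # sudden spike   -> True
```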

To conclude, it is safe to say that as the industry adopts these advancements, cybersecurity will become more anticipatory, neutralising potential threats before they cause harm.

What guardrails do enterprises need to enforce to navigate the risks associated with AI, such as bias, hallucinations, etc.?

As AI advances and becomes more integrated into our daily lives, the potential risks and dangers associated with its use become more apparent. Privacy infringement, bias in algorithms, and ethical implications are just a few of the issues that have arisen as AI technology has progressed. It is crucial that we establish clear guidelines and standards for the development and implementation of AI systems to ensure they are used responsibly and ethically. To navigate these risks successfully, enterprises need a full-stack solution rather than a set of plug-and-play tools. Some of the key measures include:

  1. Compliance: Flag language that violates legal standards, incorrect terminology, and potential data loss, supporting global compliance and plagiarism detection.
  2. Factual Accuracy: Highlight claims needing fact-checking to prevent false information.
  3. Brand Alignment: Ensure content adheres to brand guidelines and uses inclusive, unbiased language.

These guardrails can use a mix of rules-based models, deep learning, transformer-based classifiers, and third-party APIs, creating a comprehensive, full-stack solution tailored to enterprise needs.
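As an illustration of how such layered guardrails might be wired together, here is a hypothetical Python sketch combining a deterministic rules layer with a transformer-based classifier. The regexes, the Hugging Face model name, and the thresholds are placeholders chosen for the example, not a reference to any specific enterprise product.

```python
# Hypothetical guardrail sketch: a rules layer for hard compliance checks plus
# a transformer classifier for softer judgements. Model id and thresholds are
# illustrative placeholders.
import re
from transformers import pipeline

# 1. Rules layer: deterministic checks for obvious compliance / data-loss issues.
RULES = {
    "possible_credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "possible_api_key":     re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9]{16,}\b"),
    "banned_claim":         re.compile(r"\bguaranteed returns\b", re.I),
}

# 2. Model layer: a transformer-based classifier for toxic or non-inclusive
#    language (placeholder model id from the Hugging Face hub).
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

def check(text: str) -> list[str]:
    """Return the list of guardrail violations found in a piece of generated text."""
    findings = [name for name, rx in RULES.items() if rx.search(text)]
    result = toxicity(text[:512])[0]     # rough truncation keeps the sketch simple
    if result["label"].lower() == "toxic" and result["score"] > 0.8:
        findings.append("toxic_language")
    return findings

print(check("Invest now for guaranteed returns! Card: 4111 1111 1111 1111"))
# -> ['possible_credit_card', 'banned_claim']
```

In practice, each layer would also write to an audit log so that compliance teams can review why a piece of generated content was flagged or blocked.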

In your opinion, who will gain the upper hand in leveraging Gen AI tools: cybersecurity defenders or threat actors?

In the contest between cybersecurity defenders and threat actors leveraging Gen AI tools, a slight advantage may tilt toward the defenders. The primary reason is their access to advanced AI-driven systems that enhance threat detection, prevention, and response capabilities. AI allows defenders to identify patterns, predict attacks, and automate responses more effectively, giving them a proactive edge.

However, threat actors are also becoming increasingly adept at using Gen AI to scale their attacks and make them more sophisticated. They use AI to automate reconnaissance, generate evasive malware, and create hyper-realistic phishing attempts, making their operations more efficient and harder to detect. The democratisation of AI tools has further lowered the barrier to entry, allowing even less sophisticated attackers to execute complex attacks.

Despite this, defenders have the upper hand due to their ability to integrate AI across multiple layers of security infrastructure, continuously monitor and adapt to new threats, and leverage various frameworks to understand and mitigate AI-specific vulnerabilities. Moreover, defenders can harness AI to develop more resilient systems, conduct adversarial training, and ensure data integrity. Ultimately, the battle will depend on cybersecurity teams’ continuous innovation and agility to stay ahead of evolving threats.

Are organizations feeling the impact of AI-powered cyber threats?

Yes, organizations are increasingly feeling the impact of AI-powered cyber threats. Darktrace’s recent State of AI Cybersecurity Report revealed that 89% of security professionals think AI-powered threats will continue to be a major challenge in the future. The survey also found that 56% of respondents view these threats as different from traditional threats. The difficulty, however, lies in the lack of reliable ways to detect whether AI played a role in a cyber attack.

AI’s ability to enhance every stage of the attack lifecycle requires defenders to prepare for faster and more sophisticated threats. In addition, security practitioners are concerned about the increased volume and sophistication of malware and the internal risks posed by shadow AI, where employees use generative AI tools without organizational oversight, potentially exposing sensitive information. As AI continues to evolve and become more integrated into everyday processes, it is crucial for organizations to stay ahead of the curve in terms of cybersecurity.

How are you integrating Gen AI in your solutions to help enterprises?

At Securonix, we leverage Generative AI (Gen AI) to empower enterprises against evolving cybersecurity threats. Our Securonix EON suite incorporates AI-reinforced capabilities to transform CyberOps and address AI-powered attacks.

Securonix EON stands on three pillars: an AI-reinforced platform, a Cybersecurity Mesh, and a Frictionless Security Experience. Our AI-reinforced platform enables precise, rapid security decisions while optimizing human intervention. Its key features include Insider Threat Psycholinguistics, which uses AWS Bedrock-based Large Language Models (LLMs) to discern user intent and identify malicious activity. InvestigateRX enhances incident response by extracting context from various data sources, saving analysts approximately 15 minutes per incident. Adaptive Threat Modeling employs machine learning to build dynamic threat models, improving real-time threat detection and response.
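For illustration only, a sketch along these lines might call a Bedrock-hosted LLM to label the likely intent behind an employee’s message. The prompt, the model id, and the risk labels below are hypothetical assumptions for the example, not Securonix’s actual EON or InvestigateRX implementation.

```python
# Hypothetical sketch of LLM-assisted intent analysis via AWS Bedrock.
# The prompt, model id, and labels are illustrative; error handling and
# output validation are omitted for brevity.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

PROMPT = """You are a security analyst assistant. Classify the intent of the
following employee message as one of: benign, disgruntled, data_exfiltration.
Reply only with JSON of the form {{"label": "...", "reason": "..."}}.

Message: {message}"""

def assess_intent(message: str) -> dict:
    """Ask a Bedrock-hosted Claude model to label the likely intent of a message."""
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",   # Claude Messages API format
        "max_tokens": 200,
        "messages": [{"role": "user", "content": PROMPT.format(message=message)}],
    })
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model id
        body=body,
    )
    answer = json.loads(resp["body"].read())["content"][0]["text"]
    return json.loads(answer)  # a real pipeline would validate this output

print(assess_intent("I'm copying the customer database to my personal drive before I leave."))
```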

By combining human expertise with AI capabilities, we aim to empower security teams to proactively protect organizational assets and ensure that our customers and partners stay ahead of emerging threats.

"Exciting news! Elets technomedia is now on WhatsApp Channels Subscribe today by clicking the link and stay updated with the latest insights!" Click here!

Elets The Banking and Finance Post Magazine has carved out a niche for itself in the crowded market with exclusive & unique content. Get in-depth insights on trend-setting innovations & transformation in the BFSI sector. Best offers for Print + Digital issues! Subscribe here➔ www.eletsonline.com/subscription/

Get a chance to meet the Who's who of the Banking & Finance industry. Join Us for Upcoming Events and explore business opportunities. Like us on Facebook, connect with us on LinkedIn and follow us on Twitter, Instagram & Pinterest.