
How Securonix is Building Cybersecurity for LLMs

Like the two sides of a coin, generative AI oscillates between being a boon and a bane. According to a recent report by Microsoft and OpenAI, while generative AI has had a positive impact in many areas, it has also wreaked havoc on cybersecurity, with malicious actors using the same technology to tip the balance in their favour.

The report stated that state-affiliated actors from countries like Russia, North Korea, China, and Iran have attempted to use LLMs like GPT-4 to identify targets and improve their cyberattacks.

Nayaki Nayyar, chief executive of AI-powered cybersecurity giant Securonix, told AIM, “If you thought cyber attacks were bad before, they will only worsen. The use of AI by threat actors will have a direct impact on an organisation’s ability to leverage AI to defend themselves.”

Nayyar has been with the company for over a year and is one of the most influential women leaders in this space. She was joined by Scott Sampson, chief revenue officer; Haggai Polak, chief product officer; and Harshil Doshi, country manager (India and SAARC), at the company’s flagship ‘Spark’ conference, held in Bengaluru on 23rd February 2023.

Last year, the company integrated OpenAI’s ChatGPT into its Unified Defense SIEM platform to improve security operations. With this, users can ask AI models questions in natural language and view results alongside relevant context gathered by the platform. Customisable security controls are implemented to prevent data leaks and protect sensitive information. 

Securonix also scrubs responses from ChatGPT and employs audit logs to detect compliance issues or data leaks. 
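The article does not describe the implementation in detail, but the workflow it outlines (a natural-language question sent along with platform-gathered context, the response scrubbed, and everything written to an audit log) can be sketched roughly as follows. This is a hypothetical illustration using the public openai Python client and simple regex-based redaction; the function names and patterns are assumptions, not Securonix’s actual controls.

```python
"""Minimal sketch: natural-language SIEM query with response scrubbing
and audit logging. Hypothetical illustration, not Securonix's code."""
import json
import logging
import re
from datetime import datetime, timezone

from openai import OpenAI  # assumes the official openai>=1.0 client

client = OpenAI()  # reads OPENAI_API_KEY from the environment
audit_log = logging.getLogger("siem.audit")
logging.basicConfig(level=logging.INFO)

# Simple redaction patterns for sensitive tokens (illustrative only).
REDACTION_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}


def scrub(text: str) -> str:
    """Mask sensitive values in the model's response before display."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text


def ask_siem(question: str, context_events: list) -> str:
    """Send an analyst question plus platform-gathered context to the LLM,
    scrub the answer, and write an audit record for compliance review."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a security analyst assistant."},
            {"role": "user",
             "content": f"{question}\n\nContext:\n{json.dumps(context_events)}"},
        ],
    )
    answer = scrub(response.choices[0].message.content)
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "question": question,
        "answer": answer,  # only the scrubbed text is logged
    }))
    return answer
```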

India as a Market

“Currently over 50% of our employees are from this country and we only see the number growing. About 20% of our global revenue comes from APMEA/India. We aim to take that revenue share to about 30% in our next phase of growth, especially since enterprises are shifting their data to the cloud,” said Nayyar. 

The company aims to have 70% of its employees based in India by the end of the year.

It has two centres of excellence – in Pune and Bengaluru – with the majority of the company’s product and R&D hires based in India. It works with around 22 channel partners, including managed security service providers (MSSPs), system integrators (SIs), distributors, and others.

“We want to focus primarily on channel partnerships and AI investments as a part of our growth strategy,” Sampson told AIM. The CRO said AI is considered a key force multiplier for the company, and that this includes developing AI-driven tools such as developer copilots and customer care assistants.

“Since establishing our sales team in India in 2016, we have doubled our go-to-market team size and secured approximately 12% market share. We expect to double it again in the next few years,” Sampson confirmed.

With a revenue of about $100 million, the company sees 60-63% of it coming through channel partners. It intends to increase this to about 75% over the next few years by introducing a more formal channel programme and strengthening partnerships with hyperscalers like AWS.

Emerging Threats due to AI

According to Polak, generative AI has brought new problems for defenders. These include reconstructing attack chains from unknown actors by linking disparate events, such as suspicious website visits, anomalous processes on devices, or unusual communications, to identify potential threats.

This task is difficult for both humans and machines, but AI helps adapt to evolving attack methods, evaluate risks, and highlight critical issues for analysts.
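As a rough illustration of the kind of correlation described here, the sketch below groups events by entity, links those that fall within a time window into candidate attack chains, and scores them. The event schema, weights, and window are illustrative assumptions, not Securonix’s detection logic.

```python
"""Minimal sketch: correlating disparate events into per-entity attack
chains and scoring them. Hypothetical event schema for illustration."""
from collections import defaultdict
from datetime import timedelta

# Illustrative weights for the event types mentioned in the article.
EVENT_WEIGHTS = {
    "suspicious_website_visit": 1.0,
    "anomalous_process": 2.0,
    "unusual_communication": 1.5,
}


def build_chains(events, window=timedelta(hours=6)):
    """Group events by entity (user or device) and link those within a
    sliding time window into candidate attack chains. Each event is a
    dict with 'entity', 'type', and a datetime 'timestamp'."""
    by_entity = defaultdict(list)
    for ev in sorted(events, key=lambda e: e["timestamp"]):
        by_entity[ev["entity"]].append(ev)

    chains = []
    for entity, evs in by_entity.items():
        chain = [evs[0]]
        for ev in evs[1:]:
            if ev["timestamp"] - chain[-1]["timestamp"] <= window:
                chain.append(ev)
            else:
                chains.append((entity, chain))
                chain = [ev]
        chains.append((entity, chain))
    return chains


def score_chain(chain):
    """Risk score: weighted sum, boosted when distinct event types co-occur."""
    weight = sum(EVENT_WEIGHTS.get(ev["type"], 0.5) for ev in chain)
    diversity = len({ev["type"] for ev in chain})
    return weight * diversity
```

In a real platform, behavioural baselines rather than fixed weights would presumably drive such scoring; the fixed table above only makes the idea concrete.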

In recent discussions with customers in India, Polak highlighted “alert fatigue”, a problem experienced globally in cybersecurity. This fatigue stems from the overwhelming volume of alerts generated by Security Information and Event Management (SIEM) and User and Entity Behavior Analytics (UEBA) solutions.

The primary concern is distinguishing real threats from false positives and negatives, which requires reducing noise.
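A minimal sketch of this kind of noise reduction, assuming a simple alert schema with entity, rule, and risk fields, might de-duplicate repeated alerts and drop those below a risk threshold before they ever reach an analyst.

```python
"""Minimal sketch: reducing alert noise by de-duplicating repeated alerts
and dropping those below a risk threshold. Hypothetical alert schema."""

def triage(alerts, risk_threshold=7.0):
    """Collapse alerts sharing the same (entity, rule) signature and keep
    only those whose risk clears the threshold."""
    merged = {}
    for alert in alerts:
        key = (alert["entity"], alert["rule"])
        if key in merged:
            merged[key]["count"] += 1
            merged[key]["risk"] = max(merged[key]["risk"], alert["risk"])
        else:
            merged[key] = {**alert, "count": 1}
    # Only the alerts that clear the threshold reach a human analyst.
    return [a for a in merged.values() if a["risk"] >= risk_threshold]


if __name__ == "__main__":
    sample = [
        {"entity": "host-42", "rule": "beaconing", "risk": 8.2},
        {"entity": "host-42", "rule": "beaconing", "risk": 6.9},
        {"entity": "user-7", "rule": "failed_login", "risk": 3.1},
    ]
    print(triage(sample))  # only the host-42 beaconing alert survives
```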

Furthermore, attacks are increasingly machine-driven, which calls for AI-based algorithms to address them; human interpretation alone cannot keep pace with rapidly evolving threats.

“Historically, India wasn’t highly regulated in terms of data privacy, but that is changing. For example, India’s DPDP Act draws extensively from Europe’s GDPR,” said Doshi. Consequently, customers are now scrutinising how AI algorithms are validated to ensure they align with enterprise interests and comply with regulations.

Customers are also concerned about AI systems acting autonomously and beyond control. 

Doshi noted that in India, besides the obvious financial sector, the healthcare sector has become increasingly vulnerable, especially post-COVID. Retail and wholesale, especially e-commerce giants with large customer bases, have also seen a significant rise in cyber threats.

Polak noted that while the shortage of skilled tech workers is a global challenge, it seems less of an issue in India.

On using AI to prevent attacks, Polak said his team follows a “human on the loop” approach rather than “human in the loop”, and has seen a positive impact.

“So basically, this means humans will be supervising and acting as an escalation point for AI systems rather than being directly involved in every process,” he added. This approach still provides human oversight but allows AI to act more autonomously than a human-in-the-loop design.
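A minimal sketch of this human-on-the-loop pattern, with hypothetical function names and thresholds, is shown below: the system acts autonomously on high-confidence, low-impact detections and escalates everything else to an analyst.

```python
"""Minimal sketch of the 'human on the loop' pattern described above:
the AI system responds autonomously and only escalates to an analyst
when confidence is low or the action is high-impact. Hypothetical API."""

HIGH_IMPACT_ACTIONS = {"isolate_host", "disable_account"}


def handle_detection(detection, confidence, proposed_action):
    """Act automatically when safe; otherwise queue for human review.

    A human-in-the-loop design would instead require analyst approval
    for every action before execution.
    """
    if confidence >= 0.9 and proposed_action not in HIGH_IMPACT_ACTIONS:
        execute(proposed_action, detection)          # autonomous response
        notify_analyst(detection, proposed_action)   # human supervises after the fact
    else:
        escalate_to_analyst(detection, proposed_action)  # human as escalation point


def execute(action, detection):
    print(f"executing {action} for {detection}")


def notify_analyst(detection, action):
    print(f"FYI: {action} taken for {detection}")


def escalate_to_analyst(detection, action):
    print(f"escalating {detection}; proposed action: {action}")
```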
