-9.2 C
New York
Wednesday, January 22, 2025
pCloud Premium

Cisco’s ‘Radical’ Approach to AI Security


Cisco is taking a radical approach to AI security in its new AI Defense solution.

In an exclusive interview Sunday with Rowan Cheung of The Rundown AI, Cisco Executive Vice President and CPO Jeetu Patel said that AI Defense is “taking a radical approach to address the challenges that existing security solutions are not equipped to handle.”

AI Defense, announced last week, aims to address risks in developing and deploying AI applications, as well as identifying where AI is used in an organization.

AI Defense can protect AI systems from attacks and safeguard model behavior across platforms with features such as:

  • Detection of shadow and sanctioned AI applications across public and private clouds;
  • Automated testing of AI models for hundreds of potential safety and security issues; and
  • Continuous validation safeguards against potential safety and security threats, such as prompt injection, denial of service, and sensitive data leakage.

The solution also allows security teams to better protect their organizations’ data by providing a comprehensive view of AI apps used by employees, create policies that restrict access to unsanctioned AI tools, and implement safeguards against threats and confidential data loss while ensuring compliance.

“The adoption of AI exposes companies to new risks that traditional cybersecurity solutions don’t address,” Kent Noyes, global head of AI and cyber innovation at technology services company World Wide Technology in St. Louis, said in a statement. “Cisco AI Defense represents a significant leap forward in AI security, providing full visibility of an enterprise’s AI assets and protection against evolving threats.”

Positive Step for AI Security

MJ Kaufmann, an author and instructor at O’Reilly Media, operator of a learning platform for technology professionals, in Boston, affirmed Cisco’s analysis of existing cybersecurity solutions. “Cisco is right,” she told TechNewsWorld. “Existing tools fail to address many operationally driven attacks against AI systems, such as prompt injection attacks, data leakage, and unauthorized model action.”

“Implementers must take action and implement targeted solutions to address them,” she added.

Cisco is in a unique position to provide this kind of solution, noted Jack E. Gold, founder and principal analyst at J.Gold Associates, an IT advisory company in Northborough, Mass. “That’s because they have a lot of data from their networking telemetry that can be used to reinforce the AI capabilities they want to protect,” he told TechNewsWorld.

Cisco also wants to provide security across platforms — on-premises, cloud, and multi-cloud — and across models, he added.

“It’ll be interesting to see how many companies adopt this,” he said. “Cisco is certainly moving in the right direction with this kind of capability because companies, generally speaking, aren’t looking at this very effectively.”

Providing multi-model, multi-cloud protection is important for AI security.

“Multi-model, multi-cloud AI solutions expand an organization’s attack surface by introducing complexity across disparate environments with inconsistent security protocols, multiple data transfer points, and challenges in coordinating monitoring and incident response — factors that threat actors can more easily exploit,” Patricia Thaine, CEO and co-founder of Private AI, a data security and privacy company in Toronto, told TechNewsWorld.

Concerning Limitations

Although Cisco’s approach of embedding security controls at the network layer through their existing infrastructure mesh shows promise, it also reveals concerning limitations, maintained Dev Nag, CEO and founder of QueryPal, a customer support chatbot based in San Francisco.

“While network-level visibility provides valuable telemetry, many AI-specific attacks occur at the application and model layers that network monitoring alone cannot detect,” he told TechNewsWorld.

“The acquisition of Robust Intelligence last year gives Cisco important capabilities around model validation and runtime protection, but their focus on network integration may lead to gaps in securing the actual AI development lifecycle,” he said. “Critical areas like training pipeline security, model supply chain verification, and fine-tuning guardrails require deep integration with MLOps tooling that goes beyond Cisco’s traditional network-centric paradigm.”

“Think about the headaches we’ve seen with open-source supply chain attacks where the offending code is openly visible,” he added. “Model supply chain attacks are almost impossible to detect by comparison.”

Nag noted that from an implementation perspective, Cisco AI Defense appears to be primarily a repackaging of existing security products with some AI-specific monitoring capabilities layered on top.

“While their extensive deployment footprint provides advantages for enterprise-wide visibility, the solution feels more reactive than transformative for now,” he maintained. “For some organizations beginning their AI journey that are already working with Cisco security products, Cisco AI Defense may provide useful controls, but those pursuing advanced AI capabilities will likely need more sophisticated security architectures purpose-built for machine learning systems.”

For many organizations, mitigating AI risks requires human penetration testers who understand how to ask the models questions that elicit sensitive information, added Karen Walsh, CEO of Allegro Solutions, a cybersecurity consulting company in West Hartford, Conn.

“Cisco’s release suggests that their ability to create model-specific guardrails will mitigate these risks to keep the AI from learning on bad data, responding to malicious requests, and sharing unintended information,” she told TechNewsWorld. “At the very least, we could hope that this would identify and mitigate baseline issues so that pen testers could focus on more sophisticated AI compromise strategies.”

Critical Need in the Path to AGI

Kevin Okemwa, writing for Windows Central, notes that the launch of AI Defense couldn’t come at a better time as the major AI labs are closing in on producing true artificial general intelligence (AGI), which is supposed to replicate human intelligence.

“As AGI gets closer with each passing year, the stakes couldn’t be higher,” said James McQuiggan, a security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla.

“AGI’s ability to think like a human with intuition and orientation can revolutionize industries, but it also introduces risks that could have far-reaching consequences,” he told TechNewsWorld. “A robust AI security solution ensures that AGI evolves responsibly, minimizing risks like rogue decision-making or unintended consequences.”

“AI security isn’t just a ‘nice-to-have’ or something to think about in the years to come,” he added. “It’s critical as we move toward AGI.”

Existential Doom?

Okemwa also wrote: “While AI Defense is a step in the right direction, its adoption across organizations and major AI labs remains to be seen. Interestingly, the OpenAI CEO [Sam Altman] acknowledges the technology’s threat to humanity but believes AI will be smart enough to prevent AI from causing existential doom.”

“I see some optimism about AI’s ability to self-regulate and prevent catastrophic outcomes, but I also notice in the adoption that aligning advanced AI systems with human values is still an afterthought rather than an imperative,” Adam Ennamli, chief risk and security officer at the General Bank of Canada told TechNewsWorld.

“The notion that AI will solve its own existential risks is dangerously optimistic, as demonstrated by current AI systems that can already be manipulated to create harmful content and bypass security controls,” added Stephen Kowski, field CTO at SlashNext, a computer and network security company, in Pleasanton, Calif.

“Technical safeguards and human oversight remain essential since AI systems are fundamentally driven by their training data and programmed objectives, not an inherent desire for human well-being,” he told TechNewsWorld.

“Human beings are pretty creative,” Gold added. “I don’t buy into this whole doomsday nonsense. We’ll figure out a way to make AI work for us and do it safely. That’s not to say there won’t be issues along the way, but we’re not all going to end up in ‘The Matrix’.”



Source link

Odisha Expo
Odisha Expohttps://www.odishaexpo.com
Odisha Expo is one of the Largest News Aggregator of Odisha, Stay Updated about the latest news with Odisha Expo from around the world. Stay hooked for more updates.

Related Articles

Stay Connected

0FansLike
0FollowersFollow
0SubscribersSubscribe
Best Lifetime Deals on SaaSspot_img

Latest Articles

Black Stars Striker Earns Praise from Frank Lampard After Key Goal

0
Frank Lampard praised Ghanaian striker Brandon Thomas-Asante after his decisive goal led Coventry City to a 1-0 win over Bristol City. The 26-year-old...

Trump orders all federal DEI employees placed on paid leave starting Wednesday

0
The Trump administration is ordering all federal employees in diversity, equity and inclusion roles placed on paid leave by Wednesday evening, according to...

Ram Mandir anniversary: Here’s how Ayodhya is celebrating one year of Ram Lalla’s Pran...

0
Ayodhya is immersed in the colour of spirtualism and joy to mark a year of Ram Lalla's Pran Pratistha ceremony held on January...

Mohamed Salah: Liverpool striker scores 50th European goal for Reds

0
Another day, another Mohamed Salah goal and another Liverpool win.The Egypt forward has now scored three Champions League goals this season and 22...

Elephants can’t pursue their release from a Colorado zoo because they’re not human, court...

0
DENVER — Five elephants at a Colorado zoo may be “majestic” but, since they’re not human, they do not have the legal right...