Omar Santos
Cisco Employee

We recently hosted a live stream with Cisco DevNet discussing AI security. The session covered a range of topics, from current vulnerabilities in AI systems to future trends, the risks associated with large language models (LLMs), and strategies for securing Retrieval-Augmented Generation (RAG) implementations.

Highlights from the Live Stream:

AI Security Vulnerabilities

We began by discussing several vulnerabilities inherent in AI systems.

Securing Retrieval-Augmented Generation (RAG) and Large Language Model (LLM) Implementations

One of the most practical segments was on securing RAG systems:

  • Data Integrity: Ensuring the data used for retrieval is accurate and free from manipulation.
  • Access Controls: Implementing strict access controls to protect the RAG system from unauthorized use (see the sketch after this list).
  • Vector Database Security: Protecting sensitive data stored in vector databases such as ChromaDB, Pinecone, MongoDB Atlas Vector Search, and pgvector.
  • Embedding Models: These models convert text, images, and other data types into vector representations, but they also introduce their own security challenges: they can be vulnerable to adversarial attacks and can inadvertently leak sensitive information if trained on proprietary or personal data.
  • Continuous Monitoring: The importance of ongoing monitoring to detect and respond to any security incidents promptly.
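
To make a few of these points concrete, here is a minimal sketch of one way to combine them: it attaches an integrity hash and a role tag to each document at indexing time, enforces both at query time, and logs any hash mismatch for follow-up. It assumes ChromaDB's Python client (one of the vector databases mentioned above); the collection name, metadata fields, and role values are illustrative and were not part of the live stream.

```python
import hashlib
import logging

import chromadb

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("rag-security")

# In-memory client for illustration only; a real deployment would use a
# persistent, access-controlled ChromaDB instance.
client = chromadb.Client()
collection = client.get_or_create_collection(name="kb_docs")


def add_document(doc_id: str, text: str, allowed_role: str) -> None:
    """Index a document with an integrity hash and a role tag."""
    collection.add(
        ids=[doc_id],
        documents=[text],
        metadatas=[{
            # Data integrity: keep a hash so retrieved text can be verified later.
            "sha256": hashlib.sha256(text.encode()).hexdigest(),
            # Access control: only this role (or "public") may retrieve it.
            "allowed_role": allowed_role,
        }],
    )


def retrieve(query: str, user_role: str, k: int = 2) -> list[str]:
    """Return only documents the caller's role may see, verified against
    their stored hashes; mismatches are logged for investigation."""
    results = collection.query(
        query_texts=[query],
        n_results=k,
        # Metadata filter enforces role-based access at query time.
        where={"allowed_role": {"$in": [user_role, "public"]}},
    )
    verified = []
    for doc_id, text, meta in zip(
        results["ids"][0], results["documents"][0], results["metadatas"][0]
    ):
        if hashlib.sha256(text.encode()).hexdigest() == meta["sha256"]:
            verified.append(text)
        else:
            # Continuous monitoring hook: a mismatch suggests tampering.
            log.warning("Integrity check failed for document %s", doc_id)
    return verified


add_document("doc-1", "Internal incident-response runbook.", allowed_role="analyst")
add_document("doc-2", "Public product overview.", allowed_role="public")
print(retrieve("How do we respond to incidents?", user_role="analyst"))
```

Filtering on metadata at query time keeps unauthorized documents out of the prompt context entirely, rather than relying on the LLM to withhold them after retrieval.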

Missed the live stream? Don’t worry! You can watch the recorded session here.
