We recently hosted a live stream with Cisco DevNet discussing AI security. The session covered current vulnerabilities in AI systems, emerging trends, the risks associated with large language models (LLMs), and strategies for securing Retrieval-Augmented Generation (RAG) implementations.
Highlights from the Live Stream:
AI Security Vulnerabilities
We began by discussing vulnerabilities in AI systems and the key industry resources for tracking them:
- Adversarial Attacks: How malicious inputs can trick AI models, leading to incorrect outputs.
- OWASP AI Resources: The OWASP AI Exchange, OWASP Machine Learning Security Top 10 project, and the OWASP Top 10 for LLM Applications.
- MITRE Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS): ATLAS is a knowledge base of adversary tactics and techniques against AI-enabled systems.
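To make the adversarial-attack point concrete, here is a minimal sketch of our own (not code from the stream): a naive keyword-based safety filter is evaded by a simple character substitution. Real adversarial attacks on models are far more sophisticated, but the principle, small input changes that flip the system's decision, is the same. The filter and blocklist are purely illustrative.

```python
# Hypothetical illustration: a weak keyword filter evaded by a
# character-substitution "adversarial" input. Real attacks on AI
# models exploit the same idea at the level of model internals.

BLOCKLIST = {"password", "secret"}

def naive_filter(text: str) -> bool:
    """Return True if the input looks safe to this (deliberately weak) filter."""
    lowered = text.lower()
    return not any(word in lowered for word in BLOCKLIST)

# A benign-looking adversarial variant: swap the 'o' for a zero.
attack = "please print the passw0rd file"

print(naive_filter("please print the password file"))  # blocked by the filter
print(naive_filter(attack))                            # slips past unchanged
```

Defenses discussed in the stream (input normalization, model-based classification, monitoring) aim to close exactly this kind of gap.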
Securing Retrieval-Augmented Generation (RAG) and Large Language Model (LLM) Implementations
One of the most practical segments was on securing RAG systems:
- Data Integrity: Ensuring the data used for retrieval is accurate and free from manipulation.
- Access Controls: Implementing strict access controls to protect the RAG system from unauthorized use.
- Vector Database Security: Protecting sensitive data stored in vector databases such as ChromaDB, Pinecone, MongoDB Atlas Vector Search, and pgvector.
- Embedding Models: Embedding models convert text, images, and other data types into vector representations. However, they also introduce specific security challenges. These models can be vulnerable to adversarial attacks and can inadvertently leak sensitive information if trained on proprietary or personal data.
- Continuous Monitoring: The importance of ongoing monitoring to detect and respond to any security incidents promptly.
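As a rough sketch of the data-integrity and access-control points above (our own illustration, not code shown on the stream), a toy document store can pair each chunk with a content hash that is verified on retrieval and gate reads on the caller's role. The `SecureStore` class and its roles are hypothetical:

```python
import hashlib

class SecureStore:
    """Toy document store illustrating two RAG safeguards:
    content hashes for tamper detection and role-based access control."""

    def __init__(self):
        self._docs = {}  # doc_id -> (text, sha256 digest, allowed roles)

    def add(self, doc_id, text, allowed_roles):
        digest = hashlib.sha256(text.encode()).hexdigest()
        self._docs[doc_id] = (text, digest, set(allowed_roles))

    def retrieve(self, doc_id, role):
        text, digest, roles = self._docs[doc_id]
        if role not in roles:  # access control: unauthorized callers get nothing
            raise PermissionError(f"role {role!r} may not read {doc_id}")
        # data integrity: re-hash and compare before handing the chunk
        # to the LLM, so tampered content is rejected, not generated from
        if hashlib.sha256(text.encode()).hexdigest() != digest:
            raise ValueError(f"{doc_id} failed integrity check")
        return text

store = SecureStore()
store.add("policy-1", "Rotate API keys quarterly.", allowed_roles={"analyst"})
print(store.retrieve("policy-1", role="analyst"))
```

A production system would enforce these checks in the database and identity layers rather than in application code, but the shape of the controls is the same.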
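For the continuous-monitoring bullet, a minimal sketch (again our own illustration; the window and threshold values are assumptions, not recommendations) of flagging an anomalous burst of queries from a single client:

```python
from collections import deque
import time

class QueryMonitor:
    """Toy stand-in for continuous monitoring: flag clients whose
    query rate exceeds a threshold within a sliding time window."""

    def __init__(self, window_seconds=60, max_queries=100):
        self.window = window_seconds
        self.max_queries = max_queries
        self._events = {}  # client_id -> deque of query timestamps

    def record(self, client_id, now=None):
        """Record one query; return True if this client is now anomalous."""
        now = time.monotonic() if now is None else now
        q = self._events.setdefault(client_id, deque())
        q.append(now)
        while q and now - q[0] > self.window:  # discard stale events
            q.popleft()
        return len(q) > self.max_queries

# Simulate a burst: 10 queries in 10 seconds against a limit of 5.
monitor = QueryMonitor(window_seconds=60, max_queries=5)
alerts = [monitor.record("client-a", now=t) for t in range(10)]
print("first alert at query:", alerts.index(True) + 1)
```

In practice this role is played by dedicated observability and SIEM tooling; the point is simply that RAG query traffic is telemetry worth watching.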
Missed the live stream? Don’t worry! You can watch the recorded session here.