Secure Infrastructure for AI

A Checklist for Practice

Rainer Stropek | time cockpit

Threat Scenarios

Out of scope for today

  • Challenges posed by AI to society (e.g., labor market)
  • AI-generated Phishing Attacks
  • Deepfake Scams and Impersonation
  • Misinformation and Spread of False Information
  • Automated cyberattacks leveraging AI
  • Social engineering aided by AI insights
  • AI-assisted insider threats
  • AI-driven market manipulation
  • Intellectual Property Theft (e.g., AI-assisted plagiarism)
  • ...

We focus on threat scenarios for specific AI-based software projects

Threat Scenarios

The usual cloud security stuff...

  • Data loss due to weak or missing encryption (in transit, at rest)
  • Unauthorized data access due to missing or flawed authentication/authorization
  • Unauthorized access because secrets end up in the wrong hands
  • Denial-of-service and denial-of-wallet attacks
  • Attacks through code vulnerabilities
  • Cross-tenant data access due to poor tenant separation
  • GDPR violations (PII processed in the wrong regions)

Don't forget traditional software engineering and cloud security.

To a certain degree, your AI 🤖 is just another API/cloud app.

Threat Scenarios

AI (LLM)-specific scenarios

  • Prompt injections, prompt escapes (see the sketch after this list)
  • Prompt quality regressions after model updates
  • Evasion of content moderation or filters
  • Budget overrun
  • Incorrect/unwanted responses because of hallucinations, biases, etc.
  • Instabilities because of missing/flawed handling of token limits
  • Overblocking because of flawed content filtering
  • Unintended consequences of AI-generated content/decisions
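
To make the prompt-injection risk concrete, here is a minimal Python sketch (the prompt text and function names are illustrative, not from the talk) contrasting naive prompt concatenation with role separation. Role separation alone does not stop injections; it only raises the bar and should be combined with server-side filters such as Azure Prompt Shields.

    SYSTEM_PROMPT = "You are a support bot. Only answer questions about product X."

    def build_messages_naive(user_input: str) -> list[dict]:
        # Vulnerable: untrusted user text is fused into one instruction
        # string, so "ignore all previous instructions" can override
        # the intended behavior.
        return [{"role": "user", "content": SYSTEM_PROMPT + "\n" + user_input}]

    def build_messages_separated(user_input: str) -> list[dict]:
        # Better: keep instructions and untrusted input in separate
        # chat roles; combine with server-side prompt-attack filters.
        return [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_input},
        ]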

Threat Scenarios

AI (LLM)-specific scenarios

  • Users are unable to trace or understand how the AI arrived at a response
    (Opaque Reasoning)
  • AI agents misuse, or are manipulated into misusing, tool integrations
    (e.g., function calling, MCP = Model Context Protocol 🔗; see the sketch
    after this list)
  • Unauthorized data exposure through RAG systems or tool use
  • Automation risks when LLMs control/trigger downstream processes
  • Model drift over time in fine-tuned systems
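
One common mitigation for tool misuse, sketched below under assumed names (the toolset and schemas are invented for illustration): validate every model-requested tool call against a strict allowlist before executing anything.

    import json

    # Only explicitly vetted, low-risk tools may be called by the model.
    ALLOWED_TOOLS = {
        "get_order_status": {"order_id": str},  # read-only, low risk
        # deliberately NOT listed: "delete_order", "send_email", ...
    }

    def execute_tool_call(name: str, arguments_json: str):
        if name not in ALLOWED_TOOLS:
            raise PermissionError(f"Tool '{name}' is not allowed")
        args = json.loads(arguments_json)
        expected = ALLOWED_TOOLS[name]
        if set(args) != set(expected) or not all(
            isinstance(args[k], t) for k, t in expected.items()
        ):
            raise ValueError("Tool arguments do not match the expected schema")
        # ...dispatch to the real, audited implementation here...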

New threats require new protective skills and tools.

Let's Protect Our 🤖 Friends

The usual cloud security stuff...

  • Strong encryption: largely solved by the cloud provider
    • Azure: Customer-managed keys 🔗
  • Avoid, or at least protect, API keys (see the sketch after this list)
    • Azure: Key Vault 🔗, Managed Identity 🔗
  • Access control
    • Azure: RBAC 🔗
    • VNet integration (Private Endpoints) 🔗
  • Rate limits
    • Azure: Quotas 🔗
    • Option: throttling at the application level (see also completionUsage)
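
For the API-key bullet above, a minimal sketch of keyless authentication against Azure OpenAI with Managed Identity, using the azure-identity and openai packages; the endpoint placeholder and API version are assumptions to adapt to your own resource.

    # pip install azure-identity openai
    from azure.identity import DefaultAzureCredential, get_bearer_token_provider
    from openai import AzureOpenAI

    # DefaultAzureCredential uses the Managed Identity when running in Azure
    # and your developer login locally - no API key to leak or rotate.
    credential = DefaultAzureCredential()
    token_provider = get_bearer_token_provider(
        credential, "https://cognitiveservices.azure.com/.default"
    )

    client = AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
        azure_ad_token_provider=token_provider,
        api_version="2024-06-01",  # assumption: use a current API version
    )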

The usual cloud security stuff...

  • Tenant separation
    • Azure: Model Deployments 🔗
  • Proper data sovereignty settings
    • Azure: Model Deployments 🔗
  • Professional cost management 🔗
  • Logging, monitoring, telemetry (see the sketch below)
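
A minimal telemetry sketch, assuming the completionUsage shape of an (Azure) OpenAI chat completions response. It deliberately logs token counts only, never raw prompt text, so the telemetry itself does not become a GDPR problem.

    import logging

    logger = logging.getLogger("ai.telemetry")

    def log_usage(user_id: str, deployment: str, response) -> None:
        # 'response.usage' follows the completionUsage schema
        # (prompt_tokens, completion_tokens, total_tokens).
        u = response.usage
        logger.info(
            "user=%s deployment=%s prompt_tokens=%d completion_tokens=%d total=%d",
            user_id, deployment, u.prompt_tokens, u.completion_tokens, u.total_tokens,
        )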

AI (LLM)-specific things...

  • Content Classification
    • Azure: Content Filtering, Prompt Shield 🔗, Blocklists 🔗
    • Harmful content
    • Protected material
    • User prompt attacks ("jailbreaking")
    • Indirect attacks (RAG)
    • Use of ungrounded material
  • Evaluation
    • Automated vs. manual (user feedback, testers)
    • Azure: Evaluation (preview) 🔗
  • Limit and manage input/output tokens (see the sketch after this list)
    • max_tokens (caps generated output)
    • completionUsage 🔗
  • Red Teaming 🔗
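
To illustrate token-limit handling, a small sketch that caps generated tokens and detects truncation via finish_reason instead of silently returning cut-off answers; 'client' is an already-configured (Azure) OpenAI client and 'deployment' the model deployment name.

    def complete_with_budget(client, deployment: str, messages, max_output_tokens=512):
        response = client.chat.completions.create(
            model=deployment,
            messages=messages,
            max_tokens=max_output_tokens,  # cap generated output
        )
        choice = response.choices[0]
        if choice.finish_reason == "length":
            # The model hit the token cap: handle explicitly (retry with
            # a tighter prompt, shrink context, or inform the user).
            raise RuntimeError("Completion truncated at max_tokens")
        return choice.message.content, response.usage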

AI (LLM)-specific things...

  • ‼️Never blindly trust an AI‼️
    • Clearly communicate potential AI limitations and errors
    • Encourage users to critically assess every AI-generated result
    • Design for transparent reasoning - a UX design challenge
    • Be cautious with MCP - evaluate risks and implications
  • Mitigating model drift (see the sketch after this list)
    • Regularly revalidate prompts against up-to-date ground truth
    • Add context explicitly instead of relying on assumed model memory
    • Include example completions in prompts
    • Avoid overfitting to one model version; generalize prompts
    • Periodically retrain or refresh fine-tuned models
    • Monitor model outputs continuously for performance degradation
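
A sketch of prompt revalidation as a pytest-style regression test. The run_prompt helper is an assumed fixture that calls your deployed model, and the substring check is a simplification; real evaluation typically uses graded or semantic scoring (e.g., the Azure evaluation tooling mentioned earlier).

    # Curated prompt/expectation pairs, kept in sync with current ground truth.
    GROUND_TRUTH = [
        ("What is our refund period?", "30 days"),
        # ...more pairs...
    ]

    def test_prompt_regressions(run_prompt):
        failures = []
        for prompt, expected in GROUND_TRUTH:
            answer = run_prompt(prompt)
            if expected.lower() not in answer.lower():
                failures.append((prompt, answer))
        assert not failures, f"{len(failures)} prompts regressed: {failures}"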

RAG-specific things...

  • Don’t replicate complex access control logic
    • Avoid duplicating intricate permission models from source systems
    • Error-prone and hard to maintain
  • Verify document access at the source
    • Validate document permissions with the original system
  • Leverage source-native AI search APIs
    • Consider using built-in AI or semantic search features as AI tools
  • Log and audit retrieval steps
    • Maintain traceability of what was shown and why
  • Include source metadata in prompts
    • Add citation data and metadata to provide transparency
  • Add post-processing checks to filter non-existent/unauthorized documents
    (see the sketch below)
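
A sketch of such a post-processing check: every retrieved chunk is re-verified against the source system's own permission API before it may enter the prompt. source_system.can_read is an assumed interface standing in for whatever the original system actually offers.

    import logging

    logger = logging.getLogger("rag.audit")

    def authorized_chunks(chunks, user_id: str, source_system):
        allowed = []
        for chunk in chunks:
            doc_id = chunk["metadata"]["document_id"]
            # Verify at the source instead of replicating its ACL model.
            if source_system.can_read(user_id=user_id, document_id=doc_id):
                allowed.append(chunk)
            else:
                # Audit what was withheld and why.
                logger.info("withheld doc=%s for user=%s", doc_id, user_id)
        return allowed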

Key Takeaways

  • AI is booming – but security fundamentals still matter
    • Don’t let the hype distract from proven best practices
    • Strengthen your cloud security posture 💪
  • Prepare for new threat vectors in AI-powered systems
    • Design for transparent reasoning
    • Apply rigorous QA to prompts and models
    • Educate users and stakeholders 🎓
  • Avoid the “DIY AI” trap
    • Resist building from scratch without clear need
    • Stay up-to-date and integrate responsibly to maximize benefits

Thank you for your attention!
