Prevent prompt injections in your Azure OpenAI and Copilots implementations.


Prompt injection is a vulnerability in AI systems that allows attackers to manipulate large language models (LLMs) to execute their intentions. There are two types of prompt injection attacks: direct and indirect. In a direct attack, an attacker injects a prompt that bypasses the system prompt and instructs the LLM to perform unwanted operations. In an indirect attack, an attacker embeds prompts in external documents that the LLM reads and interprets as instructions. To mitigate prompt injection, Microsoft offers a feature called Prompt Shields, which analyzes LLM inputs and detects user prompt and document attacks. Developers can use the Azure AI Content Safety API to analyze prompts and detect potential vulnerabilities. By implementing prompt shields, developers can increase the safety of their AI solutions.


Article 9m

Login now to access my digest by 365.Training

Learn how my digest works
Features
  • Articles, blogs, podcasts, training, and videos
  • Quick read TL;DRs for each item
  • Advanced filtering to prioritize what you care about
  • Quick views to isolate what you are looking for right now
  • Save your favorite items
  • Share your favorites
  • Snooze items you want to revisit when you have more time