This article provides an overview of the resources for building and deploying trustworthy agents, including end-to-end security, observability, and governance, with controls and checkpoints at every stage of the agent lifecycle. Our recommended development steps are grounded in the Microsoft Responsible AI Standard, which sets the policy requirements that our own engineering teams follow. Much of the Standard follows a pattern, asking teams to Discover, Protect, and Govern potential content risks.
At Microsoft, our approach is guided by a governance framework rooted in AI principles, which establish product requirements and serve as our "north star." When we identify a business use case for generative AI, we first discover and assess the potential risks of the AI system to pinpoint critical focus areas.
Once we identify these risks, we evaluate their prevalence within the AI system through systematic measurement, which helps us prioritize the areas that need attention. We then apply appropriate protections against those risks at the model and agent levels.
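Risk measurement of this kind can be automated with evaluators. Here is a minimal sketch, assuming the azure-ai-evaluation Python package and an existing Azure AI project; the project details and the sample query/response are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.evaluation import ViolenceEvaluator

# Placeholder project details; substitute the values for your Azure AI project.
azure_ai_project = {
    "subscription_id": "<subscription-id>",
    "resource_group_name": "<resource-group>",
    "project_name": "<project-name>",
}

# AI-assisted safety evaluators score individual query/response pairs,
# which can be aggregated to estimate how prevalent a risk is.
violence_eval = ViolenceEvaluator(
    credential=DefaultAzureCredential(),
    azure_ai_project=azure_ai_project,
)

result = violence_eval(
    query="Summarize the movie plot.",
    response="Two rivals compete for years and finally reconcile.",
)
print(result)  # includes a severity label, score, and reasoning
```

Running such evaluators over a representative test set turns anecdotal concerns into prevalence numbers that can be tracked from release to release.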
Finally, we examine strategies for managing risks in production: establishing deployment and operational readiness, and setting up monitoring that supports ongoing governance, ensures compliance, and surfaces new risks after the application is live.
In alignment with Microsoft's Responsible AI (RAI) practices, these recommendations are organized into three stages:
- Discover agent quality, safety, and security risks before and after deployment.
- Protect, at both the model output and agent runtime levels, against security risks, undesirable outputs, and unsafe actions (see the sketch after this list).
- Govern agents through tracing and monitoring tools and compliance integrations.
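To make the Protect stage concrete, the sketch below screens a model output with the Azure AI Content Safety service before it is returned to the user. It assumes the azure-ai-contentsafety Python package; the endpoint, key, and severity threshold are placeholder choices, not prescribed values:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

# Placeholder endpoint and key; use your own Content Safety resource.
client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

def output_is_safe(model_output: str, max_severity: int = 2) -> bool:
    """Return True when no harm category reaches the severity threshold."""
    result = client.analyze_text(AnalyzeTextOptions(text=model_output))
    # Each entry reports a harm category (hate, sexual, violence, self-harm)
    # and a severity score; block anything at or above the threshold.
    return all(
        (item.severity or 0) < max_severity
        for item in result.categories_analysis
    )

if not output_is_safe("agent response text goes here"):
    print("Response withheld by the content safety check.")
```

The same analyze call can screen user inputs before they reach the model, covering both directions at the model output level.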
Security alerts and recommendations
You can view Defender for Cloud security alerts and recommendations in the Risks + alerts section to improve your security posture. Security alerts are notifications generated by the Defender for AI Services plan when threats are identified in your AI workloads. You can take action in the Azure portal or in the Defender portal to address these alerts; a sketch of retrieving alerts programmatically follows the links below.
- To learn more about security alerts, see Alerts for AI workloads (Preview).
- To learn more about security recommendations, see Review security recommendations.
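Alerts can also be retrieved programmatically for automation or reporting. A minimal sketch, assuming the azure-mgmt-security Python package (the subscription ID is a placeholder, and alert field names can vary between package versions):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.security import SecurityCenter

# Placeholder subscription; use the one that hosts your AI workloads.
client = SecurityCenter(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
)

# Enumerate Defender for Cloud alerts across the subscription; alerts
# raised by the Defender for AI Services plan appear here as well.
for alert in client.alerts.list():
    print(alert.alert_display_name, alert.severity, alert.status)
```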