Someone in the organization owns the decisions about where AI operates and where it does not. This is a named person, not a committee.
The organization has stated, in writing, what AI may and may not be used for. Not in a legal document. In a policy people can read.
The AI cannot surface, summarize, or act on information that the user would not otherwise be authorized to see. This requires reviewing the permission model itself, not just the AI settings; a sketch of that check follows this checklist.
Leadership can see, at any point, what AI is doing inside the platform and where the exposure is. Not after an incident. Continuously.
Governance does not end at enablement. It evolves as the platform evolves. Each new AI capability is a new governance surface.
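To make the permission-model point concrete, here is a minimal sketch of the property a permission-aware retrieval layer has to enforce. The types, field names, and functions below are hypothetical, not any vendor's API; the invariant is that documents are filtered against the requesting user's own entitlements before any text reaches the model.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    # Hypothetical ACL field: the principals allowed to read this document.
    allowed_principals: set[str] = field(default_factory=set)

@dataclass
class User:
    user_id: str
    groups: set[str]

def user_can_read(user: User, doc: Document) -> bool:
    """True only when the user's own entitlements grant access."""
    principals = {user.user_id} | user.groups
    return bool(principals & doc.allowed_principals)

def retrieve_for_ai(user: User, candidates: list[Document]) -> list[Document]:
    # The invariant that matters: filtering runs on the *user's* permissions,
    # before any text is handed to the model. The AI never sees a document
    # the user could not have opened directly.
    return [doc for doc in candidates if user_can_read(user, doc)]
```

Note that if the underlying spaces and projects are over-shared, this filter passes the over-sharing straight through to the AI, which is exactly why the checklist item calls for reviewing the permission model and not just the AI settings.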
That last question, whether AI-assisted outputs become part of the formal record, is more consequential than most organizations realize. When AI-assisted outputs influence formal work artifacts, those artifacts may become subject to regulatory review, audit, or legal discovery. Approvals, performance reviews, service responses, risk assessments, project decisions: if AI touched it, the organization must be able to explain how the output was generated, who reviewed it, and what safeguards were in place.
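What being able to explain an output means in practice is largely a record-keeping problem. A minimal sketch of a provenance record, with illustrative field names rather than any standard schema, shows the facts an auditor or litigator would ask for:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AIProvenanceRecord:
    """One record per AI-assisted artifact. Field names are illustrative."""
    artifact_id: str             # the approval, review, or decision it supports
    model_identifier: str        # which model and version produced the draft
    prompt_reference: str        # pointer to the stored prompt, not the text itself
    generated_at: datetime
    reviewed_by: str             # the named human who approved the output
    safeguards: tuple[str, ...]  # e.g. permission-filtered retrieval, PII redaction

# Example: the record attached to an AI-drafted risk assessment.
record = AIProvenanceRecord(
    artifact_id="RISK-2041",
    model_identifier="vendor-model-2025-06",
    prompt_reference="prompts/risk-assessment/778",
    generated_at=datetime.now(timezone.utc),
    reviewed_by="j.alvarez",
    safeguards=("permission-filtered retrieval", "PII redaction"),
)
```

Stored alongside the artifact itself, a record like this turns the question of how an output was generated and who reviewed it from an investigation into a lookup.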
This is not a theoretical concern. The EU AI Act, which entered its penalty enforcement phase in August 2025, places obligations on organizations that deploy AI systems, not only on those that build them, and its fines scale with global turnover.
This is the section that most governance whitepapers skip, and it is the one that matters most if you are an Atlassian customer.
The major governance frameworks (NIST AI RMF, the OECD AI Principles, ISO/IEC 42001) are designed primarily for organizations building AI systems. They address model training, bias mitigation, algorithmic transparency, and the lifecycle management of AI as a technology product.
Even when organizations are not training their own models, they remain responsible for how enterprise data is exposed during inference. The common assumption is that because the AI capability is vendor-provided and runs on a vendor-managed platform, the data risk belongs to the vendor. It does not. The vendor manages the model. The customer manages the data.
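One concrete consequence: even with a vendor-managed model, the customer controls what crosses the boundary at inference time. The sketch below assumes a hypothetical classification helper and a vendor client with a complete method; none of these names come from a real SDK. It simply refuses to send restricted data out for inference.

```python
import re

# Assumption: a crude stand-in for the organization's real data-classification
# service. A real deployment would call that service instead.
RESTRICTED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-shaped strings
    re.compile(r"(?i)\bconfidential\b"),
]

def classify(text: str) -> str:
    """Return 'restricted' or 'general' for a piece of context."""
    if any(pattern.search(text) for pattern in RESTRICTED_PATTERNS):
        return "restricted"
    return "general"

def send_for_inference(client, prompt: str, context: str) -> str:
    # The customer-side gate: restricted data never leaves the boundary,
    # regardless of how the vendor manages the model on the other side.
    if classify(context) == "restricted":
        raise PermissionError("Context classified restricted; not sent for inference.")
    return client.complete(prompt=prompt, context=context)  # hypothetical client API
```

In a real deployment the regex classifier would be replaced by the organization's data-classification service; the pattern that matters is that the gate runs on the customer's side of the boundary.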
Guardrails are not expressions of distrust in AI. They are expressions of institutional maturity. The organizations that deploy them are not the ones that fear AI. They are the ones that intend to use it seriously and at scale.
Three groups. Three sets of priorities. One shared exposure.
Organizations with complex regulatory environments, multiple Atlassian instances, or significant third-party integrations will need a more detailed assessment beyond this checklist.
1. NIST AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology, January 2023.
2. NIST AI 600-1: Generative Artificial Intelligence Profile. National Institute of Standards and Technology, July 2024.
3. OECD AI Principles (updated 2024). Organisation for Economic Co-operation and Development.
4. ISO/IEC 42001:2023. Artificial Intelligence Management Systems. International Organization for Standardization.
5. OWASP Top 10 for Large Language Model Applications, 2025 edition. Open Worldwide Application Security Project.
6. Best Practices for Securing AI Data. CISA, NSA, FBI, and international partners. May 2025.
7. The State of AI, 2024 Global Survey. McKinsey & Company.
8. Global AI Regulations Fuel Billion-Dollar Market for AI Governance Platforms. Gartner, February 2026.
9. Survey: Organizations with AI Governance Platforms Are 3.4x More Effective. Gartner, Q2 2025.
10. PEX Report 2025/2026: AI Governance Policy Adoption. Process Excellence Network.
11. Shadow AI in the Enterprise. WalkMe / SAP, 2025.
12. Stanford AI Index Report, 2025. Stanford University Human-Centered Artificial Intelligence.
13. EU AI Act. European Parliament and Council, 2024. Penalty phase effective August 2025.
14. Atlassian Trust Center and Rovo Data Privacy Guidelines. Atlassian, 2025.
15. 42% of AI Projects Fail in 2025. DZone, 2025.