White paper

AI Governance
And Guardrails

Chapters
    Chapter 01

    Which AI Rollout
    Mishap Looks Familiar?

    Three situations. Pick the one that applies to you.
    01
    The enthusiastic
    rollout
    An executive team enables AI capabilities across their work platform after a compelling demo. Within weeks, product managers are pasting roadmap drafts into AI for refinement. HR is summarizing performance feedback. Engineers are generating documentation from design discussions. Service desk agents are drafting responses with AI assistance.

    Nothing seems to be wrong.

    Three months later, security discovers that sensitive internal project details crossed a boundary the organization never defined. No breach. No regulator. But the next AI proposal dies in committee because nobody trusts the controls anymore. The cost was not a security incident. It was twelve months of stalled AI adoption across the entire organization.
    Chapter 02

    The Numbers Behind
    the Problem

    The scale of the problem, in numbers.
    72% of enterprises use AI in at least one function. Fewer than 1% have fully operationalized responsible AI. (McKinsey, 2024)

    78% of employees use unapproved AI tools at work. Shadow AI is not a fringe problem. (WalkMe, 2025)

    42% of enterprise AI projects fail. The leading cause is not technical. It is organizational. (DZone / ISACA, 2025)

    3.4x more effective: organizations with AI governance platforms outperform those without by this margin. (Gartner, 2025)

    233 harmful AI incidents recorded in 2024, a 56% increase year on year, with $67.4B in losses. (Stanford HAI, 2025)

    29% of organizations have no AI governance policy at all. (PEX Report, 2026)
    Two numbers in that set deserve particular attention.
    The 78% shadow AI figure is not a compliance problem. It is an enablement failure.
    Nearly four in five employees are already using AI tools their organization has not approved, assessed, or governed. These are not rogue actors. They are people trying to do their jobs faster. When the organization does not provide a governed alternative with clear boundaries, employees govern themselves. The 42% project failure rate sits directly underneath this: ungoverned AI adoption creates the conditions for ungoverned AI failure.
    The 3.4x Gartner multiplier is the commercial case for governance in a single number.
    Organizations that govern AI well do not just avoid incidents. They get more value from AI. Governance is not the cost of doing AI. It is the condition under which AI actually works at scale.
    Most organizations start governing AI after the first incident. The best ones start before the first demo.
    Chapter 03

    Your AI Rollout Is
    Already Behind on Governance

    Five things governance actually means. Most organizations skip all of them.
    Most AI initiatives inside work management platforms begin the same way. Someone demonstrates a capability. A Confluence page summarized in seconds. A dashboard enriched with AI-generated insights. The reaction is always the same: we need this everywhere.

    That is the moment governance either appears or disappears. In most organizations, it disappears. Not deliberately. It simply never gets raised. The enthusiasm is real, the executive mandate is clear, and governance feels like something that can be addressed later. Later rarely arrives.

    The NIST AI Risk Management Framework makes a point that is often cited but rarely acted on: governance is not a downstream activity. It is a continuous, cross-cutting function that shapes how AI systems are selected, integrated, and monitored throughout their lifecycle. In practical terms, governance means five things.

    Defined accountability

    Someone in the organization owns the decisions about where AI operates and where it does not. This is a named person, not a committee.

    Clear usage boundaries

    The organization has stated, in writing, what AI may and may not be used for. Not in a legal document. In a policy people can read.

    Data access controls

    The AI cannot surface, summarize, or act on information that the user would not otherwise be authorized to see. This requires reviewing the permission model, not just the AI settings.

    Risk visibility

    Leadership can see, at any point, what AI is doing inside the platform and where the exposure is. Not after an incident. Continuously.

    Continuous oversight

    The governance does not end at enablement. It evolves as the platform evolves. Each new AI capability is a new governance surface.

    None of these slow innovation down. They prevent innovation from outrunning responsibility.

    When AI becomes embedded in work management platforms, where issues, documentation, service tickets, and project decisions live, the stakes are fundamentally different from a personal productivity tool. These systems are not peripheral. They are institutional memory. Every ticket, every Confluence page, every Jira workflow represents a decision the organization may need to explain later.
    Most organizations start governing AI after the first incident. The best ones start before the first demo.
    Chapter 04

    Turn On Rovo for Everyone

    What the request does not include: a policy.
    This section is for every Atlassian administrator who has received that email, that Slack message, or that hallway request.

    The request sounds simple. It is not. Rovo respects existing user permissions. If a user does not have access to a Confluence space or a Jira project, Rovo cannot see that content either. That is a meaningful baseline. But it is not a governance framework. It is a technical control. And technical controls on their own are not enough.

    Consider the permission models that most enterprise Atlassian instances actually have. In our experience, they have grown organically over years. Permissions have accumulated. Spaces created for one team have been shared more broadly. Projects intended to be private have been opened up for convenience. The permission model reflects years of decisions made by dozens of administrators with varying levels of care.

    In that environment, Rovo does not expose anything that was not already technically accessible. But it makes things findable that were previously hidden by obscurity. A user who technically had access to an HR Confluence space but never thought to look will now get summaries of that content surfaced in response to a natural language query. The information was accessible before. It was not discoverable. Rovo changes that.
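    The permission review can begin with something as simple as ranking spaces by how many principals can read them. Below is a minimal sketch, not a finished audit: it assumes the Confluence Cloud v2 REST endpoints GET /wiki/api/v2/spaces and GET /wiki/api/v2/spaces/{id}/permissions, the site URL and environment variable names are placeholders, and response field names should be verified against the current API documentation.

import os
from urllib.parse import urljoin

import requests

BASE = "https://your-site.atlassian.net/wiki"  # placeholder site URL
AUTH = (os.environ["ATLASSIAN_EMAIL"], os.environ["ATLASSIAN_API_TOKEN"])

def paginate(url, params=None):
    """Yield result objects across v2 cursor pagination (_links.next)."""
    while url:
        resp = requests.get(url, params=params, auth=AUTH, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        yield from body.get("results", [])
        nxt = body.get("_links", {}).get("next")
        url = urljoin(BASE, nxt) if nxt else None
        params = None  # the next link already encodes the cursor

def read_principals(space_id):
    """Principals granted space-level read permission."""
    return [
        perm.get("principal", {})
        for perm in paginate(f"{BASE}/api/v2/spaces/{space_id}/permissions")
        if perm.get("operation", {}).get("key") == "read"
    ]

if __name__ == "__main__":
    report = []
    for space in paginate(f"{BASE}/api/v2/spaces", {"limit": 250}):
        principals = read_principals(space["id"])
        groups = sum(1 for p in principals if p.get("type") == "group")
        report.append((space["key"], space["name"], len(principals), groups))

    # Broadest-read spaces first: review these before enabling AI search.
    for key, name, total, groups in sorted(report, key=lambda r: -r[2])[:20]:
        print(f"{key:12} {name[:40]:40} read principals: {total} ({groups} groups)")

    Spaces at the top of that report, especially HR, legal, or finance spaces readable by broad groups, are the ones to review before AI-powered search makes them discoverable.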
    The questions that matter are not technical. They are behavioral and organizational. Should users be able to ask Rovo to summarize private HR documentation they technically have read access to? Should AI surface information across project boundaries configured specifically to keep teams focused? Should AI-generated insights be treated as formal records, subject to the same retention and audit requirements as human-authored content? If an AI-generated summary is wrong and someone acts on it in an approval workflow, who owns that outcome?

    In most organizations, nobody has answered these questions before the administrator is asked to flip the switch.
    Impact of Enabling Rovo Without a Policy

    Access
    Without a policy: the HR space is technically accessible to some users, but nobody thinks to look there.
    After Rovo is enabled: HR content is actively discoverable; summaries surface in any natural language query.

    Behaviour
    Without a policy: permissions were set by dozens of admins over years, inconsistently, organically.
    After Rovo is enabled: those accumulated permission decisions now determine exactly what AI will surface, and to whom.

    Risk
    Without a policy: risk is latent; information exists but is scattered and hard to surface.
    After Rovo is enabled: risk is active; AI makes previously obscure content instantly findable at scale.

    Accountability
    Without a policy: no one has answered the governance questions, because no one has been asked.
    After Rovo is enabled: the admin is now the de facto risk owner, by default, not by design.
    Without a policy, the administrator becomes the de facto risk owner. They make judgment calls about enablement scope, permission boundaries, and data exposure that should be made by executives, legal, and compliance. Not because the administrator wants that authority. Because nobody else has claimed it.

    Governance exists to prevent operational teams from carrying unspoken strategic risk. When an organization invests in a clear AI usage policy before enabling capabilities, the administrator’s job becomes what it should be: technical implementation within defined boundaries.
    A policy written for auditors shapes nothing. A policy written for people shapes behavior.
    Chapter 05

    Your Policy Exists to Shape
    Behavior, Not Satisfy Auditors

    Five questions your policy must answer before go-live.
    Many organizations react to AI risk by drafting a policy that reads like a compliance artifact. Dense. Abstract. Disconnected from daily work. Written by legal, reviewed by compliance, published to the intranet, read by nobody.

    That approach misunderstands the role of policy. An effective AI usage policy does not exist to satisfy auditors. It exists to shape the behavior of the people who use AI every day.

    The OECD AI Principles emphasize transparency, accountability, and human-centered oversight as core pillars of responsible AI systems. ISO/IEC 42001, the first international standard for AI management systems published in 2023, reinforces that AI must be governed within a management system that defines roles, responsibilities, and controls. These frameworks matter. Any serious governance effort should be aware of them. But the practical questions every organization must answer before scaling AI in a work management environment are not abstract.
    THE FIVE QUESTIONS YOUR POLICY MUST ANSWER
    Before any AI capability goes live, get written answers to each of these.
    01
    What classes of data are prohibited from being entered into AI systems?
    02
    When must AI-generated outputs be reviewed or verified by a human before they are acted on?
    03
    Who owns the decision if an AI-assisted recommendation turns out to be wrong?
    04
    Are AI interactions logged in a manner consistent with existing audit policies? (A sketch of checking this against the organization audit log follows this list.)
    05
    Does the AI operate within existing permission models, or does it have access to information users would not otherwise see?
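    On the logging question (04 above), organization-level audit events can be pulled programmatically and checked for AI-related activity. A hedged sketch follows, assuming the Atlassian Organizations REST API endpoint GET https://api.atlassian.com/admin/v1/orgs/{orgId}/events; the exact event action names, and how much AI activity the platform writes to the audit log, vary, so the keyword filter is illustrative only.

import os

import requests

ORG_ID = os.environ["ATLASSIAN_ORG_ID"]
HEADERS = {"Authorization": f"Bearer {os.environ['ATLASSIAN_ORG_API_KEY']}"}

def org_events():
    """Yield org audit events, following the API's next links."""
    url = f"https://api.atlassian.com/admin/v1/orgs/{ORG_ID}/events"
    while url:
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        yield from body.get("data", [])
        url = body.get("links", {}).get("next")  # absolute URL, or None when done

KEYWORDS = ("ai", "rovo")  # illustrative terms, not official event names

for event in org_events():
    attrs = event.get("attributes", {})
    action = str(attrs.get("action", ""))
    if any(k in action.lower() for k in KEYWORDS):
        print(attrs.get("time"), action)

    If a review like this finds no AI-related events at all, that is itself an answer to question 04: the logging either is not happening or is not reaching the organization's audit trail.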

    The last of those five questions is more consequential than most organizations realize. When AI-assisted outputs influence formal work artifacts, those artifacts may become subject to regulatory review, audit, or legal discovery. Approvals, performance reviews, service responses, risk assessments, project decisions: if AI touched it, the organization must be able to explain how the output was generated, who reviewed it, and what safeguards were in place.


    This is not a theoretical concern. The EU AI Act, which entered its penalty phase in August 2025 with fines reaching up to 7% of global annual turnover, explicitly covers AI systems used in employment and HR contexts. Governance serves not only operational discipline but legal defensibility.

    A useful AI usage policy should be readable in under fifteen minutes. It should be understood by executives, administrators, and end users alike. It should answer three questions plainly.
    Where AI is permitted. Where AI is prohibited. Where human judgment is required.
    Without that clarity, experimentation spreads faster than oversight. The 38% of employees sharing confidential data with unapproved AI tools are not malicious. They are uninformed. A policy they can read and understand is the first line of defense.

    The frameworks were written for organizations building AI. Most Atlassian customers are consuming it.
    Chapter 06

    Where the Frameworks
    Fall Short

    The frameworks were built for builders. You are a consumer.

    This is the section that most governance whitepapers skip, and it is the one that matters most if you are an Atlassian customer.

    The major governance frameworks (NIST AI RMF, the OECD AI Principles, ISO/IEC 42001) are designed primarily for organizations building AI systems. They address model training, bias mitigation, algorithmic transparency, and the lifecycle management of AI as a technology product.

    Most Atlassian enterprise customers are not building AI. They are consuming it. They are enabling vendor-provided AI capabilities inside platforms they already use to manage work. The governance challenge is fundamentally different.
    AI Governance: Builder vs Consumer Responsibilities

    If you are building AI, you govern model training, bias mitigation, algorithmic transparency, and the lifecycle of AI as a technology product.

    If you are consuming AI, you govern permission models, data exposure, content discoverability, and how vendor-provided AI behaves inside the platforms where your work already lives.
    Organizations that apply a builder governance framework to a consumer context end up governing the wrong things. They spend months on AI ethics statements and algorithmic bias assessments when the actual risk is that a Confluence space with sensitive HR content has overly broad read permissions and Rovo is now surfacing summaries of it to anyone who asks.

    NIST published AI 600-1 in July 2024, a generative-AI-specific profile of the RMF, which gets closer to the consumer use case.

    But even that document is oriented toward organizations with dedicated AI teams. Most Atlassian enterprise customers do not have a dedicated AI team. They have platform administrators, IT leaders, and users being asked to adopt AI capabilities already baked into the tools they use every day. The governance must meet them where they are.
    ISO/IEC 42001 is valuable for establishing a management system. NIST AI RMF is useful for understanding risk categories. The OECD Principles provide a solid ethical foundation. But none of them will tell you whether Rovo should be enabled in your HR Confluence space, how to audit AI-generated content in your Jira Service Management instance, or what to do when an AI-assisted triage rule starts misclassifying high-severity tickets. Those answers must come from people who understand the specific operating environment. The frameworks are the starting point, not the answer.
    A policy tells people what to do. A guardrail ensures the system behaves accordingly.
    Chapter 07

    Policies Are Paper.
    Guardrails Are Infrastructure.

    What guardrails look like inside a work management platform
    Policies define expectations. Guardrails operationalize them. Without guardrails, a policy is a document nobody references until something goes wrong.

    The OWASP Top 10 for Large Language Model Applications, updated to its 2025 edition, identifies risk categories that most traditional IT security frameworks were not designed to catch. The four most relevant risks are mapped below to their practical implications inside a work management environment.

    LLM01: Prompt injection
    A user or attacker crafts input that the model interprets as a new instruction rather than content to process. In Jira or Confluence, a malicious instruction embedded in a ticket description or page could redirect AI behavior for anyone who queries that content.

    LLM02: Sensitive information disclosure
    The AI surfaces content from its context that the querying user was not intended to see. In a work management environment with imperfect permission hygiene, this is the most common real-world risk.

    LLM06: Excessive agency
    The AI takes actions beyond its intended scope. As Rovo gains agent capabilities including the ability to modify tickets, trigger workflows, and act on behalf of users, this risk becomes directly operational.

    LLM07: System prompt leakage
    Underlying instructions or configuration that define how the AI behaves are exposed to users. Relevant when organizations configure custom Rovo behaviors or connect third-party applications.
    In practice, guardrails inside a work management environment include the following:

    Enforcing permission inheritance, so AI cannot surface content a user would not otherwise access.

    Restricting AI from automatically triggering irreversible workflow transitions, such as closing tickets, advancing approvals, or reassigning ownership, without human confirmation (a minimal sketch of this gate follows this list).

    Logging every AI-assisted change in tickets and documentation, so there is always an auditable trail.

    Blocking sensitive data from being sent to external generative AI interfaces.

    Segmenting experimentation environments from production workflows.
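    The human-confirmation gate in the second item is ordinary deterministic code, not AI. Here is a minimal sketch; the action names and the "ai_agent" actor type are illustrative, and in a real deployment the check would sit wherever automation requests are mediated, such as a webhook handler, middleware, or an automation rule.

from dataclasses import dataclass

# Illustrative set of transitions considered irreversible in this sketch.
IRREVERSIBLE = {"close_ticket", "approve_change", "reassign_owner"}

@dataclass
class ActionRequest:
    actor_type: str        # "human" or "ai_agent"
    action: str            # e.g. "close_ticket"
    human_confirmed: bool  # True once a named person has signed off

def allow(req: ActionRequest) -> tuple[bool, str]:
    """Return (allowed, reason); every decision carries a reason so the
    outcome can be written to the audit trail."""
    if req.actor_type != "ai_agent":
        return True, "human-initiated action"
    if req.action not in IRREVERSIBLE:
        return True, "reversible action, AI permitted"
    if req.human_confirmed:
        return True, "irreversible action confirmed by a human"
    return False, "irreversible action requires human confirmation"

if __name__ == "__main__":
    requests_to_check = [
        ActionRequest("ai_agent", "add_comment", False),
        ActionRequest("ai_agent", "close_ticket", False),
        ActionRequest("ai_agent", "close_ticket", True),
    ]
    for req in requests_to_check:
        ok, why = allow(req)
        print(f"{req.action:13} by {req.actor_type:9} -> "
              f"{'ALLOW' if ok else 'BLOCK'} ({why})")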

    CISA and a coalition of international security agencies released joint guidance in 2025 on securing AI data across its lifecycle.

    Even when organizations are not training their own models, they remain responsible for how enterprise data is exposed during inference. The common assumption is that because the AI capability is vendor-provided and runs on a vendor-managed platform, the data risk belongs to the vendor. It does not. The vendor manages the model. The customer manages the data.
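    Blocking sensitive data at the egress point can start as a blunt pre-flight check before anything is forwarded to an external model. A naive sketch with illustrative patterns only: matching like this misses plenty and belongs alongside, not instead of, data classification and platform-level loss prevention controls.

import re

# Illustrative patterns; a real deployment would use the organization's
# data classification rules, not three regexes.
BLOCKLIST = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn_like":    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_token":   re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def egress_check(text: str) -> list[str]:
    """Return the names of every pattern that matched; empty means pass."""
    return [name for name, rx in BLOCKLIST.items() if rx.search(text)]

def send_to_external_ai(text: str) -> str:
    hits = egress_check(text)
    if hits:
        # Block and report rather than silently stripping: a human should see why.
        raise PermissionError(f"blocked by egress filter: {', '.join(hits)}")
    return f"(would forward {len(text)} characters to the external AI here)"

if __name__ == "__main__":
    print(send_to_external_ai("Summarize our Q3 retro notes."))
    try:
        send_to_external_ai("Customer card 4111 1111 1111 1111 was declined.")
    except PermissionError as err:
        print(err)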

    Guardrails are not expressions of distrust in AI. They are expressions of institutional maturity. The organizations that deploy them are not the ones that fear AI. They are the ones that intend to use it seriously and at scale.

    Governing an email draft and governing an approval workflow are not the same problem.
    Chapter 08

    Not All AI Carries the
    Same Risk

    The risk is not uniform. The governance should not be either.
    It is tempting to treat all AI use as equivalent. It is not. Getting this distinction wrong leads to governance that is simultaneously too tight in the wrong places and too loose where it matters most.

    An employee using a general-purpose AI tool to refine an email draft carries limited organizational risk. The data exposure is narrow. The output is ephemeral. The consequences of a bad suggestion are minor and easily corrected.

    An AI capability embedded directly into the system that governs work assignments, documentation, approvals, and service delivery carries a different order of consequence. It operates on institutional data. It influences decisions that become part of the organization’s formal record. It touches workflows that other systems, people, and processes depend on.
    Personal productivity AI can often be governed through data classification rules and contractual safeguards. System-of-work AI must align with enterprise permission models, role-based access controls, audit requirements, regulatory obligations, and record retention policies. The governance surface is larger. The failure modes are less visible because they accumulate gradually rather than presenting as a single incident.

    Here is a pattern we see regularly. An engineer cannot use ChatGPT to brainstorm solutions because the organization has banned all external AI tools. Meanwhile, Rovo has unrestricted access to every Confluence space in the instance, including spaces containing legal strategy, HR actions, and M&A planning, because nobody reviewed the permission model before enablement. The visible risk gets the policy. The structural risk gets overlooked.
    Discipline means applying proportional control. Tight governance where the consequences are institutional. Lighter guardrails where the risk is personal and contained. Most organizations have this backwards.
    Three groups. Three sets of priorities. None of them talking to each other before go-live.
    Chapter 09

    The Enthusiasm Comes from the Top.
    The Risk Lands in the Middle.

    Three groups. Three sets of priorities. One shared exposure.

    Responsible AI governance requires alignment between leadership ambition and platform reality. These are not always the same thing. They are almost never aligned before AI capabilities are enabled.
    Executives focus on: productivity gains and time savings; competitive differentiation; innovation signaling to the market; return on the AI investment; speed of deployment.

    Platform administrators focus on: permission inheritance and data exposure; workflow integrity and audit traceability; who owns the risk if something breaks; how to contain scope creep; what to do when the platform updates.

    Users focus on: spending less time on repetitive tasks; getting answers faster; avoiding new process overhead; access to the most capable features; not being blamed when AI is wrong.
    These perspectives are complementary, not conflicting. But they are rarely aligned before AI goes live. The executive announces the initiative. The administrator implements it under time pressure. Users start experimenting before either group has agreed on the boundaries. The enthusiasm comes from the top. The risk lands on the platform team. The workarounds come from the users.

    The 78% shadow AI figure is not a failure of compliance. It is a failure of enablement. When the organization does not provide a governed alternative, people govern themselves. And when 38% of employees share confidential data with unapproved AI tools, the cause is usually not malice. It is that nobody told them clearly what they were and were not allowed to do.
    Governance is the mechanism that aligns all three groups before something breaks. When leadership declares that the organization will use AI to accelerate how it manages work, the follow-up question must be immediate: within what boundaries, and who has the authority to change them? That question defines organizational maturity.
    AI governance does not have a go-live date because AI does not have a go-live date.
    Chapter 10

    Governance Does Not
    Have a Go-Live Date

    Platforms update continuously. Governance must keep pace.
    AI governance does not end at enablement. It evolves alongside the technology.

    Modern SaaS platforms update AI models and capabilities continuously. Features expand. Interfaces change. Underlying models may improve or behave differently. Atlassian has shipped significant changes to Rovo’s capabilities multiple times since its initial release, including expanded agent functionality, MCP server integrations, and enhanced search across connected third-party applications. Each update potentially changes the governance surface. A policy written for Rovo at launch may not adequately cover Rovo six months later.

    Organizations must define who has authority to approve new AI capabilities as they become available, how model or feature updates are reviewed before broad enablement, what metrics indicate acceptable performance, and what thresholds trigger reassessment. Without these answers, each platform update becomes a governance event that nobody is managing.

    Continuous oversight requires measurable signals: how often AI-assisted decisions require human correction, incident reports tied directly to AI outputs, exception patterns to the AI usage policy, and drift in classification or triage accuracy over time.
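    One of those signals, the human-correction rate, is straightforward to track once the underlying events are captured. Below is a sketch with made-up numbers; where the events come from (ticket labels, review queues, audit logs) and where the thresholds sit are organization-specific choices.

from datetime import date

# (week_start, ai_assisted_outputs, human_corrections): illustrative data only
WEEKS = [
    (date(2025, 9, 1), 410, 21),
    (date(2025, 9, 8), 455, 30),
    (date(2025, 9, 15), 470, 52),
    (date(2025, 9, 22), 480, 67),
]

REASSESS_THRESHOLD = 0.10  # correction rate that should trigger a governance review
DRIFT_WINDOW = 2           # consecutive rising weeks worth flagging

rates = [(week, corrections / outputs) for week, outputs, corrections in WEEKS]

rising = 0
for i, (week, rate) in enumerate(rates):
    flags = []
    if rate > REASSESS_THRESHOLD:
        flags.append("above reassessment threshold")
    rising = rising + 1 if i > 0 and rate > rates[i - 1][1] else 0
    if rising >= DRIFT_WINDOW:
        flags.append("sustained upward drift")
    print(f"{week}  correction rate {rate:.1%}  {'; '.join(flags) or 'ok'}")

    The point is not the arithmetic. It is that "acceptable performance" becomes a number someone is accountable for, and reassessment becomes an event the data can trigger rather than a meeting someone has to remember to call.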
    Governance also requires a response model. When AI outputs create harm, confusion, or regulatory exposure, the organization should be able to temporarily disable affected capabilities, investigate prompt and output history, review the human oversight process that was supposed to catch the issue, and communicate findings to affected stakeholders. If the organization cannot do any of these things, its governance is symbolic. The policy exists on paper. The operational reality is unmanaged.

    The EU AI Act compliance requirements for high-risk AI systems, which become fully enforceable in August 2026, make continuous governance a legal obligation for organizations operating in or serving European markets. But even without regulatory pressure, the commercial case is clear. Organizations that succeed with AI treat it as a capability requiring the same continuous attention as any other part of their operational infrastructure, not a project with a go-live date.
    Fifteen items. Answer them before you enable AI at scale.
    Chapter 11

    The AI Governance
    Readiness Checklist

    Fifteen items. Answer them before you enable AI at scale.
    This checklist is designed for Atlassian enterprise customers preparing to enable or expand AI capabilities inside their work management environment. It is a starting point for the conversations that must happen before the technology goes live, not a certification standard.
    OWNERSHIP AND ACCOUNTABILITY

    1. Executive sponsor identified. A named executive owns AI governance decisions, not just AI adoption enthusiasm.
    2. Governance authority defined. It is clear who can approve new AI capabilities, change usage boundaries, and disable features if needed.
    3. Administrator role scoped. The platform admin's responsibility ends at technical implementation. They are not carrying strategic risk.

    POLICY AND USAGE BOUNDARIES

    4. AI usage policy written and readable. Under fifteen minutes to read. Covers what is permitted, what is prohibited, and where human judgment is required.
    5. Data classification applied to AI. The organization knows which data categories are prohibited from AI systems and has communicated this clearly.
    6. Shadow AI acknowledged. The organization has recognized that employees are already using unapproved tools and has provided a governed alternative.

    TECHNICAL CONTROLS

    7. Permission model reviewed for AI. Atlassian instance permissions have been audited with AI discoverability in mind, not just access control.
    8. Third-party connectors assessed. Any Rovo integrations with third-party applications have been reviewed for data exposure implications.
    9. Irreversible actions restricted. AI cannot trigger workflow transitions, approvals, or escalations without human confirmation.
    10. Logging enabled for AI actions. AI-assisted changes in tickets, documentation, and workflows are logged and auditable.

    MONITORING AND RESPONSE

    11. Performance metrics defined. Measurable signals exist for AI accuracy, correction rates, and exception patterns.
    12. Incident response plan documented. The organization can disable AI capabilities, investigate outputs, and communicate findings when needed.
    13. Governance review cadence set. AI governance is reviewed on a defined schedule, not only when an incident occurs.

    REGULATORY AWARENESS

    14. EU AI Act applicability assessed. For organizations in or serving European markets: whether AI use cases fall under high-risk classifications.
    15. Record retention policy updated. AI-influenced work artifacts are subject to the same retention and audit requirements as human-authored content.

    Organizations with complex regulatory environments, multiple Atlassian instances, or significant third-party integrations will need a more detailed assessment beyond this checklist.

    Governance and guardrails are not the
    brakes on AI. They are the steering system.
    Organizations that skip this layer may move quickly, but they will also move blindly. Organizations that invest in governance first create the conditions under which AI can be trusted. Not because it is flawless, but because it is bounded, monitored, measurable, and accountable. That trust becomes the foundation for everything that follows.
    References

    1. NIST AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology, January 2023.
    2. NIST AI 600-1: Generative Artificial Intelligence Profile. National Institute of Standards and Technology, July 2024.
    3. OECD AI Principles (updated 2024). Organisation for Economic Co-operation and Development.
    4. ISO/IEC 42001:2023. Artificial Intelligence Management Systems. International Organization for Standardization.
    5. OWASP Top 10 for Large Language Model Applications, 2025 edition. Open Worldwide Application Security Project.
    6. Best Practices for Securing AI Data. CISA, NSA, FBI, and international partners. May 2025.
    7. The State of AI, 2024 Global Survey. McKinsey and Company.
    8. Global AI Regulations Fuel Billion-Dollar Market for AI Governance Platforms. Gartner, February 2026.
    9. Survey: Organizations with AI Governance Platforms Are 3.4x More Effective. Gartner, Q2 2025.
    10. PEX Report 2025/2026: AI Governance Policy Adoption. Process Excellence Network.
    11. Shadow AI in the Enterprise. WalkMe / SAP, 2025.
    12. Stanford AI Index Report, 2025. Stanford University Human-Centered Artificial Intelligence.
    13. EU AI Act. European Parliament and Council, 2024. Penalty phase effective August 2025.
    14. Atlassian Trust Center and Rovo Data Privacy Guidelines. Atlassian, 2025.
    15. 42% of AI Projects Fail in 2025. DZone, 2025.

    About
    Trundl is an Atlassian Solution Partner specializing in service management, enterprise AI governance, and platform modernization for complex organizations. We work with healthcare systems, financial services firms, and technology companies that need their Atlassian investment to operate at the level their business demands.

    Our Rapid Deploy methodology uses AI to convert structured discovery sessions into production-ready Atlassian configurations in days rather than months. Our governance and advisory practice helps organizations establish the policy, guardrail, and oversight frameworks necessary to deploy AI capabilities responsibly inside enterprise work management environments.

    If the scenarios in this paper sounded familiar, we should talk.