Are employees at your company surreptitiously using artificial intelligence tools like ChatGPT, Claude, Copilot, and Gemini for everyday business tasks? It’s likely. An October 2024 Software AG study found that half of all employees use “shadow AI” tools to enhance their productivity, and most would continue using them even if explicitly banned by their employer.

Increased productivity is a good thing, but unsanctioned and unregulated AI use poses risks. A February 2025 TELUS Digital survey found that 57% of enterprise employees admit to entering high-risk information into publicly available chatbots. This includes personal data about employees or customers, product or project details, and confidential financial information like revenues, profit margins, budgets, and forecasts.

A clear AI policy will help a business minimize the risks of using AI tools. These risks include leaks of confidential information, compliance failures, accidental copyright violations, and reputational damage. As AI becomes a routine part of knowledge work, every business—even small firms—must establish an AI policy to maximize the benefits of using AI while safeguarding the company, its employees, and its clients.

Risks Addressed by a Formal AI Policy

Unauthorized AI use can create several types of problems:

  • Data leaks: employees may paste confidential or personal information into public chatbots, outside the company’s control
  • Compliance failures: sharing regulated data with unapproved tools can violate privacy laws and industry rules
  • Copyright violations: AI-generated content may accidentally reproduce protected material
  • Reputational damage: publicized misuse of AI can erode the trust of clients and employees

Essential Elements of an AI Policy

The specifics of an AI policy vary by the type and size of company, but at minimum, most AI policies should include the following:

  • Which AI tools are approved for business use
  • What data must never be entered into AI tools, such as personal, financial, or other confidential information
  • Requirements for reviewing AI-generated output for accuracy and potential copyright issues
  • Compliance obligations relevant to your industry
  • Training expectations and consequences for violating the policy

Building Your AI Policy

If your company doesn’t already have an established process for generating policies, AI tools can themselves provide a starting point when used thoughtfully. Here’s an approach:

  1. Prompt an AI tool like ChatGPT or Claude to generate a basic AI policy template. Be explicit about your company’s size, industry, and other relevant details, and be sure to specify that it must cover the elements listed above—you can paste them in. Iterate as necessary until the template has all the required sections.
  2. Review the generated template carefully, removing generic content and noting areas that need company-specific details.
  3. Ask for feedback on the draft from key stakeholders, including:
    • Leadership to align with company goals and values
    • IT team to verify technical feasibility and security measures
    • Legal counsel to ensure compliance with relevant regulations
    • Department heads to confirm the policy will be practical to implement
  4. Incorporate the feedback to create a policy that reflects your company’s specific needs while maintaining necessary protections.
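If you prefer to script step 1 rather than type the prompt by hand, the Python sketch below assembles such a prompt from your company details and required sections. The company size, industry, and section names are illustrative placeholders, not recommendations; the sketch only builds the prompt text, which you would then paste into ChatGPT or Claude (or send through an API).

```python
# Sketch: assemble a prompt asking an AI assistant to draft an AI-use
# policy template. All specifics below are illustrative placeholders.

REQUIRED_SECTIONS = [
    "Approved tools",
    "Prohibited data (personal, financial, confidential)",
    "Review requirements for AI-generated output",
    "Compliance obligations",
    "Training and enforcement",
]

def build_policy_prompt(company_size: str, industry: str, sections: list[str]) -> str:
    """Return a prompt for generating an AI-use policy template."""
    section_list = "\n".join(f"- {s}" for s in sections)
    return (
        f"Draft an AI-use policy template for a {company_size} company "
        f"in the {industry} industry.\n"
        "The policy must include the following sections:\n"
        f"{section_list}\n"
        "Mark any places that need company-specific details with [TODO]."
    )

prompt = build_policy_prompt("50-person", "accounting", REQUIRED_SECTIONS)
print(prompt)
```

Keeping the required sections in one list makes it easy to iterate: adjust the list, regenerate the prompt, and re-run it until the template covers everything.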

Remember: An AI-generated template is only a starting point for the conversation. The final policy must be tailored to your organization’s specific needs and thoroughly vetted by relevant stakeholders.

The rise of AI tools in the workplace isn’t just a trend—it’s a fundamental shift in how work gets done. Whether your employees are already using AI tools without oversight or are hesitant to use them due to uncertainty, now is the time to establish a formal AI policy. Start with the template approach outlined above, engage your stakeholders, and develop guidelines that work for your organization. A well-crafted AI policy will help your business harness the benefits of AI while minimizing its risks.

(Featured image by iStock.com/girafchik123)


Social Media: Shadow AI is commonplace in workplaces, with half of employees using unauthorized AI tools and many sharing sensitive data. Learn why your business needs a formal AI policy to harness the benefits of AI while safeguarding against its significant risks.