
How we use AI: creating an AI policy to minimise risk

19th Jun 2025 | 9 min read

AI is no longer avoidable. If you think your employees aren’t using it, there’s a high chance that they are. You just don’t know it.

While AI does bring significant rewards when used correctly, it also brings risk. Much of that risk relates to how data is used: breaches and leaked IP can harm your reputation, competitive advantage and finances.

Despite the risk, data shows that over half of businesses still do not have an AI policy in place. These policies are crucial for governing AI usage across your business, giving clear guidelines on how it should and shouldn’t be used to protect your staff, customers and operations.

We crafted our AI policy to reduce the threat and protect our organisation, while still encouraging safe AI usage that drives value. Here’s how we did it, alongside our top tips for creating your own.

The challenge

As a business, we clearly see the worth of AI in driving capacity and enabling us to achieve more. However, in order to be a truly AI-driven business, we needed to mitigate any potential risks and ensure safety across our operations.

Common AI challenges we, like many other organisations, faced included:

  • Preventing data (including IP) from being inadvertently leaked beyond the organisation through AI tools
  • Handling customer data compliantly
  • Ensuring core business decisions remain fair and transparent, rather than subject to AI bias
  • Preventing AI tools from being exploited by criminals to gain access to our systems and data
  • Maintaining the accuracy of information given to employees, stakeholders or clients

Without an AI policy, battling these challenges would be an uphill struggle. We would have no grasp of which tools staff were using, how they handled data and how safe they were.

With open AI tools, shared data is often used to train the model, running the risk that sensitive information is regurgitated to those outside the organisation. This could put us at risk of exposing customer data or even giving away sensitive information to competitors. Data could also be leveraged by cyber criminals to fuel social engineering and attempted attacks.

If we didn’t have control over how AI was used internally, it could also impact quality. If people used inaccurate or biased tools for core decisions, business performance could decline, leading to headaches later. Plus, it would become harder to audit our processes in the event of an incident, bringing compliance into question.

Louise Otton, Head of Talent Development and Culture at Infinity Group, explains: “As AI became more known, people were starting to use different tools. But the problem was, without governance in place, there would be no way of us understanding what data is being used and how”.

How we created our policy

Our policy had to encompass two strands: HR and IT. IT would take care of the technical logistics and vetting of AI tools, while HR would cover what the policy meant for staff.

Rory Molloy, Internal IT Manager at Infinity Group, emphasises the importance of this combined approach: “We needed to be backed by a policy covering IT and HR. It meant that, when we did implement technical protocols to govern usage, this would be backed by the right processes to intervene if someone broke the rules. It also ensured the policy met our cultural values, getting the balance between our security and our employees’ freedom right”.

The IT foundations: ensuring safety

Our IT team has implemented robust technical safeguards as part of our overall IT security strategy. A key component of this is the deployment of Microsoft Defender for Cloud Apps. This system enables us to monitor and secure how cloud applications are being used, including AI tools, giving us valuable insights into potential risks.

Different apps are also scored against Microsoft’s unique criteria, which largely relate to how data is handled. If an app is given anything other than a green score, suggesting a risk, its use is not permitted under the policy.
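
To make that rule concrete, here’s a minimal sketch of the traffic-light gating logic in Python. It is purely illustrative: the risk bands, app names and default-deny behaviour are hypothetical examples, not Microsoft Defender for Cloud Apps’ actual scoring scale or API.

    # Illustrative sketch only: a simplified "green score or blocked" rule.
    # The risk bands and app list below are hypothetical examples, not
    # Microsoft Defender for Cloud Apps' real scoring scale or API.
    from enum import Enum

    class RiskBand(Enum):
        GREEN = "green"   # low risk: approved for use
        AMBER = "amber"   # elevated risk: blocked under the policy
        RED = "red"       # high risk: blocked under the policy

    # Hypothetical vetting results keyed by app name
    APP_RISK_BANDS = {
        "ApprovedAssistant": RiskBand.GREEN,
        "UnvettedChatTool": RiskBand.AMBER,
    }

    def is_permitted(app_name: str) -> bool:
        """An app is permitted only if it has been vetted AND scored green."""
        return APP_RISK_BANDS.get(app_name) == RiskBand.GREEN

    # Unknown (unvetted) apps are blocked by default
    for app in ("ApprovedAssistant", "UnvettedChatTool", "UnknownTool"):
        print(f"{app}: {'permitted' if is_permitted(app) else 'blocked'}")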

By restricting usage to AI tools and apps we know are safe, we limit the exposure of sensitive data and reduce the risk of breaches. This also protects valuable IP.

Strict access policies and monitoring systems are also in place to detect and prevent potential misuse of tools or use of unapproved apps. By having these protocols in place, we can also ensure compliance with the likes of GDPR.

The human element: cultivating an ethical AI culture

Next, our HR team established a comprehensive AI Ethics and Data Usage Policy that clearly outlines the principles, practices and boundaries for using AI and generative AI tools responsibly. This includes a focus on:

  • Ethical and transparent AI use
  • Strict data handling and privacy standards
  • Robust security protocols
  • Clear accountability and compliance measures
  • Specific guidelines (do’s and don’ts) for generative AI tools
  • Strategies for identifying and mitigating bias

The policy prioritises responsible innovation through several key measures. This includes a commitment to ethical AI use with principles of fairness, transparency and accountability, alongside efforts to ensure explainable AI decisions.

The policy also places an emphasis on AI training, so we can ensure people have the skills and awareness they need to leverage AI correctly, mitigating the risk and empowering them to excel.

Everyone across the business has been asked to review and sign this policy, which also holds them accountable to the IT element. This ensures everyone is aware of what’s expected of them and makes it easier to follow up if an issue arises.

The results

The implementation of our comprehensive AI policy is already driving safety across our organisation while enabling us to achieve our technology vision.

By establishing clear guidelines and deploying advanced security tools like Microsoft Defender for Cloud Apps, we have successfully strengthened our security baseline. This enhanced posture provides a robust defence against potential data breaches, insider threats and compliance risks, ensuring our sensitive information remains protected.

Secondly, and perhaps most importantly, this structured approach has been instrumental in allowing us to push internal AI initiatives and truly be AI-first. With a clear framework for responsible use, teams are empowered to explore and scale AI efforts with confidence, knowing that ethical and security considerations are proactively addressed.

This shift in focus enables us to concentrate on the rewards of AI rather than being solely preoccupied with the risks, fostering an environment where innovation can thrive securely and strategically.

What to consider in your AI policy

Having your own internal AI policy is crucial in the AI era. Here are our top tips for building yours.

1. Ensure business-wide alignment

Don’t create your policy in a silo. In our experience, the IT and HR unification was crucial to creating our policy, but this may extend even further.

By ensuring alignment, you can connect the policy with the overall goals and values of your entire business. This involves engaging stakeholders from different departments (like IT, HR, legal and individual business units) to ensure the policy reflects the diverse needs and perspectives within your organisation.

A policy that is well-understood and supported across the board is more likely to be effectively implemented and adhered to.

2. Tailor the policy to your context

No two organisations’ AI policies should be the same. Yours should be specifically tailored to your unique culture, operations and risk appetite.

Consider the types of data you handle, the AI tools you are likely to use and the specific challenges and opportunities relevant to your industry.

A bespoke policy will be more practical and effective in the long run, allowing you to choose tools and guidelines that meet your risk criteria and needs.

3. Thoroughly review AI tool terms and conditions

Before integrating any AI tool, it’s essential to carefully examine its terms and conditions.

Pay close attention to clauses related to data privacy, intellectual property, data usage rights and security responsibilities. Ensure that the tool’s terms align with your internal policies and legal obligations. This proactive step can prevent potential compliance issues and unexpected risks down the line.

Most tools should share their T&Cs on their website, so spend time reading through them. Remember to check whether they apply to both paid and free versions of the tool. If no terms are available, it’s wise to avoid using the tool.

4. Establish strong data governance

Effective data governance is foundational to a responsible AI policy. It involves establishing clear accountability for data management, defining protocols for the entire data lifecycle (from collection to disposal) and ensuring data accuracy.

Key elements of data governance include maintaining explicit data retention and deletion policies to minimise unnecessary data storage and ensure timely and secure disposal. You may also want to restrict company data access solely to authorised personnel, based on their roles and responsibilities.

Finally, data governance necessitates that all data handling practices consistently align with internal confidentiality standards. This requires clear communication of these standards, coupled with training and procedures that ensure sensitive information is protected throughout its lifecycle, particularly when used in conjunction with AI tools.
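
As a minimal illustration of the role-based restriction described above, here is a short Python sketch. The roles, datasets and mappings are hypothetical examples, not a prescription for modelling your own organisation.

    # Illustrative sketch only: restricting company data access by role.
    # Roles and datasets are hypothetical examples.
    ROLE_PERMISSIONS = {
        "hr": {"employee_records"},
        "finance": {"invoices", "payroll"},
        "marketing": {"campaign_metrics"},
    }

    def can_access(role: str, dataset: str) -> bool:
        """Only roles explicitly granted a dataset may access it."""
        return dataset in ROLE_PERMISSIONS.get(role, set())

    assert can_access("finance", "payroll")
    assert not can_access("marketing", "payroll")  # denied by default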

5. Implement continuous monitoring

Your AI and data usage policy shouldn’t be a static document. It’s vital to establish processes for continuous monitoring of AI usage, data handling practices and the effectiveness of your policy.

This includes tracking compliance, identifying potential risks or breaches, and staying informed about new AI developments and evolving regulations. Regular monitoring allows you to adapt your policy proactively and ensure its ongoing relevance and effectiveness.
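
As one possible illustration, the Python sketch below flags usage of unapproved tools from a log. The log format and app names are hypothetical examples, not output from any specific monitoring product.

    # Illustrative sketch only: flagging use of unapproved AI tools from a
    # usage log as part of continuous monitoring. The log format and app
    # list are hypothetical examples.
    APPROVED_APPS = {"ApprovedAssistant"}

    usage_log = [
        {"user": "alice", "app": "ApprovedAssistant"},
        {"user": "bob", "app": "UnvettedChatTool"},
    ]

    # Surface each policy breach for human follow-up
    for entry in usage_log:
        if entry["app"] not in APPROVED_APPS:
            print(f"Review needed: {entry['user']} used unapproved app {entry['app']}")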

Explore AI without risk

We know first-hand that AI can have significant rewards for your business, including time-savings and preventing staff burnout. However, it’s critical to understand risk factors and address them to get true value.

If you’re looking to embed AI safely alongside your AI policy, we’ve got resources to help you.

We’re also always happy to talk to you about our AI policy and the lessons we’ve learnt so far. Just reach out below.


We would love to hear from you

Our specialist team of consultants look forward to discussing your requirements in more detail and we have three easy ways to get in touch.

Call us: 0345 450 4600
Complete our contact form
Live chat now: via the pop-up

