From chatbots to image generators, generative AI is rapidly changing the way we work. But without clear rules, it can also create real risks. That’s why LEX HR has developed a comprehensive Generative AI Policy you can adopt or adapt, giving your team clear, practical guidance on how to use AI tools responsibly, safely, and effectively in the workplace.
Generative AI Policy
Purpose and Scope
1.1 This policy explains how our organisation manages the use of generative artificial intelligence systems within our workplace. It outlines clear standards and provides guidance to everyone on how to work responsibly with generative AI tools. All departments and teams are expected to follow the points set out in this document.
1.2 This policy covers all forms of generative AI, such as tools that produce text, images, code, or other content in response to user prompts. It applies to all staff, contractors, volunteers, and any other individuals who carry out work on our behalf.

Definitions
2.1 “Generative AI” refers to computer systems that analyse data and produce new content closely resembling human-created output. Examples include systems that create written text, visual artwork, or code snippets when prompted.
2.2 “Sensitive material” means any content about people’s private matters, our organisation’s strategies, finances, or any other details that could cause harm, embarrassment, or unfair advantage if made public.
2.3 “Confidential data” refers to internal plans, client information, project briefs, trade details, or similar content that should be kept private.

Approved Tools and Access
3.1 The organisation will provide clear information on which generative AI tools are approved for use. This may include online platforms, installed software, or other applications specifically chosen to suit our working environment.
3.2 Staff must not use any generative AI tools that are not approved or listed by the organisation. If anyone wishes to test a new tool, they are asked to speak with their manager or a technical advisor first.
3.3 Access to generative AI tools will be monitored to ensure compliance with this policy. Checks may be necessary to confirm that only approved features are being used.

Acceptable Use
4.1 Staff may use generative AI tools to improve efficiency, research new ideas, create first drafts, summarise text, or assist with routine tasks. When used properly, these tools can reduce repetitive work and free up time for more detailed activities.
4.2 Staff should apply critical thinking when using outputs from generative AI. Content offered by these tools can sometimes be inaccurate or lack sufficient detail. Always remain aware that the final responsibility for decisions or content lies with the user, not the system.

Unacceptable Use
5.1 Generative AI tools must not be used to undermine, mock, or damage the reputation of the organisation, clients, colleagues, or any other parties.
5.2 Staff must not post snippets of our private plans, confidential data, personal employee details, or other restricted information into public-facing tools. This includes project details, internal documents, or any identifiable data related to our workers or clients.
5.3 Staff should not rely on generative AI outputs as the sole basis for making critical decisions about colleagues, clients, or projects. Always double-check the accuracy of any content generated and, where appropriate, consult a manager or another qualified member of staff.
5.4 Staff are prohibited from using generative AI tools to generate any form of harassing, discriminatory, or inappropriate content. This includes text or images that reflect bias, or that could potentially offend or harm individuals or groups.
5.5 The tools must not be used for repeated personal tasks unrelated to the organisation’s work, such as non-work research or personal creative experiments that distract from core duties.

Responsibilities
6.1 Managers’ Responsibilities
6.1.1 Managers must ensure that their teams understand and follow this policy. This includes explaining proper usage, letting staff know about the tools allowed, and offering ongoing support where necessary.
6.1.2 Managers are in charge of identifying any duties within their teams that may benefit from the use of generative AI and guiding staff in adopting these tools. They must also address any misuse swiftly and consistently.

Employees’ Responsibilities
7.1 All staff must uphold this policy when selecting, using, or experimenting with generative AI in their work.
7.2 Staff must check and validate any important output from a generative AI tool by reviewing external sources or consulting subject specialists before sharing it publicly or internally.
7.3 When in doubt about whether a use of the tool is suitable, staff must ask a manager or a suitable advisor for guidance.

Data Protection and Confidentiality
8.1 Staff must be careful when entering prompts or questions, especially if these prompts contain confidential details. Even short phrases or examples of internal data might end up stored in external systems.
8.2 Anyone uncertain about whether certain data is permissible to include in tool prompts should seek advice from a manager or a data specialist within the organisation.
8.3 Any outputs that could reveal confidential information must be safeguarded with the same level of care required for internal documents. Do not share these outputs without confirming they do not contain sensitive content.

Quality Control
9.1 Generative AI can produce errors or partial information. Before any content is used or shared, ensure it is correct, updated, and fits our established approach or viewpoints.
9.2 Staff should proofread all text generated by AI tools to identify possible inaccuracies, incomplete elements, or text that doesn’t reflect our professional tone.
9.3 If the generated content appears unusually biased, out of date, or offensive, discontinue using that output and inform the relevant manager immediately.

Training and Support
10.1 The organisation will arrange basic training to help staff understand and use generative AI in ways suited to our work. This may include online tutorials, in-person demonstrations, or written guidelines.
10.2 Staff are encouraged to share tips and experiences of using generative AI with their teams, so that best practices can be learned collectively.
10.3 Anyone struggling with the tools or unsure how to apply them to their tasks can request additional help or guidance from a manager or a technical advisor.

Intellectual Property
11.1 The organisation retains ownership of any final work product created or refined by a member of staff for business purposes, whether the content was originally drafted by generative AI or not.
11.2 Staff must not submit any text or data created by another person or party to generative AI tools without checking that sharing is acceptable. If unsure, consult a manager in advance.
11.3 When staff adapt or merge AI-generated materials into organisational products, references or acknowledgements may be required in certain contexts. Clarify with your manager when such credit might be needed.

Ethical Considerations
12.1 Staff should remain mindful of the indirect effects generative AI might have on work tasks, such as potential shifts in workload, reliance on automated suggestions, or changes in creative processes.
12.2 We encourage everyone to keep pace with developments in AI and propose ways to improve our adoption of these tools for the good of the organisation. However, no one should attempt large-scale changes without consultation.
12.3 Generative AI should never diminish collaborative teamwork nor replace considered human judgement when we make important decisions. Where a matter might affect staff members, clients, or public trust in our organisation, a balanced approach by human decision-makers is required.

Security Measures
13.1 Our IT department will regularly review and test any AI tools that we permit to confirm they meet minimum security requirements.
13.2 If staff suspect that a generative AI tool has led to a security problem or has put confidential details at risk, they must report this to their manager and the relevant technical team immediately.
13.3 Staff should be aware that certain AI tools might retain and analyse user prompts for future improvement. This can risk the privacy of business-related data. Always remain cautious about what information you input.

Monitoring and Review
14.1 The organisation may log the activity of generative AI systems to ensure that use remains compliant with this policy. By continuing to work here, staff acknowledge that monitoring might take place for legitimate reasons.
14.2 Regular assessments of generative AI usage will be carried out. Findings will guide future updates to this policy, highlight the need for staff training, and identify any further security measures.
14.3 We will keep an eye on published guidelines and evolving common practice. As new standards and suggestions for managing generative AI arise, we will review and update this policy accordingly. Staff may be asked to read or sign revised versions.

Managing Concerns and Reporting
15.1 Staff should feel comfortable raising questions or concerns related to generative AI. Where there is any confusion about appropriate or safe usage, please speak to your manager to seek clarity.
15.2 If anyone becomes aware of or suspects a breach of any rules in this policy, they must promptly report it so that the matter can be addressed.
15.3 The organisation will deal with suspected misuse in a fair and consistent way. Line managers will look into the circumstances, and any necessary action will be taken according to our standard procedures.

Personal Projects and External Activities
16.1 Staff are generally free to experiment with generative AI for their own private interests outside official work hours and away from work devices, provided these personal activities are not presented as, or mistaken for, official business outputs.
16.2 Personal use should never risk disclosing the organisation’s confidential details or be presented in a manner suggesting that it embodies our official viewpoint.
16.3 Should staff become aware that personal use of generative AI might conflict with our professional interests or reputation, they should speak to a manager immediately for guidance on how best to proceed.

Future Developments
17.1 Generative AI technology is evolving. Tools may rapidly change or be replaced by new solutions. The organisation reserves the right to add, remove, or alter the approved tools list based on changes in reliability, cost, safety, or other relevant factors.
17.2 As part of our commitment to innovation, we encourage staff to remain aware of emerging developments. Suggestions for adopting or trialling new tools are welcome, but any proposals should be directed to managers so that risk and suitability can be properly examined.

Good Practice Guidelines
18.1 Double-check any AI-generated text for errors. Do not directly copy material without reading it thoroughly.
18.2 Maintain a polite and non-offensive style when using system prompts, remembering that these prompts and generated content may be seen by management if problems arise.
18.3 Keep in mind that a generative AI system may recycle patterns seen in public data. Be vigilant about revealing experiences or details that might be unique to our organisation.
18.4 Stay alert for any bias in AI-produced results. If the AI output consistently shows any unfair patterns, inform a manager so we can take corrective steps.

Consequences of Breach
19.1 Any misuse of generative AI or breach of this policy may be treated as a serious matter. We rely on staff to use these tools in line with professional standards and our organisation’s guiding principles.
19.2 Minor or accidental breaches may result in additional training or support, but serious misuse could lead to formal actions.
19.3 Managers will review each breach on a case-by-case basis, considering factors such as damage to the organisation, risk to clients, and whether the individual sought to understand policy rules beforehand.

Implementation and Communication
20.1 This policy will be shared with everyone who works for or with our organisation. It must be easily available, and staff are encouraged to read it thoroughly.
20.2 If any part of this document seems unclear or too general, staff are asked to raise their queries promptly rather than make assumptions.
20.3 All new starters or external contractors will be guided on the basics of this policy as part of their introduction to the organisation’s working style.

Ongoing Feedback
21.1 We value feedback about this policy and encourage staff to suggest improvements. Generative AI is an area of quick change, so open discussion helps us respond effectively and keep practices current.
21.2 If staff have ideas for better ways to introduce AI into everyday tasks, or new areas where it might help us grow as an organisation, they can direct suggestions to their line manager.
21.3 Our commitment is to build a safe, collaborative environment where AI’s benefits can be enjoyed responsibly, in line with ethical habits and mutual respect for one another’s work.

Review Date
22.1 This policy will be reviewed at least once a year. If significant changes in generative AI occur sooner, updates may happen more often. Any revised version will be distributed through our usual channels, and staff may be required to confirm their acceptance of any changes.

Document Control
23.1 This policy takes effect from the date of distribution.
23.2 Managers and designated leads will arrange for any updated copies to be provided to staff and ensure older revisions are removed from circulation.

Contact for Questions
24.1 Uncertainty or unfamiliarity should never prevent staff from asking for guidance or clarification. If managers are uncertain about how to respond, they should raise the question with more senior leads or relevant advisors.
24.2 Through transparent discussion, we aim to ensure a clearer understanding of generative AI, balancing innovation and caution, and ensuring a supportive environment for everyone who works here.