Artificial Intelligence Policy
PURPOSE
The purpose of this policy is to define the acceptable use of Artificial Intelligence (AI) tools (e.g., ChatGPT, GitHub Copilot, Adobe AI) within the Health Informatics Centre (HIC) to ensure the confidentiality, integrity, and availability of organisational information assets, in compliance with required standards and applicable internal policies.
AI tools can enhance productivity by assisting with tasks such as drafting content (including text and images), conducting research, summarising documents, and supporting planning. However, their use must be responsible, secure, and in compliance with relevant data protection and organisational standards.
This policy is supported by team guides, each of which describes the specific use of AI within its operational team in HIC.
SCOPE
This policy applies to all employees, contractors, consultants, temporary staff, and third parties who use AI tools in the context of HIC's business operations.
RESPONSIBILITIES
| ROLE | RESPONSIBILITY |
|---|---|
| Team Leads | |
| HIC Staff | |
| HIC Clients, Third Parties and Suppliers | |
| Process Manager | |
DEFINITIONS
Application: The implementation of a service (web application, console application, etc.)
Data: Information held in electronic or paper form.
HIC Clients: Refers to an individual or organisation that receives services from Health Informatics Centre (HIC) and agrees to follow HIC's contractual obligations, policies, and procedures, ensuring compliance with legal, ethical, and professional standards.
Personal Data: Information relating to an identified or identifiable living person. The 8 Data Protection Principles in relation to protecting personal data are listed in the Policy document.
Policy: Overall intention and direction as formally expressed by management.
Service: Combination of people, processes, and technology to support a client's business.
POLICY
1. Definition of AI Tools
For the purposes of this policy, “AI tools” refers to publicly available or licensed Artificial Intelligence systems (both online services and offline applications). These include, but are not limited to:
Large Language Models (e.g. ChatGPT)
Code assistants (e.g. GitHub Copilot)
Content generation tools (e.g. Adobe AI)
2. Acceptable Use
AI tools may be used for the following purposes, provided they do not breach this policy:
Drafting non-sensitive communications (e.g. emails, reports)
Generating ideas for problem-solving or process improvement
Researching publicly available, non-confidential information
Summarising non-sensitive documents to extract key points
Supporting planning, scheduling, and workflow organisation
Offering suggestions for communication style and formatting
Support for code review or creation (e.g. boilerplate code, debugging errors)
Support for Research and Development activity (e.g. project work to validate machine learning (ML) models)
All outputs should be reviewed, validated, and approved by a human before use. See section 4 for further guidance on validation and responsibility.
3. Prohibited Use
AI tools integrated into applications must be user-controllable, with the ability to be enabled or disabled on demand, and must not run persistently in the background.
AI tools must not be used for the following:
Inputting or processing confidential, sensitive, proprietary, or personal data
Making autonomous decisions without appropriate human oversight
Making binding business decisions or commitments
Generating or sharing false, misleading, or unethical content
Any activity that violates legal, regulatory, ethical, or internal policy requirements
4. Security and Data Protection Requirements
Confidentiality: Do not input sensitive or personally identifiable information (PII) into AI tools unless specific approvals are provided by HIC Governance.
Integrity: AI-generated content must be validated for accuracy and appropriateness before use.
Accountability: Users remain fully responsible for any content or outcomes produced using AI tools.
Compliance: AI use must align with HIC’s Information Security Policy and applicable UK data protection laws.
Bias and Ethics: Review AI outputs for accuracy, bias, or discriminatory language.
5. Good Practices
Employees are responsible for ensuring all AI use is in line with this policy and for validating all AI-generated outputs.
Managers must ensure their teams are aware of this policy and monitor its implementation.
The Information Security Team will provide guidance, enforce compliance, and investigate incidents related to AI misuse.
6. Monitoring and Enforcement
HIC reserves the right to monitor AI usage to ensure compliance with this policy. Breaches may result in disciplinary action, including termination of employment or contract, and may lead to legal consequences.
APPLICABLE REFERENCES
N/A
DOCUMENT CONTROLS
| Process Manager | Point of Contact |
|---|---|
| Keith Milburn | |
| Revision Number | Revision Date | Revision Made | Revision By | Revision Category | Approved By | Effective Date |
|---|---|---|---|---|---|---|
| 1.0 | 01/07/25 | | Jenny Johnston, Keith Milburn | Material | Leadership Team | 07/08/25 |
| 1.1 | 03/11/25 | | Symone Sheane | Superficial | Governance Co-Ordinator: Symone Sheane | 03/11/25 |
Copyright Health Informatics Centre. All rights reserved. May not be reproduced without permission. All hard copies should be checked against the current electronic version within current versioning system prior to use and destroyed promptly thereafter. All hard copies are considered Uncontrolled documents.