👋 Introduction

Artificial Intelligence / Machine Learning (AI/ML) models have introduced new demands on TRE providers. As discussed in this article, whether a model is being brought into or taken out of the TRE, it is crucial to assess the risk posed by any individual-level data held within the model. Our primary goal is to ensure the safety of all data within the TRE and to mitigate risk.

...

Info

If a request is for an AI/ML model transfer from Project A to Project B within the TRE (in segregated workspaces), it will be treated as a separate output request (egress) and input request (ingress). This ensures data security is maintained in both TRE projects.

...

➡️ Requesting AI/ML model into the TRE

Working with Users, we aim to understand project requirements, and we will have already asked you whether you intend to request that a trained AI/ML model be brought into our TRE. This request should be made through HIC Support. We focus on three key elements:

...

  • We will work with you to understand your project's requirements for AI/ML methods. If these change part-way through the project, the project governance may need to be re-assessed, which may require additional resources and incur additional costs.

...

⬅️ Requesting AI/ML model out of the TRE

As with other output requests, you must go through our ‘egress’ process to take your model out of the TRE. There are two elements we focus on:

...

Note

We reserve the right not to permit an AI/ML model to be provisioned in the TRE and/or released from the TRE.

...

🔍 Disclosure control of AI/ML models

📚 Risk Assessment

There are various factors that we will consider when assessing the risk impact:

  1. Safe People: We expect Users working with these methodologies to understand the associated risks. We will discuss the process and risks with you to ensure you understand the reason behind this risk impact assessment. It is crucial that you have read our Guidance on Artificial Intelligence and Machine Learning Models in our TRE and have an up-to-date signed TRE User Agreement.

  2. Data Minimisation: We validate the data used to train the model, checking that pseudonymisation has been applied and reviewing any derived data (an illustrative check is sketched after this list).

  3. Contractual Agreement: If a third party is involved, appropriate documentation may be required.

  4. Ethics and transparency: We ensure that appropriate information governance is in place from the User Organisation, especially when a third party is involved. Depending on the TRE User and information governance pathway, we will check compliance with all relevant ethical standards.

  5. Data Protection Impact Assessment (DPIA): A DPIA helps to identify and reduce risks to project data. A project-specific DPIA can be crucial for managing data security effectively.
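
To illustrate the kind of data minimisation check described in point 2 above, the minimal Python sketch below scans a training extract for columns whose names suggest direct identifiers. The file name, column-name patterns and the flag_identifier_columns helper are all hypothetical assumptions for illustration; HIC's actual validation goes well beyond a column-name scan.

    # Illustrative sketch only: flag columns in a training extract whose names
    # look like direct identifiers. Patterns and file name are hypothetical.
    import re
    import pandas as pd

    IDENTIFIER_PATTERNS = [r"chi", r"name", r"postcode", r"dob", r"date_of_birth", r"nhs"]

    def flag_identifier_columns(df):
        """Return column names whose headers match an identifier-like pattern."""
        flagged = []
        for col in df.columns:
            if any(re.search(p, str(col).lower()) for p in IDENTIFIER_PATTERNS):
                flagged.append(col)
        return flagged

    if __name__ == "__main__":
        training_data = pd.read_csv("training_extract.csv")  # hypothetical file
        suspicious = flag_identifier_columns(training_data)
        if suspicious:
            print("Columns to review before training or egress:", suspicious)
        else:
            print("No identifier-like column names found (not proof of safety).")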

👀 Model Assessment: Disclosure control of AI/ML

As TRE providers, we perform a ‘white box attack’ on models. This means that we have full access to the model and its parameters, allowing us to perform a direct assessment and a range of different attacks. When assessing the model, we will consider the following (an illustrative check is sketched after this list):

...
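
As a rough illustration of one white-box-style check, the sketch below compares a fitted model's per-record loss on its training data against its loss on held-out data; a clear gap can indicate that the model has memorised individual records and is more exposed to membership inference. The model, data and per_record_loss helper are hypothetical placeholders, not HIC's actual assessment procedure.

    # Minimal sketch: loss-gap check between training and held-out records.
    # All data and the model here are synthetic placeholders for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def per_record_loss(model, X, y):
        """Per-record cross-entropy loss for a fitted classifier."""
        probs = model.predict_proba(X)
        return np.array([-np.log(probs[i, label] + 1e-12) for i, label in enumerate(y)])

    # Hypothetical data: replace with the project's training and hold-out sets.
    rng = np.random.default_rng(0)
    X_train, y_train = rng.normal(size=(200, 5)), rng.integers(0, 2, 200)
    X_holdout, y_holdout = rng.normal(size=(200, 5)), rng.integers(0, 2, 200)

    model = LogisticRegression().fit(X_train, y_train)

    train_loss = per_record_loss(model, X_train, y_train)
    holdout_loss = per_record_loss(model, X_holdout, y_holdout)

    # If training records are clearly separable from unseen records by loss
    # alone, a membership inference attack is more likely to succeed.
    print(f"mean loss (train)   : {train_loss.mean():.3f}")
    print(f"mean loss (holdout) : {holdout_loss.mean():.3f}")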

Info

As with all HIC processes, documentation, communication and evidence are stored on our project management system. This creates an audit trail and is crucial for compliance with our security standards.

...

For queries or comments regarding HIC How To Articles, contact HICSupport@dundee.ac.uk
