HELSENORGE

Ethical assessments

Ethical assessments in research 

When planning and carrying out research projects, ethical issues must be assessed thoroughly, including whether the benefit of the research outweighs the disadvantages for participants. 

Regional committees for medical and healthcare research ethics (REK) assess applications pursuant to the Research Ethics Act and the Health Research Act. Projects that fall within the category of medical and healthcare research must apply to REK for prior approval of research on humans, human biological material or health information. 

Ethical principles for artificial intelligence 

In the field of artificial intelligence, several bodies, notably the EU and the World Health Organization, as well as Norway's national AI strategy, have formulated ethical principles that should be considered during development and implementation. Based on these, a set of ethical principles recommended for AI projects in Helse Nord has been drawn up. They are explained in Helse Nord's strategy for artificial intelligence 2022–2025. 

Many of the principles form the basis of the EU's upcoming AI legislation, so they may to a large extent become legally binding in the future. Below is a summary of the ethical principles as discussed in Helse Nord's AI strategy:

Responsibility and accountability 

The development and use of AI should be organized in a way that ensures clear lines of responsibility. Internal control and external supervision should be facilitated in all phases of development. This means that transparency in algorithms, datasets, and design and development procedures must be ensured throughout the development and life cycle of an AI system. All projects should be based on proportionate risk and consequence analyses. Such analyses evaluate, for example, patient safety, ethics, information security, and privacy. 

The use of AI in clinical practice should be in line with professional standards. Work processes involving AI should be documented. Organizations that use AI must ensure that employees can use the systems appropriately. This includes responsibility for training, establishing procedures for appropriate use, and ongoing monitoring and quality assurance of AI use. In addition, each healthcare professional must be aware of their independent responsibility for proper professional practice, cf. the Health Personnel Act (helsepersonelloven) § 4. The duty of care may require caution with recommendations from AI systems, particularly in an early implementation phase. 

Reliability

AI systems must meet high standards of accuracy and stability over time. This is crucial to avoid unintended errors and disadvantages for patients and healthcare professionals. Reliability also means that work processes in which AI is used must be adapted so that the consequences of system downtime or errors are minimized. The introduction of AI should aim to increase the quality of services for the benefit of patients. 

Transparency and interpretability 

Development and use of AI must take place in a transparent manner. Transparency involves, among other things, openness about the content of training data and where the data originate, about the system's logic, and about the fact that AI is being used and for what purposes. When developing AI, interpretable algorithms and systems must be pursued as far as is feasible given the current state of technology and science ("state of the art"). Interpretability means that, as far as possible, a justification should be given for the system's conclusions and recommendations. As a minimum, the rationale must be comprehensible to healthcare personnel and others who will use the system.

Self-determination and human control 

The introduction of AI in clinical operations must not come at the expense of patients' right to self-determination and participation. Nor must AI override the professional autonomy of healthcare personnel. Patients have the right to meaningful information about their own state of health and the content of their health care. 

Work processes involving the use of AI must facilitate human control. Information or recommendations from AI systems that support decisions must normally be subject to a real professional assessment by qualified healthcare personnel before decisions are made. The systems must be designed with user interfaces that give users control while the systems are in use. It must be possible for professional users to override AI systems when necessary. 

Diversity and non-discrimination 

In the development and use of AI, efforts should be made to counteract the marginalization of vulnerable groups in society, including various minority groups. The risk that the quality of healthcare provided with AI may vary depending on group membership should be assessed. The assessment should be documented and followed up with preventive measures. 

When using machine learning to train algorithms, a diverse dataset should be sought. Consideration must be given to the population composition in areas where an AI system is to be used. At the same time, it must be considered that a dataset that corresponds to the representation of different groups in a population is not always sufficient to achieve AI systems that work well for the various groups. Minorities in a population will usually be minorities in the datasets used to train algorithms. Therefore, measures should be considered to ensure the quality of AI-based services for minority groups. If AI systems have different accuracy for different groups, this should be documented and disclosed in the documentation accompanying the system, as well as in the user manual. 
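The recommendation above, that differences in accuracy between groups should be documented, can be sketched as a simple disaggregated evaluation. The following is a minimal illustration only; the group labels and records are hypothetical placeholders, not data from any real system:

```python
# Sketch: computing accuracy separately per group so that differences
# can be documented. Records are (group, prediction, ground_truth) tuples;
# all values here are illustrative.
from collections import defaultdict

def accuracy_by_group(records):
    """Return a dict mapping each group label to its accuracy."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, prediction, truth in records:
        total[group] += 1
        if prediction == truth:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 1),
]
print(accuracy_by_group(records))
```

A gap between the per-group figures, as in this toy example, is exactly the kind of result the principle says should be disclosed in the system documentation and user manual.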

Helse Nord has a particular responsibility for the Sami population in Northern Norway. The development and use of AI in Helse Nord should take into account special conditions that apply to the Sami population. 

Sustainability 

Storing and using large amounts of data, and training algorithms, can involve significant energy consumption, which may affect the climate and the welfare of future generations. The consequences for the climate and future generations must be given weight both when prioritizing AI projects in Helse Nord and when implementing them. This means, for example, that emission-neutral energy sources and other climate-friendly solutions should be chosen where available.