How standards enable the delivery of trustworthy healthcare

Digital transformation has accelerated across industries since the onset of the COVID-19 global pandemic, as the world continues to adapt to new ways of living, working, learning and delivering key services, such as healthcare.

For example, artificial intelligence (AI) triage systems have been used increasingly in crowded hospitals to collect patient information (symptoms and medical history), compare it with similar cases and decide quickly who will be treated first. Meanwhile, telemedicine has reduced the number of face-to-face consultations in healthcare facilities, allowing workers to focus on COVID-19 patients.

AI technologies enhance healthcare and raise concerns

Machine learning algorithms outperform humans at finding patterns in large amounts of data, enabling the early detection of diseases and conditions such as cancer, sepsis, age-related macular degeneration and stroke. In addition to providing more accurate diagnoses and tailored treatments, there is great potential for AI technologies in other areas, such as pathology and laboratory work, where huge data sets remain untapped.

Despite these benefits, there are concerns. Patients will need to trust the new technologies that healthcare professionals use for diagnosis and treatment, and be confident in the privacy and security of their personal medical data.

These are some of the challenges faced by authorities, regulators, doctors, healthcare service providers, insurers and patients with the ongoing digitalization of healthcare.

e-tech spoke with Georg Heidenreich, Convenor of the IEC and ISO joint working group for safe, effective and secure health software and health IT systems, including those incorporating medical devices, and with Martin Meyer, liaison representative of the IEC and ISO joint committee for artificial intelligence to IEC TC 62, which covers electrical equipment in medical practice, to find out how international standards can help address these issues.

What are some of the key issues around algorithms in healthcare?

GH: How do AI systems learn? In one type of system, the algorithm is trained by the manufacturer, validated and shipped to the market as a medical device bearing the CE mark (for European conformity), which indicates that safety requirements have been met. But even with this type of system, we cannot see the reasoning behind what the machine does; its behaviour remains a black box.

What if systems could learn continuously without scope limitations once deployed, and how would they be controlled? In the US, the Food and Drug Administration (FDA) is considering a total product lifecycle-based regulatory framework for medical devices which use AI and machine learning technologies. This would allow modifications based on real-world learning and adaptation, while ensuring the safety and effectiveness of the software as a medical device. The FDA has also launched the Digital Health Center of Excellence, which aims to rapidly review, categorize and clear cutting-edge digital health technologies in the US.
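To make the distinction concrete, here is a minimal sketch in Python, using the scikit-learn library and invented toy data (both are illustrative assumptions, not drawn from any regulatory framework). It contrasts a "locked" algorithm, frozen after pre-market validation, with a continuously learning one whose behaviour keeps shifting after deployment:

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Toy stand-in for the manufacturer's pre-market training data.
X_train = rng.normal(size=(1000, 5))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

# "Locked" algorithm: trained, validated, then frozen at release.
locked = SGDClassifier(loss="log_loss", random_state=0)
locked.fit(X_train, y_train)

# "Continuously learning" algorithm: identical at release...
adaptive = SGDClassifier(loss="log_loss", random_state=0)
adaptive.fit(X_train, y_train)

# ...but it keeps updating on data encountered in the field. Each
# update changes its behaviour without any new validation step.
X_field = rng.normal(size=(100, 5))
y_field = (X_field[:, 0] + X_field[:, 1] > 0).astype(int)
adaptive.partial_fit(X_field, y_field)

x_new = rng.normal(size=(1, 5))
print("locked prediction:  ", locked.predict(x_new))
print("adaptive prediction:", adaptive.predict(x_new))  # may now differ

The regulatory question raised above is precisely who validates those post-deployment updates, and against what benchmark.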

Data sets are key. What type of data sets have AI systems been trained on and what are the limitations? For example, if a patient presents with a rare disease, would there have been sufficient data to train the system to recognize such a disease, and how can we avoid inherent bias? Looking ahead, what role would a human doctor play as AI systems become more widely adopted and sophisticated? For instance, would human input be required at different points of the use of such systems?

Currently, if some algorithms continue to learn and then diagnose more patients, those patients are in fact subject to a device that has been trained without the validation that would normally be assured by supervised clinical trials. In other words, the algorithm has been trained on data sets that have not been checked for errors.

MM: While continuously adaptive systems are technically possible, they are a challenge from the regulatory perspective and a societal concern. What is society willing and able to accept, and what do patients consider trustworthy? Imagine that in ten years your doctor is still using the instruments of a decade earlier rather than advanced technologies; you may question whether you are getting the best treatment possible.

How can we find a balance between protecting data and enabling innovative healthcare?

The European Union's General Data Protection Regulation (GDPR), applied since 2018, protects any data related to a person's physical or mental health, genetic data, such as lab results from the analysis of a biological sample, and biometric data, for instance facial images, fingerprints, gait traits and more.

In the US, the Health Insurance Portability and Accountability Act (HIPAA), enacted in 1996, protects personally identifiable information held by the healthcare and health insurance industries from fraud and theft, and addresses limitations on health insurance coverage.

MM: A lot of this boils down to what data is available to build such systems and allow them to evolve. While consumers are protected by the GDPR, it is difficult for ethical companies to access data and to know how to ensure it is used correctly.

Again, this is more of a societal concern. What personal medical data are you willing to make available and how much do you trust the infrastructure? If you agree to upload your latest health exam results to your e-patient history file, trust is key.

All levels of healthcare providers must be covered. In addition to the manufacturers and companies that harvest and use data to develop machine learning-based AI systems, this must include general practitioners, hospital wards and health data spaces, to avoid situations where personal patient data could be misused unintentionally.

GH: People want affordable, safe healthcare. To improve healthcare for providers and patients, society needs to find a new balance between innovation and privacy, one that allows AI technologies to use large data collections that have been properly anonymized.

Proper anonymization can be a tricky task. Even if we remove personally identifying information, such as names or birth dates, from a set of records, other attributes, such as hair, skin or eye colour, or body size, could still allow someone to re-identify individuals when the group of participants is small enough.

Standards may help by providing best practices to achieve this.
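One common way to reason about this re-identification risk is k-anonymity: an individual is hidden only if at least k records share the same combination of indirect attributes, the so-called quasi-identifiers. The following minimal sketch in Python, using invented attribute names and records purely for illustration, shows how a data holder might measure the k-anonymity of a data set before releasing it:

from collections import Counter

# Hypothetical records with names and birth dates already removed.
# The remaining attributes can still single people out in small groups.
records = [
    {"eye_colour": "green", "body_size": "small",  "region": "A"},
    {"eye_colour": "brown", "body_size": "medium", "region": "A"},
    {"eye_colour": "brown", "body_size": "medium", "region": "A"},
    {"eye_colour": "green", "body_size": "small",  "region": "B"},
]

QUASI_IDENTIFIERS = ("eye_colour", "body_size", "region")

def k_anonymity(rows, quasi_ids):
    # Size of the smallest group sharing identical quasi-identifier
    # values: the data set's k-anonymity level.
    groups = Counter(tuple(row[q] for q in quasi_ids) for row in rows)
    return min(groups.values())

print("k =", k_anonymity(records, QUASI_IDENTIFIERS))
# k = 1 here: at least one combination is unique, so anyone who knows
# those attributes could re-identify that individual.

Generalizing values, for example grouping body sizes into broader bands until k reaches an acceptable threshold, is one of the practices such standards could codify.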

What lessons have we learned from the pandemic?

MM: The pandemic has illustrated some of the shortcomings of the current infrastructure and the need for access to vital health data in order to track the disease, share findings and ultimately find a cure. For example, in Germany there are different ways of collecting data and bringing it to the federal level, which can be a challenge. Gathering data at the European level, across member states, means considering not only regulations but also the interfaces and infrastructure that make it possible to speak a common language.

This is where standardization can play a role in making data available in the first place. COVID-19 highlighted these points. How do you develop an app for tracking something like COVID-19 and convince populations to consent to using it? What data do you collect and how do you collect it? We have seen different approaches across countries, ranging from centralized to decentralized solutions. This real-world scenario demonstrates how important these topics are.

International standards for data and privacy

Through their joint technical committee (ISO/IEC JTC 1), IEC and ISO develop international standards for information and communication technologies, covering AI, data management, cyber security and much more.

Some of the AI standards being developed cover data quality management guidelines and requirements, trustworthiness, AI applications and AI management systems.

Work is underway to develop data framework standards that address risks throughout the data lifecycle, including data quality and management and how data is generated, used, stored and protected. Another consideration is the level of sensitivity of the personal information (PI) captured in the data, known as the PI factor.

The ISO/IEC 27000 series of standards, comprising over 40 parts, covers security techniques for information security management systems, including privacy, risk management and a code of practice for the protection of personally identifiable information (PII) in public clouds acting as PII processors.