In manufacturing plants, programmed robotic arms work side by side with humans. Transport relies increasingly on automated systems: self-driving vehicles deploy advanced driver-assistance systems, while modern airliner autopilot and safety systems use manoeuvring characteristics augmentation systems. Both rely on algorithms that process data gathered from the many sensors around vehicles and aircraft in order to ensure safe, efficient journeys. In healthcare, professionals of many kinds use analyses of big data mined by machine-learning algorithms to help diagnose diseases.
“A key barrier to adoption of artificial intelligence is concern about the trustworthiness of the system. Led by SC 42/WG 3, the projects that the committee is pursuing in this area not only try to identify and put a framework around these emerging issues, but also provide technical approaches to mitigating the concerns and link to non-technical requirements such as ethical and societal challenges. This revolutionary approach that SC 42 is taking by looking at the full AI ecosystem will enable wide-scale adoption of AI and the promise it has as a ubiquitous technology enabling the digital transformation,” said Wael Diab, who leads the standardization work on AI through Subcommittee 42 of the IEC and ISO joint technical committee for information technology (ISO/IEC JTC 1).
In these and many more situations, humans put their trust in machines, which is why it is imperative that nothing goes wrong. As new products and services evolve and incorporate AI technologies, their broad adoption will only succeed if people feel they can be trusted. This means that if there is an issue, it must be possible to understand what happened, how it happened and how to avoid it in the future.
e-tech caught up with David Filip, Convenor of SC 42 Working Group 3, to learn more about the work on the trustworthiness of AI.
What is trustworthiness and why is it so crucial?
In our standards work we have identified certain characteristics of trustworthiness, such as accountability, bias, controllability, explainability, privacy, robustness, resilience, safety and security. But for me, before all these aspects can be considered, it always comes back to transparency, or the transparent verifiability of an AI system’s behaviour and outcomes. Is the outcome of an AI system transparently verifiable, or is the system a so-called black box? In other words, is it trustworthy or opaque? Is there someone who can assess it for vulnerabilities or unintended consequences? For a system to be trustworthy, we need to be able to understand the algorithm’s internal workings.
Since machine learning is functionally defined and based on huge amounts of training data, machines will only be as good as the data they have been fed. So, to achieve trustworthiness, humans will still need to be part of the process, to vet the underlying AI algorithms and their associated training data and to ensure that they do not introduce unfair or otherwise unwanted bias.
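The point about vetting training data can be illustrated with a short sketch: even a simple audit of positive-label rates across groups can reveal a skew that any model fit to the data would reproduce. The dataset, field names and review threshold below are hypothetical, chosen only for illustration; they are not part of any SC 42 deliverable.

```python
# Illustrative sketch only: auditing a training set for group bias
# before it reaches a machine-learning pipeline.

def positive_rate(records, group):
    """Fraction of records in `group` carrying a positive label."""
    members = [r for r in records if r["group"] == group]
    return sum(r["label"] for r in members) / len(members)

def parity_gap(records, group_a, group_b):
    """Absolute difference in positive-label rates between two groups.
    A large gap in the training data will be learned by any model fit to it."""
    return abs(positive_rate(records, group_a) - positive_rate(records, group_b))

# Toy training data: a model trained on this would inherit the skew.
training_data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "B", "label": 1}, {"group": "B", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]

gap = parity_gap(training_data, "A", "B")
print(f"positive-rate gap between groups: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
if gap > 0.2:  # hypothetical review threshold
    print("flag dataset for human review before training")
```

A check like this is exactly the kind of "human in the process" step the interview describes: it does not fix the bias, but it makes the skew transparent and verifiable before a model is trained.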
How can standards help achieve transparency?
Standards underpin the systems that make our civilization work as we know it. There are many examples, from railways, which run on standards more than 100 years old, to HTML 5, which defines the properties and behaviours of webpage content and without which you could not see everything you see in your browser.
We are at an important moment for writing the horizontal standards for innovative AI technologies that, in five or ten years, all of us will take for granted. The pace at which new standards come to be taken for granted has clearly increased. This is why we need to get it right now and make sure we consider as many angles as possible, including ethical and societal concerns.
Although standards are voluntary, they are used by policy makers and regulators. For example, many countries are working towards achieving the UN Sustainable Development Goals. Standards will need to ensure many aspects of trustworthy AI use, including the privacy, security and functional safety of the devices, systems and infrastructures in which AI technologies are embedded. We have a broad range of stakeholders – academia, consumer protection bodies, industry and regulators – defining standards that will help regulators do the work they have been mandated to do. However, you cannot enforce something for which the right handles have not been defined at the technical level.
Which standards are you working on to address these issues?
We are working on a number of deliverables at the moment, including:
Finally, we anticipate starting work on Part 2 of the Robustness of neural networks series. This would eventually become an international standard and would consider formal methods for assessing the robustness of neural networks. It would be of great use to insurers of heavy machinery, such as ships or construction machines, that contains neural networks. It would help industry demonstrate that systems containing machine-learning technology still work in a functional, predictable and explainable way, and that the robustness characteristics insurers must consider can be formally proven.
Find out more about the work of SC 42