e-tech spoke with Jed Horner, Strategic Advocacy Manager at Standards Australia. Horner is responsible for developing and supporting new areas of work in a largely digital portfolio and explains how the work for artificial intelligence (AI) contributes at national and international levels.
What is your role at Standards Australia?
My role as strategic advocacy manager is to connect government, companies and other stakeholders to standards activity, including globally. Very often the challenge for national bodies and standards development organizations (SDOs) is to maintain that relevance: to talk to policy-makers and regulators, and to listen to them.
I have covered elements of data privacy, and I wrote the report An Artificial Intelligence Standards Roadmap: Making Australia’s Voice Heard, which was commissioned by the Australian government. Canada put forward the proposal for a management system standard on AI. We hosted a Canadian delegation before lockdown, together with a range of business representatives and the Australian Ambassador for Cyber Affairs and Critical Technology, Dr Tobias Feakin, from the Department of Foreign Affairs and Trade, so it was very collaborative. This is how we manage to support proposals globally. We try to provide key decision makers and policy makers with insights and toolkits based on standards to ensure we don’t reinvent the wheel.
Additionally, we have had commitments to standardization embedded in a lot of Australian policy; for example, the New South Wales (NSW) government’s AI Strategy explicitly references the AI work of the joint IEC and ISO technical committee, ISO/IEC JTC 1. Standards Australia contributes to this work through its mirror committee. It is a strong starting point, and it has been great working with Wael Diab, who leads the joint AI committee, which brings together stakeholders from across the entire AI ecosystem, including national committees.
ISO/IEC JTC 1 develops international standards for information technologies and covers 22 key areas, including AI, biometrics, cyber security, cloud computing, data management and exchange, IoT and more. The way we work in this joint committee is constructive at the global level, because we talk through all the issues that arise in the different subcommittees. This is a powerful way to approach standardization.
What are some of the key areas you are working on?
The most exciting piece we are working on is about harmonization and making sure that we channel Australia’s voice. We are borrowing the best from the global community, which is the piece around ISO/IEC 27701, covering security techniques for privacy information management. This standard is ground-breaking in terms of the potential role it plays for privacy.
What we have done with our modified adoption is look at that standard and ask: what does it look like with an Australian annex? At the moment, the reference point is the European Union’s General Data Protection Regulation (GDPR), but we see it as a way to build a toolkit that is accessible to businesses and gives them a competitive advantage, if you consider all the businesses that might be trading with Europe and dealing with data that is personal and identifiable in nature, and therefore falls within the jurisdiction of the GDPR.
If you are a business with a system that allows you to implement a set of controls and an approach in Australia, your operation will also be fairly seamless compared with one that is starting from whatever local advice it can find.
The ISO/IEC 27000 series of standards, which covers information technology security techniques, has been adopted by businesses of all sizes. ISO/IEC 27701 extends this series with requirements for privacy information management systems.
A more secure government, and businesses that are more aware of how they handle personal data, bring a range of benefits beyond business alone.
When you talk to people in banking and healthcare, for example, whatever standards you mention (and they may not be cyber security standards), they see the value, because they are thinking about how an enterprise would implement something across the entire business. Standards help in that sense and add great value.
Another project is the Hub, which leads to the key topic of AI assurance and how we build certification around AI with a user-centric approach. In other words, if you are a financial or educational institution, or you have very complex supply chains, how can we work with you in a more agile way to ensure trust and be able to audit the supply chain?
The Hub aims to improve collaboration between standards-setters, industry certification bodies and industry participants, and to trial new, more agile approaches to AI standards for Australia.
We know companies will trial approaches in the interim, because they must for their supply chains. The question is, how can we work with them in a pre-standardization sense and start asking different entities: what are your common practices, and how can we develop something that is user-friendly?
To this end, the Hub will provide a test-bed where specific propositions, which could form the basis of content for standards, could be tested with industry and other stakeholders. This would ensure that any proposed solutions are proportionate and fit-for-purpose.
The outcome of the Hub project may not be a standard; it could be a handbook or an annex checklist that would be operational at scale almost as soon as it is complete. We could then take it to a global setting to be developed internationally, such as an IEC technical committee or one of the joint IEC and ISO committees. It’s a practical application of “act local, think global”.
In NSW we spend roughly AUD 3.6 billion per annum on IT systems alone, which, given the population size, is a significant amount. The government is turning its mind to what that means for assurance. The federal government, through the Department of Industry, Science, Energy and Resources (DISER), is trialling a set of AI principles, but whenever we talk principles, the question is how do we make them meaningful, particularly across complex supply chains?
The Hub will allow people to work together without the commercial tensions of working with competitors, sharing what they are comfortable sharing, in order to shape global standards and receive insights from them. This is a unique place for all stakeholders. The project starts with the financial sector in the first quarter of 2021.
What other work is in the pipeline for 2021?
In 2021, we are going to keep the focus on developing the technical standards necessary for AI to thrive, in a responsible way, and will be working through the international standards community to this end. We are, however, watching with close interest the growing focus on how liberal democratic values are accounted for in developing and deploying AI responsibly. You can expect to see us turning our minds to this, given that both the European Union and United States are doing so too.