Key recommendations from the Australian Human Rights Commission on AI and human rights

The report examines how technological advancements in areas such as facial recognition and AI can be balanced with the protection of human rights. Credit: PhotoMIX Company / Pexels

It is a conundrum many governments face: how to make the most of technological advances in areas such as artificial intelligence (AI) while protecting people’s rights. This applies to government both as a user of the technology and as a regulator with a mandate to protect the public.

The Australian Human Rights Commission recently undertook an exercise to examine this very issue. Its final report, Human Rights and Technology, was published recently and includes some 38 recommendations – from creating an AI Safety Commissioner to introducing legislation so that a person is notified when a business uses AI in a decision that affects them.

We’ve put together some of the report’s recommendations for governments on how to ensure that greater use of AI-informed decision-making does not turn into a human rights catastrophe.

Supporting regulations

A series of recommendations in the report relate to improving the regulatory landscape around AI technology.

The report places particular emphasis on facial recognition and other biometric technologies. It recommends legislation, developed in consultation with experts, to explicitly regulate the use of this technology in contexts such as policing and law enforcement where there is “a high risk to human rights”.

More generally, the report calls for the creation of an independent statutory office of an AI Safety Commissioner. This body would “work with regulators to strengthen their technical capacities regarding the development and use of AI”.

The AI Safety Commissioner is also expected to “monitor and investigate developments and trends in the use of AI, particularly in areas of particular risk to human rights,” provide independent advice to policymakers, and issue guidance on compliance.

Along with this, the report notes that the AI Safety Commissioner is expected to advise the government on “ways to incentivize … good practice [in the private sector] through the use of voluntary standards, certification systems and public procurement rules”.

Explain and involve

Several of the report’s recommendations focus on people who could be affected by AI. It calls for greater public involvement in decisions about how AI should be used, and more transparency in indicating when a member of the public is affected by an AI-assisted decision.

For example, the report suggests introducing legislation that would require any ministry or agency to conduct a Human Rights Impact Assessment (HRIA) before an AI-based decision-making system is used to make an administrative decision. Part of this HRIA is expected to be a “public consultation focused on those most likely to be affected,” the report says.

The report also notes that governments should encourage businesses and other organizations to conduct an HRIA before developing AI-informed decision-making tools. As part of the recommendations, the authors suggest the government appoint an agency, such as the AI Safety Commissioner, to create a tool that helps private sector users complete these assessments.

In addition, the report recommends legislation “to require that any affected person be notified when artificial intelligence is materially used to make an administrative decision” within government. There should also be equivalent laws obliging private sector users to do the same.

The report also states: “The Australian government should not be making administrative decisions, including using automation or artificial intelligence, if the decision maker cannot generate reasons or a technical explanation for an affected person.”

Improve capacity

Other recommendations suggest the Australian government should improve its own capacity to work ethically with AI-informed decision-making tools.

The government should “convene a multidisciplinary task force on AI-based decision-making, led by an independent body, such as the AI Safety Commissioner,” the report said. Its responsibilities should include promoting the use of “human rights by design” in AI.

In line with the theme of transparency, the report also recommends that centers of expertise, such as the Australian Research Council’s Centre of Excellence for Automated Decision-Making and Society, “should prioritize research on the ‘explainability’ of AI-based decision making”.
