The Australian government has announced an eight-week snap consultation to see if any “high-risk” AI tools should be banned.
Other regions, including the US, the European Union, and China, have also taken steps to understand and potentially mitigate the risks associated with the rapid development of AI in recent months.
On June 1, Minister for Industry and Science Ed Husic announced the release of two papers – the Safe and Responsible AI in Australia discussion paper and the Generative AI report from the National Science and Technology Council.
The documents were released alongside a consultation that runs until July 26.
The government wants feedback on how to support the “safe and responsible use of AI” and is weighing whether to rely on voluntary approaches such as an ethical framework, introduce specific regulation, or use a combination of both.
One consultation question explicitly asks, “whether any high-risk AI applications or technologies should be banned outright,” and what criteria should be used to identify the AI tools that warrant a ban.
A draft risk matrix for AI models was included in the discussion paper for feedback. As examples, it classified AI in self-driving cars as “high risk,” while a generative AI tool used for purposes such as creating patient medical records was considered “medium risk.”
#AI is already part of our lives. As the technology develops, we need to ensure it meets Australians’ expectations of responsible use. Be part of the @IndustryGovAu discussion, below. https://t.co/Gz11JCXlsG
— Australia’s Chief Scientist (@ScienceChiefAu) June 1, 2023
The paper highlighted “positive” uses of AI in the medical, engineering and legal industries, as well as “harmful” uses such as deepfake tools, the creation of fake news, and cases where AI chatbots have encouraged self-harm.
Bias in AI models and “hallucinations” – meaningless or false information generated by AI – have also been raised as issues.
Related: Microsoft CSO says AI will help people thrive, but signs doomsday letter anyway
The discussion paper claims that AI adoption in Australia is “relatively low” because the technology has “low levels of public trust.” It also pointed to AI regulation in other jurisdictions and Italy’s temporary ban on ChatGPT.
Meanwhile, the report from the National Science and Technology Council stated that Australia has some advantageous AI capabilities in robotics and computer vision, but its “fundamental capabilities in [large language models] and related areas are relatively weak,” adding:
“The concentration of generative AI resources in a small number of large multinational and mostly US-based technology companies creates potential [sic] risk for Australia.”
The report also discussed global AI regulation, gave examples of generative AI models, and opined that they are “likely to affect everything from banking and finance to government services, education and the creative industries.”
AI Eye: 25,000 traders bet on ChatGPT stock picks, AI sucks on dice rolls and more