SYDNEY — The Australian government is considering new laws to regulate the use of artificial intelligence in “high-risk” areas such as law enforcement and self-driving vehicles.
Voluntary measures are also being explored, such as asking companies to label AI-generated content.
The announcement outlines the country’s plan for responding to the rapid rise of AI.
Under the Canberra government’s plan announced Wednesday, safeguards would be applied to technologies that predict the likelihood of a person reoffending, or that analyze job applications to find a well-matched candidate.
Australian officials have said that new laws could also require organizations using high-risk AI to ensure that a person is responsible for the safe use of the technology.
The Canberra government also wants to minimize restrictions on low-risk uses of AI so that growth in those areas can continue.
An expert advisory committee will be set up to help the government prepare legislation.
Ed Husic is Australia’s federal minister for industry and science. He told the Australian Broadcasting Corp. on Wednesday that he wants AI-generated content to be labeled so it cannot be mistaken for genuine material.
“We need to have confidence that what we are seeing we know exactly if it is organic or real content, or if it has been created by an AI system. And, so, industry is just as keen to work with government on how to create that type of labeling,” he said. “More than anything else, I am not worried about the robots taking over, I’m worried about disinformation doing that. We need to ensure that when people are creating content that it is clear that AI has had a role or a hand to play in that.”
Kate Pounder, the head of the Tech Council of Australia, which represents the technology sector, told local media that the government’s AI proposals strike a sensible balance between fostering innovation and ensuring systems are developed safely.
The Australian Parliament defines artificial intelligence as “an engineered system that generates predictive outputs such as content, forecasts, recommendations…without explicit programming.”
Recent research shows that most Australians still distrust the technology, which they see as unsafe and prone to errors.