Artificial intelligence is facing a host of issues, from privacy concerns to biased results and ethical dilemmas. AI has the potential to be a boon to humanity, with breakthroughs in areas like healthcare, but because modern systems learn and change on their own, it could also do our society serious harm.
That is why governments and legal bodies are getting serious about regulating AI.
Today, AI is used in almost every aspect of our lives: facial recognition on our devices, help with drafting emails, chatbots providing on-demand customer support. It is becoming ubiquitous faster than we realise.
Enterprise implementations of AI-based technologies tripled in 2018. (Gartner)
When it comes to implementing laws that protect consumers, there is one major concern: government regulatory bodies are not made up of people who understand how AI works.
Without a strong grasp of the subject, and without keeping pace with its exponential advancement, it is nearly impossible to craft frameworks and ethical guidelines that make a difference.
We need an independent body of policymakers, AI developers, and investors who can form the basis for AI software certification, similar to existing standards such as ISO 27001 and SOC 2.
One simple way to address the risk of "AI gone rogue" is to make companies subject to financial and legal consequences for failing to follow the regulations.
It worked for insider trading, and it should work for AI as well. We need subject-matter experts writing the regulations, not government bodies of 50-plus-year-olds who do not even fully understand how the internet works.