
Should AI be regulated after everything you have seen over the last few weeks?

Image generated by DALL·E

In my opinion, innovation (including AI) should have a mission and a value-based purpose: the improvement of people’s lives.

But AI could cause harm. It could be biased. It could be opaque. It could be wrong. These are the foreseeable harms, which we know from research. But what are the unforeseeable harms? Can an AI turn into Skynet and destroy the world after running for years in Auto-GPT mode? 😱

To protect humanity from such harms, the White House released a more than 70-page document called the “Blueprint for an A.I. Bill of Rights” last October. The word “blueprint” in that title is far more important than “rights”: this document is, for the most part, not enforceable at all. These are not rights Americans can sue to protect.

But its release was a recognition that at some point soon the government would probably need to create something enforceable, and so it needed to start thinking about what a society thick with A.I. should look like.

Deep neural networks (which ChatGPT is based on) are black boxes, so we are in a situation of known unknowns and unknown unknowns. In my opinion, this makes regulation hard but undoubtedly necessary.

But what do you think? Can we already decide that regulation is necessary, or do we not yet have enough data to draw that conclusion? Am I paranoid?