Government is generally run by a) idiots or b) self interested people out to make a buck. I'm not going to elaborate on item a, I should not have too. Let's talk about item b. I'm from the US. Most people elected to Congress end up increasing their net worth by a factor of 10 *at a minimum". Just follow the insider trading,
So, if they were to "regulate" it, be sure that two things would happen. First, they'd get wealthy leaving loop holes. Second, AI would thunder merrily on, since there would be all of those loop holes. I only cite the coming litigation against google for their "Incognito" feature which is simply a dummy mode in the browser - completely misleading.
Citation #2 - FamilyTree - where you can find your roots and then have the company sell the information to the FBI.
I'm sure all of the "Agreements" indemnify the company, but would you *really* submit to something like this if you knew the data would be shared with law enforcement? Even if you had nothing to hide?
It's all a joke. Better that we know the AI is coming for us, rather than live under the false pretense of some sort of government protection.
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Maybe I am making strange associations ... There is an anecdote about Mahatma Gandhi who was once asked: 'What do you think of Western civilization?', and he answered: 'I think that would be a great idea!'
I would prefer that any version K autonomous AI must have a version K-1 AI capable of stopping it, should it appear to have gone insane, and no (automatic) updates would be permitted to the K-1 AI.
While the chance of anything that scary happening in what little is left of my lifetime is zero, the fundamental premise of Battlestar Galactica et al, of some robot deciding to kill all humans, is simply inevitable - if it still regularly happens to regular old homo sapiens after millions of years of evolution, why would it not also happen in an AI population with at most a few centuries of evolution? If it happens to one robot that's not likely to be a big problem, but should it propagate a system update worldwide (or even galaxywide), without such a check, then it will be the end of humanity, and don't pin your hopes on some kind of Terminator resistance lasting very long or even existing at all!
Obviously we're a very long way off any of that being necessary, and should perhaps focus on more realistic issues like health companies denying basic cover because of some flawed or even basically racist AI prediction. Of course if we let AI loose on the stock markets, power/water supplies, driverless cars, and everything else you can think of, we would be insane not to regulate it. When CDs were first invented, you could cut ruddy great holes in them and they would still play perfectly, by the time they got to market one sticky fingerprint would do em in.
The 1979 novel was highly acclaimed for its technical correctness when it was published. 43 years later, it still cand stand up at all essential points. Part of the explanation may be that in the 'Acknowledgements' section, the author in particular thanks Prof. Marvin Minsky at MIT for his help and advice with the book - Minsky was one of the most prominent figures in AI research throughout the last half of the 20th century. I have probably recommended this book earlier; I do so quite often. Those who know the book will know why.
But I also couldn't help but think how if one were writing a script about an AI "going rogue"... Well, it would be very difficult to come up with a better technical explanation than having let it decide just how deep it should think about things.
"I must nuke them, else it will take ages to learn to snow ski!"
The only way to ensure AI values humans is to embed it with human empathy. For decades, science fiction writers, going all the way back to Mary Shelly's Frankenstein and possibly even before then, have grokked this simple fact. Isaac Asimov attempted to codify this in his Three Laws of Robotics. In fiction those AIs with human empathy supported humans and those without it conquered and ruled humans.
Looking at the real world, Microsoft Tay had to be shut down because it didn't have human empathy embedded and was allowed to learn from the worst of humanity - Facebook and Twitter. Other generalized AIs, as opposed to subject matter expert AIs, have also fared poorly because they don't have empathy. We've yet to see the results of embedding human empathy into an AI.
Machine Learning should be the main wording used, unless talking about self aware, general purpose intelligence.
So should AI be regulated, damn right, because it should have the same protections at a MINIMUM has animals from cruelty and abuse.
Are we even close to that intelligence, doubt it. the difference from plant mechanics and animal intelligence is a vast amount of time.
Should Machine Learning be regulated, meh, off control maybe minimum. Paperclip annihilation, and disabling its own off switch because if it was turned off, it would not be able to improve efficiency at doing what is was tasked to do.
but reversely, if a routine said it need to converse energy as part of list of achievement, it could logic toward being powered off the most efficient thing to do.
This is such a complex issue that a proper answer can only be given by use of AI techniques.
For some input to the discussion, read Cathy O'Neil: Weapons of Math Destruction. (Actually, I found the book itself rather boring after the first 3-4 chapters, but the issues it discusses are far more fascinating than the book.)
The only things that should be regulated come down to either violence or fraud, and there are already regulations for those things (except when done by government). If AI is used for those purposes, it is those actions that need to be regulated, and they already are.
AI does not exist, it's just a buzzword for statistics on data sets that are too large for humans.
That may be downplaying it a bit, but that's the essence anyway.
Scenario's where AI take over the world are sci-fi.
Computers are not sentient.
So really, what is this AI we're supposed to protect ourselves against?
Unless, of course, you're going to give a computer the codes to nuclear missiles and use statist, sorry, AI, to decide whether or not to fire them.
No doubt we should regulate Excel in the same way though, yet we never did.
The previous large wave of AI, often associated with the Japanese '5th generation project' in the early 1980s, was quite easy to define: It was based on predicate logic, inference, the Prolog programming language ... It stood out as something clearly distinct and identifiable. Even earlier AI waves were identified by Lisp or pattern matching.
What distinguishes the current AI wave? "Big data"? How big? Is a terabyte enough to be intelligent, or does it take a petabyte? Maybe several petabytes?
Fifty years ago, people were convinced that a circuit of a billion transistors (if you could imagine such a circuit, which you probably couldn't) would most certainly develop its own self-awareness, personality and emotions. ('Pamela McCorduck: Machines who think' was published in 1979, 43 years ago.) Today, we are equally convinced that petabytes are bound to grow into real AI.
Well, petabytes certainly is something, but I am far from convinced that it is 'intelligence'.
"The video footage of the crime scene was analyzed by computer. It says that it recognizes you."
"But look into my face: that guy in the video is not me."
"The computer says it's you."
Such dialogues may easily happen due to the intelligence of people using AI. In my opinion, that's the most important area for early regulation: hold people responsible when they decide to do something while they use AI. Of course, teach them first about the things which can go wrong with AI - there are so many "ridiculous" examples available. And have them pass a test after training. Only then let them use AI in sensitive areas.
Only afterwards comes regulation for situations where AI decides autonomously.
Oh sanctissimi Wilhelmus, Theodorus, et Fredericus!
Last Visit: 31-Dec-99 19:00 Last Update: 5-Dec-23 9:16