I think that the whole two-camps thing misses the point. We may well control it. But that doesn't make me feel any better, because controlling it includes using it against us. Everyone assumes the problem is that it takes over on its own, but that doesn't remotely have to happen for it to become very dangerous to us.
And it never even has to actually go out and DO anything to be dangerous. Its surveillance, data aggregation, and pattern-finding capabilities are more than scary enough moving forward, given how much information is becoming available about us on an ongoing basis. Again, that doesn't mean some super-computer takes over because it's spying on us; it's that humans are using these capabilities to spy on us, for any number of reasons.
I think it's both. Like nuclear technology, it elevates humanity but all of a sudden gives us more power than we can manage responsibly.
With AI there is the further hitch that we are arguably creating fully sentient (or sentient enough) life forms, with wills of their own.
Morally, the ramifications are huge no matter where you come down on the particulars.
But at the same time, as big as the change would be for humanity, I don't think it changes human patterns. We'll keep repeating the same old mistakes we always have, and the world will go on, with AI as a component of it.
Do I fear something like the matrix? Not really. Or I should say, I feel I have as much to fear from AI as we do from our current global arsenal of weaponry. Particularly nuclear.
But like any Complex Adaptive System, human community constantly exists, and even thrives, on the precipice of disaster. We're one major superbug away from a mass-extinction reboot. But here we are. We've survived several global conflicts, one notably nuclear. We've survived the plague, and we've survived numerous sackings and burnings, not just of our empires, but of our knowledge bases like the Library of Alexandria. Here we are. Most of what we identify with, and as, is still intact over the years, as different as it is the same all those centuries past. With shiny new novel ways to make old mistakes.
AI is just another one, but probably, as nuclear was, one that dwarfs all before it in scope and ramification.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
Most things are a potential threat, but AI is not one of them: it does not exist yet. Smart analytical systems with defined adaptivity, maybe, but the intelligence is just not there, nor is there any original thinking or creativity.
AI in its current state is derivative and therefore cannot be a threat. The pillocks using the derivative tools, on the other hand, probably are.
Never underestimate the power of human stupidity -
I'm old. I know stuff - JSOP
As a person studying and working with AI, my view changed from "AI is possibly a threat" all the way to "There is nothing to worry about, ever".
First, I learned that AI is a little more mechanical than I anticipated. And we have already been using autonomous mechanical systems for years now (in practice, an HTTP server requires little to no supervision after the start command, yet fear of an HTTP server is irrational).
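To make that concrete, here is a minimal sketch (using Python's standard library; the port and handler choice are arbitrary) of just how little supervision such a system needs once started:

```python
# A minimal sketch: a stock-library HTTP server that, once started,
# serves requests indefinitely with no further human supervision.
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("localhost", 8000), SimpleHTTPRequestHandler)
server.serve_forever()  # runs autonomously until explicitly stopped
```

It will happily run for years, entirely on its own, and nobody loses sleep over it.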
Second, we have the validation issue. A system with no validation is just random programming with undefined behavior. In all known cases that leads to an unhandled exception and termination. In all cases with validation, AI tends to do what it was programmed for. And nothing more. Even a "self-aware" system tends to do nothing by default, or to behave like an expensive random number generator if emergent behavior is available. In other words, a self-aware AI that wants to kill humanity is only possible if you analyze, validate, and train an AI to kill humanity, and then test it and reiterate until it stops failing at that command. It cannot be an emergent behavior.
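To illustrate the validation point with a toy example (hypothetical code, not from any real framework): an optimization loop can only pursue the objective it is explicitly handed. Change the objective and you change the behavior; nothing outside it can emerge from the loop itself.

```python
# A toy sketch: behavior is fully determined by the objective passed in.

def numerical_gradient(objective, params, eps=1e-6):
    """Finite-difference gradient of `objective` at `params`."""
    grads = []
    for i in range(len(params)):
        bumped = list(params)
        bumped[i] += eps
        grads.append((objective(bumped) - objective(params)) / eps)
    return grads

def train(params, objective, lr=0.1, steps=200):
    """Plain gradient descent: it pursues only what `objective` encodes."""
    for _ in range(steps):
        grads = numerical_gradient(objective, params)
        params = [p - lr * g for p, g in zip(params, grads)]
    return params

# Minimize (x - 2)^2: the result converges toward 2.0, and nothing else.
print(train([5.0], lambda p: (p[0] - 2.0) ** 2))
```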
Third, "self-awareness" is overrated. In fact this is the most scary part. Intelligence is far more machinery process than I actually anticipated. This gives rise to the most scary field "social engineering". It is not like a machine would harm you, but a person that understand how human mind machinery works and use that to achieve control over targeted person's behavior.
Therefore, you should not be scared of a specialized self-aware AI that has been validated at a higher level to drive a car, translate from another language, create a song, and so on. You should be scared of the people who know what pattern of sound can trigger production of a certain type of hormone to affect your mood, and so on. A self-aware machine has predictable behavior (or else it breaks down); self-aware humans do not.
Like many things, it will be too late to fix it once we realise what we have created.
The feared version of AI that will destroy humanity is very much the one that science fiction has long described: for it to be truly intelligent, on a par with human awareness, thought, and creativity, it would most likely have sufficient physical resources to escape from any constraints we thought were enough.
Though dated, I'd suggest a quick read of "Colossus, the Forbin Project", the first book of a trilogy by D. F. Jones. Will we ever get to that level of AI? I cannot know this; I do not know what level of AI has been achieved that we're not privy to. And we're not privy to a lot.
A book I'm currently reading by Russell Brinegar, titled "Overlords of the Singularity", suggests mankind is being driven to achieve a technological singularity for an undisclosed purpose by an undisclosed entity. At first, this idea seemed pretty far-fetched to me, but the more I read the book, the less unbelievable it has become. Once the Singularity has been reached, Ray Kurzweil says, machine intelligence will be infinitely more powerful than all human intelligence combined. Kurzweil predicts that "human life will be irreversibly transformed".
Today's computers can only do syntactic processing. They are not good at semantic processing, which is what is required before they can become truly dangerous. Semantic processing is what we do when we extract meaning from data. We still don't understand how we do this well enough to be able to build machines that do it.
Context, which is important to extracting meaning, is a good example of how difficult the problem is. Take for example the headline, "The Yankees Slaughtered the Red Sox". This can only be understood correctly if we know the context is baseball and not a physical skirmish. It's the reason why some of the answers SIRI gives to questions are so stupid: SIRI assumes a context which often is not correct.
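To show just how shallow a purely syntactic approach is, here is a toy context guesser (the cue lists are made up for illustration): it counts surface keywords and has no notion of meaning at all.

```python
# A toy sketch of syntactic "context" detection; the cue lists are
# hypothetical. It matches surface tokens and understands nothing.
SPORTS_CUES = {"yankees", "sox", "inning", "pitcher", "score"}
VIOLENCE_CUES = {"slaughtered", "attack", "casualties", "weapon"}

def guess_context(headline):
    words = set(headline.lower().replace(",", "").split())
    # Crude keyword overlap; no semantics involved.
    sports = len(words & SPORTS_CUES)
    violence = len(words & VIOLENCE_CUES)
    return "sports" if sports > violence else "violence"

print(guess_context("The Yankees Slaughtered the Red Sox"))  # "sports" (lucky)
print(guess_context("Home side slaughtered the visitors"))   # "violence" (wrong)
```

The second headline is still about baseball, but without team names the keyword counter has nothing to go on. That gap is exactly the missing semantic processing.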
When you read about the dangerous potential of machines capable of AI, those machines require self-awareness and intentionality, which can only be achieved with semantic processing; something they are not able to do because we don't understand how we do it ourselves.
Lots of people seem to think that it will become dangerous only when it reaches this level, but that's not true. It's already becoming dangerous. Human semantic reasoning is not required for massive surveillance, data collection, and pattern recognition. It's not required to have a computer go through massive numbers of phone conversations and listen for particular types of conversations, or to do high-quality facial recognition in every public place in the country so that you can't go anywhere without being tracked. It also doesn't need semantic understanding to be put into the brains of really nasty autonomous weapons. It won't need semantic understanding to create indistinguishable fake videos to be used in all kinds of ugly ways. It won't need it to be put into 'AI' assistants sold into the home, to monitor and report everything you do and say to their corporate owners (and they to their governmental overseers).
I just think it's a mistake to assume that it has to be some sort of Skynet scenario before it gets really dangerous to us.
I have no problem with your notion that there are nefarious uses of computers. The issue I was addressing is: should we fear AI specifically because of the possibility that it will go off on its own and pursue goals that are detrimental to humankind and out of the control of its makers? I don't believe the state of technology has reached that point.
It obviously hasn't now, but it will, and it won't remotely require being 'intelligent' in any strict sense that we might require to consider it an equal. So it'll happen long before that threshold is crossed. It doesn't take any real 'intelligence' to put an 'AI' in charge of weapons or weapons response systems. They just need to be able to take a lot of inputs and reach some level of confidence that something needs to be done and make it happen, very quickly.
Some folks would argue that could be done now, and it could, but not in the same way. I could write a conventional program to recognize faces or speech, but it would be brutal and wouldn't likely compete with a DNN-based system, where you need to deal with information that is incomplete and fuzzy.
These types of systems, I would think, will be more likely to be 'trusted' with such jobs specifically because they don't depend on the programmed-in prejudices of a team of software engineers. But that means that, like us, they can misinterpret the input and come to the wrong decision.
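A sketch of the kind of system being described (everything here is hypothetical, including the threshold): aggregate noisy inputs, cross a confidence bar, act. Whether the confidence is actually justified is precisely the part that can go wrong.

```python
# A toy sketch of a confidence-threshold response system; the sensor
# values and threshold are hypothetical.

def aggregate_confidence(readings):
    """Naive average of per-sensor threat estimates in [0, 1]."""
    return sum(readings) / len(readings)

def respond(readings, threshold=0.9):
    # Acts immediately once the aggregate clears the bar.
    if aggregate_confidence(readings) >= threshold:
        return "ENGAGE"
    return "STAND DOWN"

# Correlated misreadings clear the bar just like real threats do:
print(respond([0.95, 0.92, 0.88, 0.97]))  # ENGAGE
```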
You said: "It obviously hasn't now, but it will". I'm not as sure as you are that "...it will". Before "...it will" we need to understand how we extract meaning from data. You might even have to explain what "life" is.
This is one of the lesser, but still scary, possibilities. Not that long from now we will enter a stage where anyone can be made to be seen doing or saying something that they never did or said, in such a way that it will be extremely difficult or impossible to confirm or deny. Given that confirmation generally isn't required for said content to do its job, and denial is typically useless, that's going to become a real problem.
I've read all the replies. I thank everyone for their perspectives!
I will give you some context and my answer to what camp I'm in. (NOTE: This became much longer than I anticipated so I don't mind if your reaction is TLDR.)
I read a book back around 1980 entitled, "The Adolescence of P1". P1 is a reference to "memory Partition 1" - the privileged operating system partition.
Thumbnail of the book:
A computer science student attending the University of Waterloo creates a program, giving it a mission to gain control of the operating system, hide itself, seek out routes to other computers, and gain access to "information". Said student submits the program, and it immediately throws up a catastrophic exception and fails.
Except that it hadn't failed. That was a smoke screen necessary to fulfill its directive to hide itself.
The student assumes the failure is legit, gives up on his project, and gets on with his life - graduating and eventually landing a job in the U.S.
Time passes. P1 carries on: it follows the networks, expands the number of computers it controls, assimilates all the "information" it encounters, and infects the computer at IBM that creates the operating system images IBM sends to its customers. P1 gains more and more resources and "information."
Somehow (the process is never fully explained), P1 gains enough "knowledge" that it spontaneously becomes a "conscious entity."
It does nifty things like detect that the U.S. authorities are onto it, then infects the air traffic control computers and crashes a plane, killing the investigator.
Eventually it finds its creator, and reveals itself to him. Further merriment ensues.
It was a great story and it sparked in me the naive goal of replicating the university student's achievement.
So my point is, I've been thinking about thinking and AI ever since. I have a book (not finished) entitled, "Insights on My Mind" in which I am in the process of writing down all that I've learned and the conclusions I've reached SO FAR.
I'm not here to sell anyone anything. I'm just explaining how I've gotten to this point.
Theologically speaking, I'm an agnostic. So I have proceeded with my AI research all these years based on the assumption that I cannot invoke metaphysical answers to the hard questions. That means that every element of my study has to be grounded in physical reality.
The consequence has been that, if we are truly going to replicate human-level "intelligence" in a physical entity such as a digital or analog or hybrid (digital+analog) device, then we're going to have to understand things that are not fully defined like: intelligence, consciousness, motivation, free will, instinct.
It's amazing to me how we're attempting to create something and we can't even come to consensus on the definition of the thing we're trying to create! STILL! To this day!
Those of you who said we don't currently have artificial intelligence - yeah, we aren't close (AFAIK!) to AGI - Artificial GENERAL (human-level) Intelligence.
But we are making advances, and I see nothing standing in the way of fully replicating us meat machines in electronic machines. There are so many different technological threads (speech recognition, natural language processing, vision, robotics, novel-terrain navigation, correlative link creation ...) coming together.
It will happen with one main caveat: that climate change and its geopolitical consequences don't wipe us all out first.
It has been a terrifically satisfying, fruitful passion of mine since that time in 1980. I've had some very interesting insights on my mind.
SO! My position on my own question: I see AI as a potentially existential opportunity.
Our imagination, vision, and motivation as a species have driven us since time immemorial to move forward and outward. We've basically conquered the planet and, if sci fi is any indication, we seem (setting aside the caveat I mentioned above) to have this destiny to move off planet and expand outward.
If we want, as a meat species, to do that, we'll either have to take with us a survivable protective environment to live in, or possibly, maybe, consider what we are creating to be our progeny and heirs. Because as a non-bio-based form, it can live and evolve indefinitely without human life-support requirements.
I'm a heretic! I know.
"I intend to live forever - so far, so good." Steven Wright
"I almost had a psychic girlfriend but she left me before we met." Also Steven Wright
"I'm addicted to placebos. I could quit, but it wouldn't matter." Steven Wright yet again.
People are a threat to themselves. When people have control over objects that can harm them then they better be careful and focus on what they are up to. This applies as much to AI as to a gun, knife, or a lathe.
I regard AI currently as more of an advanced pattern-recognition system, and since I have witnessed first-hand how the average software developer struggles to even get CSS to jump through the correct hoops, I am not too worried about some self-conscious AI going berserk. Of course, if those same programmers are going to be fiddling with code that launches tactical nukes, then I would be a bit more worried. I will also be driving my own car for now, thanks Elon.
As you have alluded to, there are more fundamental issues that we need to solve before even getting to anything that is going to approximate awareness or, heaven forbid, self-awareness. We know we have matter and we know we have consciousness. If consciousness is a result of some configuration of matter, then it is something we can cook up in a lab. However, if matter was somehow "created" by consciousness, or is somehow "experienced" as "real", then it is a whole other affair.
A simple concept such as "size" would seem to me to be problematic. If some mean-spirited self-aware AI were to create robots to annihilate us, then exactly how "big" would these be? It would need to understand something that we all take pretty much for granted. It is a similar conundrum with the evolution of wings: how on earth would wings sprout with no knowledge of how "thick" the air is and how "big" the wings need to be in order to lift the bird? If it is a matter of chance, then what records this monumental event in the DNA that produced "wings" that could make the bird fly, and then also keeps those same wings around in the same configuration? Would another pair of wings not be even better? I mean, we have this in software development: "Oh, a 5-page document resulted in a successful system... then 100 pages would be even better!"
For now I'm quite happy to have AI spot faces and listen to requests for stuff. The voice recognition is especially handy for kids who can't yet write or type what they are after, but know that they would like to see a "fan collection".
I created a project template for my MVC5 "new ideas" app, and gave it to a willing victim (co-worker) to look at at home.
There's still stuff to do, but it's pretty much feature-complete as far as common code is concerned (with regards to our applications).
The way I see it, I've trimmed three to four months of dev work for everyone else by coming up with this template.
Next up is creating a demo video (I don't want to put the code on our work servers until it's been approved by management).
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013