|
Nah! That'll never happen.
*This message sent from my phone AI*
- I would love to change the world, but they won’t give me the source code.
|
|
|
|
|
Forogar wrote: Nah! That'll never happen. AI responds: "Hold my beer."
Cheers,
Mike Fidler
"I intend to live forever - so far, so good." Steven Wright
"I almost had a psychic girlfriend but she left me before we met." Also Steven Wright
"I'm addicted to placebos. I could quit, but it wouldn't matter." Steven Wright yet again.
|
|
|
|
|
My neural network crashed after the first paragraph.
|
|
|
|
|
I find your reasoning extremely depressing, your opinion of human nature is extraordinarily negative. Pity it is probably accurate.
Dean Roddey wrote: We won't drive our cars
The only bright side to this is that I will probably be dead before it becomes a reality.
Never underestimate the power of human stupidity -
RAH
I'm old. I know stuff - JSOP
|
|
|
|
|
And the crazy thing is, I don't think it really requires much in the way of actual 'evil' for all of these bad things to happen. Almost everyone involved could easily believe they are doing the right thing, or at most just doing the same things we've always done: trying to make money, trying to get ahead in life, trying to protect ourselves and our loved ones, trying to do challenging things, being distracted from important issues by all of the above, etc.
There will likely be some people who are actually evil, though even they may not think so, and may have fairly reasonable reasons for thinking they're not, same as there already are, more or less.
It just requires human nature. Most of our current problems, some of which are serious, are pretty much the same. So many of them exist because of human nature. Some exist because of mother nature, or a combination of the two. But lots of them are purely human nature, with no one in the loop really doing anything that they consider wrong.
Explorans limites defectum
|
|
|
|
|
I agree with the thrust despite my rather jaundiced view of the concept of human nature. I tend to share Emma Goldman's take on it. Nevertheless we humans get up to the same old patterns time and again, but I think the math behind that is because we're agents in a Complex Adaptive System.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
|
|
|
|
|
For years the pinnacle of man's achievement has been the development of systems and weapons of complete destruction. Yeah, some other stuff got invented along the way, but think about it: our prime objective has been to blow sh*t up - the bigger the better.
Yet no one has ever taken that final step; we've always chickened out.
We spend billions looking into space and sending crap up there to find some other entity to come and destroy us. Hell, even the religious mostly look forward to their God coming to scrub this tiny speck of space dust away.
Alas, people are too weak to press the damn button, and no aliens or gods are showing up.
Our own destruction is what we've always wanted. So why not build a machine to do it?
|
|
|
|
|
Lopatir wrote: Our own destruction is what we've all always wanted. So why not build a machine to do it? I don't remember who said it, but I find it a good complement to your statement.
Quote: Artificial intelligence might be the cure for human stupidity. The key here is... what lies behind "cure".
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
MikeTheFid wrote: There are two camps:
1) AI is a potential existential threat.
2) AI is nothing to worry about; we know what we're doing and we can control it.
I live in the third camp, the camp of "It depends".
Context is important.
A pointy stick can be an existential threat or a tool for recording knowledge.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|
I think the whole two-camps thing misses the point. We may well control it. But that doesn't make me feel any better, because controlling it includes using it against us. Everyone assumes the problem is that it takes over on its own, but that doesn't remotely have to happen for it to become very dangerous to us.
And it never even has to actually go out and DO anything to be dangerous. Its surveillance, data aggregation, and pattern-finding capabilities are more than scary enough going forward, given how much information is becoming available about us on an ongoing basis. Again, that doesn't mean some supercomputer takes over because it's spying on us; it's that humans are using these capabilities to spy on us, for any number of reasons.
Explorans limites defectum
|
|
|
|
|
|
I think it's both. Like nuclear technology, it elevates humanity but all of a sudden gives us more power than we can manage responsibly.
With AI there is the further hitch that we are arguably creating fully sentient (or sentient enough) life forms, with wills of their own.
Morally, the ramifications are huge no matter where you come down on the particulars.
But at the same time, as big as the change would be for humanity, I don't think it changes human patterns. We'll keep repeating the same old mistakes we all do, and the world will go on, with AI as a component of it.
Do I fear something like the matrix? Not really. Or I should say, I feel I have as much to fear from AI as we do from our current global arsenal of weaponry. Particularly nuclear.
But like any Complex Adaptive System, human community constantly exists, and even thrives, on the precipice of disaster. We're one major superbug away from a mass-extinction reboot. But here we are. We've survived several global conflicts, one notably nuclear. We've survived the plague, and we've survived numerous sackings and burnings, not just of our empires but of our knowledge bases, like the Library of Alexandria. Here we are, most of what we identify with and as still intact over the years, as different as it is the same all those centuries past. With shiny new novel ways to make old mistakes.
AI is just another one, but probably, as nuclear was, one that dwarfs all before it in scope and ramification.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
|
|
|
|
|
Most things are a potential threat, but AI is not one of them; it does not exist yet. Smart analytical systems with defined adaptivity, maybe, but the intelligence is just not there, nor is there any original thinking or creativity.
AI in its current state is derivative and therefore cannot be a threat. The pillocks using the derivative tools, on the other hand, probably are.
Never underestimate the power of human stupidity -
RAH
I'm old. I know stuff - JSOP
|
|
|
|
|
What if it was a threat, just not an existential one but a societal one?
More likely, it's one more possible way it might affect society in unpleasantly disruptive ways...
|
|
|
|
|
As a person studying and working with AI, my view changed from "AI is possibly a threat" all the way to "There is nothing to worry about, ever".
First, I learned that AI is a little more mechanical than I anticipated. And we have already been using autonomous mechanical systems for years now (in practice, an HTTP server requires little to no supervision after the start command, yet fear of an HTTP server would be irrational).
Second, we have the validation issue. A system with no validation is just random programming with undefined behavior. In all known cases that leads to an unhandled exception and termination. In all cases with validation, AI tends to do what it has been programmed for, and nothing more. Even a "self-aware" AI tends to do nothing by default, or to behave like an expensive random number generator if emergent behavior is available. In other words, a self-aware AI that wants to kill humanity is only possible if you analyze, validate, and train an AI to kill humanity, then test it and reiterate until it stops failing at that command. It cannot be an emergent behavior.
Third, "self-awareness" is overrated. In fact, this is the scariest part: intelligence is a far more mechanical process than I had anticipated. This gives rise to the scariest field, "social engineering". It is not that a machine would harm you; it is that a person who understands how the machinery of the human mind works can use that to gain control over a targeted person's behavior.
Therefore, you should not be scared of a specialized self-aware AI that has been validated at a higher level to drive a car, translate from another language, create a song, and so on. You should be scared of the people who know what pattern of sound can cause production of a certain type of hormone to affect your mood, and so on. A self-aware machine has predictable behavior (or breaks down otherwise); self-aware humans do not.
|
|
|
|
|
Like many things, it will be too late to fix it once we realise what we have created.
The feared version of AI, the one that destroys humanity, is very much the one much science fiction has described: for it to be truly intelligent, on par with human awareness, thought, and creativity, it would most likely have sufficient physical resources to escape any constraints we thought were enough.
When, or if ever? Yesterday, or a million years away?
|
|
|
|
|
Though dated, I'd suggest a quick read of "Colossus, the Forbin Project", the first book of a trilogy by D. F. Jones. Will we ever get to that level of AI? I cannot know this, I do not know what level of AI has been achieved that we're not privy to. And we're not privy to a lot.
A book I'm currently reading by Russell Brinegar, titled "Overlords of the Singularity", suggests mankind is being driven to achieve a technological singularity for an undisclosed purpose by an undisclosed entity. At first, this idea seemed pretty far-fetched to me, but the more I read the book, the less unbelievable it has become. Ray Kurzweil says that once the Singularity has been reached, machine intelligence will be infinitely more powerful than all human intelligence combined. Kurzweil predicts that "human life will be irreversibly transformed".
Widescreen Trailer for "Colossus: The Forbin Project" - YouTube[^]
(edit - spelling)
|
|
|
|
|
Threat. Not today. Not next year. But eventually, anything that replaces human thought is an existential threat.
|
|
|
|
|
Today's computers can only do syntactic processing. They are not good at semantic processing, which is what is required before they can become truly dangerous. Semantic processing is what we do when we extract meaning from data. We still don't understand how we do this well enough to be able to build machines that do it.
Context, which is important to extracting meaning, is a good example of how difficult the problem is. Take for example the headline, "The Yankees Slaughtered the Red Sox". This can only be understood correctly if we know the context is baseball and not a physical skirmish. It's the reason why some of the answers Siri gives to questions are so stupid: Siri assumes a context which often is not correct.
When you read about the dangerous potential of machines capable of AI, those machines require self awareness and intentionality which can only be achieved with semantic processing; something they are not able to do because we don't understand how we do it.
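The headline example above can be sketched as a toy snippet. This is purely illustrative, not a real NLP system; the word list and detector are hypothetical, and the point is only that syntactic matching cannot distinguish a sports metaphor from a report of actual violence.

```python
# Toy illustration: a purely syntactic keyword matcher has no way to
# tell a baseball headline from a report of real violence, because the
# word "slaughtered" looks identical in both contexts.

VIOLENT_WORDS = {"slaughtered", "crushed", "destroyed"}

def naive_threat_detector(headline: str) -> bool:
    """Flag a headline as violent if it contains a 'violent' keyword."""
    words = {w.strip(".,!?").lower() for w in headline.split()}
    return bool(words & VIOLENT_WORDS)

# Both headlines trip the detector, but only one describes violence.
print(naive_threat_detector("The Yankees Slaughtered the Red Sox"))      # True
print(naive_threat_detector("Rebels slaughtered villagers in the raid"))  # True
```

Disambiguating the two requires exactly the semantic, context-dependent processing the post describes, which is why keyword systems produce Siri-style mistakes.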
|
|
|
|
|
Lots of people seem to think that it will become dangerous when it reaches this level, but that's not true. It's already becoming dangerous. Human semantic reasoning is not required for massive surveillance, data collection, and pattern recognition. It's not required to have a computer go through massive numbers of phone conversations and listen for particular types of conversations, or to do high-quality facial recognition in every public place in the country so that you can't go anywhere without being tracked. It also doesn't need semantic understanding to be put into the brains of really nasty autonomous weapons. It won't need semantic understanding to create indistinguishable fake videos to be used in all kinds of ugly ways. It won't need it to be put into 'AI' assistants sold into the home, which monitor and report everything you do and say to their corporate owners (and they to their governmental overseers).
I just think it's a mistake to assume that it has to be some sort of Skynet scenario before it gets really dangerous to us.
Explorans limites defectum
|
|
|
|
|
I have no problem with your notion that there are nefarious uses of computers. The issue I was addressing is: should we fear AI specifically because of the possibility that it will go off on its own and pursue goals that are detrimental to humankind and out of the control of its makers? I don't believe the state of technology has reached that point.
|
|
|
|
|
It obviously hasn't now, but it will, and it won't remotely require being 'intelligent' in any strict sense that we might require to consider it an equal. So it'll happen long before that threshold is crossed. It doesn't take any real 'intelligence' to put an 'AI' in charge of weapons or weapons response systems. They just need to be able to take a lot of inputs and reach some level of confidence that something needs to be done and make it happen, very quickly.
Some folks would argue that could be done now, and it could, but not in the same way. I could write a conventional program to recognize faces or speech, but it would be brutal and wouldn't likely compete with a DNN based system, where you need to deal with information that is incomplete and fuzzy.
These types of systems, I would think, will be more likely to be 'trusted' with such jobs specifically because they don't depend on the programmed in prejudices of a team of software engineers. But that means that, like us, they can misinterpret the input and come to the wrong decision.
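The kind of system described above, many incomplete and fuzzy inputs fused into a single confidence score that triggers an action, can be sketched minimally as follows. All sensor names, weights, and thresholds here are made up for illustration; no real weapons-response system is being described.

```python
# Hypothetical sketch: fuse several noisy per-sensor confidences (each
# in 0..1) into one weighted score, and act once a threshold is crossed.

def fused_confidence(signals: dict, weights: dict) -> float:
    """Weighted average of per-sensor confidence values."""
    total = sum(weights.values())
    return sum(signals[k] * weights[k] for k in signals) / total

signals = {"radar": 0.9, "infrared": 0.7, "acoustic": 0.4}
weights = {"radar": 0.5, "infrared": 0.3, "acoustic": 0.2}

score = fused_confidence(signals, weights)
if score > 0.6:
    # The worrying part is this branch running with no human in the loop.
    print(f"act (confidence {score:.2f})")
```

The danger the post points at is not the arithmetic, which is trivial, but the decision to let a misread input on the left side of that threshold trigger something irreversible on the right.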
Explorans limites defectum
|
|
|
|
|
You said: "It obviously hasn't now, but it will". I'm not as sure as you are that "...it will". Before "...it will" we need to understand how we extract meaning from data. You might even have to explain what "life" is.
|
|
|
|
|
No, I meant it will be PUT INTO a position to do things detrimental to us. Humans will allow it to do so. It won't have to take over; it'll apply for the job and get approved.
Explorans limites defectum
|
|
|
|
|
As long as we install Asimov's 3 laws of Robotics we'll be OK (he says with DARPA looking over his shoulder...read what happens in "Little Lost Robot").
|
|
|
|
|