So, here's a good example of why you are mistaken. I challenge you to write a program that can recognize any picture of a banana with high accuracy. You will find that very difficult. And when you are done, you will have a program that only recognizes bananas. If you need to recognize something else, like stock manipulation patterns, you will have to write a different program, which will also be very difficult.
DNNs don't have to be changed to do different jobs like that. That's a fundamental difference. The same algorithm can recognize a banana or find patterns in financial transactions or understand written characters or recognize sounds in spoken words, without any changes.
That's because it's not a program of if/elses that you write. It's a program that accepts data and lets that data interfere with itself in ways that create a pattern, and that pattern yields a confidence level that the input represents this or that. It's nothing like a bunch of if/else statements making hard-coded decisions. Nowhere in there is any code written that relates to 'is this a banana?' at all.
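A minimal sketch of that point (illustrative only; the layer size, random weights, and feature names are invented for the example): the same few lines of layer code score image features or transaction features alike, and nothing in them mentions bananas. Only the training data would differ.

```python
import math
import random

def dense_layer(inputs, weights, biases):
    """One fully connected layer with tanh activation (levels in -1..1)."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Random placeholder weights; in practice these would come from training.
random.seed(0)
n_in, n_out = 4, 2
weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
biases = [0.0] * n_out

image_features = [0.9, 0.1, 0.4, 0.7]        # could be pixel statistics
transaction_features = [0.2, 0.8, 0.5, 0.3]  # could be trade volumes

# The identical code path handles both kinds of input:
print(dense_layer(image_features, weights, biases))
print(dense_layer(transaction_features, weights, biases))
```

The "banana-ness" lives entirely in the trained weights, not in any hand-written branch.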
It doesn't make any difference whether it's 'alive' or 'intelligent' at all in terms of the practical impact it's already having on our lives and the vastly larger impact it will have in the future.
Explorans limites defectum
Thought I'd interject to say the question of sentience has been a matter of some debate in the philosophy circles I run in, in large part because of AI being on the horizon.
I think reasonable people can disagree, as there are certain grounding assumptions we all have to deal with here in terms of what makes us human and what it even means to think, or to engage in, say, philosophy.
As for me I'd suggest that anything that is a convincing enough illusion of The Real Thing(TM) (whatever that happens to be) is as good as the real thing for any meaningful intent and purpose.
For example, for all I know, we don't have free will either. It might be possible to develop a way to plot my next thought or move. Maybe I'm a calculation in a simulation. But it doesn't matter. Because I have the illusion of will, and it's a compelling enough illusion that it may as well be (to me) the real thing.
So I'd suggest here, that at a certain threshold, we might accept that a computer "thinks" as any other sentient being might, or even as a human might.
I don't know if that can be done in silicon reasonably, but I'm entertaining a hypothetical here, if you'll humor me that.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
codewitch honey crisis wrote: Maybe I'm a calculation in a simulation. The Matrix has you...
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
ZurdoDev wrote: it's just 0's and 1's based on what some programmer made possible. Well, actually, real neurons are more analog than digital, and emulated neurons should copy that behavior.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
And they do, of course. The nodes in a DNN calculate a level, usually something like -1 to 1. If they were binary you'd need gigantic numbers of them to achieve the same thing. Like real neurons, where the strength of an electrical signal either triggers a chemical emission across the synapse or doesn't, these calculate a level that roughly represents the same thing.
Ultimately it's closer to interferometry than a traditional 'decision graph' type of program. It doesn't make decisions, it creates patterns, and via training it's known that a given pattern represents a particular confidence in a particular result.
And of course DNNs can become the inputs to other DNNs. So it's not one huge neural network, and you probably wouldn't want that even if you could do it. It can be a hierarchy where many DNNs are reporting likelihoods of many different conditions and those are feeding into higher level DNNs that are trained to recognize patterns in those conditions and confidences.
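As an illustration of the two points above (a sketch only; the weights and the 'edge'/'color' labels are invented for the example): each node computes a graded level via something like tanh rather than a hard 0/1 decision, and the outputs of lower-level nodes can feed higher-level nodes.

```python
import math

def node(inputs, weights, bias):
    # A node emits a graded level in (-1, 1), not a binary decision.
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)) + bias)

# Two lower-level detectors report confidence levels...
edge_conf  = node([0.8, 0.3], [1.2, -0.7], 0.1)
color_conf = node([0.5, 0.9], [0.4,  0.9], -0.2)

# ...and a higher-level node combines those confidences into a new pattern,
# just as one DNN's outputs can become another DNN's inputs.
combined = node([edge_conf, color_conf], [0.9, 1.1], 0.0)
print(edge_conf, color_conf, combined)
```

The hierarchy in a real system is the same shape, just with whole trained networks in place of these single toy nodes.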
Explorans limites defectum
ZurdoDev wrote: It still comes down to what the programmer has made possible. A computer can never think or reason like a human. It's still if else statements at its simplest. No. The programmer wrote an emulation of a neural network, no more. Whatever the capabilities of the neural network may be, they are totally separate from the emulation or the hardware. You can argue that the topology of the network is all wrong, the number of neurons too low, or that the learning method is not adequate. The emulation is a normal deterministic algorithm and may fall short of your expectations in many ways, but you are mistaken when you carry those properties over to the simulated network.
Just look at the best version of a neural network we have up to now. A unique copy of it is right between your ears. These neurons are real living cells which work on a biochemical basis; no emulation needed here. In many ways these neurons are similar to little transistors, or surpass them, because transistors can't strengthen, weaken, or wire up new connections at all. The basic layout of this network has been shaped by the namegiver of the evolutionary algorithm. From then on it was on its own. Nobody programmed it, not even the genetic code that was its blueprint. The human genome does not encode enough information to contain a fresh OS installation. And nobody trained it. It started to train itself by processing inputs even before you were born.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
CodeWraith wrote: living Keyword.
Social Media - A platform that makes it easier for the crazies to find each other.
Everyone is born right handed. Only the strongest overcome it.
Fight for left-handed rights and hand equality.
No. Not at all. Algorithms are independent of their implementation. A neuron implements a switching function and thus implements an algorithm. This algorithm could probably just as well be implemented with relays, electronic tubes, transistors, logic gates, in software, or even with mechanical springs and gears. A tiny biochemical cell was just mother nature's choice.
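To make the substrate-independence point concrete, here is a minimal sketch (in Python, purely for illustration; the weights and threshold are arbitrary examples) of a neuron as a switching function in the McCulloch-Pitts style:

```python
def threshold_neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of inputs reaches the threshold,
    otherwise stay silent (0). Nothing here depends on the substrate;
    relays, tubes, logic gates, or a biochemical cell could compute
    the same function."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# With weights [1, 1] and threshold 2, this single "neuron" computes AND:
print(threshold_neuron([1, 1], [1, 1], 2))  # -> 1
print(threshold_neuron([1, 0], [1, 1], 2))  # -> 0
```

The function is the algorithm; the Python, the relay, and the cell are just interchangeable implementations of it.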
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
ZurdoDev wrote: Computers only do EXACTLY what they are told to do. So, no, there is no threat unless a programmer programs it to make poor choices.
Yeah, this isn't true anymore. Neural networks are black boxes. You train them to recognize a pattern, but no one can read a set of neural network weights and say how they do it.
It's absolutely a threat. Not in and of itself, any more than a knife is. But it's a huge threat just because of human nature. Anyone here who thinks that our current baby stuff is indicative of what's to come is fooling themselves. You only have to look at the massive progress made over the last decade or so and project it forward, even at a non-increasing rate, for some time to come to know what it's going to be like. And more likely it will continue to improve quite non-linearly.
Will it really be intelligent? Not really, IMO. But that doesn't matter. It'll be capable of reacting to massive amounts of input, finding patterns very fast, and making decisions. That will make it irresistible to a lot of players who don't have our best interests at heart.
And, despite the fact that there will have been by that time thousands of books and movies (fiction and non-fiction) predicting the bad consequences of putting such AI's (or whatever you want to call them) in charge of dangerous toys or in charge of us, it's going to happen as sure as the sun rises. Even if every government says it's not going to do it, it'll still be done secretly on the assumption that everyone else is doing it secretly. And it'll become an arms race, both in the weapons world and in surveillance (both business and government.)
Everyone will have an 'AI' assistant in their homes which will effectively know everything they do and say, and when, and to whom. People will happily pay $1000 a pop to install something that no government could ever get away with forcing them to install. And then everyone will immediately start working to hack them. Massive resources will be (and already pretty much are) used in the correlation of information across uncountable petabytes of flowing data, which will find everything you do online, as a consumer, on social media, etc... and ultimately in your own home. Everywhere you go you will be recognized by facial recognition systems. We won't drive our cars or fly our airplanes anymore.
Leaving aside weapons systems, most of these things will be happily adopted and paid for by us. Many of the people working on them or financing them will have motivations that range from nothing worse than a great interest in making them happen (just as with the bomb) to plain old-fashioned greed. But it'll all be a huge system of surveillance and control just waiting to be abused.
And they all will be eventually. That will be far, far too juicy a target or tool. Every government and business and criminal organization (where there's a distinction) will be going after these things full on. And the more powerful they become, the worse the consequences when they are compromised and misused. Governments and businesses will be working overtime to create these systems, and other people in government and business and crime will be looking to hack or misuse them. It's always easy to justify misuse of surveillance systems as patriotism, and it's always easy to justify building nastier weapons because you assume the other side is as well.
And of course we (the US) will likely be at the forefront of the development of nasty weapons and surveillance systems, as we always are. And it doesn't take even a little bit of cynicism to foresee weapons system responses being put into the hands of AIs that can watch for patterns in enormous streams of data (to respond to incredibly fast AI-driven attacks on many fronts from the other side.) We are very easily that stupid and paranoid.
Explorans limites defectum
modified 17-May-19 13:17pm.
Nah! That'll never happen.
*This message sent from my phone AI*
- I would love to change the world, but they won’t give me the source code.
Forogar wrote: Nah! That'll never happen. AI responds: "Hold my beer."
Cheers,
Mike Fidler
"I intend to live forever - so far, so good." Steven Wright
"I almost had a psychic girlfriend but she left me before we met." Also Steven Wright
"I'm addicted to placebos. I could quit, but it wouldn't matter." Steven Wright yet again.
My neural network crashed after the first paragraph.
I find your reasoning extremely depressing, your opinion of human nature is extraordinarily negative. Pity it is probably accurate.
Dean Roddey wrote: We won't drive our cars
The only bright side to this is that I will probably be dead before it becomes a reality.
Never underestimate the power of human stupidity -
RAH
I'm old. I know stuff - JSOP
And the crazy thing is, I don't think it really requires much in the way of actual 'evil' for all of these bad things to happen. Almost everyone involved could easily believe that they are doing the right thing, or at most just doing the same things we've always done, e.g. trying to make money, trying to get ahead in life, trying to protect ourselves and our loved ones, trying to do challenging things, being distracted from important issues by the previous issues, etc...
There will likely be some people who are actually evil, though even they may not think so, and may have fairly reasonable reasons for thinking not, same as there already are, more or less.
It just requires human nature. Most of our current problems, some of which are serious, are all pretty much the same. So many of them exist because of human nature. Some exist because of mother nature or a combination thereof. But lots of them are purely human nature with no one in the loop really doing anything that they consider wrong.
Explorans limites defectum
I agree with the thrust, despite my rather jaundiced view of the concept of human nature. I tend to share Emma Goldman's take on it. Nevertheless, we humans get up to the same old patterns time and again, but I think that's because we're agents in a Complex Adaptive System.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
For years the pinnacle of man's achievement has been the development of systems and weapons of complete destruction. Yeah, some other stuff got invented along the way, but think about it: our prime objective has been to blow sh*t up - the bigger the better.
Yet no one has ever taken that final step; we've always chickened out.
We spend billions looking for and sending crap into space to find some other entity to come and destroy us; hell, even the religious mostly look forward to their God coming to scrub this tiny speck of space dust away.
Alas, people are too weak to press the damn button, and no aliens or gods are showing up.
Our own destruction is what we've all always wanted. So why not build a machine to do it?
Lopatir wrote: Our own destruction is what we've all always wanted. So why not build a machine to do it? I don't remember who said it, but I find it a good complement to your statement.
Quote: Artificial intelligence might be the cure for human stupidity. The key here is... what is behind "cure"
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
MikeTheFid wrote: There are two camps:
1) AI is a potential existential threat.
2) AI is nothing to worry about; we know what we're doing and we can control it.
I live in the third camp, the camp of "It depends".
Context is important.
A pointy stick can be an existential threat or a tool for recording knowledge.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
I think that the whole two camps things misses the point. We may well control it. But that doesn't make me feel any better, because controlling it includes using it against us. Everyone assumes that the problem is that it takes over on its own, but that doesn't remotely have to happen for it to become very dangerous to us.
And it never even has to actually go out and DO anything to be dangerous. Its surveillance and data aggregation and pattern finding capabilities are more than scary enough moving forward, given how much information is becoming available about us on an ongoing basis. Again, that doesn't mean some super-computer takes over because it's spying on us; it's that humans are using these capabilities to spy on us, for any number of reasons.
Explorans limites defectum
I think it's both. Like nuclear technology, it elevates humanity but all of a sudden gives us more power than we can manage responsibly.
With AI there is the further hitch that we are arguably creating fully sentient (or sentient enough) life forms, with wills of their own.
Morally, the ramifications are huge no matter where you come down on the particulars.
But at the same time, as big as the change would be for humanity, I don't think it changes human patterns. We'll keep repeating the same old mistakes we all do, and the world will go on, with AI as a component of it.
Do I fear something like the matrix? Not really. Or I should say, I feel I have as much to fear from AI as we do from our current global arsenal of weaponry. Particularly nuclear.
But like any Complex Adaptive System, human community exists constantly and even thrives always on the precipice of disaster. We're one major superbug from a mass extinction reboot. But here we are. We've survived several global conflicts, one notably nuclear. We've survived the plague, we've survived numerous sackings and burnings, not just of our empires, but our knowledge bases like the Library of Alexandria. Here we are. Most of what we identify with and as still intact over the years, as different as it is the same all those centuries past. With shiny new novel ways to make old mistakes.
AI is just another one, but probably, as nuclear was, one that dwarfs all before it in scope and ramification.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
Most things are a potential threat, but AI is not one of them; it does not exist yet. Smart analytical systems with defined adaptivity, maybe, but intelligence is just not there, nor is there any original thinking or creativity.
AI in its current state is derivative and therefore cannot be a threat. The pillocks using the derivative tools, on the other hand, probably are.
Never underestimate the power of human stupidity -
RAH
I'm old. I know stuff - JSOP
What if it was a threat - just not an existential one, but a societal one?
More likely, it might simply affect society in possibly unpleasantly disruptive ways...
As a person studying and working with AI my view changed from "AI is possibly a threat" all the way to "There is nothing to worry about, ever".
First, I learned that AI is a little bit more mechanical than I anticipated. And we have already been using autonomous mechanical systems for years now (in practice, HTTP servers require little to no supervision after the start command, but fear of an HTTP server is irrational).
Second, we have the validation issue. A system with no validation is just random programming with undefined behavior. In all known cases that leads to an unhandled exception and termination. In all cases with validation, AI tends to do what it has been programmed for, and nothing more. Even a "self-aware" AI tends to do nothing by default, or behaves like an expensive random number generator if emergent behavior is available. In other words, a self-aware AI that wants to kill humanity is only possible if you analyze, validate, and train an AI to kill humanity, and then test and reiterate until it stops failing at that command. It cannot be an emergent behavior.
Third, "self-awareness" is overrated. In fact, this is the scariest part: intelligence is far more of a mechanical process than I anticipated. This gives rise to the scariest field, "social engineering". It is not that a machine would harm you, but that a person who understands how the machinery of the human mind works can use that to gain control over a targeted person's behavior.
Therefore, you should not be scared of a specialized self-aware AI that has been validated at a higher level to drive a car, translate from another language, create a song, and so on. You should be scared of the people who know what pattern of sound can cause production of a certain type of hormone to affect your mood, and so on. A self-aware machine has predictable behavior (or breaks down otherwise); self-aware humans do not.