The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious, or whatever) will result in your account being removed.
The Beer Prayer - Our lager, which art in barrels, hallowed be thy drink. Thy will be drunk, I will be drunk, at home as it is in the tavern. Give us this day our foamy head, and forgive us our spillage as we forgive those who spill against us. And lead us not to incarceration, but deliver us from hangovers. For thine is the beer, the bitter and the lager, for ever and ever. Barmen.
In my student days, I bought a book for one single reason - its title: "Machines who think".
Considering how long ago that is, I am not holding my breath while waiting for the self-aware machines.
If you really want to lose sleep over such issues: pick up some of the SciFi novels by James P. Hogan, such as "The Two Faces of Tomorrow" or "Realtime Interrupt". "Two Faces" is from my student days as well ("Realtime" is more recent), but Hogan had the top AI experts at C-M and MIT review his manuscripts: even today they hold water, seen from a professional perspective. Obviously, we have extended our understanding since the books were written, but the knowledge on which the books are built is essentially still "correct". Both books are highly recommended.
It seems like we are in a moment similar to the one just after the Manhattan Project produced the first nuclear bombs.
And that's the difference. We had nuclear bombs.
AI? Give me a break. Show me something that actually can be described as artificial intelligence --
something that can perceive the world, contemplate an action, and have the means to interact with the physical world to implement that action. And implement it in a way that poses a threat to anything (but you won't get past the first condition).
What, are all those self-driving cars going to suddenly join Lyft and go on strike?
Even the tragic Boeing crashes are not an AI running amok but a poorly programmed expert system. As in, some intelligence on the plane didn't suddenly say, "hey, let's go kill some people."
There is no AI. There is no "Intelligence" - sure, we have extremely limited systems that can learn and adapt, that require huge training sets that result in a complex weighted network. You call that thinking? You call that intelligence? A worm is smarter.
But that's not true for neural networks. They aren't programmed, they are trained, and they aren't nearly as deterministic as coded programs. They are working on fuzzy logic the same as we do, and they can make mistakes like we do.
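To make the "trained, not programmed" point concrete, here's a minimal sketch: a single artificial neuron learning the AND function with the classic perceptron update rule. The AND truth table is just illustrative training data; the point is that the update loop contains no knowledge of AND at all — it only nudges weights toward whatever the examples say.

```python
# A single neuron "trained, not programmed": no AND logic appears anywhere
# in this code, only a generic weight-update rule driven by example data.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND truth table

w, b, lr = [0.0, 0.0], 0.0, 0.1          # weights, bias, learning rate
for _ in range(50):                       # a few passes over the data
    for x, target in data:
        out = 1 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else 0
        err = target - out                # perceptron learning rule:
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err                     # nudge weights toward the target

def predict(x):
    return 1 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else 0

print([predict(x) for x, _ in data])      # learned AND: [0, 0, 0, 1]
```

Swap in a different truth table and the identical code learns that instead — the behavior lives in the trained weights, not in the program.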
But it's not. You should bone up on DNNs a bit more. There is zero problem-domain knowledge coded into a DNN. It's just a set of level-driven nodes, just as our brain's neurons are. There can be problem-domain-aware code around a DNN to do other parts of the job, but the DNN is NOT just doing something it was programmed to do.
It doesn't matter if you consider it intelligent or not. The fact is it will take in lots of information and generate a choice not based on being told what choices to make and not based on any inputs it has ever seen before. And, like a human, it can make mistakes similar to how we make them, not off/on right/wrong mistakes but fuzzy mistakes.
The code doesn't ALLOW anything. That's sort of the point of DNNs. They aren't programs in the sense that most programs are. They are more like meta-programs. The program is just the pipes through which the data flows. The decisions are not made by those pipes; they're made by how the data flowing through those pipes interacts with itself, which is why it can deal with information it's never seen before.
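A minimal sketch of the "pipes" idea, with hand-picked (not trained) weights for illustration: the `forward` function below is fixed plumbing that never changes, yet the same plumbing behaves like AND or like OR depending purely on the weights the data flows through.

```python
# The code is just pipes; the behavior lives in the weights.
import math

def forward(inputs, weights, biases):
    """One dense layer: weighted sums squashed by a sigmoid."""
    outputs = []
    for w_row, b in zip(weights, biases):
        s = sum(x * w for x, w in zip(inputs, w_row)) + b
        outputs.append(1.0 / (1.0 + math.exp(-s)))   # sigmoid activation
    return outputs

# Same pipes, two different weight sets -> two different behaviors.
# (Weights are hand-picked here to make the point; normally they'd be trained.)
and_weights, and_biases = [[10.0, 10.0]], [-15.0]    # acts like AND
or_weights,  or_biases  = [[10.0, 10.0]], [-5.0]     # acts like OR

print(forward([1.0, 0.0], and_weights, and_biases))  # near 0: 1 AND 0
print(forward([1.0, 0.0], or_weights,  or_biases))   # near 1: 1 OR 0
```

Not a single if/else in `forward` mentions AND or OR; the decision emerges from how the inputs interact with the weights.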
It doesn't matter if it's alive or really 'thinks' by your or my definition of what that means. The fact is that it can make decisions much more in the way that we do than like a software program does. They aren't anything alike really.
That means it can be used for things that regular software programs cannot hope to do. And those things it can do very well are things that are potentially very dangerous to us, because human nature will ensure that we use them thusly.
So, here's a good example of why you are mistaken. I challenge you to write a program that can recognize any picture of a banana with high accuracy. You will find that that is very difficult. And, when you are done, you will have a program that only recognizes bananas. If you need to recognize something else, like stock manipulation patterns, you will have to write a different program that will also be very difficult.
DNNs don't have to be changed to do different jobs like that. That's a fundamental difference. The same algorithm can recognize a banana or find patterns in financial transactions or understand written characters or recognize sounds in spoken words, without any changes.
That's because it's not a program of if/elses that you write. It's a program that accepts data and lets that data interfere with itself in ways that create a pattern giving a confidence level that the input represents this or that. It's nothing like a bunch of if/else statements making hard-coded decisions. Nowhere in there is any code written related to 'is this a banana?' at all.
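The "confidence level" part can be sketched in a few lines. Classifiers typically end with a softmax that turns raw scores into confidences summing to 1; the class labels and score values below are made up for illustration, standing in for whatever a trained network would output.

```python
# Sketch of the "confidence level" idea: raw network scores -> confidences.
import math

def softmax(scores):
    """Turn arbitrary real-valued scores into probabilities summing to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["banana", "apple", "other"]   # hypothetical class names
scores = [3.1, 0.4, -1.2]               # hypothetical raw outputs of a network
confidences = softmax(scores)
best = max(zip(labels, confidences), key=lambda p: p[1])
print(best)  # the highest-confidence label wins; here it's "banana"
```

Note there is still no 'is this a banana?' code anywhere — "banana" is just the label attached to whichever output node the trained weights happen to light up.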
It doesn't make any difference whether it's 'alive' or 'intelligent' at all in terms of the practical impact it's already having on our lives and the vastly larger impact it will have in the future.
Thought I'd interject to say the question of sentience has been a matter of some debate in the philosophy circles I run in, in large part because of AI being on the horizon.
I think reasonable people can disagree, as there are certain grounding assumptions we all have to deal with here in terms of the question of what makes us human, what it even means to think, or to engage in, say, philosophy.
As for me I'd suggest that anything that is a convincing enough illusion of The Real Thing(TM) (whatever that happens to be) is as good as the real thing for any meaningful intent and purpose.
For example, for all I know, we don't have free will either. It might be possible to develop a way to plot my next thought or move. Maybe I'm a calculation in a simulation. But it doesn't matter. Because I have the illusion of will, and it's a compelling enough illusion that it may as well be (to me) the real thing.
So I'd suggest here, that at a certain threshold, we might accept that a computer "thinks" as any other sentient being might, or even as a human might.
I don't know if that can be done in silicon reasonably, but I'm entertaining a hypothetical here, if you'll humor me that.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.