|
To be honest, I feel like this is going to be one of those things that sounds great in theory, but in practice we're going to end up with a really expensive, not-that-great machine.
There are many things besides "raw intelligence" that make us human and give us leverage in this world/universe. The entire premise is that if we make something "so intelligent," it's going to be the end-all be-all, when the reality is it's going to be seriously lacking, lol. I mean, look at any AI we currently have... These guys can't even make an AI work flawlessly in a video game, and they're saying that in 20 years they're going to have basically reproduced a human... and then, a few days after that, have some sort of god. However you word it, the notion is still that humans, who are flawed and limited in intelligence, are somehow going to create something unflawed and able to exponentially increase its intelligence, when it is in fact created by humans, based on resources given to it by humans, in a flawed way... within a short amount of time... Really??? I find the fact that all these "smart people" believe this is really going to happen scarier than the idea itself. But I have a feeling it has to do with getting funding to play with the latest toys at the office.
What about the time it takes to trial-and-error things? What about stupid flaws programmed into it by humans, such as the Tesla car slamming straight into a white object? I mean, the possibilities here are endless. And yes, you could absolutely argue that just a few years back, cell phones would have seemed laughable. But cell phones also aren't claiming to be some sort of artificial higher intelligence that all humans consult. I mean, Google can't even get my driving directions right half the time, and Facebook is always trying to get me to add the most annoying, irrelevant people to my contacts.
At the end of the day, these stories sound like cool movies, but humans have been trying to make/reach their own god for many many years. This sounds like nothing but a good movie and a 21st century Tower of Babel.
Remember Y2K? Another thing that sounded one way in theory, but in practice ended up being nothing like what the media made it out to be. One of the tragic flaws of the human race is that we are constantly trying "to understand," and we fail to recognize that some things "just don't compute" for us... Not everything can be understood by our brains. Certain things in the emotional and spiritual realms are particularly impossible to "understand." This means that any potential AI is going to be stumped as f*** on these things and unable to properly mimic/teach/understand them as well.
I have a very scientific brain and enjoy the sciences and computers... But I'm also quite learned in the spiritual/self-help realm, and one of the most important lessons you learn there is that some things simply "cannot be understood" by the human mind. True, you could argue that "we should still try" and not give up, because hey, somehow we understood electricity, right? That may be so, but my point is that any algorithms made by ants to try to become as smart as people are still algorithms made by ants, fundamentally built with ant-smarts, ant-knowledge, and ant-limitations... even if they can "improve themselves."
What sounds a lot more reasonable/pragmatic to me is that AI will continue to improve... But 15 years and we're going to be reproducing humans? Get outta here.
|
|
|
|
|
Yeah, but nature iterated from ant-level intelligence to human-level intelligence somehow, without needing a greater intelligence to create us, didn't it? Nature basically said, "modify yourself with each generation, and the creature that is best at surviving continues on and repeats the process." A similar process can be programmed into a computer, and the result could be more intelligent than the inputs.
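The "modify yourself each generation, keep the best survivors" process described above is essentially a genetic algorithm. Here is a minimal toy sketch of that loop in Python; the bitstring genome, the `sum` fitness function, and all the parameters are purely illustrative, not anything from a real AI system:

```python
import random

def evolve(fitness, genome_len=20, pop_size=30, generations=60,
           mutation_rate=0.05, seed=0):
    """Minimal genetic algorithm: survival selection plus mutation
    over bitstring genomes."""
    rng = random.Random(seed)
    # Random starting population of 0/1 genomes.
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # "The creature best at surviving continues on": keep the fitter half.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        children = [[1 - g if rng.random() < mutation_rate else g
                     for g in parent]
                    for parent in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: count the 1s, so the optimum genome is all 1s.
best = evolve(fitness=sum)
```

Because the fitter half always survives unchanged, the best fitness never decreases, and over enough generations the population climbs toward the all-ones optimum without anyone specifying that answer in advance.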
You are correct that many things are beyond our understanding. The whole idea behind the singularity is that we actually won't understand what the AI is doing after we let it loose; our minds will indeed be incapable of completely understanding what it is doing. If it's programmed with the proper "seed" goals, such as "maximize the happiness of humans without bringing harm to any living creature" (this is very simplified, and AI researchers are split on whether something like this can be programmed effectively, but I think it's possible), then the result will theoretically be a bunch of happy humans, without us actually understanding the mechanisms by which the AI accomplishes the task.
What I think is much more interesting than letting an AI loose like this is slowly augmenting and replacing our brains with modules that interface directly with technology. Then we slowly, piece by piece, become the AI ourselves. This will lead to a much more controlled ascent into the singularity, but with this approach we bring all our human flaws into the process as well. Could be good, could be very, very bad.
What I'm hoping is that when we all begin to connect our minds together in this way, the vast amount of information and processing directly available to our expanded minds, along with the ability to eventually transmit ideas and thoughts directly between each other, will result in a beautiful new age of empathy and understanding, to the point where our human flaws are minimized and eventually disappear as we become one. What a lovely way that would be to meet the end of the universe.
|
|
|
|
|
And remember, someone has to write the original in such a way that it can improve itself.
And I haven't seen any "ND SUPR CLEVA AI. SND CDZZZ!!!!" in QA.
Yet.
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
|
|
|
|
|
OriginalGriff wrote: "ND SUPR CLEVA AI. SND CDZZZ!!!!"
I think you'll find that would be an example of artificial stupidity.
veni bibi saltavi
|
|
|
|
|
Are you claiming that we will only achieve Artificial Super-Intelligence when AI programs learn to create an account on CodeProject, and post questions on QA?
If you have an important point to make, don't try to be subtle or clever. Use a pile driver. Hit the point once. Then come back and hit it again. Then hit it a third time - a tremendous whack.
--Winston Churchill
|
|
|
|
|
No. We will only achieve it when OG answers those questions!
Skipper: We'll fix it.
Alex: Fix it? How you gonna fix this?
Skipper: Grit, spit and a whole lotta duct tape.
|
|
|
|
|
I'm not sure if I should or ...
|
|
|
|
|
You should be proud that the future (or even the existence) of an artificial super-intelligence depends on you!!!
|
|
|
|
|
Is that because you realize, now, that based upon the powers and abilities of OG:
1 - we are all saved!
2 - we are all doomed!
Ravings en masse
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"If you are searching for perfection in others, then you seek disappointment. If you seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010
|
|
|
|
|
One of the grandiose ideas running around in the current grandiose AI meme-verse is that at some point the self-replicating, self-designing/modifying, "entity" reaches a point of complexity where a "singularity" occurs, a singularity which results in the entity having some analogue to what we refer to (but, can never fully explain) as "consciousness."
And, if that happens, why should that new consciousness be like that of its human "ancestors"?
I like the idea that the super-conscious AI entities of the future will determine, correctly, that human beings are a destructive parasite, the most toxic on the planet, and a threat to all other life-forms and the planetary ecology, and will decide to keep a few humans around as pets, or in a zoo, but, will, as is only logical, discard the rest to make compost, or something useful.
They will look on their human creators as messy analog wetware that, surprisingly, created something that could replace them, something much more moral, ethical, and efficient.
For now, I am a skeptic about such prognostications by Kurzweil, Hawking, et al., and since I most likely won't be around in 2030, I won't have a chance to see how this plays out further; but I would be surprised if some mind-blowing things don't happen in the next thirty years.
«There is a spectrum, from "clearly desirable behaviour," to "possibly dodgy behavior that still makes some sense," to "clearly undesirable behavior." We try to make the latter into warnings or, better, errors. But stuff that is in the middle category you don’t want to restrict unless there is a clear way to work around it.» Eric Lippert, May 14, 2008
|
|
|
|
|
TheOnlyRealTodd wrote: How would that machine now suddenly dive into an entire other dimension of intelligence?
Well, maybe it would read posts like yours (and the whole spectrum of literature on intelligence) and start wondering what other intelligence is out there. But wondering requires imagination/curiosity.
Marc
|
|
|
|
|
So after three weeks of research, the author claims "What's happening in the world of AI is not just an important topic, but by far THE most important topic for our future." Well, good for you, Tim.
I've been involved with AI (specifically expert systems) for almost 30 years, and while I'm very pleased with AI-related advances in hardware and software, I must admit I don't share the author's optimism about self-learning systems whose intelligence will exceed that of man. I find it amusing that although commercial AI applications have been around for 40+ years, it's only recently that the mass media seems to have taken notice of the field.
Personally, I wish the term "AI" had never been coined. IMHO it's too broad and too often conjures up flights of fancy for journalists who seem to have stumbled upon the collective term for technologies such as rule based systems, image processing, robotics, machine learning, virtual reality, NLP, game theory, etc.
/ravi
|
|
|
|
|
Look, if you're really worried about our future machine overlords (whom I welcome), all you have to do is give the machines gender.
If the greatest achievement-killer for humans doesn't stop them, nothing will.
I wanna be a eunuchs developer! Pass me a bread knife!
|
|
|
|
|
Another option to destroy any hope of its dominance: give it internet access to expand its knowledge base.
Death by IPv4 (IPv6) cuts!
|
|
|
|
|
Having human problems that create human pain is what motivates human solutions that humans value. An AI with AI problems doesn't seem relevant. Lions and humans both have pain and suffering in their design. Intelligence is only one part of the complete package, which must include empathy for the host of problems there are to solve using intelligence. "Solve all suffering = destroy all life" is pretty intelligent, but not so empathetic.
|
|
|
|
|
|
Politicians are natural robots because they don't have any feelings.
I like what Mark Twain said, better: "Politicians are America's only native criminal class."
|
|
|
|
|
So natural born criminals are the most natural leaders...
Very optimistic view
|
|
|
|
|
Kornfeld Eliyahu Peter wrote: So natural born criminals are the most natural leaders...
Shalom Kornfeld, Well, I don't know; does the criminal person beget the role, or does the role beget the criminality in the person?
Yep, getting older has cured me of (political/collective) optimism, and, if it hadn't, I couldn't keep the jot and tittle of sanity I have ... left.
cheers, Bill
|
|
|
|
|
BillWoodruff wrote: Politicians are natural robots because they don't have any feelings.
Did you get that straight from Monica Lewinski's lips?
|
|
|
|
|
BillWoodruff wrote: I like what Mark Twain said, better: "Politicians are America's only native criminal class."
I'm a fan of another of his statements on politicians: "Politicians and diapers should be changed often, and for the same reason"
|
|
|
|
|
|
|
Inspired by BW's experience with MSI, but less eloquent.
With my imminent return to Oz I have had to deal with Telstra, like CG they are the only viable telecoms in Cairns. 1st step was to get the cheapest phone possible and a temporary modem, buying them only takes money so it worked perfectly.
The nightmare starts when you actually need to deal with their web site. When you purchase a burner phone you need to give ID and email details (they don't want these phones used by criminals, after all), and this registers you in their system.
So now I need to add credit to the devices, so I log on to their site.
Enter email and password - I did not give them a password when I registered at the shop!
Get lost password - enter email and DOB - email and DOB do not match existing in the system.
Get lost user id - repeat previous step
Notice a chat option and connect with a robot
After a bunch of canned responses I get a human.
Andrew has the ability to grab canned responses from a select and add personal touches to the response - cheerily informs me that he'd love to help.
I mention the modem and he instantly passes me to another team who deal with hardware, completely ignoring the fact that I still need to log on - his area of incompetence.
After waiting for 20 minutes for a response (something about a new release of the iPhone overloading their system) I give up in disgust.
Unlike BW mine is not a happy ending as I still can't log on to their site.
I hate Telstra - CG come back I need someone to commiserate with!
Never underestimate the power of human stupidity
RAH
modified 17-Sep-16 18:35pm.
|
|
|
|
|
Mycroft Holmes wrote: CW
...?[^]
What do you get when you cross a joke with a rhetorical question?
The metaphorical solid rear-end expulsions have impacted the metaphorical motorized bladed rotating air movement mechanism.
Do questions with multiple question marks annoy you???
|
|
|
|
|