You're thinking too near into the future... I'm thinking of an AI supercomputer writing all software... There will be no 'strapping young fellows', as they won't be required; the AI supercomputer will write all the code. You and I and all software developers will become a thing of the past, replaced by a supercomputer. To the supercomputer we will all seem like burger flippers. Hell, it will probably turn all of us into 'burger flippers'... E.g. Project Manager: "Hey guys, the AI supercomputer just created us a new software suite for the hardware we asked it to build. We need to test what it built to make sure that everything works as it's supposed to..." And all of us will have been demoted to burger flippers.
I will adapt, sure. I adapt quite easily. I'd be the one writing those scripts initially. I already write lots of automation scripts for tedious burger flipping tasks.
I just think it would be dumb to build something that makes your own job obsolete and makes someone else rich, if it is in your power not to do so. It won't be in my power alone, but if all developers were able to refuse something like this, it would be possible. Unfortunately, I feel the chances of that happening are zero, as we tend to do whatever work someone pays us to do.
P.S. No offence was intended towards any burger flippers or testers. I myself started off as a burger flipper.
"Program testing can be used to show the presence of bugs, but never to show their absence."
<< please vote!! >>
modified 26-Jul-16 4:36am.
When I look at AI, I draw on my experience with automation. If I look at automation as a whole, on the negative side I see people losing their jobs or being turned into burger flippers.
E.g. (burger flippers)
We have cashiers in our country whose cashier tasks have been automated... They still work behind the cash desk, but the customers insert the money into a machine and automatically get their change back from the machine... All the cashier now does is ring up the items, which she was doing in any case. She now gets less value out of her job than she was already getting.
E.g. (job losses)
The car industry, the mining sector... nothing more needs to be said about these, except that there is no way for all of those who lost their jobs to find new ones.
On the positive side (and this is how I view automation):
It frees up your time to focus on the real issues, or on the things there was no time to look at previously, and saves you from having to do the tedious tasks. It makes it possible to finish something quicker, with fewer errors, and more cost-effectively (for someone else).
Now, if I have to weigh the positive against the negative... I have to say that before I can choose the positive I have to consider the negative, because these are people's livelihoods we are talking about. If you want to go that route, make sure that those who lose their jobs can start working in something else... e.g. research? But the reality is that this will never happen, as the main drive behind any idea is making $$$, not making people's lives better.
"Program testing can be used to show the presence of bugs, but never to show their absence."
<< please vote!! >>
Interesting, and I kind of get where you are going with this (a new system required for a new era). I like the rule of "unless safety is an issue (humans are error prone), it should be manual", although I do believe it should be a choice. As a rule of thumb, one should definitely be compensated for working in dangerous environments, e.g. "danger pay" as you get in many companies, except the mining sector in certain countries for some reason.
"Program testing can be used to show the presence of bugs, but never to show their absence."
<< please vote!! >>
Only humans are intelligent. Artificial intelligence is not really intelligence; I'd say the correct name for it is pseudo-intelligence. It only looks like intelligence. People who say they are making intelligence are arrogant, are they not?
I agree. Anything that cannot be resolved by deterministic algorithms gets resolved by pattern matching, fuzzy logic and even random logic. That is not intelligence, it is a lottery.
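For what it's worth, the fuzzy logic mentioned here can be sketched in a few lines (the names and thresholds below are my own illustrative assumptions, not anything from this thread): instead of a hard true/false test, a membership function grades how strongly an input belongs to a category.

```python
# Minimal fuzzy-logic sketch: membership values in [0, 1]
# replace hard true/false decisions.

def membership_hot(temp_c):
    """Degree to which a temperature counts as 'hot' (0.0 .. 1.0)."""
    if temp_c <= 20:
        return 0.0
    if temp_c >= 35:
        return 1.0
    return (temp_c - 20) / 15.0  # linear ramp between 20 and 35 C

def fan_speed(temp_c):
    """Map the fuzzy 'hotness' onto a fan speed percentage."""
    return round(100 * membership_hot(temp_c))
```

Whether you call this intelligence or a lottery, note that the output is fully deterministic; the "fuzziness" is only in the graded membership, not in any randomness.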
It's not random logic. In many ways, it's also how our brains work.
/ravi
Our decisions are never random unless we abdicate them to an external device such as a die. They are always a complex result of thinking, emotion and free will. A machine may achieve thought production and emotion simulation, but what about free will? You could simulate free will by rolling a die, but you would be creating a monster.
john morrison leon wrote: Our decisions are never random What gave you the impression I implied they were?
/ravi
The juxtaposition of a negative with an affirmative parallel. Yes, I probably misread what you were saying.
Yes, it is never compared to human intelligence, and hence it is named artificial.
RJ_Zed wrote: It is never compared to human intelligence I disagree. People often think AI will replace all humans and become more intelligent.
There are only 10 types of people in the world, those who understand binary and those who don't.
@Ryann, we are the creators of AI; we just feed the info to them, and they just adapt to what we teach. There is a possibility of them overcoming us, but we are their creators. We have the ability to do such calculations, but we don't.
RJ_Zed wrote: they just adapt to what we teach No. Again, all computers do is follow exactly what they have been programmed to do. They cannot decide or adapt.
There are only 10 types of people in the world, those who understand binary and those who don't.
Humans have been creating systems with "basic intelligence" for decades, perhaps centuries. Any system where the outcome is used to moderate behaviour may be considered intelligent; think of negative feedback in an amplifier, for example.
However, "advanced intelligence" requires the ability to imagine future outcomes and thereby develop better behaviour for both new and old situations, largely avoiding trial and error. By that standard even we humans are lacking!
N_tro_P wrote: The truth is we have been doing AI for a while. Exactly. I find the new-found fancy that the mass media (and some VC companies) have taken to AI silly. Having worked in the field (specifically, expert systems) for about 30 years, I find that many critics of AI are unaware that several aspects of our daily lives are already impacted by AI algorithms.
/ravi
Maybe Webster's has a very restricted definition of the word intelligence. You say they are highly intelligent, but I say they give the impression that they are highly intelligent. They don't really think or understand anything.
N_tro_P wrote: What is a processor doing if not thinking? It's moving bits of voltage around because it is told to. It does only exactly what it is told. No thinking. A computer never thinks. It cannot. It is the dumbest machine in the world; it only does what it is told, period. Nothing more, nothing less.
There are only 10 types of people in the world, those who understand binary and those who don't.
You are right; my understanding of the subject was poor.
I have found articles on the internet explaining the meaning of intelligence and how artificial intelligence can be called intelligence.
As you say, intelligence does not require a system to be self-aware.
I think the English meaning of intelligence does not include understanding either.
Member 12651605 wrote: They don't really think and understand anything. And what gave you the impression they did?
/ravi