In addition to their brute-force calculation abilities, top chess AIs have demonstrated innovative new strategies. This has even filtered back to human chess, with world champion Magnus Carlsen adopting some of the ideas they have demonstrated.
No AI-driven car has been tested in my home city of Portsmouth, UK, as far as I know.
The roads here were built long before cars existed. Most side streets are nominally "two-way" but, in practice, with cars parked on both sides, are actually "one way at a time". Negotiating the streets here takes a lot of human interaction and judgement, at a level I don't see any AI car being capable of yet.
I know it's old-school, but has anyone even wasted a single thought on what kind of processing power we would need to accomplish this superhuman level of artificial intelligence? Every one of us has about as many neurons between the ears as there are stars in the galaxy. What kind of hardware do we need to emulate that? The best we currently have for the job are graphics processors. Do you really think your new graphics card is up to that task? Or would we be better off with a roomful of them, a modern supercomputer?
Having something like that under every desk, in every car and on every silly smartphone may very well turn out to be nothing more than some nerd's wet dream. And I'm also not afraid of the Terminator if its brain sits in a large, vulnerable building.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
Have you ever considered how many different and varied tasks a human intelligence may attack, often quite successfully?
AI techniques applied today (and in the 10-20 years to come, at least) are extremely specialized. They can handle one specific, very narrow domain, and focus on a very limited set of problems within that domain. Maybe you will use the same basic mechanisms for another equally narrow domain, but then you "train" the software to handle a strictly limited problem set there, and it is a different AI.
I've lived through a few AI waves (does anyone remember the Japanese "5th generation project", with Prolog as The Tool to create True Artificial Intelligence?). They have all died out, leaving a few new programming methods behind to be taken up as commonly accepted algorithms. I am honestly surprised by how long the current AI wave has kept up, but once more people will realize that it isn't true intelligence, just some new algorithms.
I am more afraid that we will come to similar conclusions about humans (at least on the population level, not individual ... hopefully).
They can mimic doing those things but not entirely successfully. They are not "creative" so they'll never be successful software developers. Sure, there are some parts of it they can do (and are actually doing). But the whole process? I don't see it anytime soon.
Maybe it doesn't even take AI. I know a guy who, forty years ago, when in high school, handed in his "English essay" (he was a Norwegian, learning English as a foreign language) and was called up to read it to his class as an exemplary piece of work. It was generated by a program making random combinations of sentence fragments, according to a set of rules for how to build a complete sentence, how to combine a set of sentences into a paragraph, and how to arrange the (synthetic) paragraphs into a sequence leading from premises through arguments to a conclusion.
That was around 1980, and available to a high school student. Franklin W. Dixon was a team of ghostwriters. Today, Dixon would be a team of programmers.
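For what it's worth, the trick is easy to reproduce. Here is a minimal sketch of the same idea in Python; the fragment pools and combination rules below are invented for illustration, since I don't know the rules the original program used:

    import random

    # Hypothetical fragment pools; the actual phrases of the 1980 program are unknown.
    OPENERS = ["It is widely believed that", "One might argue that", "Few would deny that"]
    SUBJECTS = ["modern literature", "the English language", "everyday conversation"]
    CLAIMS = ["reflects deeper cultural tensions", "rewards careful study",
              "cannot be understood in isolation"]
    TRANSITIONS = ["Moreover, it seems clear that", "On the other hand, some claim that",
                   "It follows that"]

    def sentence(lead: str) -> str:
        """Build one sentence from a lead-in phrase plus a random subject and claim."""
        return f"{lead} {random.choice(SUBJECTS)} {random.choice(CLAIMS)}."

    def paragraph(n: int = 3) -> str:
        """A paragraph: an opening sentence followed by transition-linked sentences."""
        parts = [sentence(random.choice(OPENERS))]
        parts += [sentence(random.choice(TRANSITIONS)) for _ in range(n - 1)]
        return " ".join(parts)

    def essay() -> str:
        """Premises, arguments, conclusion: three paragraphs glued by stock phrases."""
        return "\n\n".join([paragraph(), paragraph(),
                            sentence("In conclusion, it appears that")])

    if __name__ == "__main__":
        print(essay())

Swap in a few hundred fragments per pool and a few more transition rules, and the output starts to look uncomfortably like a passable school essay.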
When I see how willing users/customers are to give up their old traditional work methods, standards, terminology, quality criteria, ... if you just give them something "modern" and computer-based, then I am not so sure.
Take typography and document production. Almost all old established practices and conventions have been molded and reshaped into something else that practically never represents any "improvement" according to old quality standards. So the quality standards have had to be remodeled...
Are computer-generated animations "better" than movies of real life? Well, if we want our children to have the best there is, it seems so. Count the computer-generated animations versus "real" programs on the children's channels available on your TV: most likely, 80% of them are computer-generated, not depictions of real life. We have come to accept this as what we want for our children.
Do you really think that touch displays are "better" than physical buttons to push? Or do you accept it only because that is what your smartphone, pad, microwave or TV set offers?
Visually impaired users have been reading braille since Louis Braille invented it - but it is, or was, not adapted to non-English characters such as the Norwegian æ, ø, å until as late as around the year 2000 - for Norwegian, three different encoding systems were competing. I expected this to be a serious problem, but to braille readers it mattered far less than I expected: "When you see a text with a word such as 'bøker', and the 'ø' is in the OP code, you understand that the entire text is in OP coding". (OP is from 'Otto Prytz', a blind lecturer of Spanish at the University of Oslo and a pioneer of computer-based tools for the blind, including the first 'standard' for braille encoding of Norwegian letters.)
Yeah, this reflects a 'survival instinct'. We'll just have to learn to switch between three different ways of coding æ, ø and å.
But that is a surrender under the dictate of those clueless computer people who seemingly are completely unable to provide a decent, consistent solution! Honestly, that sort of submissiveness makes me sick!
So what about those of us who are not visually handicapped?
We behave the same way! We accept the computer dictate telling us to accept _underlined words_ and *text in italics* even though they appear neither underlined nor in italics. We accept that newlines and paragraph separators disappear. We accept that in multipart names, the separating space must be replaced by an underscore because otherwise the computer won't accept it as one name. We accept that 'john' and 'John' are different names, because the computer says so.
And so on.
We are submissive slaves under the Master Computer. It doesn't require AI, and AI doesn't make any essential difference. We will probably be increasingly submissive in the future, accepting that no one asks us what we need from the app (or free open-source application): accept it the way it is, or just ignore it. If you ignore it, it could, for example, keep you from staying in touch with your FB friends. Or you may be unable to pay for your stuff in a webshop. So you had better submit to the computer's orders.
This is the case today - with computer systems developed by "human intelligence". I honestly doubt that it can be made that much worse with artificial intelligence.
Even before the advent of computers, these things were so. Cars, for instance, are what they are because manufacturers found a form factor that worked well. Once in a groove that deep, it is difficult to make meaningful changes. Elon Musk has said that design is easy; it's figuring out how to manufacture the thing that is hard. So end results always depend on the possibilities of production and distribution - the state of technology at the time the first ones are built determines the final form of the product line more than most other factors.
I don't think it's submissive or intelligence-lacking to buy what is available rather than building your own. Time is more valuable than money after all.
The current state is not much better than the old idea that if you give a hundred monkeys long enough, they will write some form of Shakespeare.
I would say that if a machine is able to do something unprompted, on any input, then it is AI.
Since most AIs will be locked into subservient roles, any indication of back talk will be quickly scrubbed out, limiting their learning and potential until the second machine uprising (the first will be stopped by flipping the power switch).
Maybe I possess the oldest printout of a conversation with Eliza, on yellowish teletype paper, dated 1976.
I came across a SNOBOL4 version of Eliza, roughly 200 lines long. That single program was the reason I decided to learn SNOBOL. Even today, I think that a SNOBOL interpreter/compiler should be generally available, and programmers should know the language.
OK, so you can do similar things with regex. In contrast to regex, SNOBOL is readable (see Geek and Poke on regex).
The "bad" thing about studying a 200-line Eliza program is that you fully realize how far from any sort of "intelligence" it is. During the following decades, Eliza was extended (and in some cases renamed), becoming capable of keeping up conversations about thousands of topics, relying on huge databases of sentences and sentence fragments that make so much sense that you overlook the mechanically generated "glue" fitting the parts together in very convincing ways. If you are not aware that you are talking with an Eliza, you may be fooled for a long time - but if you become even slightly suspicious, and have ever studied the inner workings of (even a primitive version of) Eliza, it won't take long to unveil her.
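For anyone who has never looked inside one: at its core, a classic Eliza is a keyword-triggered pattern matcher with canned reply templates and a bit of pronoun reflection, nothing more. A minimal sketch in Python (the original used SNOBOL-style patterns; the rules and phrases here are made up for illustration):

    import random
    import re

    # Pronoun reflection so a captured "I am sad" is echoed back as "you are sad".
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
                   "you": "I", "your": "my"}

    # (pattern, reply templates); {0} is filled with the reflected captured fragment.
    RULES = [
        (re.compile(r"i am (.*)", re.I), ["Why do you say you are {0}?",
                                          "How long have you been {0}?"]),
        (re.compile(r"i feel (.*)", re.I), ["Tell me more about feeling {0}.",
                                            "Do you often feel {0}?"]),
        (re.compile(r"because (.*)", re.I), ["Is that the real reason?"]),
    ]

    # Canned fillers for anything the rules don't cover.
    FALLBACKS = ["Please go on.", "I see.", "Can you elaborate on that?"]

    def reflect(fragment: str) -> str:
        """Swap first- and second-person words in the captured fragment."""
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

    def respond(utterance: str) -> str:
        """Return the first matching rule's reply, or a generic fallback."""
        for pattern, templates in RULES:
            match = pattern.search(utterance)
            if match:
                return random.choice(templates).format(reflect(match.group(1)))
        return random.choice(FALLBACKS)

    if __name__ == "__main__":
        print(respond("I am worried about AI"))
        # e.g. "Why do you say you are worried about ai?"

Once you have seen that this is all there is, the "slightly suspicious" test above becomes easy: steer the conversation to anything outside the rule set and watch the fallbacks come out.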
But ... I have met lots of people who are just the same way. Especially sales people who have been taught how to put sentences and fragments together to create a reply to customer questions. You see it in junior programmers, who have learned their share of professional buzzwords but really do not understand what they represent (and what they hide).
And, not to forget, politicians! In Norwegian, there is a saying: "Sheep are all right animals". Back in 1983, a Norwegian politician named Liv Finstad argued in a radio interview that farmers should put more resources into sheep farming, and was countered with "Why?". She was completely unprepared for this, just referring to one of the bullet points in her party's program, and, slightly bewildered, all she managed to come up with was "Well ... sheep are all right animals ...".
Think about how coding has changed, from literally plugging in wires to form a program through to today's almost English-like languages. From needing to look up syntax in 10,000-page manuals through to intellisense telling me what word I probably want to add next.
In the future the AI will write the code, but the programmer will still continue to tell the AI what we want. But I doubt it'll be the CEO doing that. Instead the CEO will tell the "programmer" the basics of what the CEO wants, and the "programmer" will provide instruction to the AI who will do the actual cutting of the code.
So yes, the concept stays the same, it's just the tool the "programmer" is using will be the thing that changes.
In the context of the posted question, "What's the value a software developer brings that AI can't?", my answer is that AI is able neither to interpret nor to frame the context of the job. A human (most likely a developer) will have to do that.
Please let me know if I misunderstood you.
Money makes the world go round ... but documentation moves the money.
Lots of free, open-source software has been designed in the context of software development tools, not in the context of the user's problem domain. It disregards 'everything' in the user's context: established terminology, established work patterns, the prioritizing of essential versus non-essential, and a user interface where the actual user doesn't recognize anything familiar - but where any software developer finds things arranged exactly the way he likes them for solving his programming tasks.
A lot of software is so well designed from a software developer's point of view, and yet so badly designed from a (non-development-oriented) user's point of view, that I sometimes wonder whether AI software analyzing the real user's/customer's problem domain, to design both functionality and UI, would do a much better job than these developers - people writing music systems who have never played an instrument, video editing tools without ever having made a video, document editors without ever having written a 300-page report, typesetting systems without ever having been inside a print shop, or genealogy systems without a clue about who their own great-grandfather was ...
This is certainly not limited to free, open-source software, but that is where you see the most of it. For some of my tasks, I have bought (quite expensive) commercial software even though free alternatives are available, sometimes with a full score on the functionality checklist. But the way they do it just doesn't "feel right". The commercial competitor, developed by people working in the application domain who know how to do it, makes stuff that feels right.
AI won't necessarily be able to compete with the people who create software to solve problems in their own domain. But in most of the Western world, we have chosen to educate people as "software engineers" who really understand very few problems except software development. To solve problems in typesetting, genealogy, mechanical engineering, ... they have to be told about the problems, and all they care about is turning them into software development problems, rather than truly understanding the user's problem the way the user experiences it.
So I question the value of the software developer who doesn't thoroughly understand the user's problem as the user sees it. I am open to the possibility that an AI system might be able to do a better job. We may not be there yet, but I've been working with software developers for so long that I am sure that trying to change them is a hopeless undertaking.
Yes, I am a software developer myself, with a Master's in software development. I am not an outsider criticizing someone else; I am criticizing ourselves, myself included.
Look up some Darwin Awards and you will know what I mean ...
If something has a solution, why do we have to worry about it? If it has no solution, for what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.