This article is fascinating and may be a real look into the future of AI.
A System so gigantic that you can never talk to a real person who can make sense of things; it just "puts you in jail". A System that makes decisions based on bad code and a lack of understanding, but one that no one questions because "it's the computer, it must be right".
The actual infraction can be as slight as the indictment is broad. Stine has a client whose listing for a rustic barn wood picture frame was deemed unsafe and taken down; it turned out the offense was a single customer review that mentioned getting a splinter. (The customer had actually given it five stars.) The seller was allowed back when he promised to add “wear gloves when installing” to his listing.
Apparently rival sellers also post fake reviews to get competitors banned from selling. One example from the article:
Somebody bought your product, lit it on fire, took a picture, and told Amazon your product is explosive.
The article explains the AI and the terrible process and system behind it:
But ultimately, it wasn’t the suspension that was most galling. It was the way Amazon kept responding with the same request for more information whenever he appealed. “I was caught in some kind of AI gear,” he says.
In reality, there were likely humans reading Harmon’s appeal, but they’re part of a highly automated bureaucracy, according to former Amazon employees. An algorithm flags sellers based on a range of metrics — customer complaints, number of returns, certain keywords used in reviews, and other, more mysterious variables — and passes them to Performance workers based in India, Costa Rica, and other locations. These workers choose between several prewritten blurbs to send to sellers. They may see what the actual problem is or the key item missing from an appeal, but they can’t be more specific than the forms allow...
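The pipeline described in that passage — metric thresholds trip a flag, then a worker can only pick from canned replies — can be sketched roughly. To be clear, every field name, threshold, and blurb below is invented for illustration; nothing here reflects Amazon's actual system:

```python
# Hypothetical sketch of a metrics-based flagging pipeline like the one
# described in the article. All names, thresholds, and blurbs are invented.

PREWRITTEN_BLURBS = {
    "returns": "Your account is under review due to a high return rate.",
    "keywords": "Your listing may violate our product safety policies.",
    "complaints": "We have received customer complaints about your account.",
}

RISKY_KEYWORDS = {"explosive", "fire", "splinter", "injury"}


def flag_seller(metrics: dict) -> list:
    """Return the list of flags triggered by a seller's metrics."""
    flags = []
    if metrics.get("return_rate", 0.0) > 0.10:  # arbitrary threshold
        flags.append("returns")
    if RISKY_KEYWORDS & set(metrics.get("review_keywords", [])):
        flags.append("keywords")
    if metrics.get("complaints", 0) > 5:  # arbitrary threshold
        flags.append("complaints")
    return flags


def respond_to_appeal(flags: list) -> str:
    """The worker can only choose a prewritten blurb, so the seller
    never learns the specific problem, no matter how often they appeal."""
    if not flags:
        return "Your account has been reinstated."
    return PREWRITTEN_BLURBS[flags[0]]  # same canned reply, every time


# One five-star review mentioning a splinter is enough to flag the seller.
seller = {"return_rate": 0.02, "review_keywords": ["splinter"], "complaints": 1}
print(respond_to_appeal(flag_seller(seller)))
```

The point of the sketch is the last function: however many times the seller appeals, the only possible outputs are the same prewritten blurbs.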
Ahhh, one of those stories where, about once a year, the reporter / magazine:
1. replaces the "offender" [Amazon] with, let's see, the tax office, an insurance company, Shell Oil, the health service, 7-11, the EU, the US Govt, any Asian country or African dictatorship,
2. googles up a couple of anecdotal stories,
3. changes a few sentences around, replacing "computer" with "system" (pretty much covers anything).
The problem is that it's probably not artificial, and it's by no stretch of the imagination intelligent.
I know that this is kind of a tongue-in-cheek phrase and I do agree with you. This isn't fully Artificial. The problem is that the "AI" produces the initial trigger. Then the un-Intelligence (a real person) gets involved, who believes the trigger is the Ultimate Truth. So at this point you have the failure of two systems: the automated one and the human one. It's quite terrible.
It goes even further: the human feeds back to the Artificial in a way that reinforces the actions the Artificial's code takes, which in turn convinces the human counterpart that s/he is even more correct. Ugh!!
Sounds more like artificial bureaucracy than artificial intelligence.
Yes! I think we've found our new buzz-phrase: Artificial Bureaucracy.
The problem is that I don't believe human intelligence is capable of making Artificial Intelligence, only Artificial Bureaucracy. I mean, I guess it's kind of evident since we have so many Natural Bureaucracies. And maybe it's not about capability, maybe it's about desire. When you're not the person stuck in the bureaucracy you don't really care about creating non-bureaucratic systems.
Many years ago, the famed scientist and science-fiction author Isaac Asimov wrote a short story describing a situation very similar to the one you describe here.
In that story, however, the main character was given a minor parking ticket for a street infraction and, through a series of computer mistakes, ended up being executed for the murder of a person he had never met...
Welcome to your future...
Sr. Software Engineer
Black Falcon Software, Inc.
Many years ago I was writing and experimenting with Inference Engines. Any expert system, and pretty much any AI, is only as effective as its input and the people training it.
I've been a Marvel comics reader since I could read the word 'soapbox', so several of my many test cases used Marvel comics canon to determine logical and likely outcomes. As a test, I had a friend of mine (a DC fan) run the system to build a knowledge base and rules based on popular DC comic book characters. After he sent me his knowledge base on CD, built on assumptions drawn from specific DC lore, we ended up writing "doors" that allowed our two BBSs to link the inference engines together. The doors on the BBSs we hosted were some of the most popular with our users, who would submit questions to the system and in turn be presented with questions to further build out the knowledge base. The conclusions were hilarious.
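For anyone who hasn't played with one: at its core, a forward-chaining inference engine just keeps applying if-then rules to a fact base until nothing new fires. A minimal sketch, with toy comic-book facts and rules invented purely as placeholders (the real knowledge bases were far larger):

```python
# Minimal forward-chaining inference engine sketch.
# Facts are strings; each rule is (premises, conclusion).
# The toy facts/rules below are invented placeholders.

def forward_chain(facts, rules):
    """Apply rules until no new facts can be derived (a fixed point)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire the rule if all premises hold and it adds something new.
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts


rules = [
    (("is_mutant",), "feared_by_public"),
    (("feared_by_public", "has_powers"), "joins_x_men"),
]

derived = forward_chain({"is_mutant", "has_powers"}, rules)
print("joins_x_men" in derived)  # True: both rules chain together
```

The "garbage in, garbage out" point is easy to see here: the engine dutifully derives whatever the rules imply, so a knowledge base seeded with one universe's lore produces confident nonsense when queried about another's.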
The AI network we created was called the "Multi-dimensional Omniscient Reactionary Objective Network".
AI, like any other code, follows that age-old adage... "Garbage in, garbage out!"