Maybe artificial intelligence isn’t so smart
There’s no denying AI is impressive – it’s set to revolutionise the way we live. However, as a still-emerging technology, it faces some teething issues as more and more people adopt it in their daily lives.
Technical difficulties take on a whole new meaning in these examples, where human intervention would probably have saved the day. If you’re worrying whether you’ll keep your job or be replaced by Metal Mike (no, NOT Mike Chlasciak), these stories about AI missing the mark will reassure you that that day is still a long way off. Hopefully.
Therapisation in progress
Therapists play a vital role in society – but they’re human, which means they aren’t available 24/7. Mental health issues are also on the rise, meaning therapists are more sought after (and spread more thinly) than ever. Enter AI therapists such as Woebot, Wysa and Youper – to name a few.
AI therapists have some advantages over traditional ones: they can multitask, tailor their responses and are often free! Research has shown that some people even find it easier to open up to an AI than to a human, but there is little proof that AI therapists actually work. There are serious issues with their responses – some chatbots have failed to recognise disclosures of child sexual abuse – so if you’re seriously struggling, it’s always better to talk to a person.
Along the same lines, AI is being used in the healthcare sector – but there still needs to be an element of human intervention! CT scans can be read by machines faster than humans could ever dream of, but there are almost as many concerns as there are positives.
For example, AI can only act on the data it’s been fed – so if that data turns out to be skewed in any way, so will the results. Some AI tools have been found to deliver inaccurate or irrelevant information – which isn’t exactly ideal in a hospital. Maybe with some more tweaking, AI will earn its place among medical professionals.
Alexa, who’s in charge here?
Chances are, you’ve heard of Cortana or Alexa, both virtual assistants with voice recognition. Voice-activated products have completely changed the game in terms of accessibility; you don’t even have to get out of bed to check the weather!
Voice technology has many uses, including shopping with your voice – but unfortunately, it can be a little too easy to use. Tech-savvy children (and even parrots) have used it to order things, leaving linked credit cards depleted. Amazon has announced plans for extra voice-recognition capabilities that can differentiate children’s voices, to put a stop to this.
Passport problems
Picture-scanning software can automatically check that certain criteria are met, meaning people don’t have to spend hours poring over photos. Unfortunately, it turns out AI can be quite discriminatory – as when it incorrectly classed an Asian man as having his eyes closed in his passport photo.
Luckily, after contacting a human to resolve the problem, he got his passport. The process was meant to be streamlined and efficient thanks to AI, but it ended up taking much longer than anticipated – and turned discriminatory, too!
Tay-king things too far
Microsoft’s exciting project, an AI chatbot on Twitter called Tay, was meant to be revolutionary – she would respond to users’ posts, and the more she interacted, the more she’d learn. You can probably guess where this is going: as she was pulling data from Twitter, users’ inflammatory and offensive language got sucked in, too. The, ahem, extreme freedom of speech was immediately parroted by Tay.
Learning from what she saw, Tay soon began making anti-feminist and pro-Hitler statements. It’s an interesting look into the mindset of a whole demographic of Twitter users, but the experiment backfired – heavily. Maybe try a filter next time!
Computers are amazing – sometimes. We hope you’ve learned more about the dangers of relying on AI with no failsafe, and why it’s important to test and check every possible outcome before unleashing AI into public-facing, healthcare or other roles.