Amazon has introduced AI-generated English and Spanish dubs for the 2018 anime Banana Fish. The reaction has been negative. Fans and voice actors have used the poor quality of the dubs as proof that generative AI is incapable of replacing humans. Is that actually true though?
Banana Fish Finally Gets an English Dub. Sort Of.
Earlier this year, Amazon started using AI dubs for certain movies and shows on its Prime streaming service. It's part of a wider push in the tech industry: companies like YouTube and Meta are now relying on AI to dub video audio into other languages, most commonly English.
Banana Fish was released back in 2018. Despite its popularity, the queer drama has never received an official English dub. That is, until Amazon decided to include the series in its little experiment. Its system takes the subtitles and generates synthetic voices over the original video.
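Amazon hasn't disclosed how its pipeline actually works, but a subtitle-driven dub would at minimum need to parse the subtitle file into timed cues before handing each line to a text-to-speech model. Purely as an illustration, here's a minimal sketch of that first step for the common SubRip (SRT) format; the sample dialogue is invented, not taken from the show's script:

```python
import re

# Matches an SRT timing line, e.g. "00:00:01,000 --> 00:00:03,500"
SRT_TIME = re.compile(
    r"(\d{2}):(\d{2}):(\d{2}),(\d{3}) --> (\d{2}):(\d{2}):(\d{2}),(\d{3})"
)

def parse_srt(srt_text):
    """Return a list of (start_ms, end_ms, text) cues from SRT content.

    Each cue is what a dubbing pipeline would feed to a TTS model,
    using the timestamps to place the generated audio on the timeline.
    """
    cues = []
    for block in srt_text.strip().split("\n\n"):
        lines = block.splitlines()
        if len(lines) < 3:
            continue  # skip malformed blocks
        m = SRT_TIME.match(lines[1])
        if not m:
            continue
        h1, m1, s1, ms1, h2, m2, s2, ms2 = map(int, m.groups())
        start = ((h1 * 60 + m1) * 60 + s1) * 1000 + ms1
        end = ((h2 * 60 + m2) * 60 + s2) * 1000 + ms2
        cues.append((start, end, " ".join(lines[2:])))
    return cues

sample = """1
00:00:01,000 --> 00:00:03,500
Get down!

2
00:00:04,000 --> 00:00:06,000
He's got a gun."""

print(parse_srt(sample))
# [(1000, 3500, 'Get down!'), (4000, 6000, "He's got a gun.")]
```

Notice what's missing: nothing in a subtitle file says *how* a line should be delivered. If the pipeline stops at text and timestamps, the model has no signal for emotion, which may explain some of what follows.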
The Quality Is… Not Good
The quality of the English dub is very poor. The spoken dialogue often doesn't match the subtitles, and users have complained about names and phrases being mispronounced. The synthetic voices lack emotion and depth, sounding robotic, with no cadence or inflection.
Moments that are supposed to evoke a sense of panic, fear, or grief come across as awkward due to the monotone delivery. There's no way to tell which emotions the characters are trying to express.
There’s a scene in one episode where a young kid is gunned down, yet the main character almost sounds bored instead of distraught. The execution kills the immersion. It creates a divide that makes it hard to connect with the story.
Fans and Voice Actors Aren’t Happy
Fans are livid. They’ve been waiting for an official English dub, only to get a shitty one that doesn’t reflect the hard work that went into the series.
Voice actors have criticized the move as being disrespectful. There have been calls for people to stick with the original Japanese audio and just watch the series with English subtitles.
Is This Proof That AI Can’t Compete?
It’s easy to dismiss this as another sign that AI is inferior to human voice actors, but that’s not necessarily true.
For me, the problem with the AI dub is that whatever model Amazon used (the company hasn’t disclosed how they produced the dub) wasn’t advanced enough for the job.
There are AI voice models out there that sound so realistic you find yourself questioning whether a real person is speaking. Clearly, Amazon didn't think to use one of those.
Either that, or the human in charge of producing the dubs used the wrong prompts. Maybe they just fed in the subtitles without any direction on how the AI should deliver certain scenes.
Whether the English dub would have been better with human voice actors depends on Amazon taking the time to hire good ones. Voice acting is an art that not everyone can master; being human doesn't automatically mean you're talented.
The Future Will Be Human + AI Actors
The point I’m trying to make here is that generative AI isn’t going anywhere. People need to accept that.
I don't think we'll reach a point where studios refuse to hire human voice actors entirely. But for older anime like Banana Fish, I can see why someone might decide to use AI instead of hiring dozens of actors. The future is going to be a mix of human and AI voice actors co-existing with one another.
If AI voice actors are going to be a thing, then studios and streaming services should take the time to ensure the vocal work is decent.
Otherwise, we’d all be better off watching our favorite shows on mute.