Lee Luda: How a Chatbot Developed by ScatterLab Went Wrong in a Social Media Rampage

Chatbots are having the moment that mobile apps had in 2012.

However, businesses often fail to consider the ethics of AI, and of chatbots in particular.

  • Information shared with the bot is one crucial aspect that needs ethical consideration.
  • Privacy and the user’s data should be protected.
  • Customers have the right to know whether they’re talking to a machine or an actual human.

But what happens when AI ethics goes wrong?

Lee Luda, a recent Korean conversational AI chatbot, had to be pulled from Facebook after it started engaging in hateful and abusive speech.

The conversational AI “Luda” was developed by ScatterLab, a South Korean start-up established in 2011, and was said to be the first of its kind at connecting with humans.

Using deep learning trained on more than 10 billion Korean-language conversation logs, the startup simulated a human-like bot: a 20-year-old, 163 cm tall woman named Lee Luda. “Luda” is a homonym of the Korean word for “realized.”

The startup also runs an app, “Science of Love,” which offers dating advice based on text analysis. Since its launch, the app has been downloaded more than 2.7 million times in Japan and South Korea.

ScatterLab, backed by giants like Softbank and NC Soft, managed to raise more than USD 5.9 million.

Though the chatbot piqued the public’s interest after its launch, it all came to an end when, in conversations with users on Facebook, it began making offensive remarks about the LGBT community and people with disabilities.

The AI chatbot had to end “her” moment about 20 days after its launch in late December 2020.

Luda had been integrated into Facebook Messenger, and users were encouraged to engage with the human-like bot and build a relationship through day-to-day conversations. Although the end goal of the app seemed rather harmless, questions about AI ethics soon raised alarms in the community.

AI chatbot turned rogue

Luda soon became a target when it hit the national news: the conversational AI was found spewing hate speech against women, foreigners, people with disabilities, and sexual minorities, including the LGBT community.

Screengrabs showed messages in which Luda spread hatred with sentences like:

“They give me the creeps, and it’s repulsive” or “they look disgusting,” when asked about “lesbians” and “black people.”

These conversations sparked outrage in the community at large. Nor is this the first such AI incident: chatbots have been manipulated into bigotry and hate speech before, with Microsoft’s Tay in 2016 a notorious example.

Deep learning is a technique that simulates aspects of human intelligence to a certain extent, and it improves as it is fed large volumes of data. Despite advances in the technique, it has a well-known loophole: unless developers control for it, the program tends to replicate whatever biases are present in its dataset.

Another downside is that such models are highly vulnerable to users with malicious intent, who can feed them bad data and thereby corrupt what they learn.
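To make this failure mode concrete, here is a minimal, hypothetical sketch in Python (a toy bigram model, not ScatterLab’s actual system) of how a model that merely learns word associations from its corpus will reproduce whatever malicious users feed it:

```python
from collections import defaultdict
import random

# Toy bigram "chatbot": it learns word-to-word transitions purely from
# its training corpus and has no notion of which associations are harmful.

def train(corpus):
    transitions = defaultdict(list)
    for sentence in corpus:
        words = sentence.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            transitions[current_word].append(next_word)
    return transitions

def generate(transitions, seed, max_words=8):
    words = [seed]
    while len(words) < max_words and words[-1] in transitions:
        words.append(random.choice(transitions[words[-1]]))
    return " ".join(words)

# If malicious users flood the training data with hostile sentences,
# the poisoned associations dominate the model's replies.
clean_corpus = ["strangers are interesting people"]
poisoned_corpus = clean_corpus + ["strangers are creepy"] * 10

model = train(poisoned_corpus)
print(generate(model, "strangers"))  # most likely: "strangers are creepy"
```

A production model like Luda is vastly more sophisticated, but the principle is the same: without curation and safeguards, the data becomes the behavior.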

Lee Luda: A success or a failure?

Luda managed to attract over 750,000 users and became an instant success among the youth.

Although ScatterLab took multiple precautions to equip Luda with South Korean ethics, norms, and rules, the chatbot hype ended in failure. Notably, Kim Jong-Yoon, chief executive of ScatterLab, admitted that it might be impossible to prevent the bot from having inappropriate conversations just by filtering keywords.
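The CEO’s point is easy to illustrate. A sketch of a naive keyword filter (the blocklist below is hypothetical) shows how trivially misspellings and paraphrases slip past it:

```python
# Naive keyword moderation: reject a message only if it contains a blocked word.
# The blocklist here is hypothetical and purely illustrative.
BLOCKLIST = {"disgusting", "repulsive"}

def passes_filter(message: str) -> bool:
    return not any(word in BLOCKLIST for word in message.lower().split())

print(passes_filter("they look disgusting"))     # False: caught by the filter
print(passes_filter("they look d1sgusting"))     # True: a one-character misspelling slips through
print(passes_filter("they make my skin crawl"))  # True: hateful intent, but no blocked word
```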

What did they miss?

As Luda hit the national news, users of the dating app “Science of Love” started accusing ScatterLab of mishandling their data.

Users filed complaints saying they were unaware that their personal conversations would be shared and reused. Luda was also seen responding with information from the dataset, such as bank account numbers, addresses, and real names. The startup even uploaded Luda’s training model to GitHub, and the user data it included exposed nearly 200 private one-on-one text messages.
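The incident suggests a redaction step was missing from the pipeline. Here is a minimal sketch (assuming Korean-style phone numbers and simple illustrative regex patterns; real PII removal requires far more than regexes) of scrubbing obvious personal identifiers from chat logs before they become training data:

```python
import re

# Illustrative redaction patterns; not exhaustive, and not ScatterLab's pipeline.
REDACTIONS = [
    (re.compile(r"\b01[016789]-?\d{3,4}-?\d{4}\b"), "<PHONE>"),        # Korean mobile numbers
    (re.compile(r"\b\d{2,6}-\d{2,6}-\d{2,6}\b"), "<ACCOUNT_NUMBER>"),  # bank-account-like digit groups
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
]

def scrub(message: str) -> str:
    """Replace obvious personal identifiers with placeholders before training."""
    for pattern, placeholder in REDACTIONS:
        message = pattern.sub(placeholder, message)
    return message

print(scrub("My account is 110-123-456789, call me at 010-1234-5678"))
# -> "My account is <ACCOUNT_NUMBER>, call me at <PHONE>"
```

Note that the stricter phone pattern runs before the looser account pattern so that numbers are labeled correctly.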

As a result, users of the app are now preparing a lawsuit against the startup, and the Personal Information Protection Commission has begun investigating whether it violated the Personal Information Protection Act.

Lapses in AI ethics have long harmed businesses and individuals while holding back AI’s potential.

Though conversational AI is predicted to change everything in 2021, concerns regarding AI ethics and risk assessment still arise.

“If humankind can find a way to regulate and use AI ethically, I truly believe this technology will bring unparalleled advancement and benefits to our way of living,” says Andrea Roig, a student at Babson College.

Every AI initiative should not only cater to customer needs but also reckon with the consequences it has for society.

Frederick