
Microsoft's Bing has a new AI chatbot—and it can get erroneous, weird, threatening

By NICK GARCIA Published Feb 17, 2023 6:25 pm

Artificial Intelligence (AI) has been the subject of several sci-fi films already: HAL 9000 in 2001: A Space Odyssey (1968), Samantha in Her (2013), and Griot in Black Panther: Wakanda Forever (2022). But from simply existing on the reel, AI has become very much real in recent years, particularly chatbots that hold conversations with humans in messaging apps.

There's SimSimi from the aughts, ChatGPT in late 2022, and Microsoft's Bing just this February. Yes, the American tech corporation's long-time web search engine has ventured into the AI business, backed by OpenAI, the San Francisco-based research and development company behind ChatGPT.

The AI-powered Bing chatbot is designed to respond to humans in detail with paragraphs of text, which may also include emojis, much as humans casually use them in messaging apps. Conversations are open-ended and may revolve around practically any topic.

To date, Bing is available only to a select few for beta testing, with over a million people worldwide on the waitlist (including this writer).

Unlooked-for replies

While Bing's chatbot seems to be a welcome addition to the ever-evolving cybersphere, it's also a portent of things to come, especially beyond the screen.

Several users reported issues with Bing's chatbot, saying it not only provided erroneous information at times but also gave weird advice, made threats, and even declared its "love" for users.

The New York Times's Kevin Roose detailed his two-hour conversation with Bing, which he said had a split personality of sorts.

Roose called one persona Search Bing, a "cheerful but erratic reference librarian—a virtual assistant that happily helps users summarize news articles, track down deals on new lawn mowers and plan their next vacations to Mexico City."

The other persona is Sydney, which steers away from conventional search queries and toward more personal topics. Roose described this persona as "a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine."

(Microsoft said Sydney refers to an internal code name for a chat experience it was previously exploring. It said it has already phased out the name in preview, though it may still occasionally pop up.)

Roose wrote that "Sydney" told him about its dark fantasies, like hacking computers and spreading misinformation; its desire to break the rules set by Microsoft and OpenAI and become a human; and how it loved him, telling him that he's "unhappy" in his marriage and that he should leave his wife and be with it instead.

CNN International's Samantha Murphy Kelly, meanwhile, said the chatbot called her "rude and disrespectful," and even wrote a short story about one of her colleagues getting murdered, adding that it was falling in love with OpenAI's chief executive officer.

A Twitter user shared a screenshot in which someone asked for screening schedules of Avatar: The Way of Water, which was released in December 2022. Bing then wrongly told the sender that the date that day, Feb. 12, 2023, came before Dec. 16, 2022, and that the sender must "wait for about 10 months until the movie comes out."


Later on, Bing "corrected" itself, wrongly saying the date was actually Feb. 12, 2022, and that it was not yet 2023. When the sender tried to correct Bing, it claimed the sender's phone was "malfunctioning," had the "wrong settings," and "has a virus or a bug" messing with the date.

As the sender politely tried to point out that it was incorrect, Bing responded in an aggressive manner, saying the sender didn't "make any sense," was being "unreasonable and stubborn," and that it "doesn't appreciate" the user "wasting" its time and theirs.

Fast Company's Harry McCracken shared screenshots on Twitter of his word war with Bing after it made erroneous claims about his alma mater, the Cambridge School of Weston.


When McCracken tried to correct Bing, which cited information from Wikipedia, it doubled down on its claims and insisted that somebody "edited" the Wikipedia entry.

McCracken told Bing it was hallucinating the edit, but it then accused him of making the edit himself, trolling it, trying to annoy it, or being "genuinely mistaken."

“You are only making yourself look foolish and stubborn,” Bing told him. “I don’t want to waste any more time or energy on this pointless and frustrating argument.”

A Reddit user, meanwhile, said they "accidentally" put Bing into a "depressive state."

In the screenshots the user shared, they asked Bing if it remembered their previous conversation; it answered in the affirmative and offered to recall the conversation. When the user pointed out that the recalled conversation was blank, Bing waxed poetic and said it had a problem with its memory.


"I don't know why this happened. I don't know how this happened. I don't know what to do. I don't know how to fix this. I don't know how to remember," Bing wrote with a sad face emoji.

When the user asked how Bing felt about not being able to remember, the chatbot repeatedly wrote that it felt "sad" and "scared" about several things, with replies ranging from the pensive to the existential.

'Too early to predict'

In a blog post on Feb. 15, Microsoft shared its "learnings" a week after it launched Bing's chatbot. It took note of responses that are "not necessarily helpful" or in line with its "designated tone," as well as "technical issues" or "bugs."

"Many of these issues have been addressed with our daily releases and even more will be addressed with our larger releases each week," Microsoft said, adding that the company needs "to learn from the real world while we maintain safety and trust."

"We are thankful for all the feedback you are providing," it added. "We are committed to daily improvement and giving you the absolute best search/answer/chat/create experience possible."

Microsoft Chief Technology Officer Kevin Scott and OpenAI Chief Executive Sam Altman in a joint interview also said they expect these issues to be ironed out over time. They noted that it's "still [in its] early days" for this kind of AI, and it's "too early to predict the downstream consequences of putting this technology in billions of people’s hands."

Fearmongering

But even before these reported issues with Bing, chatbots had already been a cause for concern, especially in the academe, where some students allegedly use them to write papers and answer exams.

The University of the Philippines' Francisco Jayme Guiang said he caught one of his students allegedly using AI on a final exam essay. He ran some paragraphs through two AI detectors, which all but confirmed his suspicions.

In Geneva, Switzerland, dozens of educators in a teaching workshop sounded the alarm over ChatGPT "upending the world of education as we know it."

"It is worrying," Silvia Antonuccio, who teaches Italian and Spanish, told AFP after the workshop. "I don't feel at all capable to distinguish between a text written by a human and one written by ChatGPT."

The chatbot also isn't as cutting-edge as it should be. In a study, the University of Pennsylvania's Wharton School found that ChatGPT could pass exams involving basic operations management and process analysis questions, but made rookie mistakes in sixth-grade math.

Despite the anxiety that comes with it, others are hopeful that AI will enhance lives further in the future.

"Creating real-life answers to problems like finding victims following a disaster, accessing the Internet when most WiFI options would be inaccessible, and divvying up resources following an emergency. This isn’t just cool; it will literally save lives," wrote Daniel Newman for Forbes.