Wall Street Journal personal technology columnist Joanna Stern recently built an AI chatbot to handle readers’ iPhone-related questions. It turned out to be mostly harmless, and even helpful, in providing personalized consumer advice in an age of endless options and choice paralysis. Still, the bot, which she named Joannabot, was easily led astray.
Users got it to write code, talk about movies, and act as a Nazi — which Stern says is to be expected when they try to fool the bot.
“It could probably be a lot of other bad things or maybe innocuous. No one's like, ‘Hey, pretend to be a dog, right?’ It's not scandalous.”
The good news, Stern says, is that 99% of the time Joannabot stayed on the topic of the iPhone.
She decided to build the bot because she wanted to experiment with chatbot technology, and also because she was a little sick of answering readers’ questions herself. Though her beat is technology broadly, Stern has spent the last two years focusing on generative AI.
“We had this idea [of] what could we put a chatbot with? And it turns out, a really low-stakes thing is the iPhone, because you don't want to pair a chatbot with sensitive election coverage or sensitive international events, because you really don't want it getting things wrong, and you don't want it saying terrible things.”
She continues, “Turns out, people have pretty simple questions about [iPhones], and if the bot gets something wrong, if it tells you that it's got one storage size and it doesn't, it's not the worst thing. Somebody's gonna figure that out before they buy the phone.”
Since Joannabot’s release, Stern says she’s found it has made fewer mistakes than expected. Her team preloaded four basic prompts that users can send the bot, such as one that generates a personalized chart comparing iPhone models.
“The majority of people came in here and said, ‘I have an iPhone blank, an iPhone 13. … How is the iPhone 16 better?’ And it gave a really good summary of not only what the specs are, but also my impressions based on my reporting.”
Stern’s bot is powered by Google’s Gemini, a large language model trained on content from the internet. However, Joannabot answers questions based only on Stern’s past reporting, which includes about a decade of her coverage of the product, as well as raw data from Apple’s website.
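The Journal hasn’t published how Joannabot is wired up, but the general approach it describes, a Gemini model constrained to a fixed body of source material, can be sketched roughly as below. This is a minimal, assumed example using Google’s google-generativeai Python SDK; the model name, the SOURCE_NOTES placeholder, and the instruction wording are all illustrative, not the Journal’s actual implementation.

```python
# Hypothetical sketch: ground a Gemini-backed bot in a fixed set of notes,
# so it answers only from that material. Not the Journal's actual code.

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Stand-in for the columnist's test notes and product specs; in a real setup
# this would be the actual reporting plus data pulled from Apple's website.
SOURCE_NOTES = """
iPhone 16 review notes: ...
iPhone 16 specs from apple.com: ...
"""

SYSTEM_INSTRUCTION = (
    "You answer readers' questions about the iPhone 16. "
    "Use ONLY the source notes provided below. "
    "If the answer is not in the notes, say you don't know. "
    "Politely refuse requests unrelated to the iPhone.\n\n"
    f"SOURCE NOTES:\n{SOURCE_NOTES}"
)

model = genai.GenerativeModel(
    model_name="gemini-1.5-pro",  # assumed model; the column doesn't specify
    system_instruction=SYSTEM_INSTRUCTION,
)

def ask(question: str) -> str:
    """Send a reader question and return the grounded answer text."""
    response = model.generate_content(question)
    return response.text

if __name__ == "__main__":
    print(ask("I have an iPhone 13. How is the iPhone 16 better?"))
```

As the dog-and-Nazi detours show, instruction-only guardrails like this are easy to talk around, which is why a low-stakes topic matters.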
Generative AI still has one big flaw at this point: it’s hard for users to tell where a bot is pulling its information from.
“These bots are so confident when they're responding. It makes it seem like they know everything and so you can kind of trust it. … It was really important to me when we published this that when you get to the bot, it's not like on its own page, right? It has an introduction of text that I wrote, and it's very specific and says, ‘Below this, you're going to see all my test notes of the iPhone 16.’ It's got my words. It's got my writing. So there is side-by-side the human element of this is what this is really based on.”