Joanne Jacobs

Set a bot to monitor a bot: Is the TA hallucinating?

Are two bots better than one? Georgia Tech is hoping its old-tech AI tutoring bot will keep its new-tech AI chatbot from "hallucinating," writes Jeffrey R. Young on EdSurge. "They’re testing the approach in three online computer-science courses this summer."


Credit: Georgia Tech

“ChatGPT doesn’t care about facts, it just cares about what’s the next most-probable word in a string of words,” explains Sandeep Kakar, a research scientist at Georgia Tech. “It’s like a conceited human who will present a detailed lie with a straight face, and so it’s hard to detect.”
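To see what Kakar means, here's a toy Python sketch (an illustration, not anything Georgia Tech runs) of a model picking the "next most-probable word" with no regard for truth. The prompt and probabilities are invented for the example:

# Toy illustration: a language model scores candidate next words and
# picks the most probable one, whether or not the result is true.
prompt = "The capital of Australia is"
next_word_probs = {
    "Sydney": 0.55,    # a common guess, but wrong
    "Canberra": 0.35,  # correct, yet less probable in this toy table
    "Melbourne": 0.10,
}
next_word = max(next_word_probs, key=next_word_probs.get)
print(prompt, next_word)  # prints the likeliest word, not the verified fact

The model optimizes likelihood, which is why a fluent, confident wrong answer is the default failure mode rather than an obvious glitch.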


The university already uses a digital teaching assistant known as Jill Watson. Online students can't always tell whether they're communicating with a human TA or the bot, Young writes.


Now Georgia Tech wants "Jill" to use course materials and the textbook to fact-check ChatGPT's answers.
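Young's article doesn't spell out the mechanics, but the bot-monitoring-a-bot idea can be sketched in a few lines of Python. Everything here -- the course passages, the fuzzy-match rule, the 0.6 threshold -- is an invented stand-in, not Jill Watson's actual code:

import difflib

# Hypothetical sketch: flag sentences in a chatbot's answer that can't
# be matched to anything in the vetted course materials.
COURSE_MATERIALS = [
    "Binary search runs in O(log n) time on a sorted array.",
    "A hash table offers average-case O(1) lookups.",
]

def support_score(claim):
    # Best fuzzy-match ratio between the claim and any course passage.
    return max(difflib.SequenceMatcher(None, claim.lower(), passage.lower()).ratio()
               for passage in COURSE_MATERIALS)

def flag_unsupported(answer, threshold=0.6):
    # Return the sentences a monitor bot would flag as likely hallucinations.
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    return [s for s in sentences if support_score(s) < threshold]

answer = ("Binary search runs in O(log n) time on a sorted array. "
          "It was invented by Ada Lovelace in 1815.")
print(flag_unsupported(answer))  # the Ada Lovelace claim should be flagged

A real system would need semantic retrieval rather than string matching, but the division of labor is the same: one model generates, another checks the claims against trusted text.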


It's hard to stop bots from making things up, says Kakar. In one test by researchers, the bot's answer included "a beautiful citation of a book and a summary of it," but the book doesn't exist. Still, it did indicate it had "low confidence" in its answer.


Newark schools are testing a tutoring bot developed by Khan Academy. So far, results are mixed, reports Natasha Singer in the New York Times. Khanmigo is supposed to help students analyze problems, but it sometimes just provides the answers. And not always the right ones.


"Stretch" will be trained on vetted learning materials -- not the whole internet -- to avoid misinforming K-12 students, reports Ed Week's Alyson Klein. In addition, Stretch will cite its sources. "If it’s asked about something outside of its areas of expertise, it will tell users it can’t help with the question, instead of making something up, a characteristic of most chatbots that pull information from the entire internet."
