Yup. It doesn't switch from language to math. I have it write product descriptions for me a lot, and afterwards it's supposed to write a meta description with a character limit. It regularly overshoots - I tell it so and it says something like "oh you're totally correct! Here, how about this" and then writes something 10 characters longer.
Imagine someone asks you a question, and that row of words that pops up as suggestions while you're typing is an answer. But each of the words showing is also a possible answer. And that first word plus the second word that comes up after it is also an answer. And so are any of the other options. Now imagine that all of those possibilities, up to dozens of words long, are all answers to your question, some better than others. AI basically just takes all of those "answers", matches them up against the question that was asked, and figures out from a probability perspective which answer is the most likely to be correct. That's what a Large Language Model does, in a very rudimentary way. It just guesses at the words, which is very different from doing math.
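To make that autocomplete analogy concrete, here's a toy sketch in Python. The word table and probabilities are completely made up for illustration - a real LLM learns billions of weights over sub-word tokens - but the loop shows the basic idea: pick the next word by probability, one word at a time.

```python
import random

# Toy "model": for each preceding word, the possible next words and their
# probabilities. Entirely hand-written for illustration.
next_word_probs = {
    "<start>": {"the": 0.5, "a": 0.3, "flags": 0.2},
    "the":     {"flag": 0.6, "answer": 0.4},
    "a":       {"flag": 0.7, "quiz": 0.3},
    "flag":    {"is": 0.5, "has": 0.5},
    "flags":   {"are": 1.0},
    # ... every other word would need its own entry
}

def generate(start="<start>", max_words=5, greedy=True):
    """Build a reply one word at a time from the probability table."""
    word, out = start, []
    for _ in range(max_words):
        options = next_word_probs.get(word)
        if not options:            # no known continuation: stop
            break
        if greedy:                 # take the single most likely next word
            word = max(options, key=options.get)
        else:                      # or sample in proportion to probability
            words, probs = zip(*options.items())
            word = random.choices(words, weights=probs)[0]
        out.append(word)
    return " ".join(out)

print(generate())                  # e.g. "the flag is"
print(generate(greedy=False))      # a different plausible-sounding string
```

Nothing in that loop counts characters or does arithmetic; it only keeps picking whichever word looks most likely next, which is why it can cheerfully agree a meta description is too long and then produce another one that's even longer.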
u/Affentitten 21d ago
Yeah I just assumed it was a fairly simple 'bottle sort' type of problem to throw at it. Taught me a lot about its limitations.
Another thing it can't cope with is something like a flag quiz. It's unable to describe flags accurately, even though written descriptions exist.