I'm not an expert in AI, but I am a software developer and have had a modest interest in AI over the past 20 years. From what I understand, large language models are pretty much all about getting the language right, not so much about getting the facts right. It's inherent in how they work that they don't understand meaning. They "understand" structure very well by now and can generate text that sounds meaningful, right up until they suggest a recipe with poison in it. :-P Then it's obvious they don't actually know what they're talking about, even though they can fake it pretty well most of the time.
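
To make that concrete, here's a toy sketch in Python of the basic idea as I understand it. This is just a word-level bigram model I made up for illustration, nothing like a real LLM's architecture or training, but it shows the same failure mode: it learns which words tend to follow which, and it will happily produce fluent-sounding nonsense because "sounds right" is the only thing it optimizes for.

    import random
    from collections import defaultdict

    # A toy "language model": it only learns which word tends to follow
    # which in its training text, with no notion of truth or safety.
    corpus = (
        "mix the flour with water and bake the bread . "
        "mix the paint with water and clean the brush . "
        "bake the bread until golden ."
    ).split()

    # Record each word's observed successors (duplicates preserve frequency).
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def generate(start, length=8):
        """Sample a plausible-sounding continuation, one word at a time."""
        word, out = start, [start]
        for _ in range(length):
            candidates = follows.get(word)
            if not candidates:
                break
            word = random.choice(candidates)  # picked by frequency, not by truth
            out.append(word)
        return " ".join(out)

    print(generate("mix"))
    # e.g. "mix the paint with water and bake the bread" -- fluent, but a bad recipe.

A real LLM is vastly more sophisticated at this (whole contexts instead of one previous word, learned representations instead of raw counts), but as far as I can tell the objective is the same kind of thing: predict a plausible continuation, not a verified fact.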