Latest posts

Maybe LLMs don’t hallucinate? Maybe they’re just lazy?
As tasks require more computing power to run, decisions are being taken on our behalf about how much effort (compute time, and therefore money) to expend when carrying out our requests. What is potentially confusing for users is that as we offload boring, time-consuming tasks to an LLM, it too is deciding whether or not it can be arsed to spend enough time on them. And if it chooses to spend less than necessary, it can produce bad answers. Exactly like a lazy human would.
