@eb It's always so baffling to me that people made computers do extremely complicated math and logic to give wrong results in simple arithmetic. And they sell that as a product that's supposed to be a great technological leap forward

@Kiloku @eb If you use a hammer to eat soup you generally don't end up with good results.
If you want to sum numbers, use the sum function, not one that predicts the next most probable token. I dislike this type of bashing of LLMs because it's trivial to dismiss ("OK, they can't do trivial maths, but they can write an entire piece of software for me"). There are much riskier outputs that could be used as an example. Funky Excel formulas have always existed...
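To make the point concrete, here is what "use the sum function" looks like in, say, Python (the language choice is just for illustration):

```python
# Deterministic arithmetic: the answer is computed, not predicted token by token.
values = [2, 7, 19, 45]
total = sum(values)
print(total)  # 73
```

The result is exact and reproducible every time, which is precisely what next-token prediction does not guarantee.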

@nicolaromano @Kiloku @eb "If you use a hammer to eat soup you generally don't end up with good results."

Actually a damn good metaphor for using LLMs to generate code.

And no, if the method can't solve easy problems, why would you ever trust it to solve hard ones? That's fundamentally not how engineering works.


@PalmAndNeedle @Kiloku @eb Because code is a series of words, which is what LLMs generate. They're not designed to do calculations.
Indeed, sometimes when asked to calculate something, these systems generate code that, when executed, gives the (mathematically correct, of course) answer.
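For instance, an assistant asked "what is 1234 × 5678?" might, rather than answering directly, emit and run something like this (a hypothetical illustration of the pattern, not output from any particular system):

```python
# Sketch of code an LLM might emit when asked to multiply two numbers:
# the arithmetic is delegated to the interpreter, which is exact.
a, b = 1234, 5678
print(a * b)  # 7006652
```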

The reality is that LLMs very often generate good, working code; that is not the issue. There are big ethical and environmental issues. And because the code works often, but not always, you need to double-check it at all times, which can end up taking longer than writing it yourself (e.g. metr.org/blog/2025-07-10-early).
