@exkclamation @carlos_lunamota Same. And if I decide to wear some, it would be one of those.
@terrorjack and bolts?
@knittingknots2 When you're thinking of the children too much...
Predicting the next token, my stochastic ass...
https://socket.dev/blog/introducing-socket-ai-chatgpt-powered-threat-analysis
The next token is: PWND
@me I used wide + tall too. It's okay, I just think a more rounded approach would work better. Too bad you just can't buy anything less squishy without selling your kidney. It's somehow specialized medical equipment now.
@DrewKadel @dataKnightmare@octodon.social More like they have done their reading... but don't have time to reason properly and systematically, and are forced to BS their way through a seminar.
The quality of answers improves significantly when you give them that time ("let's think step by step" etc.).
Given more compute, LLMs would be able to do Fermi Estimates on par with the dude himself, if not better. There's nothing putting a hard limit on that, since even humans manage it when motivated.
And with plugins like Wolfram they could build a proper model and perform exact calculations on it.
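For reference, a Fermi estimate is just chained order-of-magnitude arithmetic over rough assumptions — exactly the kind of "proper model with exact calculations" a tool-using LLM could build. A minimal sketch using the classic piano-tuners example (every number here is an invented order-of-magnitude guess, not real data):

```python
# Classic Fermi estimate: roughly how many piano tuners work in Chicago?
# Every input below is an assumed order-of-magnitude guess.
population = 3_000_000           # people in Chicago (assumed)
people_per_household = 2         # assumed
piano_fraction = 1 / 20          # households owning a piano (assumed)
tunings_per_piano_per_year = 1   # assumed
tunings_per_tuner_per_day = 4    # assumed
working_days_per_year = 250      # assumed

pianos = population / people_per_household * piano_fraction
tunings_needed = pianos * tunings_per_piano_per_year
tunings_per_tuner = tunings_per_tuner_per_day * working_days_per_year
tuners = tunings_needed / tunings_per_tuner

print(round(tuners))  # → 75, i.e. "on the order of a hundred"
```

The point is that each step is trivial arithmetic; the hard part is decomposing the question and picking sane magnitudes, which is exactly where step-by-step prompting helps.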
@dataKnightmare@octodon.social @DrewKadel > LLMs by design are incapable of associating a likelihood value to their output.
> Their output is totally randomly
Sorry, but this is just false. The probability space of "totally random" output is unimaginably huge, and most of it is not just false but complete gibberish. Throw a 26-sided die a few times and compare that truly random result to GPT output.
To navigate it *at all* requires calculating likelihoods and picking up only the sensible stuff.
So "associating a likelihood value to their output" is exactly how the thing works.
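The difference is easy to see in code. A toy sketch (the vocabulary and likelihoods are invented for illustration, not a real language model): uniform sampling over letters gives noise, while sampling from a likelihood distribution over tokens — what an LM's softmax-plus-sampling step actually does — produces mostly the high-probability choices.

```python
import random

random.seed(0)

# "Totally random" output: uniform over 26 letters -- gibberish almost surely.
gibberish = "".join(
    random.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(20)
)

# What an LM does instead: assign a likelihood to every candidate token
# and sample proportionally.  Tiny made-up vocabulary and probabilities:
vocab = {"the": 0.5, "cat": 0.3, "sat": 0.15, "xqzj": 0.05}
sampled = random.choices(list(vocab), weights=list(vocab.values()), k=1000)

print(gibberish)                      # unpronounceable letter soup
print(sampled.count("the") / 1000)   # close to 0.5
print(sampled.count("xqzj") / 1000)  # close to 0.05
```

So the output is stochastic, but it is weighted by likelihood at every step — the opposite of "totally random".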
@me Wide puts the side screens too far away. Tall wastes space or forces the eye to track objects in 2D instead of along a line.
@me Just 3 square displays. Is that asking for too much?
@tojiro why not just use some wrapper library, like v-ez, if you don't want "control"?
@AminiAllight @aras Yes, and this still looks like self-inflicted pain, i.e. making it easier to do the wrong thing instead of planning ahead and doing it right.
Don't get me wrong, I'm all for supporting the lowest level possible that is still safe and doesn't involve issuing "driver updates to fix glitches in the SoAndSo game by some AAA studio rushed into production due to managerial pressure".
Toots as he pleases.