The manager of the #Spirit #Halloween in #Butler, PA is very sad it's the off-season.
For all the #antivaxers who are drawn to any post mentioning #vaccination like moths to a flame:
You are shit. You will always be shit. No matter what good you've done in your life, no matter how kind you are to your family and friends, no matter how hard you work, no matter what accomplishments you have to your name, you're still shit. Add a drop of diarrhea to a bottle of the finest champagne, and you have a bottle of shit.
(Hardly any of you were ever champagne. MD 20/20, mostly. But some were at least decent drinks, before ... you know.)
Those of us who are not shit, particularly in the #medical biz, will keep trying to keep you from going down the toilet. Not because you deserve it: you don't. Because we're thinking, feeling, decent human beings. Because we're better than you. We want to be able to look at ourselves in the mirror, and we don't want to see shit like you looking back.
Still, at this point it's hard to feel sorry when you inevitably get flushed.
@8r3n7 They seem to me like they're pretty sensitive to initial conditions. Sure, they tend to converge on certain behaviors, but so do living systems. And "tend to" is not at all the same thing as "always do."
Sure, there are things we don't understand about #LLMs. We know how the underlying #code works, and #tokenization, and all that, but the models are so complicated we can't just take them apart and look at them the way we would, say, a big database. This leads to unexpected emergent behaviors.
That reminds me a lot of my job, which boils down to modeling living systems with #math and code. We know the #physics, we know the #chemistry, and we can observe the #biology, but there are a whole lot of layers in between where apparently simple processes lead to remarkably complicated results.
And? It doesn't mean we don't *understand* living systems, it just means we don't know every single thing that goes on inside them all the time. So we need to #experiment to figure out the most probable results: "If I do this, what do I expect to happen?" Then quantify our #uncertainty about that expectation, which is pretty important when, say, #cancer patients want to know how long they have to live.
Congratulations, #computers! You've joined the entire rest of the universe. In that limited sense, the idea that we "don't understand AI" is true. But it's not some unknowable permanent mystery.
On the scale of revolutions in human affairs, I'm still going with stone #tools, controlled #fire, and #agriculture as somewhat bigger deals. On the second tier I'd put #writing, #machinery that runs on something other than #muscle power, and #electronics including computers themselves.
I don't say it's *impossible* AI will be on the same scale eventually, but if so it won't be any more of a #singularity than the previous big technological shifts. "Our time is unique and nobody else has ever experienced any change this profound!" doesn't have a great track record.
I don't deceive myself that #Magyar is going to usher in a glorious new era. As far as I can tell, on the US scale he'd be a standard-issue pre-Trump Republican. But he does seem committed to democracy (I hope I'm right about that), and he's pro #EU, and at least not hostile to #Ukraine. Which right now seems like plenty.
Congratulations, #Hungary! I hope we can do the same.
Bioinformaticist / biostatistician, veteran USAF medic and Army infantryman, armchair paleontologist, occasional science fiction author, long-ago kickboxer, oldbat goth, vaccinated liberal patriot.