2022: We need end-to-end certified compilation!
2024: Nah, just stuff your "proofs" into an LLM to get "executables".
https://x.com/VictorTaelin/status/1837925011187027994
> Since the AgdaJS compiler isn't maintained, and since creating one would be a $1m-budget project, we're now compiling 590 Agda files to TypeScript using Sonnet-3.5, to let us run our apps on the browser.
> The results are excellent. Almost all files work with no intervention. Error rate is lower than the current AgdaJS compiler, on which we found bugs. Performance is 20x higher. We're now automating this in a loop.
@gregeganSF Wow. The text is so profoundly myopic I'm surprised it came from an author who writes about alien minds. Then again, maybe I shouldn't be, as he may have a knee-jerk reaction to encroachment on his turf: the language.
The opening shot is a masterpiece: "Whatever the art is, AIs sure can't do it".
And I love the illustration too.
But this admission is misplaced.
It all follows from the omission of what kind of AIs we're talking about. The headline reads as the broad claim "whatever the AI is, it can't do art (whatever that is)".
It then gets narrowed down to "the current generation of commercial/public LLMs attached to a chat interface".
The core point is that such a system can't make choices that would be "its own" and it just autocompletes the user input from the internet corpus.
That's a load-bearing "just", but I'll let it slide.
Amusingly, Ted goes on a side quest about training efficiency, which is irrelevant to the central claim. But it stands out that the author of "alien mind" stories fails to recognize that the thing under inspection is different from animal brains.
Anyway, the claim gets backed by the assumption that there's no light inside, and thus no one to make the choices for the art (art being a package of choices made, by the essay's own definition).
And this is where he trips up on that silent narrowing of AI.
Sure, public chat-like models aren't agentic. If anything, they are specifically steered away from being anything like that and into the autocomplete realm, since that is where the commercial interests are. And, as he correctly points out, there's demand for "no effort, only demands", and the corps are happy to oblige.
In a way, he has come to a wallpaper market in search of a poetry group. Yes, the corpos are selling the idea of creativity to users, which is arguably dishonest. But such is the marketing culture of the day. Nevertheless, the inverse of a bad take doesn't make good support for his claims.
At least he does recognize the emerging sub-genre of "let's sift through boatloads of generated slop and maybe try to nudge it somewhere interesting". And with the growing interest, tool support will come, that's for sure.
Sorry, I'm meandering again, gotta write more essays (=
In the end the claim narrows down to "it would take more than a few years to make a truly autonomous system that would make salient choices to produce quality art pieces".
Now that's a grounded prediction for which the evidence can be collected. And I would love to see an essay/paper that does just that.
But that is very, very far away from what the headline says.
Instead of delving into (sorry) the topic, Ted produced some textual slop by rehashing the already stale claims. Ironic.
Long time, no SIMD #Haskell
JK, the naivest, stupidest binary tree blows the fancy 4-way SIMDed BVH out of the water.
Actually, it may be slower by itself, but multicore apparently *destroys* wide-instruction performance. Meanwhile the cache^W completely oblivious scalar traversal is happy to run on all the capabilities available.
High-ceremony 4-wide or primitive 10-/20-/whatever-wide?
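For flavor, here is roughly what the "naivest, stupidest" scalar side of that comparison could look like. This is a hedged sketch with invented names (`Aabb`, `Bvh`, `traverseBvh`), not the actual renderer's code:

```haskell
-- Hypothetical sketch: a plain binary BVH with a completely scalar,
-- completely oblivious traversal. All names are invented.
data Aabb = Aabb (Double, Double, Double) (Double, Double, Double)

data Bvh a
  = Leaf Aabb [a]
  | Node Aabb (Bvh a) (Bvh a)

-- Slab test: does the ray (origin, direction) enter the box before
-- leaving it, with the exit in front of the origin?
hitAabb :: (Double, Double, Double) -> (Double, Double, Double) -> Aabb -> Bool
hitAabb (ox, oy, oz) (dx, dy, dz) (Aabb (lx, ly, lz) (hx, hy, hz)) =
    tEnter <= tExit && tExit > 0
  where
    slab o d l h = let t0 = (l - o) / d   -- division by 0 gives ±Infinity,
                       t1 = (h - o) / d   -- which the min/max handle fine
                   in (min t0 t1, max t0 t1)
    (x0, x1) = slab ox dx lx hx
    (y0, y1) = slab oy dy ly hy
    (z0, z1) = slab oz dz lz hz
    tEnter = maximum [x0, y0, z0]
    tExit  = minimum [x1, y1, z1]

-- Prune by the box, recurse into both children, keep the nearest hit
-- distance reported by the primitive intersector.
traverseBvh :: (Double, Double, Double) -> (Double, Double, Double)
            -> (a -> Maybe Double) -> Bvh a -> Maybe Double
traverseBvh o d hitPrim = go
  where
    go (Leaf box ps)
      | hitAabb o d box =
          case [t | Just t <- map hitPrim ps] of
            [] -> Nothing
            ts -> Just (minimum ts)
      | otherwise = Nothing
    go (Node box l r)
      | hitAabb o d box = nearest (go l) (go r)
      | otherwise       = Nothing
    nearest (Just a) (Just b) = Just (min a b)
    nearest a        Nothing  = a
    nearest Nothing  b        = b
```

Nothing here begs for wide instructions, which is exactly why it parallelizes so painlessly across cores.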
There are 3 types of Visitor:
* Standard Visitor: visitors that are very easy to contain or do not need many resources to contain. This does not mean the visitor is harmless, just that it is simple to contain.
* Marriage/Civil Partnership Visitor: visitors that require more resources to contain completely, or whose containment isn't always reliable. Usually this is because the relationship is insufficiently understood or inherently unpredictable.
* DELETED.
* Transit Visitor: visitors that are very difficult to contain or haven't yet been contained. Generally, the resources available to contain them are very limited, yet containing them demands a great deal. This doesn't mean the visitor is dangerous, just difficult to contain.
https://www.gov.uk/guidance/immigration-rules/immigration-rules-appendix-v-visitor
BVH is love, BVH is life 📦
Ramped up samples, bounces, and resolution, and the rendering time is still less than the first attempts with linear traversal of all objects in the world.
Converting the interaction event into a single constructor with additive, multiplicative, and scatter components helped a little too.
Also skipped ray bouncing when attenuation drops below the 10-bit range. In practice, the bounce limit remains only to prevent infinite bounces between perfect mirrors.
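A rough sketch of both tricks, under the assumption that the real types look nothing like this (`Interaction`, `trace`, and friends are all invented here):

```haskell
-- Hypothetical sketch of the single-constructor interaction event and
-- the attenuation cutoff; names are invented, not the renderer's API.
data V3 = V3 !Double !Double !Double

addV, mulV :: V3 -> V3 -> V3
addV (V3 a b c) (V3 x y z) = V3 (a + x) (b + y) (c + z)
mulV (V3 a b c) (V3 x y z) = V3 (a * x) (b * y) (c * z)

data Ray = Ray V3 V3  -- origin, direction

-- One constructor instead of separate emitted/absorbed/scattered cases:
-- every interaction carries an additive term, a multiplicative term,
-- and maybe a continuation ray.
data Interaction = Interaction
  { emitted     :: V3        -- additive (light sources)
  , attenuation :: V3        -- multiplicative (albedo)
  , scattered   :: Maybe Ray -- where the ray goes next, if anywhere
  }

-- Below this, a contribution can't show up even in 10-bit output.
cutoff :: Double
cutoff = 1 / 1024

maxComp :: V3 -> Double
maxComp (V3 x y z) = maximum [x, y, z]

-- 'acc' is the product of attenuations so far; once it drops below the
-- cutoff the tail is dropped. The depth limit then only matters for
-- perfect mirrors (attenuation 1), which the cutoff never kills.
trace :: (Ray -> Interaction) -> Int -> V3 -> Ray -> V3
trace interact depth acc ray
  | depth <= 0 || maxComp acc < cutoff = V3 0 0 0
  | otherwise =
      let Interaction e att next = interact ray
      in case next of
           Nothing -> e
           Just r  ->
             e `addV` (att `mulV` trace interact (depth - 1) (acc `mulV` att) r)
```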
Cornell box, here I come!
Textures are just `Material a` combinators concerned with producing stuff at hit points.
Since `Material` is a Monad now, there's no need for special cases for colors and then another bunch of combinators for materials.
A sampler can be used to pick entire materials if needed.
(Ooops, I actually CPSd a little...)
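For the curious, a minimal, un-CPSd sketch of the shape this describes — a reader over the hit point — with every name (`Hit`, `Material`, `checker`) invented for illustration:

```haskell
-- Simplified sketch (no CPS): Material as a reader over the hit point.
-- All names are invented, not the actual renderer's code.
data Hit = Hit { hitU :: Double, hitV :: Double }

newtype Material a = Material { runMaterial :: Hit -> a }

instance Functor Material where
  fmap f (Material g) = Material (f . g)

instance Applicative Material where
  pure = Material . const
  Material f <*> Material g = Material (\h -> f h (g h))

instance Monad Material where
  Material g >>= k = Material (\h -> runMaterial (k (g h)) h)

type Color = (Double, Double, Double)

-- A texture is just a Material Color combinator.
solid :: Color -> Material Color
solid = pure

-- A uv sampler that picks between two *entire materials*, not merely
-- two colors -- with the Monad, that's the same operation.
checker :: Material a -> Material a -> Material a
checker a b = do
  h <- Material id
  let cell = floor (hitU h * 8) + floor (hitV h * 8) :: Int
  if even cell then a else b
```

The same `checker` works for `Material Color` and for full scatter-producing materials, which is the whole point of collapsing the two layers.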
Toots as he pleases.