@Kjaerulv rationalism doesn't have to be cold-blooded. Not unless you don't value having warm blood.
Switched to map and contour generation from quad tree cells. Those now correctly represent terrain transitions, not only passable regions.
The thing is more costly to construct, but the map filling is much faster than sampling each tile corner - less duplicate work, and big chunks of uniform terrain are naturally aggregated and can be dispatched as one job.
Out of 1M tiles only ~75k are 1-tile transitions that are processed sequentially (presumably while some bigger jobs are chugging in the background).
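Roughly the shape of it (a toy sketch, not the actual code; all names are made up):

```haskell
-- Toy sketch: a quad tree whose leaves are either uniform terrain chunks
-- or 1-tile transition cells.
data Terrain = Water | Sand | Grass | Rock
  deriving (Eq, Show)

data Cell
  = Uniform Terrain             -- whole cell is one terrain: fill it as a single job
  | Transition Terrain Terrain  -- 1-tile boundary: handled in the sequential pass

data Quad = Leaf Cell | Node Quad Quad Quad Quad

-- Split the tree into fill jobs: each uniform chunk becomes one (area, terrain)
-- job, while transition cells are collected for the sequential pass.
fillJobs :: Int -> Quad -> ([(Int, Terrain)], [(Terrain, Terrain)])
fillJobs size (Leaf (Uniform t))      = ([(size * size, t)], [])
fillJobs _    (Leaf (Transition a b)) = ([], [(a, b)])
fillJobs size (Node a b c d)          =
  mconcat [ fillJobs (size `div` 2) q | q <- [a, b, c, d] ]
```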
I really should do that final tagless retargetable encoding for SDFs and run those on GPU instead.
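To make that concrete for myself (a hypothetical sketch; the primitive set and all names are invented here): one class for the SDF language, and each target is just another instance, so a GPU backend would be one more interpreter emitting shader source from the same expressions.

```haskell
-- Hypothetical final-tagless encoding of 2D SDFs: primitives and combinators
-- are class methods, every target is an instance.
class SDF repr where
  circle :: Double -> repr                  -- radius
  box    :: Double -> Double -> repr        -- half-extents (approximate distance)
  union  :: repr -> repr -> repr
  shift  :: (Double, Double) -> repr -> repr

-- CPU target: an SDF is just a distance function.
newtype Eval = Eval { runEval :: (Double, Double) -> Double }

instance SDF Eval where
  circle r         = Eval $ \(x, y) -> sqrt (x * x + y * y) - r
  box w h          = Eval $ \(x, y) -> max (abs x - w) (abs y - h)
  union a b        = Eval $ \p -> min (runEval a p) (runEval b p)
  shift (dx, dy) s = Eval $ \(x, y) -> runEval s (x - dx, y - dy)

-- A GPU target would be another instance, e.g. one that pretty-prints shader
-- code instead of evaluating, reusing the very same scene expression.
scene :: SDF repr => repr
scene = union (circle 1) (shift (2, 0) (box 0.5 0.5))
```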
@RL_Dane @Deuchnord Rust is okay, but I like Haskell because it is more high-level
@isagalaev If that is indeed their ~~plan~~ blueprint for comprehensive regulation touted in that interview, then there would be no voters to get mad. Alas, no profits either.
@timbray I wonder what its failure mode would be when the "thinkers of the children" come for its bacon.
Okay, the numbers level out (with a slight advantage for the Free) when the sampling function becomes complicated (a primitive SDF vs a stack of 100 primitives). I'm now more sure that I'm measuring the right thing, and not some laziness fluke.
Added bisimulation tests for the Free re-implementation (found 2 bugs) and got to benchmarking the thing.
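The tests themselves are simple in shape. A self-contained toy stand-in (not the real code, which compares SDF samplers): generate programs, run both interpreters, demand they agree.

```haskell
import Test.QuickCheck

-- Toy stand-in for the bisimulation property: a tiny expression language,
-- a direct interpreter and a differently-structured one; QuickCheck demands
-- they agree on every generated input.
data Expr = Lit Int | Add Expr Expr | Neg Expr
  deriving Show

instance Arbitrary Expr where
  arbitrary = sized gen
    where
      gen 0 = Lit <$> arbitrary
      gen n = oneof
        [ Lit <$> arbitrary
        , Add <$> gen (n `div` 2) <*> gen (n `div` 2)
        , Neg <$> gen (n - 1)
        ]

-- The "tight package" interpreter.
evalDirect :: Expr -> Int
evalDirect (Lit n)   = n
evalDirect (Add a b) = evalDirect a + evalDirect b
evalDirect (Neg a)   = negate (evalDirect a)

-- A restructured interpreter (continuation-passing here, standing in for
-- the Free-based re-implementation).
evalCPS :: Expr -> Int
evalCPS e0 = go e0 id
  where
    go (Lit n)   k = k n
    go (Add a b) k = go a (\x -> go b (\y -> k (x + y)))
    go (Neg a)   k = go a (k . negate)

prop_bisimilar :: Expr -> Property
prop_bisimilar e = evalDirect e === evalCPS e

main :: IO ()
main = quickCheck prop_bisimilar
```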
I was surprised that a round dance of 3 functors and a ping-pong of functions passing control around turned out not "a little slower" than a tight package, as I expected, but instead twice as fast!
@haskman Why do you even want to consider laziness at review time? Without profiler/benchmark data you can't point a finger at an expression and say "there's laziness in there, make it strict to go faster" on a hunch. It may just as well go slower.
@haskman But anyway, I think that questioning laziness is barking up the wrong tree. What we should strive for is not strictness, but better tooling that answers the challenges of performance and legibility "in the large".
Otherwise we'll end up competing with other, more established languages and their more mature ecosystems instead of leading with our strong foot.
@haskman Thank you. I now wonder what a proper qualifier would look like.
It can't be "laziness is bad if you need performance", as there are some good algorithms that leverage laziness to get amortized speed-ups (a toy sketch below).
It can't be "laziness is bad if you need to analyze code", as there are cases where evaluation order is irrelevant (e.g. commuting operations).
🤔
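The toy sketch I meant for the first point: a lazily-defined stream, where the sharing that laziness gives for free turns the naive exponential recursion into linear work; forcing it eagerly would throw exactly that away.

```haskell
-- Laziness pulling its weight: each element is computed once and shared,
-- so a prefix of n Fibonacci numbers costs O(n) additions instead of the
-- exponential blowup of the naive recursive definition.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- Only the demanded prefix is ever forced:
-- take 10 fibs == [0,1,1,2,3,5,8,13,21,34]
```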
@haskman Either your claim that "laziness (by default) is bad" is objective/universal and one of us is wrong (and I, personally, would like to stop being wrong), or it is subjective and there's nothing for us to argue about.
@haskman I find the opposite is true. Is that complexity or difficulty an objective thing?
@boilingsteam This is worrying.
The "open source" models are parasiting on their behind-the-doors overseers. I doubt that it is even according to their APIs usage terms, but that isn't relevant in the end.
Google has a moat here - they simply don't (?) have a public API. It is OpenAI that has to sell away its core to remain afloat.
The incentive for the "foundational models" business here is to sell API access under tight contracts, with progressively steeper fines for breaches, making them accessible only to progressively bigger B2B peers. And whack-a-mole any leaks, of course. "Intellectual property" gets a new ring to it.
But then there's fundamental research, like the Google paper that brought us transformers. Even with further performance-per-dollar gains, the open source community is stuck with the published models until it collectively starts doing its own research. This further incentivizes labs to go dark.
Actually, this may even be good for AI Notkillingeveryoneism, as it adds more incentives for non-proliferation of capabilities.
But then, there's this "commoditize your complement" drive that forces hardware vendors into fundamental research and open-sourcing capability gains - so that clients buy their chips to run the newest and hottest models.
And this is worrying, since even if the AI labs go dark or go extinct, the hardware vendors would be happy to plunge us into AIpocalypse.