@RL_Dane @Deuchnord Rust is okay, but I like Haskell because it is more high-level.
@isagalaev If that is indeed their ~~plan~~ blueprint for comprehensive regulation touted in that interview, then there would be no voters left to get mad. Alas, no profits either.
@timbray I wonder what its failure mode would be when the "thinkers of the children" come for its bacon.
Okay, the numbers level out (with a slight advantage for the Free version) when the sampling function becomes complicated (a primitive SDF vs a stack of 100 primitives). I'm now more sure that I'm measuring the right thing, and not some laziness fluke.
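Roughly the two extremes, as a toy sketch (not the actual code):

```haskell
-- signed distance functions over 2D points
type SDF = (Double, Double) -> Double

-- a single primitive: cheap to sample
circle :: Double -> SDF
circle r (x, y) = sqrt (x * x + y * y) - r

-- the "stack of 100 primitives" case: union as a minimum over the
-- stack, so every single sample touches every primitive
union :: [SDF] -> SDF
union fs p = minimum [f p | f <- fs]
```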
Added bisimulation tests for the Free re-implementation (found 2 bugs) and got to benchmarking the thing.
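The tests are the boring kind, which is the point. A sketch of the shape (the two functions below are stand-ins for the real implementations):

```haskell
import Test.QuickCheck (Property, (===), quickCheck)

-- two implementations of the same observable behaviour should
-- agree on every input; stand-in workloads for illustration
sampleTight, sampleFree :: [Int] -> Int
sampleTight = sum               -- stands in for the original
sampleFree  = foldr (+) 0       -- stands in for the Free re-implementation

prop_bisimulate :: [Int] -> Property
prop_bisimulate xs = sampleTight xs === sampleFree xs

main :: IO ()
main = quickCheck prop_bisimulate
```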
I was surprised that a round dance of 3 functors and a ping-pong of functions passing control around is not "a little slower" than a tight package, but instead twice as fast!
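The harness is plain criterion, nothing fancy (workloads here are placeholders):

```haskell
import Criterion.Main (bench, defaultMain, nf)

-- placeholder workloads standing in for the two pipelines
tight, free :: Int -> Int
tight n = sum [1 .. n]
free  n = foldr (+) 0 [1 .. n]

main :: IO ()
main = defaultMain
  [ bench "tight package" $ nf tight 100000
  , bench "free functors" $ nf free  100000
  ]
```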
@haskman Why would you even want to consider laziness at review time? Without profiler/benchmark data you can't point a finger at an expression and say "there's laziness in there, make it strict to go faster" on a hunch. It may just as well go slower.
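A contrived sketch of how a hunch-driven bang can backfire:

```haskell
-- `expensive` is only needed on a miss, so lazily the hits never pay
lookupOr :: Int -> [(Int, Int)] -> Int -> Int
lookupOr def table k =
  let expensive = sum [1 .. 1000000 :: Int] + def  -- suspended until demanded
  in case lookup k table of
       Just v  -> v          -- the big sum never runs on this path
       Nothing -> expensive
-- make it strict on a hunch (BangPatterns: let !expensive = ...)
-- and every hit now computes a million-element sum for nothing
```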
@haskman But anyway, I think that questioning laziness is barking up the wrong tree. What we should strive for is not strictness, but better tooling that answers the challenges of performance and legibility "in the large".
Otherwise we'll end up competing with other, more established languages and their more mature ecosystems instead of leading with our strong foot.
@haskman Thank you. I now wonder what a proper qualifier would look like.
It can't be "laziness is bad if you need performance", as there are good algorithms that leverage laziness for amortized speed-ups (e.g. Okasaki's queues, sketched below).
It can't be "laziness is bad if you need to analyze code", as there are cases where evaluation order is irrelevant (e.g. commuting operations).
🤔
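The Okasaki queue I have in mind, sketched:

```haskell
-- a banker's-queue sketch: the *suspended* reverse is what lets the
-- amortized O(1) bounds survive persistent (shared) use
data Queue a = Queue !Int [a] !Int [a]  -- front size, front, rear size, rear

empty :: Queue a
empty = Queue 0 [] 0 []

push :: a -> Queue a -> Queue a
push x (Queue fn f rn r) = balance fn f (rn + 1) (x : r)

pop :: Queue a -> Maybe (a, Queue a)
pop (Queue _  []      _  _) = Nothing
pop (Queue fn (x : f) rn r) = Just (x, balance (fn - 1) f rn r)

-- restore the invariant |rear| <= |front|; `reverse r` is merely
-- suspended here, its cost paid off across the pops that reach it
balance :: Int -> [a] -> Int -> [a] -> Queue a
balance fn f rn r
  | rn <= fn  = Queue fn f rn r
  | otherwise = Queue (fn + rn) (f ++ reverse r) 0 []
```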
@haskman Either your claim that "laziness (by default) is bad" is objective/universal and one of us is wrong (and I, personally, would like to stop being wrong), or it is subjective and there's nothing for us to argue about.
@haskman I find the opposite is true. Is that complexity or difficulty an objective thing?
@boilingsteam This is worrying.
The "open source" models are parasiting on their behind-the-doors overseers. I doubt that it is even according to their APIs usage terms, but that isn't relevant in the end.
Google has a moat here - they simply don't (?) have a public API. It is OpenAI that has to sell away its core to remain afloat.
The incentive for the "foundational models" business here is to sell API access under tight contracts, with progressively steeper fines for breaches, making them accessible only to progressively bigger B2B peers. And to whack-a-mole any leaks, of course. "Intellectual property" gets a new ring to it.
But then there's fundamental research, like the Google paper that brought us transformers. Even with performance-per-dollar gains, the open source community is stuck with the published models until it collectively starts doing its own research. This further incentivizes labs to go dark.
Actually, this may even be good for AI Notkillingeveryoneism, as it adds incentives for non-proliferation of capabilities.
But then there's the "commoditize your complement" drive, which forces hardware vendors into fundamental research and open-sourcing capability gains - so that clients would buy their chips to run the newest and hottest models.
And this is worrying, since even if AI labs go dark or die out, the hardware vendors would be happy to plunge us into the AIpocalypse.
@underlap there are only 3 combinators: one for unfolding, and two for merging nodes. The rest is ye olde fmap, traverse, etc.
Maaaybe if you put your AST in as the functor you'd get a driver for peephole optimisation or something like that. The whole thing reminds me of the contents of recursion-schemes.
@underlap Its unfold combinator takes (a -> f a), and that f drives the whole thing. You can then traverse, fold, annotate and aggregate using Functor composition.
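For flavor, roughly that shape with the stock unfold from Control.Monad.Free (which wants an Either so it knows where to put the Pure leaves; Quad and the depth cutoff here are made up for the example):

```haskell
{-# LANGUAGE DeriveFunctor #-}
import Control.Monad.Free (Free, unfold)

data Quad a = Quad a a a a
  deriving Functor

-- scaffold a depth-limited tree from a seed region: Left stops
-- with a leaf, Right splits into four child seeds
scaffold :: Int -> (r -> Quad r) -> r -> Free Quad r
scaffold depth split = unfold step . (,) depth
  where
    step (0, r) = Left r
    step (d, r) = Right ((,) (d - 1) <$> split r)
```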
I had a late night thought that my sparse quad tree structure can be split up into a bunch of functors.
And that is indeed the case:
```haskell
type SparseQuadTree a = Free Quad (Sparse (Range a))
```
And the scaffold/compact functions are “bring your own algebra” now.
Replace Quad with Binary or Octal and you’d get the matching structure.
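Roughly how the pieces fit (a simplified sketch; Sparse and Range here are stand-ins for the real definitions):

```haskell
{-# LANGUAGE DeriveFunctor #-}
import Control.Monad.Free (Free)

data Quad a   = Quad a a a a  deriving Functor  -- four children
data Binary a = Binary a a    deriving Functor  -- same trick, 1D

-- stand-ins: a cell is either uniform (droppable) or worth keeping
data Sparse a = Uniform | Varied a  deriving Functor
data Range a  = Range a a           deriving Functor  -- min/max over a cell

type SparseQuadTree a = Free Quad (Sparse (Range a))
type SparseBinTree  a = Free Binary (Sparse (Range a))
```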
Going from a dense (but lazy) scaffold to sparse is stated right in the type:
```haskell
(a -> Sparse a) -> Free s a -> Free s (Sparse a)
```
And the decider is a simple function that isn’t concerned with structure at all, making it reusable.
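And since Free s is a Functor over its leaves, plain fmap already has that type; all the interesting work lives in the decider (compact is an illustrative name, Sparse is the same stand-in as above):

```haskell
import Control.Monad.Free (Free)

data Sparse a = Uniform | Varied a  -- stand-in, as before

-- the dense-to-sparse pass itself is just fmap over the leaves;
-- any merging of newly-uniform nodes would be a separate algebra
compact :: Functor s => (a -> Sparse a) -> Free s a -> Free s (Sparse a)
compact decide = fmap decide
```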
@haskman what do you mean by complexity here?