
@haskman Why do you even want to consider laziness at review time? Without profiler/benchmark data you can't point a finger at an expression and say "there's laziness in there, make it strict to go faster" on a hunch. It may just as well go slower.

@haskman Haskell needs to become a better Haskell, not a better Python, or a better Rust, or a better PureScript.

@haskman But anyway, I think that questioning laziness is barking up the wrong tree. What we should strive for is not strictness, but better tooling that answers the challenges of performance and legibility "in the large".

Otherwise we'll end up competing with other, more established languages and their far larger ecosystems instead of playing to our strengths.

@haskman Thank you. I now wonder what a proper qualifier would look like.

It can't be "laziness is bad if you need performance" as there are some good algorithms that leverage laziness to get amortized speed-ups.

It can't be "laziness is bad if you need to analyze code" as there are cases where order is irrelevant (e.g. commuting operations).

🤔
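A standard illustration of that second point (my example, not from the thread): a lazy self-referential stream memoizes earlier results, so each element is computed once and shared, turning an exponential recursion into a linear one.

```haskell
-- Lazy self-referential stream: fibs refers to its own earlier elements,
-- so they are shared rather than recomputed (linear, not exponential).
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 10 fibs)  -- [0,1,1,2,3,5,8,13,21,34]
```

Forcing elements strictly up front would throw that sharing-on-demand away.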

@haskman Either your claim that "laziness (by default) is bad" is objective/universal and one of us is wrong (and I, personally, would like to stop being wrong), or it is subjective and there's nothing for us to argue about.

@haskman I find the opposite is true. Is that complexity or difficulty an objective thing?

@boilingsteam This is worrying.

The "open source" models are parasiting on their behind-the-doors overseers. I doubt that it is even according to their APIs usage terms, but that isn't relevant in the end.
Google has a moat here - they simply don't (?) have a public API. It is the OpenAI that has to sell away its core to remain afloat.
The incentives for "foundational models" business here is to sell API access under tight contracts. With the progressively steep fines for breaches, making them only accessible for progressively bigger B2B peers. And whack-a-mole any leaks of course. "Intellectual property" gets a new ring to it.

But then there's fundamental research, like the Google paper that brought us transformers. Even with more performance per dollar gains, the open source community is stuck with the published models until they collectively start doing their own research. This further incentivizes labs going dark.

Actually, this may even be good for AI Notkillingeveryoneism, as it would mean more incentives for non-proliferation of capabilities.

But then, there's the "commoditize your complement" drive, which forces hardware vendors into fundamental research and open-sourcing capability gains - so that clients buy their chips to run the newest and hottest models.

And this is worrying, since even if the AI labs go dark or extinct, the hardware vendors would be happy to plunge us into the AIpocalypse.

@underlap there are only 3 combinators: one for unfolding, and two for merging nodes. The rest are ye olde fmap, traverse, etc.

Maaaybe if you put in your AST as a functor you'd get a driver for peephole optimisation or something like that. The whole thing reminds me of recursion-schemes content.

@underlap its unfold combinator takes (a -> f a), and that f drives the whole thing. You can then traverse, fold, annotate, and aggregate using Functor composition.

I had a late night thought that my sparse quad tree structure can be split up into a bunch of functors.

And that is indeed the case:

type SparseQuadTree a = Free Quad (Sparse (Range a))

And the scaffold/compact functions are "bring your own algebra" now.

Replace Quad with Binary or Octal and you’d get the matching structure.

Going from a dense (but lazy) scaffold to sparse states that in type:

(a -> Sparse a) -> Free s a -> Free s (Sparse a)

And the decider is a simple function that isn't concerned with structure at all, making it reusable.
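A self-contained sketch of how that could look. The shapes of Quad, Sparse, and Range here are my guesses at the structures named in the post, with Free inlined rather than imported; the real definitions may differ.

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Free over a functor: a leaf (Pure) or an s-shaped node of subtrees.
data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Free fs) = Free (fmap (fmap g) fs)

-- Four quadrants; swap in a two- or eight-way functor for binary/octal trees.
data Quad a = Quad a a a a deriving Functor

-- A region summary: uniformly empty, or carrying a payload.
data Sparse a = Empty | Filled a deriving (Show, Eq, Functor)

newtype Range a = Range (a, a) deriving (Show, Eq)

type SparseQuadTree a = Free Quad (Sparse (Range a))

-- The decider only judges leaf values; fmap handles the structure part,
-- which is exactly why the signature from the post is structure-agnostic.
compact :: Functor s => (a -> Sparse a) -> Free s a -> Free s (Sparse a)
compact decide = fmap decide

-- Tiny usage example over Int leaves:
leaves :: Free Quad a -> [a]
leaves (Pure a)              = [a]
leaves (Free (Quad a b c d)) = concatMap leaves [a, b, c, d]

demo :: [Sparse Int]
demo = leaves (compact (\n -> if n == 0 then Empty else Filled n)
                       (Free (Quad (Pure 0) (Pure 3) (Pure 0) (Pure 7))))
-- demo == [Empty, Filled 3, Empty, Filled 7]
```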

@haskman
Strict-by-default is just asking users to do what a compiler should do.

For the same reason we don't want to use manual memory management (even with the borrow checker assistance) *all the time*.

Let me focus on what's important and let the compiler find the best way to do it.

@ianbicking > From a practical standpoint, this strengthens my worries about LLM assistants entrenching popular languages and tools. If ChatGPT, Bing, and similar tools become an essential part of a programmer's arsenal, it's hard to imagine a new language—or even a new framework—taking off if the models can't use them. Yet, if the language or framework doesn't take off, the models will never learn how to use it for lack of training data.

This is very sad. The network effects get even more networky and we're getting stuck more deeply in some local optima :ablobhungry:

@matthew_d_green this is what you get for leaving the future of the web in corporate hands...

But... Was it, ever, free from it? :ablobcatcoffee:

@reidrac@social.sdf.org Just found something that can be a good kata on parsers and typeclasses.

And not without utility! You can add some CLI options to your game while getting acquainted with the basics.

lhbg-book.link/05-glue/04-optp
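That chapter is built around optparse-applicative; roughly the kind of thing you end up with is sketched below. The Opts record and its fields are a made-up game example of mine, not from the book.

```haskell
import Options.Applicative

-- Hypothetical game options; the fields are illustrative only.
data Opts = Opts
  { optName  :: String
  , optLives :: Int
  } deriving (Show, Eq)

-- Applicative composition of option parsers, one per field.
optsP :: Parser Opts
optsP = Opts
  <$> strOption (long "name" <> metavar "NAME" <> help "Player name")
  <*> option auto (long "lives" <> metavar "N" <> value 3
                   <> help "Starting lives (default 3)")

main :: IO ()
main = do
  opts <- execParser (info (optsP <**> helper) fullDesc)
  print opts
```

The nice kata aspect: the Parser is an Applicative, so the same composition habits from parsing text carry straight over to parsing argv.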

@reidrac@social.sdf.org I hope I didn't interfere with your studying/project goals. I tried to let you discover the things that are close by, and only suggest the "obvious" parts that are just outside your current search space.

When I was starting out, I wasted a lot of time chasing trivial things, just because I didn't know where to search and had not even a guess of what to find. That wasn't a productive activity, and a mentor could have cut "time to prod" even more.

@reidrac@social.sdf.org Fair... An opportunity to note how avoiding abstraction has its own costs, like a growing codebase becoming progressively harder to navigate.
Anyway, I, too, usually write a lot of concrete code first, with minimal abstraction. Only when I see that a certain pattern persists do I take the abstraction saw to the accrued boilerplate. The expansion-compression cycle of a code pump (8

@reidrac@social.sdf.org writing parsers is a good way to understand the language. It trains a few things about composition, assembling code from smaller parts.

Type classes are Haskell's power tools of the trade (:
They encode many common patterns about types and enable that distinct coding flow of "delegate this away and let me focus on my types here" (which you did with the Vector instance).

You can use the Typeclassopedia for an overview of what you can encounter in the wild and where they can be helpful. You don't have to remember everything, just the names of the things and maybe the context where you may encounter them.

The most important are Semigroup/Monoid, Functor/Applicative/Monad, and Functor/Foldable/Traversable. Recently, two-parameter classes like Bifunctor have been gaining prominence too.
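As a tiny taste of that "delegate it away" flow (my example): Foldable handles walking the structure and Monoid handles combining the pieces, so the function body is one line and works on any container.

```haskell
import Data.Monoid (Sum (..))

-- Foldable walks the structure, Monoid combines the results;
-- the same one-liner sums a list, a Maybe, a tree, ...
total :: (Foldable t, Num a) => t a -> a
total = getSum . foldMap Sum

main :: IO ()
main = print (total [1, 2, 3 :: Int], total (Just 5 :: Maybe Int))
-- prints (6,5)
```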

wiki.haskell.org/Typeclassoped
