@rzeta0 TLDR: It depends on the usage patterns.
If the consumer can work with the producer in lock-step, then laziness lets you eliminate the intermediate list-building and operate on infinite streams.
Whenever you're working with Haskell lists (and other structures like it) you have to consider 3 situations: empty list [], finite list [1,2,3], and infinite list [1..].
The first two are quite simple. But the last one is the Haskell specialty which regularly breaks the typical advice given in other languages, even "functional" ones.
Consider a case where the producer gives an infinite list (`nums = 36 : nums :: [Int]`).
You can't get a sum of it:
```
-- strict left fold (the !acc needs {-# LANGUAGE BangPatterns #-})
go !acc (current : rest) = go (acc + current) rest
go acc [] = acc
```
It is strict in the accumulator and efficient, but it never terminates on infinite input.
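To see both behaviors side by side, here's a minimal runnable sketch (the `main` and sample inputs are my own additions, assuming GHC with BangPatterns):

```haskell
{-# LANGUAGE BangPatterns #-}

-- The strict sum from above: fine on finite lists, loops on infinite ones.
go :: Int -> [Int] -> Int
go !acc (current : rest) = go (acc + current) rest
go acc []                = acc

main :: IO ()
main = do
  print (go 0 [])          -- 0
  print (go 0 [1, 2, 3])   -- 6
  -- let nums = 36 : nums in print (go 0 nums)  -- never terminates
```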
But you can get a stream of running sums:
```
-- running sums: emits acc before recursing
go !acc (current : next) = acc : go (acc + current) next
go acc [] = [acc]
```
It is "tail recursive", but now it "returns" immediately and the caller can inspect its value.
The value happens to be the list constructor (:) and its two arguments. If you only need the current value (or a constant-size prefix), you don't have to compute the rest at all. Doing nothing is even more efficient than tail recursion.
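Here's a runnable sketch of that lock-step consumption, using the infinite `nums` producer from above (the `main` is my own addition, assuming GHC with BangPatterns):

```haskell
{-# LANGUAGE BangPatterns #-}

-- Running sums: each step emits a value before recursing.
go :: Int -> [Int] -> [Int]
go !acc (current : next) = acc : go (acc + current) next
go acc []                = [acc]

-- The infinite producer from above.
nums :: [Int]
nums = 36 : nums

main :: IO ()
main = print (take 5 (go 0 nums))  -- [0,36,72,108,144]
```

`take 5` only demands five cons cells, so only five steps of `go` ever run. (This `go` is essentially `scanl (+)` from Data.List.)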
@rzeta0 Oops. I just spotted the missing "not"...
> It is "tail recursive", but now it "returns" immediately
Of course it is *not* tail recursive.
Technically (due to the laziness of (:)) the call isn't even recursive by itself: it gives you a value right away. It just happens that you can pinch another value from the second argument of the cons. No recursion, no stack to blow, nothing to tail-optimize (=
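You can make this vivid by feeding it a list whose tail is `undefined` (my own illustration, assuming GHC with BangPatterns): matching one cons cell of the result never touches the rest of the input.

```haskell
{-# LANGUAGE BangPatterns #-}

go :: Int -> [Int] -> [Int]
go !acc (current : next) = acc : go (acc + current) next
go acc []                = [acc]

main :: IO ()
main =
  -- The tail of the input is never forced: peeling off one cons
  -- cell of the result requires no recursive call at all.
  case go 0 (36 : undefined) of
    (x : _) -> print x  -- 0
```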
@dpwiz
thank you
I am new to this concept so it will take some time to sink in but thank you for a really good example