Crazy how no language I know of other than #Ada uses a second stack to return “indefinite objects” (arrays without a statically known size, etc.). In the whole body of Ada code I've seen, it easily removes 60–70% of cases where you'd normally dynamically allocate. It has all the benefits of the stack, where everything is nicely scoped, and it even works on real-time and/or embedded systems!
In most Ada code the only reason you'd need to do dynamic allocation is if you need a data structure that contains indefinite objects (e.g. an array of strings) or you're interfacing with some C API. And in the former case there's a good variety of data structures in the stdlib that abstract away the fact that they allocate, so you don't even have to worry about it.
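The classic case is just returning a String whose length isn't known until runtime. A minimal sketch of what I mean (Make_Greeting and Demo are made-up names, and I'm assuming GNAT here, where an unconstrained function result goes on the secondary stack):

with Ada.Text_IO;

procedure Demo is

   --  Returns a String whose bounds aren't known at compile time;
   --  GNAT hands the result back on the secondary stack rather than
   --  forcing a heap allocation.
   function Make_Greeting (Name : String) return String is
   begin
      return "Hello, " & Name & "!";
   end Make_Greeting;

   --  Result's bounds come from whatever the call returned; no access
   --  types, no Unchecked_Deallocation, and it's gone when Demo exits.
   Result : constant String := Make_Greeting ("Ada");

begin
   Ada.Text_IO.Put_Line (Result);
end Demo;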
@nytpu How do you arrange for indefinite objects to get deallocated in a stack-like manner? Sounds a bit like MLKit "regions" or what is sometimes called "arena allocation" in other languages.
@radehi
In GNAT (the GCC Ada compiler) it actually operates a bit like an arena allocator that supports incremental deallocation. When an object goes lexically out of scope, the compiler marks it as unused and then tries to “roll back” the secondary stack as far as possible without clobbering live data (potentially rolling back other regions marked unused that weren't cleaned up earlier). It's primarily used in situations where something would normally be stack-allocated but can't be allocated by the caller because the size is unknown (like returning arrays from a function). Such objects are scoped just like any other stack-allocated object, merely placed in a different region so they can persist through a return from a function call.
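To make the scoping concrete, a rough sketch (Scopes and Read_Line are made-up names; I'm assuming the usual GNAT behavior where an unconstrained String result lands on the secondary stack):

with Ada.Text_IO;

procedure Scopes is

   --  Wraps the parameterless Ada.Text_IO.Get_Line, which returns a
   --  String of whatever length the user typed.
   function Read_Line return String is
   begin
      return Ada.Text_IO.Get_Line;
   end Read_Line;

begin
   declare
      --  Line is elaborated on the secondary stack here...
      Line : constant String := Read_Line;
   begin
      Ada.Text_IO.Put_Line (Line);
   end;
   --  ...and once this block is left, the runtime can roll the
   --  secondary stack back past it, like any other stack object.
end Scopes;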
The biggest issue would be non-local pointers to the stack object, but a pointer to a secondary-stack object has the same caveats as a pointer to any other stack object. And Ada's design generally resists pointers being passed up the call stack (the compiler is often overly scrupulous in checking this, too).
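e.g. something like this won't compile, which is the point (Escape, Leak, etc. are made-up names):

procedure Escape is

   type Int_Ptr is access all Integer;

   Dangling : Int_Ptr;

   procedure Leak is
      X : aliased Integer := 42;
   begin
      --  GNAT rejects this at compile time ("non-local pointer cannot
      --  point to local object"): X would otherwise outlive its scope.
      Dangling := X'Access;
   end Leak;

begin
   Leak;
end Escape;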
Details on the specific internals here: https://gcc.gnu.org/git/?p=gcc.git;a=blob;f=gcc/ada/libgnat/s-secsta.ads;h=62e1c0bfdb3681c37ef5fc314f8ff8ada81fea77;hb=HEAD#l115
@nytpu Thinking further: if A calls B, which is going to return to it some variable-sized thing (thing 1) on the secondary stack, and B calls C, which returns to it a variable-sized thing 2 on the secondary stack, and C returns to B, and B starts allocating thing 1, does that mean B must consume and deallocate thing 2 first? Otherwise, thing 1 will be pushed after thing 2 on the stack, right? So thing 1 gets deallocated when thing 2 does, before B can return it to A?
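Concretely, the shape I'm picturing is something like this (names made up, assuming unconstrained String results use the secondary stack):

with Ada.Text_IO;

procedure A is

   function C return String is
   begin
      return "thing 2";                --  thing 2, on the secondary stack
   end C;

   function B return String is
      Thing_2 : constant String := C;  --  B receives thing 2
   begin
      --  thing 1 is built while thing 2 is still in scope in B, and
      --  thing 1 has to survive B's return to A.
      return "thing 1 built from " & Thing_2;
   end B;

   Thing_1 : constant String := B;     --  A receives thing 1

begin
   Ada.Text_IO.Put_Line (Thing_1);
end A;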
In Perl, RPL, PostScript, and Forth, the answer is, yes, B must remove thing 2 from the operand stack before it can start creating thing 1.