Makefiles speed up compilation.
Leaving GCC/binutils aside, compiling #Jehanne from scratch takes 2 to 3 minutes. A whole operating system: kernel, userspace programs and all.
When makefiles were invented, compiling and linking a simple hello world could take minutes, so it was rational to spend computing cycles to minimize the amount of recompiled code by keeping track of the modified files and their dependencies.
But is this still true today?
To be honest, I don't think so.
Today makefiles and similar tools are used to tame the huge complexity of big codebases, while the right thing to do would be to avoid writing such big codebases in the first place.
That's basically why I didn't port either GNU make or Plan 9 mk to #Jehanne: I don't think we should still use them.
In other words, I'd suggest using a simple lisp (or rc, or sh, or... whatever) script to build your code instead of make and similar tools.
@ariel@m.costas.dev
While we shouldn't waste energy as if it were an unlimited resource, we should be careful with performance optimization.
We should ban #blockchain technologies as soon as possible, but we should also note that #BigTech married the greenwashing trend long ago.
They can offer "green" computation at scale by excluding from their carbon footprint the energy consumed by the connected clients (mostly browsers).
This is obviously unfair, because they control such client-side computation in several ways, including by controlling browser development, the standards, and the frameworks built on top of them (#React, #Angular and so on...).
Not to mention that they decide which version of the code runs in your browser, a choice that can be "personalized"/targeted just like ads.
So ultimately they lie, as usual.
BUT arguing about performance and energy consumption might end up supporting their rhetoric.
So sure, compiled software is more efficient than interpreted software, but (first) you need to consider the whole computation, including the remote client and data transfer, and (second) you need to find the optimal balance between the pollution of the biosphere and the predation of the cybersphere.
Amazon might even (pretend to) provide the most energy efficient computation, but in the end they would use the power provided by the data collected to move goods around the world in a very polluting way.
Actually, I wonder if the constraints that made makefiles a good solution are still there.
@ariel@m.costas.dev
"Fast" is arguably a #BigTech concern, not something Aral (or me) would give a shit.
Simplicity however is very important, in different ways, for both of us.
However when you try to figure out how simple is your software, you cannot ignore the stack you are using.
How many milions of lines of codes are behind those 100 #javascript ones?
This obviously apply to any stack, go, python, perl, lisp... even C!
Hidden complexity is not simplicity.
@minimalprocedure@mastodon.uno Schools and universities could, but - with a few exceptions - in Italy they are among the institutions that embrace and foster the addiction to proprietary oligopolies.
Individually I try to do it https://elearning.sp.unipi.it/course/view.php?id=554 leveraging article 33 of the Constitution, but this means being systematically marginalized by those who have accepted the idea that the model for a university professor should be ragionier Filini.
It is not impossible to avoid submitting to GAFAM https://peertube.devol.it/w/89XNjqjPTXaMVdWsoR3ZzN But how do we turn individual choices into collective ones, especially for those who work in institutions and companies that do the opposite?
https://vimuser.org/1337box.html
Completely bloat-free, javascript-free lightbox implementations. Insanely optimized, pure CSS. Makes viewing images on web pages more user-friendly.
The optimized version is currently 140 bytes!
I've had this for a while now, and thought I'd publish it.
Not bad! :-)
eIDAS has strict requirements/audits/sanctions for CAs that operate in the EU.
Today they are the issuers of digital signatures, which are the basis of all transactions with legal validity in Europe.
Browsers rigid standards = chosen by Google.
Trust a private monopolist more?
---
RT @doctorow
EU's Digital Identity Framework Endangers Browser Security https://www.eff.org/deeplinks/2021/12/eus-digital-identity-framework-endangers-browser-…
https://twitter.com/doctorow/status/1471711109593845760
«Manifest V3 is a harmful step backwards for #privacy on the Internet. … Nothing that #ManifestV3 introduces in its current state can help protect privacy. Extension developers and users should take a firm stand against it.»
#Google is changing everything, removing the features in the #Chrome APIs these extensions rely on, and it does so in the name of security, privacy and performance, but some doubt it.
By Richi #Jennings
https://securityboulevard.com/2021/12/google-nukes-ad-blockers-manifest-v3-is-coming/
(little implicit premise: when I say "fork" I mean Plan 9 style rfork, which lets you specify what the child inherits from the parent process... except for the stack, which is inherited anyway)
The fact is that both inherited memory and `malloc`ed memory (and even `mmap`ed memory on Linux, afaik!) are just a promise from a cheating kernel: the physical memory is NOT assigned to the process until the last possible instant (the memory access fault).
In #Plan9 this is true for the stack too, which is handled as copy-on-write, and I'd guess that on #Linux and #BSD all the other segments are CoW as well.
Also, on a Plan 9 system pages are 4096-byte chunks, so the CoW is pretty conservative: after a fork only one new physical page is assigned to the new process (the page where the return value of the fork syscall gets written).
So, in theory, fork is not that memory hungry after all.
But AFAIK this happens with malloc too: pages are assigned to processes only on read/write faults. In modern systems memory is always overcommitted.
It is dangerous, but consider the alternative: most allocated memory would stay unused, and you could run only a fraction of the programs you run on the same hardware.
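On Linux you can watch this overcommit accounting directly from the shell. This is a Linux-specific sketch: the /proc files below don't exist on Plan 9 or the BSDs.

```shell
#!/bin/sh
# Committed_AS is the total memory the kernel has "promised" to all
# processes; on a typical desktop it comfortably exceeds what is
# physically backed at any given moment.
grep -E 'MemTotal|CommitLimit|Committed_AS' /proc/meminfo

# The current overcommit policy: 0 = heuristic (the default),
# 1 = always overcommit, 2 = strict accounting.
cat /proc/sys/vm/overcommit_memory
```

With the default heuristic policy the kernel happily hands out more virtual memory than it can back, counting on most of it never being touched.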
Well, this is a good answer, but in fact on Plan 9 you only get a new process: memory is only copied on write.
So it's not necessarily a malloc.
Also, rfork gives you a good degree of control over what is copied.