Today's 2am shower thought:
Why are we still designing software as if it were just software, when all it has ever done is tell firmware what to do? Why aren't we instead developing micro-firmware that handles universal software requests for IO? Every operating system has a kernel that governs what the OS can and can't do, so why aren't the hardware manufacturers putting the OS on the chip? Where's my Windows NT kernel chip? Where's my Linux kernel chip? An OS preprocessor in silicon would make whole classes of viruses obsolete, and software companies could embed their activation keys in the chip to stop piracy altogether. #getWithTheProgram #iot #secureSoftware #privacy #efficiency
@skanman OS on chip? Or at least in hardware? Too many vested interests.
However, if you took out the BIOS and replaced it with a combined BIOS and OS … something probably already done in China and Russia and in military/security hardware … there's a startup idea for starters …
@Lobster Not necessarily the whole OS, just the kernel; it already works hand in hand with the BIOS. Things like Windows Embedded tried to put the whole damn thing on chip, which creates update nightmares. But how often does a kernel actually evolve? Not often at all. Even Windows 11's kernel is a direct descendant of the NT line that goes back decades, and the Linux kernel only hit version 6 after some thirty years. By preprocessing the kernel in hardware, everything the rest of the OS does gets one less layer between it and the metal, and "everything the rest of the OS does" is basically the whole OS. The Pentium Pro, a bajillion years ago, was actually optimized for exactly the kind of 32-bit code the NT kernel is made of, and with the right kernel that 200 MHz machine reportedly ran dramatically faster at the OS level. And if you consider how many servers are running hypervisors and container layers, you could pull the bridge out of those as well.

Chips today are down to a 4 nm fabrication process. Did you know a huge share of those dies get tossed in the garbage because of yield limits at that size? Nobody mentions this aspect of the chip shortage. At a certain point we need to start reducing the distance between the software and the hardware.

On a side note, we need to stop teaching programming students to lean on SDKs and libraries; it results in so much code being compiled into apps that never gets run, which is why current applications are so bloated. We also teach misleading ideas, like calling Java in Android Studio a "native app". It's not even close. People are missing the boat: you can write Android apps in C and C++ that are a fraction of the size of their Java equivalents and execute insanely faster on the same hardware. All of this can be alleviated by moving software into hardware. The name of the game today is performance. The best games are high quality at high framerates; the best search engines are the simplest and fastest. The top of the food chain these days belongs to whoever ships the most low-level code. The evidence is in the stock markets.
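On the Android point, here's roughly what "actually native" looks like. A minimal sketch, assuming a hypothetical app package `com.example.demo` whose `MainActivity` declares `public native int addNative(int a, int b);` and loads the library with `System.loadLibrary`; built with the NDK, the resulting shared object is a few kilobytes of real machine code:

```c
#include <jni.h>

/* Hypothetical example: com.example.demo.MainActivity declares
 * `public native int addNative(int a, int b);` and ART binds it to
 * this symbol by name. The body runs as plain machine code, with no
 * VM dispatch in the hot path. */
JNIEXPORT jint JNICALL
Java_com_example_demo_MainActivity_addNative(JNIEnv *env, jobject thiz,
                                             jint a, jint b)
{
    (void)env;  /* unused in this trivial example */
    (void)thiz;
    return a + b;
}
```

You can go further with the NDK's NativeActivity and skip Java for the app logic entirely; the JNI bridge above is just the smallest possible demonstration.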
@Lobster http://www.raspibo.org/wiki/index.php/Compile_the_Linux_kernel_for_Chip:_my_personal_HOWTO
It seems quite a few people are targeting legacy Raspberry Pis, since those devices no longer get updates. After examining their process, the modified process for the use cases from my previous messages comes to about half the steps, because nothing needs recompiling for ARM and it's simple enough to pull a kernel from a distro that universally supports PC hardware. There are steps to add, though:

1. Make or download an image of the target BIOS.
2. Subtract its size from the chip size to measure the free space left on the chip (see the sketch below).
3. Decompile the old BIOS.
4. Compile a legacy bootloader that targets the memory addresses after the bootloader in the BIOS and loads the kernel.
5. Configure the kernel to scan for distro locations.
6. Compile both into the new BIOS image.
7. Flash it.
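Step 2 is just arithmetic on the image file. A minimal sketch, assuming a hypothetical 16 MB SPI flash part; adjust `CHIP_SIZE` to whatever part your board actually carries:

```c
#include <stdio.h>

/* Hypothetical free-space check: how much room is left on the flash
 * chip after the vendor BIOS image? CHIP_SIZE is an assumption; read
 * the actual part number off the board. */
#define CHIP_SIZE (16UL * 1024 * 1024)  /* assumed 16 MB SPI flash */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s bios.img\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }
    fseek(f, 0, SEEK_END);          /* image size = offset of file end */
    long bios_size = ftell(f);
    fclose(f);

    printf("BIOS image:   %ld bytes\n", bios_size);
    printf("free on chip: %ld bytes\n", (long)CHIP_SIZE - bios_size);
    return 0;
}
```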
There's probably a step missing for some computers: making the BIOS itself accessible as storage for the bootloader, like a fake UEFI partition, or just adding the bootloader code to the execution path after the regular POST checks. These two steps would be interchangeable depending on the manufacturer.
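The second route is essentially what legacy option ROMs already do: after POST, the BIOS scans for images that start with the 0x55 0xAA signature and whose bytes sum to zero mod 256, then executes them. A hedged sketch of those header rules (`fix_checksum` and `looks_like_oprom` are my own helper names, not any standard API):

```c
#include <stdint.h>
#include <stddef.h>

/* Legacy option-ROM layout: byte 0 = 0x55, byte 1 = 0xAA,
 * byte 2 = image length in 512-byte blocks, and the sum of all
 * bytes must be 0 mod 256 or the BIOS refuses to run the image. */

static int looks_like_oprom(const uint8_t *rom)
{
    return rom[0] == 0x55 && rom[1] == 0xAA;
}

/* Patch the final byte so the whole image sums to zero. Using the
 * last byte as the checksum slot is a common convention, not a rule;
 * any byte the code never executes will do. */
static void fix_checksum(uint8_t *rom, size_t len)
{
    uint8_t sum = 0;
    rom[len - 1] = 0;
    for (size_t i = 0; i < len; i++)
        sum += rom[i];
    rom[len - 1] = (uint8_t)(0x100 - sum);
}
```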
The main caveat: if for some reason the BIOS decides it can't enter flash mode because of an error, the motherboard is probably bricked. Older boards that enter flash mode via a jumper use chips that are too small; newer boards have chips large enough, but they're soldered on, so you risk destroying the board.
A side theory: it's probably possible to pull this off with a third-party GPU too. I don't know if you've ever noticed, but they carry a dedicated video BIOS that the system firmware executes during POST, before the OS ever loads. The sweet part of this idea is that you don't risk destroying the computer, they have even more storage, and they execute on a faster bus. You could also swap it between computers once it's operational. The caveat here is that the GPU won't actually be a GPU anymore, just a wildly fast way to throw a user into their OS before they even see a manufacturer logo. I can only imagine doing this on a server distro with no GUI, just the shell: as soon as you push the power button, you're prompted for credentials, and boot times are measured in milliseconds. I think I'll start with a few Dell Latitudes and Lenovo ThinkCentres I've got lying around; I won't shed many tears if I brick them. Business-class computers usually have giant BIOS storage because of all the special "enterprise" features hardly anyone ever actually cared about. As far as GPUs to experiment with go, I'm not sure, I'll have to buy a few. Got any recommendations?
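In the meantime, here's a quick way to see which option ROMs a box already maps, the video BIOS included. A Linux-only sketch that walks the legacy expansion-ROM window at 0xC0000-0xDFFFF; it needs root, and it assumes the firmware still maps that region (pure-UEFI boots often don't):

```c
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

/* Scan the legacy option-ROM window for 0x55AA signatures. The BIOS
 * itself scans this region on 2 KB boundaries during POST, so we do
 * the same. Run as root; on pure-UEFI systems the window may be empty. */
int main(void)
{
    int fd = open("/dev/mem", O_RDONLY);
    if (fd < 0) {
        perror("open /dev/mem");
        return 1;
    }

    const off_t base = 0xC0000;     /* start of the expansion-ROM area */
    const size_t win = 0x20000;     /* 128 KB window, up to 0xDFFFF    */
    uint8_t *mem = mmap(NULL, win, PROT_READ, MAP_SHARED, fd, base);
    if (mem == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    for (size_t off = 0; off < win; off += 2048) {
        if (mem[off] == 0x55 && mem[off + 1] == 0xAA)
            printf("option ROM at 0x%lx, %u x 512 bytes\n",
                   (unsigned long)(base + off), (unsigned)mem[off + 2]);
    }

    munmap(mem, win);
    close(fd);
    return 0;
}
```

The GPU's VBIOS typically shows up at 0xC0000 itself, which is a decent sanity check that you're reading the right window.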