i really fucking hate dependency management. downloading packages from a centralized, user-submitted repository really grinds my fucking gears
this is like the one go flaw that makes me not use the language for absolutely everything
It's not actually centralized, but they use caching servers iirc, so it mostly is. Maybe there's a way to turn it off
@nyanide

> Maybe there's a way to turn it off

GOPROXY='direct'
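
Roughly, if you want that to stick (a sketch; the checksum db is a separate knob you may or may not care about):

```
# one-off, for the current shell
export GOPROXY=direct

# or persist it in Go's own env file (~/.config/go/env on Linux)
go env -w GOPROXY=direct

# sum.golang.org is queried separately; turn it off too if the point
# is to not talk to Google's servers at all
go env -w GOSUMDB=off
```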
@nyanide These are helpful: `go help environment`, `go env`, `echo FUCK YOU. Strongly worded letter to follow | sendmail rsc@golang.org`

@p @nyanide if only rsc were still involved. things will likely go full corporate bullshit now that he and the other old-time go people aren't involved anymore. :blobcatgrimacing:

@bonifartius @p no, rsc is still involved, he was the one that proposed to remove powerpc support

@nyanide @p oh, i thought he was doing something else now

@nyanide @p ah, he stepped down as tech lead.

regarding powerpc, it's still listed on the build dashboard, just not as a first-class port: build.golang.org/

@bonifartius @nyanide @p To be fair, removing ppc64 support isn't a good idea. Despite how niche it is, it's still used a lot in the enterprise world. I personally know people that need that support.
@phnt @p @nyanide @bonifartius it's also the most powerful kind of computer you can buy that the FSF likes.
(Buy yourself an 8 core 32 thread beast from raptor computing today!)

@RedTechEngineer @phnt @p @nyanide the most recent computer i own is a 400€ refurbished thinkpad :ablobblewobble:

@bonifartius @RedTechEngineer @phnt @nyanide You gotta get into $20 ARM/RISC-V devices, that's where the shit is.

@p @bonifartius @RedTechEngineer @phnt @nyanide What's some good $20 RISC-V hardware to check out? I bought a Milk-V Duo for luls but that's about it.

@raphiel_shiraha_ainsworth @bonifartius @RedTechEngineer @phnt @nyanide There's always something around; I think a Milk-V would be cool, they seem to be the main ones doing interesting stuff at the moment. I have a RISC-V DevTerm, which I mainly got as a curiosity but it's a really fun system. I don't know of a really impressive $current_year one, I've been thinking of getting one of the Lychee cluster boards, but those are a little more than $20.

@p @RedTechEngineer @phnt @nyanide @raphiel_shiraha_ainsworth what i'd really like is something inexpensive with good storage options, like two sata ports for a raid or something. i don't really like to burn through sd cards all the time :ultra_fast_parrot: that would really help with hosting stuff at home.

still would leave the problem that my connection has shit upload bandwidth. maybe i could get a business account from the cable provider or starlink or whatever to fix that, but it's another topic.

@bonifartius @p @RedTechEngineer @nyanide @raphiel_shiraha_ainsworth

>i don't really like to burn through sd cards all the time

Linux has some answers to that problem with filesystems like F2FS and JFFS2. They aren't as user-friendly as the normal ones, but it's still better than nothing, and with some config changes that reduce write cycles you can get a system that does barely any writes when idle (systemd can log to a ring buffer; the same can be achieved with a more normal syslog setup and some ingenuity with logrotate and tmpfs). Some manufacturers even make uSD cards specifically for these SBCs that have higher write endurance and, more importantly, aren't as slow.
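
A rough sketch of what I mean, assuming systemd-journald and an ext4 root (device names and sizes are just placeholders):

```
# journald: keep the log in a RAM ring buffer (/run) instead of flash
# /etc/systemd/journald.conf
#   [Journal]
#   Storage=volatile
#   RuntimeMaxUse=32M

# /etc/fstab: tmpfs for the chatty directories, fewer metadata writes on the root
#   tmpfs           /tmp      tmpfs  defaults,noatime,size=128m   0 0
#   tmpfs           /var/log  tmpfs  defaults,noatime,size=64m    0 0
#   /dev/mmcblk0p2  /         ext4   defaults,noatime,commit=600  0 1

sudo systemctl restart systemd-journald
```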
@phnt @bonifartius @RedTechEngineer @nyanide @raphiel_shiraha_ainsworth Has F2FS improved much in the last few years? My main point of reference is some Phoronix benchmark that demonstrated that Postgres is faster on ext4, but that was from before I set up the previous FSE box, so it's dated. (btrfs, unsurprisingly, performed the worst by an order of magnitude and actually exploded so there are no benchmark numbers for it on some of the SSDs.) In the meantime, ext4 got more SSD-friendly and presumably F2FS has been chugging along.
@p @RedTechEngineer @nyanide @raphiel_shiraha_ainsworth @bonifartius F2FS is still probably much slower than ext4, especially when running something that likes to do a lot of random I/O like a DBMS. It's probably not a good idea to use it on SSDs anyway as those fix a lot of the underlying issues with complex controllers in front of the NAND flash. Google has been using it as the default for both the ro and rw partitions on Android for 4 years. Mainline Linux is probably less stable than that due to a lower degree of testing.
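
If you want numbers for your own hardware instead of old Phoronix runs, a quick fio sketch like this (mount points are placeholders) covers the DBMS-ish 4k random I/O case:

```
# 4k random read/write with direct I/O, run once per filesystem under test
for dir in /mnt/ext4 /mnt/f2fs; do
    fio --name="randrw-$(basename "$dir")" \
        --directory="$dir" --size=1G \
        --rw=randrw --bs=4k --direct=1 --ioengine=libaio \
        --numjobs=4 --runtime=60 --time_based --group_reporting
done
```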

>btrfs, unsurprisingly, performed the worst by an order of magnitude

Probably needs some FS tuning. ZFS has the same issue with a DBMS: it does smart things that the DBMS also does, and that destroys performance.
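
The usual btrfs tuning for a database is just turning CoW off for the data directory, roughly like this (the path is only an example, and the flag only affects files created after it's set):

```
# disable copy-on-write for the DB's data directory on btrfs
mkdir -p /var/lib/postgresql
chattr +C /var/lib/postgresql

# verify the attribute stuck ('C' should show up)
lsattr -d /var/lib/postgresql
```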

> and actually exploded so there are no benchmark numbers for it on some of the SSDs.

Typical BTRFS experience. Thankfully it hasn't catastrophically blown up on me yet in the 4 years I've been using it.

>ext4 got more SSD-friendly

There are two sides to this. One is pushing more performance out of SSDs with more optimized I/O and scheduling (NAND is actually slow at small I/O queue depths and, without a DRAM cache, it can perform much worse than spinning rust). The second is wear-leveling and better management of the raw flash. ext4 probably doesn't bother much with the latter as the controller is expected to do the heavy lifting, but that controller is mostly absent on the more typical embedded/SD Card flash chips.
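
A good chunk of the "SSD-friendly" story on the Linux side is also just making sure TRIM actually runs, e.g.:

```
# one-off trim of a mounted filesystem
sudo fstrim -v /

# or periodic trim via the systemd timer instead of the 'discard' mount option
sudo systemctl enable --now fstrim.timer
systemctl list-timers fstrim.timer
```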
@phnt @RedTechEngineer @bonifartius @nyanide @raphiel_shiraha_ainsworth

> Thankfully it hasn't catastrophically blown up on me yet in the 4 years I've been using it.

I think sjw was running it for something at some point. I already didn't like it when I was in `make menuconfig` and saw "Ooh, new filesystem!" and hit the question mark and it started by saying "It's supposed to be pronounced 'better FS'!" Anyway, the benchmark was more than four years ago (I think 2019) so maybe it doesn't blow up as much any more or maybe it is still the expected btrfs experience. (Even ext4 blew up on me the first time I tried it, at which point I decided to not even bother looking at filesystems unless they've been in production for several years.)

> ZFS has the same issue with a DBMS: it does smart things that the DBMS also does, and that destroys performance.

ZFS does the same with RAID and LVM and the entire I/O subsystem. They should probably rename it NIHFS.

> ext4 probably doesn't bother much with the latter as the controller is expected to do the heavy lifting, but that controller is mostly absent on the more typical embedded/SD Card flash chips.

I think it *mostly* focused on stability. But it's more or less a 30-year-old codebase, you kind of expect stability. New benchmarks would be interesting but I don't know if anyone has bothered.
@p @RedTechEngineer @nyanide @raphiel_shiraha_ainsworth @bonifartius

>I think sjw was running it for something at some point.

He was also running NB on Arch for some time, if I remember correctly, so it doesn't really surprise me :D

>so maybe it doesn't blow up as much any more or maybe it is still the expected btrfs experience.

It still can blow up when it runs out of free space, and since they broke the free space reporting _intentionally_, a lot of userspace utilities that calculate free space before committing transactions will blow up with it. Unless they use custom code linked against libbtrfs, that is. Probably one of the most braindead decisions one could make in filesystem design.
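
You can see the mismatch yourself; statfs-based tools and btrfs's own accounting will happily disagree (mount point is a placeholder):

```
# what generic tools (statfs/df) report as free
df -h /mnt/btrfs

# what btrfs itself reports, split into data/metadata/system
btrfs filesystem df /mnt/btrfs
btrfs filesystem usage /mnt/btrfs
```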

>Even ext4 blew up on me the first time I tried it, at which point I decided to not even bother looking at filesystems unless they've been in production for several years.

I had ext4 survive a bad USB cable that created garbage data and deadlocked the HDD's controller multiple times. It only took 40GB of swap and a day of fsck.ext4 constantly complaining about something to fix it. In the end no data was lost.

>ZFS does the same with RAID and LVM and the entire I/O subsystem. They should probably rename it NIHFS.

It acts like malware in the entire disk and I/O subsystem, sticking its fingers everywhere it can, but usually for a good reason. Where it falls apart is applications trying to be too smart with I/O (1). You can only appreciate the whole DB-like design and extreme paranoia about everything I/O-related when you use it on a large disk array. Other than that, it's a bad filesystem to use on your daily-driver system: none of the benefits with all of the issues. Running it under Linux is also probably a bad idea, just use vanilla FreeBSD or TrueNAS.

(1) You can disable a lot of the "smart" features per pool, so this problem usually only crops up in misconfigured environments.
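
For reference, those knobs are just per-dataset properties, e.g. for a Postgres dataset (pool/dataset name and values are only an example):

```
# see what's currently set and where it was inherited from
zfs get recordsize,primarycache,logbias,atime tank/db

# typical "get out of the DB's way" settings
zfs set recordsize=8K         tank/db   # match the DB page size
zfs set logbias=throughput    tank/db   # the DB's WAL already does the logging
zfs set primarycache=metadata tank/db   # let the DB's buffer cache handle data
zfs set atime=off             tank/db   # skip access-time writes
```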

@phnt @RedTechEngineer @p @nyanide @raphiel_shiraha_ainsworth zfs is nice when used for what it was made for, on a server serving data :)

> Running it under Linux is also probably a bad idea, just use vanilla FreeBSD or TrueNAS.

iirc openzfs is now the same codebase everywhere. i never had problems with the linux port. what i like about zfs is that the tools have a pretty good user interface, like how "zpool status" gives sane descriptions of what is broken and how to fix it.
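
e.g. the day-to-day health check is just (pool name is whatever you called yours):

```
# only prints something interesting if a pool has problems
zpool status -x

# full per-device state, plus "status:" and "action:" text when degraded
zpool status -v tank
```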
