i really fucking hate dependency management. downloading packages from a centralized, user-submitted repository really grinds my fucking gears
this is like the one go flaw that makes me not use the language for absolutely everything
It's not actually centralized, but the toolchain goes through Google's caching/proxy servers by default iirc, so in practice it mostly is. Maybe there's a way to turn it off
@nyanide

> Maybe there's a way to turn it off

GOPROXY='direct'
@nyanide These are helpful: `go help environment`, `go env`, `echo FUCK YOU. Strongly worded letter to follow | sendmail rsc@golang.org`
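
In practice it's just the one knob. A rough sketch, assuming a Go toolchain new enough to have `go env -w` (1.13+):

```sh
# see what the toolchain is currently using
go env GOPROXY GOSUMDB

# fetch modules straight from their origin repos, skipping proxy.golang.org
go env -w GOPROXY=direct

# optionally skip the checksum database lookups as well
go env -w GOSUMDB=off

# or vendor dependencies so builds never touch the network at all
go mod vendor
```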

@p @nyanide if only rsc was still involved. things will likely get full corporate bullshit now that he and the other old time go people aren't involved anymore. :blobcatgrimacing:

@bonifartius @p no rsc is still involved he was the opne that proposed to remove powerpc support

@nyanide @p oh, i thought he was doing something else now

@nyanide @p ah, he stepped down as tech lead.

regarding powerpc, it's still listed on the build dashboard, just not as a first class port: build.golang.org/

@bonifartius @nyanide @p To be fair, removing ppc64 support isn't a good idea. Despite how niche it is, it's still used a lot in the enterprise world. I personally know people that need that support.
@phnt @p @nyanide @bonifartius it's also the most powerful kind of computer you can buy that the FSF likes.
(Buy yourself an 8 core 32 thread beast from raptor computing today!)

@RedTechEngineer @phnt @p @nyanide the most recent computer i own is a 400€ refurbished thinkpad :ablobblewobble:

@bonifartius @RedTechEngineer @phnt @nyanide You gotta get into $20 ARM/RISC-V devices, that's where the shit is.

@p @bonifartius @RedTechEngineer @phnt @nyanide What's some good $20 RISC-V hardware to check out? I bought a Milk-V Duo for luls but that's about it.

@raphiel_shiraha_ainsworth @bonifartius @RedTechEngineer @phnt @nyanide There's always something around; I think a Milk-V would be cool, they seem to be the main ones doing interesting stuff at the moment. I have a RISC-V DevTerm, which I mainly got as a curiosity but it's a really fun system. I don't know of a really impressive $current_year one, I've been thinking of getting one of the Lychee cluster boards, but those are a little more than $20.

@p @RedTechEngineer @phnt @nyanide @raphiel_shiraha_ainsworth what i'd really like is something inexpensive with good storage options, like two sata ports for a raid or something. i don't really like to burn through sd cards all the time :ultra_fast_parrot: that would really help with hosting stuff at home.

still would leave the problem that my connection has shit upload bandwidth. maybe i could get a business account from the cable provider or starlink or whatever to fix that, but it's another topic.

@bonifartius @RedTechEngineer @phnt @nyanide @raphiel_shiraha_ainsworth

> like two sata ports for a raid or something.

Reasonable. The TuringPi board has a couple of SATA ports and a couple of mini-PCIe connectors; mini-PCIe SATA controllers can be gotten cheap, but to fit it into a DevTerm, you'd have to solder it in and remove the printer.

> i don't really like to burn through sd cards all the time

Ah, yeah. No errors on the uSD currently in my DevTerm, which I have basically never turned off for two years. I think the durability has gotten better. On the other hand, I used older uSD cards for doing the builds of CRUX (for the A-06) and Slackware (for the RISC-V one) and two of them burned out pretty quickly.

> that would really help with hosting stuff at home.

Yeah; for hosting stuff at home, like, I used to just grab refurb servers, and my main server (mail, web, a bunch of Plan 9 VMs, etc.) still is a refurbished DL380 G7. You can get these things from Newegg or wherever in the ~$100-200 range. Like, they have a DL380 for $164 right now: https://www.newegg.com/hp-proliant-dl380-g9-rack/p/2NS-0006-31E21?Item=9SIAG1MKA76526 . The only problem is a refurb is a refurb; I never had any trouble until I got that giant one to run FSE on, and FSE was up and down all that time because the motherboard had some problem that I never ended up solving. (Had to be the motherboard because the hardware watchdog would lock up.)

The TuringPi2 is nice. Much lower power consumption, reasonably priced, aforementioned SATA ports. That's what FSE lives on right now; it's running on a single RK1 with an NVMe. No moving parts besides the fans.
@bonifartius @p @RedTechEngineer @nyanide @raphiel_shiraha_ainsworth

>i don't really like to burn through sd cards all the time

Linux has some answers to that problem with filesystems like F2FS and JFFS2. They aren't as user-friendly as the normal ones, but it's still better than nothing, and with some config changes that reduce write cycles you can get a system that does barely any writes when idle (systemd can log to a ring buffer; the same can be achieved with a more normal syslog setup and some ingenuity with logrotate and tmpfs). Some manufacturers even make uSD cards specifically for these SBCs that have higher write endurance and, more importantly, aren't as slow.
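
Roughly what that looks like, as a sketch (paths and sizes are just examples, adjust for your distro):

```sh
# keep the systemd journal in RAM only (a ring buffer), never on flash
sudo mkdir -p /etc/systemd/journald.conf.d
printf '[Journal]\nStorage=volatile\nRuntimeMaxUse=32M\n' \
  | sudo tee /etc/systemd/journald.conf.d/volatile.conf

# put log/scratch dirs on tmpfs and relax the ext4 flush interval; in /etc/fstab:
#   tmpfs           /var/log  tmpfs  defaults,noatime,size=64m   0 0
#   tmpfs           /tmp      tmpfs  defaults,noatime,size=128m  0 0
#   /dev/mmcblk0p2  /         ext4   defaults,noatime,commit=600 0 1
```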

@phnt @RedTechEngineer @p @nyanide @raphiel_shiraha_ainsworth this still limits their usability imo, because many interesting uses need storage to write to.

i'm not a hardware guy, i just wonder why so few boards include sata or m.2 ports.

i'd really love an inexpensive arm board with many sata ports to build a small nas with. you don't need much cpu power or much ram to do this, only a decent network interface.

@phnt @RedTechEngineer @p @nyanide @raphiel_shiraha_ainsworth but i will try out these fs and see if they are any good - i tend to stick to the classic ones as they are so well tested by now.

@bonifartius @phnt @RedTechEngineer @nyanide @raphiel_shiraha_ainsworth

> i'd really love an inexpensive arm board with many sata ports to build a small nas with.

They've had kits for this ( https://www.hardkernel.com/shop/cloudshell-2-for-xu4/ ) but it's mostly DIY nowadays unless you spring for one of the boards that does have the m.2 already. Most of the RPi gear has a way to get at the PCIe bus nowadays, so you don't really need to worry about uSD cards much any more, except for portable systems. (Even then, though, like, the DevTerm/uConsole, people have tapped into the pins and shoved a "real" SSD inside. I use them as portable machines to talk to the bigger machines, though, so I don't mind treating the storage as disposable and I don't want to trade the battery life.)

You can sorta see the SATA ports next to the PSU on the TPi2 board; they're next to the power connector. (They're empty on FSE because the NVMe is slotted under the board.)
[attached photo IMG_9860.jpg: the TuringPi 2 board]

@p @RedTechEngineer @phnt @nyanide @raphiel_shiraha_ainsworth i didn't know about the sata stuff for rpis, for a while i was eyeing rockpro64 because it has two sata ports so it could do a raid.

the turing board looks _really_ nice, thanks for the picture! i don't think i have the funds for the board and more than one compute module right now, but it would likely solve all my server needs i have here :)

i will follow up the rpi-cm-sata lead, a first search seems promising

@bonifartius @RedTechEngineer @phnt @nyanide @raphiel_shiraha_ainsworth

> i didn't know about the sata stuff for rpis, for a while i was eyeing rockpro64 because it has two sata ports so it could do a raid.

Oh, yeah, there are a lot of options for that kind of thing nowadays.

> the turing board looks _really_ nice, thanks for the picture! i don't think i have the funds for the board and more than one compute module right now,

Yeah, it's cheap for what it is, but not cheap-cheap. But basically, all the stuff I crammed into that case, it was about $900, and the previous refurbished box with all the trouble was $1400. (And now it's all choked by the shitty net connection because of the circumstances surrounding :brucecampbell::callmesnake:, but it's beefy enough at least.)
@dcc @RedTechEngineer @bonifartius @nyanide @phnt @raphiel_shiraha_ainsworth I don't know. Could be the software, could be the video chip. What's the temperature when it is doing that?
@dcc @RedTechEngineer @bonifartius @nyanide @phnt @raphiel_shiraha_ainsworth Hm. My display glitches a little since I had to take that trip in January. (Basically slept in the car, hoodie kept me warm but I think some of my devices went below freezing.) It goes away after it warms up (past ~28 degrees communist); it's minor so I haven't tried to figure out which component it was. Tried swapping the core out?

@p @RedTechEngineer @phnt @nyanide @raphiel_shiraha_ainsworth
the standard rpi cm 4 baseboard has a pcie port (haven't found one for cm5 with pcie port yet) and there are four-port sata boards made for it, guess that should work fine for my purposes.

@p @RedTechEngineer @phnt @nyanide @raphiel_shiraha_ainsworth @bonifartius

I built a NAS, but it's just a four-bay USB JBOD and a pi3.

The pi is fine until a full btrfs fsck is needed; when that happened I had to move it over to a full PC. (RAM demands.)
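
(For what it's worth, newer btrfs-progs have a low-memory check mode that might keep it on the pi; a rough sketch, device paths are just examples:)

```sh
# offline check in low-memory mode: much slower, but it avoids holding
# all of the metadata in RAM at once (filesystem must be unmounted)
sudo btrfs check --mode=lowmem /dev/sda1

# online alternative: a scrub verifies data/metadata checksums without the RAM spike
sudo btrfs scrub start -Bd /mnt/nas
```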

@bonifartius @RedTechEngineer @p @nyanide @raphiel_shiraha_ainsworth

>i'm not a hardware guy, i just wonder why so few boards include sata or m.2 ports.

Using ARM boards as desktops or servers is a relatively new concept, and before that you didn't really need either of those. That's why. SATA needs a separate controller (usually hanging off a PCIe bus), M.2 needs both of those (or at least PCIe lanes for NVMe), and cheap-enough ARM chips with PCIe support only came out in the last few years.

With RISC-V it's the same story, but with even less traction and demand in the market.

>i'd really love an inexpensive arm board with many sata ports to build a small nas with

There are Raspberry Pi hats with ~4 SATA ports on them, if you want. But to me it feels like a hack instead of a proper solution. As p wrote before me, an ODROID or a TuringPi board is the more proper solution to that.
@phnt @bonifartius @RedTechEngineer @nyanide @raphiel_shiraha_ainsworth

> Using ARM boards as desktops or servers is a relatively new concept

That was the original use of the Acorn RISC Machine ("ARM") CPU: https://en.wikipedia.org/wiki/Acorn_Computers . I had a couple of Genesi Efika MXs. (I have been a fan of ARM since the GBA.)

> With RISC-V it's the same story, but with even less traction and demand in the market.

Yeah. They're pretty fuckin' cool, though.

> i'm not a hardware guy, i just wonder why so few boards include sata or m.2 ports.

I'm not an electrical engineer either, @bonifartius, but I'd guess it's summat to do with power delivery. Not that it's impossible, but with lower total power usage, fewer things can go wrong. An NVMe drive could easily have a higher peak wattage than the rest of the SBC, and guess how I learned that!

Cc: @phnt, @RedTechEngineer, @p, @nyanide & @raphiel_shiraha_ainsworth

@phnt @bonifartius @RedTechEngineer @nyanide @raphiel_shiraha_ainsworth Has F2FS improved much in the last few years? My main point of reference is some Phoronix benchmark that demonstrated that Postgres is faster on ext4, but that was from before I set up the previous FSE box, so it's dated. (btrfs, unsurprisingly, performed the worst by an order of magnitude and actually exploded, so there are no benchmark numbers for it on some of the SSDs.) In the meantime, ext4 got more SSD-friendly and presumably F2FS has been chugging along.
@p @RedTechEngineer @nyanide @raphiel_shiraha_ainsworth @bonifartius F2FS is still probably much slower than ext4, especially when running something that likes to do a lot of random I/O like a DBMS. It's probably not a good idea to use it on SSDs anyway as those fix a lot of the underlying issues with complex controllers in front of the NAND flash. Google has been using it as the default for both the ro and rw partitions on Android for 4 years. Mainline Linux is probably less stable than that due to a lower degree of testing.

>btrfs, unsurprisingly, performed the worst by an order of magnitude

Probably needs some FS tuning. ZFS has the same issue with a DBMS, where it does smart things that the DBMS also does, and that destroys performance.
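
(The usual tuning for a DB directory on btrfs is disabling COW for it; a sketch, the path is just an example:)

```sh
# nodatacow only applies to files created after the flag is set, so mark the
# directory before the DB creates its files (note: also disables checksumming there)
sudo mkdir -p /var/lib/postgresql
sudo chattr +C /var/lib/postgresql
lsattr -d /var/lib/postgresql   # should list the 'C' attribute
```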

> and actually exploded so there are no benchmark numbers for it on some of the SSDs.

Typical BTRFS experience. Thankfully it didn't catastrophically blow up on me yet in the 4 years I've been using it.

>ext4 got more SSD-friendly

There are two sides to this. One is pushing more performance out of the SSDs with more optimized I/O and scheduling (NAND is actually slow at small I/O queue depths, and without a DRAM cache it can perform much worse than spinning rust). The other side is wear-leveling and better management of the raw flash. ext4 probably doesn't bother much with the latter, as the controller is expected to do the heavy lifting, but that controller is mostly absent on the more typical embedded/SD card flash chips.
@phnt @RedTechEngineer @bonifartius @nyanide @raphiel_shiraha_ainsworth

> Thankfully it didn't catastrophically blow up on me yet in the 4 years I've been using it.

I think sjw was running it for something at some point. I already didn't like it when I was running `make menuconfig` and saw "Ooh, new filesystem!" and hit the question mark and it started by saying "It's supposed to be pronounced 'better FS'!" Anyway, the benchmark was more than four years ago (I think 2019), so maybe it doesn't blow up as much any more, or maybe it is still the expected btrfs experience. (Even ext4 blew up on me the first time I tried it, at which point I decided to not even bother looking at filesystems unless they've been in production for several years.)

> ZFS has the same issues with DBMS where it does smart things that the DBMS also does and it destroys performance.

ZFS does the same with RAID and LVM and the entire I/O subsystem. They should probably rename it NIHFS.

> ext4 probably doesn't bother much with the latter as the controller is expected to do the heavy lifting, but that controller is mostly absent on the more typical embedded/SD Card flash chips.

I think it *mostly* focused on stability. But it's more or less a 30-year-old codebase, you kind of expect stability. New benchmarks would be interesting but I don't know if anyone has bothered.
@p

Early EXT4 blew up on me, too. I flopped over to ReiserFS after that and eventually stopped using it when upstream support dropped off.

@RedTechEngineer @phnt @nyanide @raphiel_shiraha_ainsworth @bonifartius
@p @RedTechEngineer @nyanide @raphiel_shiraha_ainsworth @bonifartius

>I think sjw was running it for something at some point.

He was also running NB on Arch for some time, if I remember correctly, so it doesn't really surprise me :D

>so maybe it doesn't blow up as much any more or maybe it is still the expected btrfs experience.

It can still blow up when it runs out of free space, and since they broke the free-space reporting _intentionally_, a lot of userspace utilities that calculate free space before committing transactions will blow up with it, unless they use custom code linked against libbtrfs, that is. Probably one of the most braindead decisions one could make in filesystem design.

>Even ext4 blew up on me the first time I tried it, at which point I decided to not even bother looking at filesystems unless they've been in production for several years.

I had ext4 survive a bad USB cable that created garbage data and deadlocked the HDD's controller multiple times. It only took 40GB of swap and a day of fsck.ext4 constantly complaining about something to fix it. In the end no data was lost.

>ZFS does the same with RAID and LVM and the entire I/O subsystem. They should probably rename it NIHFS.

It acts like malware in the entire disk and I/O subsystem, sticking its fingers everywhere it can, but usually for a good reason. Where it falls apart is applications trying to be too smart with I/O (1). You can only appreciate the whole DB-like design and the extreme paranoia about everything I/O related when you use it on a large disk array. Other than that, it's a bad filesystem to use on your daily-driver system: none of the benefits with all of the issues. Running it under Linux is also probably a bad idea, just use vanilla FreeBSD or TrueNAS.

(1) You can disable a lot of the "smart" features per pool, so this problem usually only crops up in misconfigured environments.
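
(For the DBMS case, the knobs people usually flip are per-dataset properties; a sketch, pool/dataset names are made up:)

```sh
zfs set recordsize=8K         tank/pgdata   # match the DB page size
zfs set primarycache=metadata tank/pgdata   # let the DB do its own data caching
zfs set logbias=throughput    tank/pgdata   # bias sync writes toward the main pool instead of the slog
zfs set compression=lz4       tank/pgdata   # cheap and usually still a win
```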
@phnt @p @nyanide @raphiel_shiraha_ainsworth @bonifartius we're still talking about file systems? I thought it was understood that XFS was the only one worth discussing.
@RedTechEngineer @p @nyanide @raphiel_shiraha_ainsworth @bonifartius That's my go-to on Linux servers. The only issue I have with it isn't even FS related.

Neither md nor LVM does parity checking on reads by default, so you'll encounter silent bitrot more frequently, compared to almost zero on ZFS. As a result you either need to run scrubs more frequently, which can be annoying depending on the array size, or you need to configure dm-integrity with its not-so-great documentation. But if you need to run Linux on your storage server, it's still the best bet and even has good performance.
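
(The manual scrub on md is just a sysfs poke; a sketch assuming an array at /dev/md0:)

```sh
# kick off a parity/consistency check on an md array
echo check | sudo tee /sys/block/md0/md/sync_action

# watch progress and see whether any mismatches turned up
cat /proc/mdstat
cat /sys/block/md0/md/mismatch_cnt
```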

@phnt @RedTechEngineer @p @nyanide @raphiel_shiraha_ainsworth zfs is nice when used for what it was made for, on a server serving data :)

> Running it under Linux is also probably a bad idea, just use vanilla FreeBSD or TrueNAS.

iirc openzfs is now the same code base everywhere. i never had problems with the linux port. what i like about zfs is that the tools have a pretty good user interface, like how "zpool status" provides sane descriptions of what is broken and how to fix it.

@phnt @RedTechEngineer @bonifartius @nyanide @raphiel_shiraha_ainsworth

> He was also running NB on Arch for some time, if I remember correctly

I think so; it was ubertuber by the time it was baest, though.

> Unless they use custom code linked from libbtrfs that is.

:alexjonesshiggy2:

> Probably one of the most braindead decisions one could make in filesystem design.

Well, there's a thing that makes no sense when designing a regular POSIX filesystem, and then there's a thing that makes no sense if your goal is a good filesystem but makes perfect sense if you are trying to do lock-in so you can turn open source into a closed ecosystem. That was the specific goal for RedHat at some point (and part of Lennart's pitch to his bosses about why they should push systemd), so it's not a huge surprise that they would try to force a new library down everyone's throats (given systemd and D-Bus and PulseAudio and Avahi and and and and and ad infinitum).

> but usually for a good reason.

Well, like, every thing in a shantytown has a good reason to be there, but the shantytown considered as a whole doesn't represent good engineering. "Oh, we don't trust the OS's I/O scheduler to do this optimally" is a good reason, but it's bad engineering.
@RedTechEngineer @bonifartius @nyanide @phnt @raphiel_shiraha_ainsworth I don't know, you'll have to ask him. (He might have said at some point, but I pay almost no attention to distro discussions.)
@p @phnt @nyanide @raphiel_shiraha_ainsworth @bonifartius I like btrfs for SBCs or other devices with little and slow storage.
Transparent compression, COW, and data dedup in the filesystem make things nice.
Though it feels like btrfs has semi-stalled. I feel like encryption has been a planned feature for the better part of a decade, and RAID is still broken, which seems strange for a filesystem that likes virtual volumes.
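
(Roughly what I mean, as a sketch; the device and mountpoint are placeholders:)

```sh
# transparent zstd compression plus no atime writes; nice on slow eMMC/uSD
sudo mount -o compress=zstd:3,noatime /dev/mmcblk0p2 /mnt/data

# dedup is out-of-band via a userspace tool, e.g. duperemove
duperemove -rdh /mnt/data
```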
@RedTechEngineer @bonifartius @nyanide @phnt @raphiel_shiraha_ainsworth Basically all of my Looniks systems are just ext4. Anything I write does its damnedest to not touch the disk anyway, so speed doesn't matter, just predictability.