@Gargron you mean the version of the DB software itself I suppose? Seems reasonable.
Though I am actually kind of surprised that, given your size, you haven't moved to a master-slave configuration yet, though I'm not sure if Mastodon supports that. I think it was one of the things Pleroma was trying to sell as a feature, but don't quote me on that.
Either way, good luck!
@freemo Mastodon supports replication and I used it in the past to migrate physical machines (and upgrade disk space that way). However, Postgres 9.6 has no way of upgrading to more recent versions via replication, since streaming replication requires the same major version on both ends.
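(For context, the replica-based machine migration described above looks roughly like this on 9.6. A sketch only: the hostname old-db.example.com, the repl role, and the data directory path are made-up placeholders, and the primary's pg_hba.conf has to allow the replication connection.)

```bash
# On the new machine: clone the running primary over the wire.
pg_basebackup -h old-db.example.com -U repl \
  -D /var/lib/postgresql/9.6/main \
  -X stream -P

# 9.6-era recovery.conf: run as a standby that follows the old primary.
cat > /var/lib/postgresql/9.6/main/recovery.conf <<'EOF'
standby_mode = 'on'
primary_conninfo = 'host=old-db.example.com user=repl'
EOF

# Start the standby, let it catch up, then promote it and repoint the app.
pg_ctl -D /var/lib/postgresql/9.6/main start
pg_ctl -D /var/lib/postgresql/9.6/main promote
```

The catch is exactly what's stated above: this is physical replication, so both ends must run the same major version. It moves hardware, not Postgres versions.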
@Gargron ahh, lame, and kinda surprising; Postgres has a better reputation than that. Though to be fair, I don't use Postgres as a developer as much as I used to, at least not in a way where I need to maintain my own instance. I've been using Amazon AWS's Postgres for a while now in most professional settings, and upgrades there are always somehow seamless.
@Gargron Ahh, weird choice on your part, but I guess it's easy to ignore considering how much of a PITA database upgrades can be.
Not sure what the Postgres version was when I first moved over to AWS for most of my work. Wouldn't surprise me if they had a workaround even on 9.x somehow.
@freemo What do you mean choice? This database started running in early 2016.
@Gargron I guess by choice I meant the choice not to upgrade it sooner :)
@Gargron @freemo It's pretty respectable that you've kept a Postgres 9-based instance running for that long, with all of its patches, and have been able to do zero-downtime updates. 🥂
I've never had to do a zero-downtime upgrade between major Postgres versions. Since 2016 I've worked at shops that only use RDS (nice, but expensive, and it locks you to Amazon, which is a terrible company).
If some shop wrote a way to do streaming upgrades from 9 to 13, it's probably closed, internal, and proprietary.
@djsumdog @Gargron @szbalint @freemo
Don’t suppose the underlying storage is anything that supports snapshots? If so…
Down DB
Snapshot FS
pg_upgrade with the link option into a subdirectory of the parent
DB up
Cake
Coffee
Profit
Did this a while back and the whole process took less time than it did to type this out (rough sketch below). This was on a DB just short of a TB, going from 9.6 to 12. Downtime was measured in seconds, way less than a pg_dump/restore would have taken, and it avoided the double disk-space requirement.
If you are confident in your backups (you are streaming to a replica with WAL-file backup, aren't you?) then you can do the same, skip the snapshot, and just accept the longer downtime in the unlikely event something goes sideways.
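For anyone wanting to follow along at home, the steps above translate to roughly this. A sketch only: the Debian-style paths and service names and the ZFS dataset name are placeholders; adjust for your distro and storage stack.

```bash
# 0. Init the new 12 cluster next to the old one; --link uses hard links,
#    which only work within one filesystem, hence the shared parent dir.
sudo -u postgres /usr/lib/postgresql/12/bin/initdb -D /srv/postgres/12/main

# 1. Down DB
sudo systemctl stop postgresql@9.6-main

# 2. Snapshot FS (ZFS shown; LVM and btrfs have equivalents)
sudo zfs snapshot tank/postgres@pre-12-upgrade

# 3. pg_upgrade with the link option: hard-links data files instead of copying
sudo -u postgres /usr/lib/postgresql/12/bin/pg_upgrade \
  --old-bindir  /usr/lib/postgresql/9.6/bin \
  --new-bindir  /usr/lib/postgresql/12/bin \
  --old-datadir /srv/postgres/9.6/main \
  --new-datadir /srv/postgres/12/main \
  --link

# 4. DB up
sudo systemctl start postgresql@12-main

# 5. Rebuild planner statistics with the script pg_upgrade generates
sudo -u postgres ./analyze_new_cluster.sh
```

One caveat from the pg_upgrade docs: after a --link upgrade, once the new cluster has been started you can no longer safely start the old one, which is exactly why the snapshot (or trusted backups) matters.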