These are public posts tagged with #selfhosted. You can interact with them if you have an account anywhere in the fediverse.
Hi all, I am trying to use Collabora online, but am stuck. I set it up via the docker instructions here (sdk.collaboraonline.com/…/CODE_Docker_image.html).
But when I go to 127.0.0.1:9980, all I get is “OK”. The reverse proxy works, but returns the same “OK”. How do I actually use Collabora?
I was expecting a browser-based document editor interface.
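For reference, the documented way to run the CODE image (which the linked instructions describe) looks roughly like this; the domain is a placeholder for whatever WOPI host (e.g. Nextcloud) will embed the editor:

```
docker run -t -d -p 127.0.0.1:9980:9980 \
  -e "aliasgroup1=https://nextcloud\\.example\\.com:443" \
  --restart always collabora/code
```

As I understand it, the bare “OK” at :9980 just means the server is up; the editor UI only appears once a WOPI host connects to it.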
I’m trying to plan a better backup solution for my home server. Right now I’m using Duplicati to back up my 3 external drives, but the backup is staying on-site and on the same kind of media as the original. So, what does your backup setup and workflow look like? Discs at a friend’s house? Cloud backup at a commercial provider? Magnetic tape in an underground bunker?
I’ve been kind of piece-mealing my way towards cleaning up my media server, and could use a little advice on the next steps.
Currently I have a little under 10TB of torrented media that I have been downloading to / seeding from media library folders that Plex and Jellyfin monitor, using my desktop PC as the torrenting client. This requires a bit of manual maintenance–i.e., manually selecting the destination folder for the torrents in a way that Plex/Jellyfin can see.
I recently fired up qBittorrent on my media server (Unraid if that matters), and would like to try out some of the *arrs, but I’m not quite sure how to proceed without creating some kind of unholy mess.
I guess option A is just to import all of my current torrented content from desktop to media server client, and keep manually specifying the torrent destination. It’s not a huge deal, since I am typically only adding a few torrents per week, so it’s literal seconds or minutes of work to find the content I want.
Option B is to start “clean” and follow one of the many how-tos for starting up an *arr stack. But never having used the software, I don’t have a good sense for how it works, and whether there are any pitfalls to watch out for when trying to spin it up with an existing media library that includes both torrented and ripped content.
From a bit of reading, I think radarr for example will only care about new content. So I should be able to migrate all my existing torrents to the new client on my media server, including their existing locations amongst my media library, and then just let radarr locate and manage new content. Is that correct?
Any other advice or suggestions I should be considering?
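For what it's worth, most *arr how-tos converge on one shared data path mounted identically into every container, so the torrent client and the library can hardlink instead of copy. A minimal compose sketch, with assumed Unraid-style host paths:

```yaml
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    volumes:
      # one shared mount: torrents live in /data/torrents, library in /data/media
      - /mnt/user/data:/data
  radarr:
    image: lscr.io/linuxserver/radarr:latest
    volumes:
      - /mnt/user/data:/data   # same mapping, so Radarr can hardlink into the library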
Today marks my first anniversary with Proxmox! It has been a wonderful journey with some challenges, but they have always been manageable, thanks to Proxmox Backup Server and the support from the Mastodon community.
Now, I'm looking to add a real second node. It's too addictive. My only regret is that I didn't start earlier. :-)
I didn’t like Kodi due to the unpleasant controls, especially on Android, so I decided to try out Jellyfin. It was really easy to get working, and I like it a lot more than Kodi, but I started to have problems after the first time restarting my computer.
I store my media on an external LUKS-encrypted hard drive. Because of that, for some reason, Jellyfin’s permission to access the drive goes away after a reboot. That means something like chgrp -R jellyfin /media/username does work, but it stops working after I restart my computer and unlock the disk.
I tried modifying the /etc/fstab file without really knowing what I was doing, and almost bricked the system. Thank goodness I’m running an atomic distro (Fedora Silverblue), I was able to recover pretty quickly.
How do I give Jellyfin permanent access to my hard drive?
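For reference, the usual shape of a persistent LUKS mount is an /etc/crypttab entry paired with a matching /etc/fstab line; a sketch with placeholder UUID and mount point:

```
# /etc/crypttab — open the LUKS container at boot as /dev/mapper/media
media   UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee   none   luks,nofail

# /etc/fstab — mount the opened mapping; nofail keeps boot going if the drive is absent
/dev/mapper/media   /media/username   ext4   defaults,nofail   0   2
```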
Permanent solution: Add a regex to #PiHole to block upstream AAAA queries for my domain, so local hostnames are always chosen.
I'm still stunned by how easy this solution was. Getting to the proper diagnosis, however, took me a long time. It is extremely rewarding when everything works like a charm. #selfHosted
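A regex of the kind described would use pihole-FTL's querytype extension; a sketch with a placeholder domain:

```
(\.|^)example\.home$;querytype=AAAA
```

This blocks only AAAA answers for that domain, so clients fall back to the local A records.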
@Liaely Don't overthink the hardware $$$. The ServeTheHome site has a bunch of articles under #ProjectTinyMiniMicro about repurposing super-cheap corporate-surplus desktop PCs as servers.
$$-wise, one of the best investments would really just be making sure you're using NVMe storage.
Tech-wise, get comfortable with #Docker and #DockerCompose.
Also get comfortable with a reverse proxy; you're going to get a lot of use out of it. #Traefik and #nginx are really good ones that many people gravitate to.
#GoToSocial might be the easiest one to deploy. #Mastodon and #Pixelfed might be some of the hardest. #lemmy & #peerTube are somewhere in between in difficulty to set up.
I have some (non-enshittified / non-monetized) how-to's for deploying some #selfHosted services on Docker if it helps. magnus919.com blog.
But really, just get very comfortable with Docker and your reverse proxy. If you do both of those things, the rest becomes a lot easier. Traefik maybe has more of a learning curve than nginx but scales up really nicely, so once you've got it figured out it is ridiculously easy to add more services and get HTTPS "for free".
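The "add a service, get HTTPS" workflow with Traefik usually comes down to a few container labels; a minimal compose sketch (hostname and cert resolver name are placeholders):

```yaml
services:
  whoami:
    image: traefik/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.routers.whoami.entrypoints=websecure"
      - "traefik.http.routers.whoami.tls.certresolver=letsencrypt"
```

Once the resolver is configured on the Traefik side, every new service is just another set of labels like these.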
This has got to change #nextcloud - I LEFT a VM-style setup for #docker to avoid this pain... yet it follows.
Would anyone have a recommendation for an alternative? Features I use are file sync, some calendar sync. #homelab #selfhosted #selfhosting
I finally decided to try hosting my own Pixelfed instance again, and after a few rounds of docker compose up -d and docker compose down -v, it is finally running.
I was pleasantly surprised to see that it is not creating copies of all images from federated posts (at least not the ones I am getting from the relay), which is amazing because storage usage is going to be way lower than anticipated.
I've been thinking: it doesn't make sense to share code related to Forgejo on GitHub Gist, so I'm planning on migrating to Codeberg.
Since Codeberg does not (currently?) have Gists, what are people using to "replace" it?
I've read about using the Pages feature, or simply dropping all the files in a repository, but those are quite different kinds of solutions.
Open to opinions/suggestions.
Currently centralizing all my personal stuff around a #selfHosted #Nextcloud at home. I wonder if it's really worth it in the long run... (migrations, crashes, maintenance, backups, etc.). I hope it is.
I have been self-hosting for a while now with Traefik. It works, but I’d like to give Nginx Proxy Manager a try; it seems easier for managing stuff that isn't in Docker.
Went running to my folks' place this morning before work to plug the hard drive and the Ethernet cable into the Radxa Zero 3E. Oddly, it wouldn't show up on the network. It did for a few seconds, but then nothing. Arrgghh! Will need to troubleshoot why.
I was doing all this without a display.
Does anyone have practical experience with https://www.xmox.nl/ ? Seems like a sensible option for running small or personal #email servers.
I realise this is a very niche question, but I was hoping someone here either knows the answer or can point me to a better place to ask.
My @DailyGameBot@lemmy.zip uses Puppeteer to take screenshots of the game for its posts. I want to run the bot on my Synology NAS inside of a Docker container so I can just set it and forget it, rather than needing to ensure my desktop is on and running the bot. Unfortunately, the Synology doesn’t seem to play nicely with Puppeteer’s use of the Chrome sandbox. I need to add the --no-sandbox and --disable-setuid-sandbox flags to get it to run successfully. That seems rather risky and I’d rather not be running it like that.
It works fine on my desktop, including if run in Docker for Windows on my desktop. Any idea how to set up Synology to have the sandbox work?
Bonus blog post!
See how I have configured a very small service to convert emails into Pushover notifications. No need to set up complicated email delivery services!
#HomeLab #SelfHosted #SysAdminLife
https://mteixeira.wordpress.com/2025/02/16/sending-email-notifications-via-pushover/
Two new blog posts are up!
Come take a look at how I have configured Renovate to run on my self-hosted Forgejo runners. Now you can easily track the applications that need to be updated on your Docker Compose, and probably anything else that you might be running on your home lab (as long as they are tracked in Git). 1/2
https://mteixeira.wordpress.com/2025/02/16/running-renovate-on-self-hosted-forgejo/
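The Compose-tracking setup described above comes down to a small Renovate config; a sketch assuming the repo holds docker-compose.yml files:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "enabledManagers": ["docker-compose"]
}
```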
Dear #homelab / #selfhosted,
I don't yet have any central storage for my homelab. I have the option to expand some drives on my Proxmox cluster and do #Ceph, or just build a dedicated TrueNAS box for the same price. Does anyone have any suggestions?
I just run a few non-critical containers and want to start storing family documents etc. in a non-Google location.
Is Ceph way too overkill or hard to manage? Seems like cool tech but I want storage to be as "boring" as possible with some options to easily expand in the future.
@technotim Damn impressive. Nice work. #selfhosted