@mk @theorytoe you missed the point. containers just make things harder. they are nice rube goldberg machines for shit languages like python which are hell to deploy.
when just installing everything from distribution packages, things receive timely security patches from the distribution.
when using VMs, one has to upgrade a few VMs for this. not great, not terrible.
with containers one has to hope that some image down the stack gets upgraded to include the fix, while the whole setup provides worse isolation than VMs (which are already prone to leakage). with containers the isolation is essentially the same as plain linux users and chroot: no improvement. cgroups limiting resource usage can be set by the init system, i think systemd does this already.
containers sure have their use case, but mostly they are a crappy solution waiting for problems.
in the end the image is a meme which makes the point that ftp-ing a directory full of php scripts worked better than all the modern shit.
i'm running a proxmox server with 2 virtual machines (pfsense and docker).
my docker vm hosts these services:
openldap
nextcloud
peertube 1
peertube 2
mastodon
hedgedoc
gogs
excalidraw
elk_cluster
searx
lightning network daemon (testnet)
lightning network daemon (mainnet)
bitcoin fullnode
bitcoin mempool stats
wordpress
mailcow email server
your solution is to..what?
run every service in its own VM? -> resource nightmare
run everything on one host (without containers)? -> security nightmare
bro..you're retarded.
@mk @theorytoe
- vms have supported dynamic allocation for years now.
- containers provide absolutely no additional security.
running on the host is perfectly fine. it only requires one to know what one is doing, of course.
lastly, i'd be careful calling other people retards while using "bro".
"containers provide absolutely no additional security"
then it would be pretty easy for you to prove your statement? i'm waiting.
@mk @bonifartius @theorytoe Systemd does everything already, look at a random NixOS module that configures it.
I've had it with these docker faggots
@RGBCube @theorytoe @bonifartius
you guys are pretty good at talking and pretty shitty at linking to your sources.
@mk @theorytoe @bonifartius https://github.com/NixOS/nixpkgs/blob/master/nixos%2Fmodules%2Fservices%2Fsecurity%2Fendlessh.nix#L42-L96
Or look at literally ANY file under nixos/services/
@RGBCube @theorytoe @bonifartius
the position you're defending is this:
"containers provide absolutely no additional security"
please provide evidence for this claim.
@mk @theorytoe @bonifartius Systemd already has cgroups, control over kernel modules, and protection for anything related to the kernel. You don't need d*cker, as systemd already has EVERYTHING. And you can optionally give access to specific ports so the service can function properly.
Depends on what you mean by containerization, but systemd already does it, ignoring the port handling.
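As a rough sketch of what that looks like per unit (the unit name and binary are made up, assuming a reasonably recent systemd with DynamicUser support):
# throwaway uid/gid, read-only OS dirs, private /tmp, no setuid, no new namespaces
$ systemd-run --unit=demo-web -p DynamicUser=yes -p ProtectSystem=strict \
    -p PrivateTmp=yes -p NoNewPrivileges=yes -p RestrictNamespaces=yes \
    /usr/bin/demo-web --port 8080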
@RGBCube @theorytoe @bonifartius
ok..to make it even simpler for you..
- there's a webservice running.
- it gets hacked.
- the hacker owns the webservice (the process)
is it harder or easier for the attacker to own the host system if..
scenario 1: process is isolated from the host system via cgroups and namespaces.
scenario 2: process is NOT isolated from the host system via cgroups and namespaces.
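for scenario 1, you can see the effect yourself (assuming util-linux and unprivileged user namespaces enabled):
$ unshare --map-root-user --fork --pid --mount-proc ps aux
# ps runs as pid 1 and only sees itself; the host's processes are invisible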
@mk @theorytoe @bonifartius Systemd is also scenario 1, if you do it properly. Which is what the linked module does.
@RGBCube @theorytoe @bonifartius
bro...nobody is talking about your faggot systemd..
this is the position you retards took:
"containers provide absolutely no additional security"
please defend it or lose this debate.
@mk @RGBCube @theorytoe
i have to do some drywall now, so i'll keep it short:
- namespaces are a copy of a plan9 idea to have composable environments, isolation is a side effect.
- cgroups limit resource usage, might be worthwhile to prevent some daemon going crazy. otoh there already were things in place for that, like ulimit.
- chroot is no "container feature". postfix chroots by default, so do many other daemons. you still need good user/group structure and appropriately set permissions in any case.
all of these things are usable without resorting to docker (see the sketch below). @RGBCube explained how a distribution can use the same features with its packages.
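for example, a resource cap with plain systemd, no container runtime involved (the unit name is just an example):
$ systemctl set-property --runtime nginx.service MemoryMax=512M CPUQuota=50%
# systemd writes these limits straight into the service's cgroup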
side note: you using words like "retard" and "faggot" while shilling docker which frequently has pride events borders on the comedic.
@bonifartius @RGBCube @theorytoe
do these two technologies make an operating system safer against a hijacked/hacked process?
yes or no.
- namespaces are a copy of a plan9 idea to have composable environments, isolation is a side effect.
- cgroups limit resource usage, might be worthwhile to prevent some daemon going crazy. otoh there already were things in place for that, like ulimit.
@mk @RGBCube @theorytoe
> unilaterally declares victory due to made up facts
bless your heart
i described pretty well what the things involved do and what they were made for. @RGBCube explained that they are in use by distribution packages.
i can't keep you from using fluoridated stuff like docker or proxmox. maybe it's one of those things in life one has to learn the hard way.
@RGBCube @theorytoe @mk just getting the data out when something in the rube goldberg machinery inevitably breaks will be hell enough :)
@RGBCube @theorytoe @bonifartius
i migrated my stuff around already. it's easy, because i've got very few dependencies.
proxmox -> home example
1. rent a little vm with a public ipv4 address
2. import the pfsense backup file
3. start the vpn from home to pfsense
4. stop all docker containers on the old machine
5. move data (see sketch below)
6. start the docker containers
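step 5 as a rough sketch (hostname and paths are made up, assuming the volumes live under /var/lib/docker/volumes):
$ rsync -aHAX --numeric-ids /var/lib/docker/volumes/ newhost:/var/lib/docker/volumes/
# -a keeps ownership and permissions, -H hardlinks, -A acls, -X xattrs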
proxmox disaster recovery (german):
https://hedgedoc.satoshishop.de/kS1jwalbQOWzW5hvxLd6_Q#
@RGBCube @theorytoe @bonifartius
proxmox uses kvm/qemu and zfs (zvols)
migrate to physical machine:
1. put a new hard drive (/dev/sdb) into the proxmox server
2. copy data to /dev/sdb
$ dd if=/dev/zvol/rpool/data/vm-101-disk-0 of=/dev/sdb bs=1GB status=progress
$ cfdisk /dev/sdb # grow the last partition to fill the new disk
$ e2fsck -f /dev/sdb3 # check the filesystem before resizing
$ resize2fs /dev/sdb3 # grow the filesystem to the partition size
3. put /dev/sdb into the new machine
4. boot from it
name one other hypervisor that allows you to do this.
@RGBCube @theorytoe @bonifartius
proxmox (disaster recovery from backup) -> proxmox:
https://hedgedoc.satoshishop.de/kS1jwalbQOWzW5hvxLd6_Q#
tldr:
1. install a new proxmox
2. copy the VM config files
3. recreate the linux bridges from the old /etc/network/interfaces
4. zfs send all the zvols (see sketch below)
5. start the virtual machines
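step 4 for a single zvol, as a sketch (snapshot name and target host are made up):
$ zfs snapshot rpool/data/vm-101-disk-0@migrate
$ zfs send rpool/data/vm-101-disk-0@migrate | ssh newhost zfs receive rpool/data/vm-101-disk-0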
@RGBCube @theorytoe @bonifartius
you can convert your zvols very easily into any format you might need with "qemu-img convert"
types:
- RAW (zvol)
- QCOW2
- VMDK
- VDI
- VHDX
https://cloudbase.it/qemu-img-windows/
---
$ qemu-img convert -f raw /dev/zvol/pool/vm-311-disk-0 -O vdi vm-311-disk-0.vdi
@RGBCube @theorytoe @bonifartius
Migrating a complete IT environment (proxmox) from one location to another in less than 10min
@bonifartius @RGBCube @theorytoe
"made up facts"
i quoted you. i used you as a fact.
don't be mad. you'll win next time.
@mk @RGBCube @theorytoe it's ok, just think of me when your jenga software stack breaks :)
@bonifartius @RGBCube @theorytoe
ok.. and while we wait for your doomsday prediction, the whole world moves to containerization.
..the whole world? no!
a little man in germany is fighting back by putting all his php eggs into one basket.