@theorytoe

1. i'm using docker
2. i don't use "the cures"
3. nobody "suppresses" ftp

you're retarded

@mk @theorytoe you missed the point. containers just make things harder. they are nice rube goldberg machines for shit languages like python which are hell to deploy.

when just installing everything from packages, things will receive timely security patches of the distribution.

when using VMs, one has to upgrade a few VMs for this. not great, not terrible.

with containers one has to hope that some image down the stack gets upgraded to include the fix, while the whole setup provides worse isolation than VMs (which are already prone to leakage). with containers the isolation is essentially the same as with plain linux users and chroot. no improvement. cgroup resource limits can be set by the init system, i think systemd does this already.
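
a sketch of what i mean, no container runtime involved (the daemon name is made up):

$ # transient systemd unit with cgroup limits applied by the init system
$ systemd-run --unit=some-daemon -p MemoryMax=512M -p CPUQuota=50% -p TasksMax=64 /usr/bin/some-daemon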

containers sure have their use case, but mostly they are a crappy solution waiting for problems.

in the end the image is a meme which makes the point that ftp-ing a directory full of php scripts worked better than all the modern shit.

@bonifartius @theorytoe

im running a proxmox server with 2 virtual machines (pfsense and docker).

my docker vm hosts these services:

openldap
nextcloud
peertube 1
peertube 2
mastodon
hedgedoc
gogs
excalidraw
elk_cluster
searx
lightning network daemon (testnet)
lightning network daemon (mainnet)
bitcoin fullnode
bitcoin mempool stats
wordpress
mailcow emailserver

mastodon.satoshishop.de/@mk/11

@bonifartius @theorytoe

your solution is to..what?

run everything in its own VM? -> resource nightmare
run everything on one host (without containers)? -> security nightmare

bro..you're retarded.

@mk @theorytoe
- vms can use dynamic allocation for years now.
- containers provide absolutely no additional security.

running on the host is perfectly fine. it only requires one to know what one is doing, of course.

lastly, i'd be careful about calling other people retards when using "bro".

@bonifartius @theorytoe

"vms can use dynamic allocation for years now."

if you're running 16 vms, you're also running 16 kernels, right? and you'd have to do 16 operating system upgrades, right?

aka .. resource nightmare

@mk @theorytoe if running 16 kernels eats all your ram, you have other problems. 16 containers have to be updated as well. with a sane distribution i have security updates in around one day 💁

@bonifartius @theorytoe

"16 containers have to be updated as well"

we're talking infrastructure updates here...you'd have to run your retarded ftp-php-update scripts 16 times too.

@bonifartius @theorytoe

"containers provide absolutely no additional security"

then it should be pretty easy for you to prove your statement. i'm waiting.

@mk @theorytoe
pretty easy: they can't be safer than the technologies they are composed of. in practice they are more insecure because of the bullshit update mechanisms.

@mk @theorytoe sorry, the relevant articles aren't available in simple english :)

@bonifartius @theorytoe

"in practice they are more insecure because of the bullshit update mechanisms."

your argument is bullshit.

90% of the webservices i run maintain their own Dockerfile and/or docker images on hub.docker.com

peertube updated their development images 3 hours ago.

"Last pushed 3 hours ago"
hub.docker.com/r/chocobozzz/pe

---

peertube uses the latest official debian image. they get updates as soon as new versions release.
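
fwiw the update mechanism in practice is one command chain (a sketch, assuming a compose setup; the directory is made up):

$ cd /srv/peertube # wherever the compose file lives
$ docker compose pull # fetches the rebuilt images, new base image included
$ docker compose up -d # recreates the containers from the new images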

@bonifartius @mk plus you're adding more stuff to the dependency chain. If you have more things that could be compromised then that is unequivocally more insecure by pure logic alone

@mk @bonifartius @theorytoe Systemd does everything already, look at a random NixOS module that configures it.

I've had it with these docker faggots

@RGBCube @theorytoe @bonifartius

you guys are pretty good at talking and pretty shitty at linking to your sources.

@RGBCube @theorytoe @bonifartius

the position you're defending is this:

"containers provide absolutely no additional security"

please provide evidence for this claim.

@mk @theorytoe @bonifartius Systemd already has cgroups, plus controls for choosing and protecting kernel modules and anything else related to the kernel. You don't need d*cker, as systemd already has EVERYTHING. And you can optionally give access to specific ports so it can function properly.

Depends on what you mean by containerization, but systemd already does it, ignoring the port usage.
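
A rough sketch of what I mean, sandboxing an existing service with systemd alone (the service name is made up, the directives are standard systemd ones):

$ systemctl edit some-webservice # opens a drop-in, then add:
# [Service]
# DynamicUser=yes
# ProtectSystem=strict
# PrivateTmp=yes
# ProtectKernelModules=yes
# NoNewPrivileges=yes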

@RGBCube @theorytoe @bonifartius

ok..to make it even simpler for you..

- there's a webservice running.
- it gets hacked.
- the hacker owns the webservice (the process)

is it harder or easier for the attacker to own the host system if..

scenario 1: process is isolated from the host system via cgroups and namespaces.

scenario 2: process is NOT isolated from the host system via cgroups and namespaces.

@mk @theorytoe @bonifartius Systemd is also scenario 1, if you do it properly. Which is done here

@RGBCube @theorytoe @bonifartius

bro...nobody is talking about your faggot systemd..

this is the position you retards took:

"containers provide absolutely no additional security"

please defend it or lose this debate.

@mk @RGBCube @theorytoe
i have to do some drywall now, so i'll keep it short:

- namespaces are a copy of a plan9 idea to have composable environments, isolation is a side effect.

- cgroups limit resource usage, might be worthwhile to prevent some daemon going crazy. otoh there already were things in place for that, like ulimit.

- chroot is no "container feature". postfix chroots by default, so do many other daemons. you still need good user/group structure and appropriately set permissions in any case.

all of these things are usable without resorting to docker. @RGBCube explained how a distribution can use the same features with its packages.
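
for illustration, cgroups are usable from a plain root shell (cgroup v2, assuming the memory controller is enabled in the parent):

$ mkdir /sys/fs/cgroup/demo # new cgroup
$ echo 512M > /sys/fs/cgroup/demo/memory.max # hard memory limit
$ echo $$ > /sys/fs/cgroup/demo/cgroup.procs # move this shell (and children) into it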

side note: you using words like "retard" and "faggot" while shilling docker which frequently has pride events borders on the comedic.

@bonifartius @RGBCube @theorytoe

do these two technologies make an operating system safer against a hijacked/hacked process?

yes or no.

- namespaces are a copy of a plan9 idea to have composable environments, isolation is a side effect.

- cgroups limit resource usage, might be worthwhile to prevent some daemon going crazy. otoh there already were things in place for that, like ulimit.

@bonifartius

to make it short, because you'll never admit it anyway..

yes they do.

so you lost this position:

"containers provide absolutely no additional security"

@RGBCube @theorytoe

@mk @RGBCube @theorytoe
> unilaterally declares victory due to made up facts

bless your heart

i described pretty well what the things involved do and what they were made for. @RGBCube explained that they are in use by distribution packages.

i can't keep you from using fluoridated stuff like docker or proxmox. maybe it's one of these things in life one has to learn the hard way :blobcatshrug:

@bonifartius @mk @theorytoe Can't wait until they go bald by trying to migrate proxmox to something else when that goes out of fashion.

@RGBCube @theorytoe @mk just getting out data when something in the rube goldberg machinery will inevitably break will be hell enough :)

@RGBCube @theorytoe @bonifartius

i migrated my stuff around already. it's easy, because i've got very few dependencies.

proxmox -> home example

1. rent a little vm with public ipv4-address
2. import pfsense backup-file
3. start vpn from home to pfsense
4. stop all docker containers on the old machine
5. move data
6. start docker containers
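
steps 4-6 in shell terms (a sketch; paths and hostname are made up):

$ docker stop $(docker ps -q) # old machine: stop all running containers
$ rsync -aHX /srv/docker-data/ newhost:/srv/docker-data/ # move the volumes
$ docker compose up -d # new machine: bring everything back up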

proxmox disaster recovery (German):
hedgedoc.satoshishop.de/kS1jwa

@RGBCube @theorytoe @bonifartius

proxmox uses kvm/qemu and zfs (zvols)

migrate to physical machine:

1. put a new hard drive (/dev/sdb) into the proxmox server

2. copy data to /dev/sdb
$ dd if=/dev/zvol/rpool/data/vm-101-disk-0 of=/dev/sdb bs=1GB
$ cfdisk /dev/sdb # resize disk
$ e2fsck -f /dev/sdb3 # check the filesystem
$ resize2fs /dev/sdb3 # grow it to the new partition size

3. put /dev/sdb into new machine
4. boot from it

name one other hypervisor that allows you to do this.

@bonifartius @RGBCube @theorytoe

"made up facts"

i quoted you. i used you as a fact.

don't be mad. you'll win next time.

@mk @RGBCube @theorytoe it's ok, just think of me when your jenga software stack breaks :)

@bonifartius @RGBCube @theorytoe

ok.. and while we wait for your doomsday prediction, the whole world moves to containerization.

..the whole world? no!

a little man in germany is fighting back by putting all his php-eggs into one basket.

@bonifartius @RGBCube @theorytoe

"docker which frequently has pride events borders on the comedic."

what's your position?
i should stop using docker because there are activists working on it?

---

well...how about you stop using the linux kernel then.

mastodon.satoshishop.de/@mk/11

@mk @RGBCube @theorytoe i don't have to stop using anything as i'm not the one, according to the insults used by you, who has a problem with what people are :)

@bonifartius @theorytoe

"lastly, i'd be careful to calling other people retard when using "bro"."

fuck you, faggot.

@bonifartius @mk
I can attest to this
containers are a solution to a self-inflicted problem, being that people don't want to actually write software that is runnable bare-metal

for starters, containers provide no security (the docker daemon runs as root, so on a basic level one would have to be retarded to think that is good security practice -- it is not). secondly, docker works fine for prebuilt images, but I have never had a good experience with compose; it has always broken stuff and never worked. it is basically a glorified chroot with ""chroot management"" so you can install others' rubbish onto your system

as well, docker seems to try to plug into load balancing with k8s/k3s, and if you have done any level of k8s management you will know it is a nightmare, when you could just run on a few hosts and incorporate a load balancer. that option is way easier on setup but also on maintenance since it's just plain old hosts.

if you can't run software bare-metal without hassle, it's not good software
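
easy to check on any docker box (default socket path):

$ ls -l /var/run/docker.sock # the API socket, owned by root
$ ps -o user= -C dockerd # the daemon itself, typically prints "root"
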
@theorytoe @mk @bonifartius lxc containers can be run unprivileged and even root inside the container is an unprivileged user
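
quick sketch from memory (container name and release are made up):

$ grep "$USER" /etc/subuid /etc/subgid # uid/gid ranges delegated to the user
$ lxc-create -n demo -t download -- -d debian -r bookworm -a amd64
$ lxc-start -n demo
$ lxc-attach -n demo -- id # "root" inside maps to an unprivileged uid outside
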
@Moon @mk @bonifartius
yeah I keep forgetting about lxc because my debian system is too old to get it working :marseykernelpanic:
or rather lxc is too new
@theorytoe @mk @bonifartius anyway, to contribute to this thread: the problem with containers is really a problem with the os, which by default lets you access everything that isn't locked down, rather than granting no access and requiring capabilities to be passed in to do anything.

@Moon @theorytoe @mk well, if things run as root they need to be locked down ;) a user can't do very much given permissions aren't set badly, privileged ports can't be used, etc.

it doesn't help that using chroot, namespaces, and cgroups requires root - it means docker or lxc will likely be run as root.

would be nice if more things would use capabilities.
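
e.g. letting a daemon bind privileged ports without root (the binary path is made up):

$ setcap 'cap_net_bind_service=+ep' /usr/local/bin/some-daemon
$ getcap /usr/local/bin/some-daemon # verify the capability is set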

@Moon @theorytoe @mk haven't used lxc in a long time, i think since they switched to using images? is it worth the trouble?

@bonifartius @theorytoe @mk i use bind mounted directories on the host. i think they work well. they have a whole os inside the container, unlike docker, which just executes your software directly as pid 1
@bonifartius @mk @theorytoe you can also mount an lvm volume as your root volume but i could not get this working unprivileged. i only use unprivileged lxc containers, otherwise what's the point
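
the bind mount itself is one line in the container config (paths and name are made up):

$ # unprivileged container config lives under ~/.local/share/lxc/<name>/config
$ echo 'lxc.mount.entry = /srv/data srv/data none bind,create=dir 0 0' >> ~/.local/share/lxc/demo/config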

@Moon @theorytoe @mk bind mount sounds like a nice solution, much better than image files. LVM is nice for virtual machines, but if it's running on the same kernel just using the existing FS is better imo.

@theorytoe @bonifartius

"containers are a solution to a self-inflicted problem being that people dont want to actually write software that is runable bare-metal"

what does "running containers" have to do with bare-metal? you can run containers within a bare-metal system. it doesn't make sense.

@mk @bonifartius
>what does "running containers" have to do with bare-metal?
lol it's literally not bare metal
you are running a process under a hypervisor
that's not bare metal

@theorytoe @bonifartius

the argument is that docker/containers in general don't have to run within a virtual machine.

@theorytoe @bonifartius
Containers use the kernel of the host system and create an illusory environment..

chroot
- changes the current root directory

unshare - creates namespaces for:
- User
- Process ID (PID)
- Network
- Mount
- Interprocess Communication (IPC)

..in which the process is allowed to run wild without being able to break anything on the host. there is no kernel abstraction.
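
e.g. (the rootfs path is a placeholder):

$ # unprivileged user/pid/mount/net/ipc namespaces, then a chroot jail
$ unshare --user --map-root-user --pid --fork --mount --net --ipc chroot /srv/rootfs /bin/sh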

@mk @bonifartius
yeah, i know that
but docker in itself is still virtualization, even if you aren't emulating a full system, you are still virtualizing pretty much everything minus the kernel

it's basically virtualization and therefore, if you have even a shred of functioning neuron pathways, you should be able to realize that

@theorytoe @bonifartius

docker doesn't use any virtual devices. it basically just changes directories / pointers somewhere else.

it's still "bare-metal", you fucking retard.
