Easy Containers for Puppy Linux
- BarryK
- Puppy Master
- Posts: 9392
- Joined: Mon 09 May 2005, 09:23
- Location: Perth, Western Australia
- Contact:
Just a quick reply, will get back to looking at containers very soon.

jamesbond wrote: I can reproduce the BadShmSeg error. This is how I did it:
1. I run "unshare -piumUrfn --mount-proc" in a terminal (this launches the "container", but sharing the filesystem with the host).
2. Inside that terminal then I launch geany (or anything else)
3. Then I got this:

Code: Select all
The program 'geany' received an X Window System error.
This probably reflects a bug in the program.
The error was 'BadShmSeg (invalid shared segment parameter)'.
  (Details: serial 2165 error_code 128 request_code 130 minor_code 3)
  (Note to programmers: normally, X errors are reported asynchronously;
   that is, you will receive the error a while after causing it. To debug
   your program, run it with the --sync command line option to change
   this behavior. You can then get a meaningful backtrace from your
   debugger if you break on the gdk_x_error() function.)

Subsequent invocations of geany (or any other X program) will work without error.
I have an explanation: the modern X server tries to enable shared memory support for X clients, for performance. Since we have specified the "-i" switch (= don't share IPC, including shared memory segments), shared memory created by the X server on the host cannot be accessed by X clients in the container. Thus it fails. Now my guess: when this fails, some flag is set (perhaps in the X server itself?), so that subsequent X clients will access the server without using shared memory anymore. To see the list of shared memory segments, run "ipcs". Run this on the host, and in the container, to see the difference.
I still don't know why you can still connect to the host. In my example above I can do it because basically I'm sharing the entire / of the host to the container. As soon as I do "mount -t tmpfs tmpfs /tmp" inside the container (effectively hiding the host's /tmp under an empty filesystem), no other X program will work.
Perhaps, in your case, try to launch geany or leafpad inside the container, and then, while geany is running, run "netstat -apn". Here is my output:

Code: Select all
# netstat -apn
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address  Foreign Address  State  PID/Program name
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags    Type    State      I-Node  PID/Program name     Path
unix  2     [ ACC ]  STREAM  LISTENING  62089   39/geany             /tmp/geany_socket.e32d8f79
unix  3     [     ]  STREAM  CONNECTED  59235   -                    /tmp/.X11-unix/X0
unix  3     [     ]  STREAM  CONNECTED  61057   45/gnome-pty-helper
unix  3     [     ]  STREAM  CONNECTED  62087   39/geany
unix  3     [     ]  STREAM  CONNECTED  61056   39/geany

As you can see, geany is talking to the X server via the X0 socket. Perhaps in your case the socket is located elsewhere?
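The IPC-namespace explanation above can be checked directly. A minimal sketch, assuming a Linux host: the kernel exposes the SysV shared memory table in /proc/sysvipc/shm (the same data "ipcs -m" prints); the unshare line is commented out because it needs user-namespace support.

```shell
# The SysV shared memory table, as "ipcs -m" would show it.  On a host
# running Xorg with MIT-SHM clients, segments appear as rows here.
cat /proc/sysvipc/shm

# With "unshare -i" the container gets a private, initially empty IPC
# namespace, so the same file shows only the header line -- which is why
# the first X client in the container trips over BadShmSeg:
#   unshare -Uri cat /proc/sysvipc/shm
```

Comparing the two outputs shows exactly which segments the containerised client can no longer attach to.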
@prehistoric: the standard chroot is easy to get out of. It's not even considered a bug, it is a "feature". This "container" stuff is another approach to the problem, and it has been an ongoing effort started as early as Linux 2.6.24. They finally got user_ns working in Linux 3.9 (not that long ago, right!) but of course it was still full of bugs - they even admitted that the implementation was not complete and still an ongoing effort. I'm sure if you try hard enough, you'll find a way to break out. For better isolation, I would still suggest using something like qemu; and even that has its share of weaknesses.
The problem with "detecting hostile activity" is that it requires intelligence that silicon doesn't have. That doesn't stop people selling AV and IDS though, and making tons of money along the way.
When inside my container, I only mount /proc and /dev/shm; I don't mount a tmpfs on /tmp.
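The effect of mounting a tmpfs over /tmp can be demonstrated in a throwaway mount namespace. A sketch, assuming util-linux's unshare and kernel user-namespace support (it prints a fallback message where those are unavailable):

```shell
# Enter new user+mount namespaces, then hide the host /tmp under an
# empty tmpfs -- exactly what makes /tmp/.X11-unix/X0 disappear.
unshare -Urm sh -c '
  mount -t tmpfs tmpfs /tmp || exit 1
  ls /tmp | wc -l            # host /tmp contents are hidden: prints 0
' 2>/dev/null || echo "user namespaces unavailable here"
```

Leaving /tmp alone, as described above, is what keeps the X socket reachable.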
Right now, I'm busy designing a "container friendly" infrastructure for Quirky. At first, I thought that I needed to be able to create overlay filesystems inside an overlay filesystem -- but found it doesn't work -- well, it was fixed back in kernel 4.2 but seems to have become broken again.
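The nesting limitation is easy to probe on any particular kernel. A sketch (not Quirky's actual mechanism): it simply attempts an overlay-on-overlay mount in throwaway temp directories and reports which way the kernel behaves; it needs root and overlayfs support, and prints a message either way.

```shell
# Probe whether this kernel allows an overlay mounted on top of another
# overlay.  All paths are disposable temp dirs.
d=$(mktemp -d)
mkdir -p "$d/lower" "$d/upper" "$d/work" "$d/merged" \
         "$d/upper2" "$d/work2" "$d/merged2"
if mount -t overlay overlay \
     -o "lowerdir=$d/lower,upperdir=$d/upper,workdir=$d/work" "$d/merged" \
     2>/dev/null; then
  # second overlay stacked on the first one's merged view
  if mount -t overlay overlay \
       -o "lowerdir=$d/merged,upperdir=$d/upper2,workdir=$d/work2" "$d/merged2" \
       2>/dev/null; then
    echo "nested overlay mounted"
    umount "$d/merged2"
  else
    echo "nested overlay refused by this kernel"
  fi
  umount "$d/merged"
else
  echo "cannot mount overlay here (need root / overlayfs support)"
fi
rm -rf "$d"
```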
But I figured out another way to do it, details will be posted soon.
[url]https://bkhome.org/news/[/url]
- BarryK
- Puppy Master
- Posts: 9392
- Joined: Mon 09 May 2005, 09:23
- Location: Perth, Western Australia
- Contact:
Have to make time to respond to this now though, it is too intriguing!

jamesbond wrote: I still don't know why you can still connect to the host. In my example above I can do it because basically I'm sharing the entire / of the host to the container. As soon as I do "mount -t tmpfs tmpfs /tmp" inside the container (effectively hiding the host's /tmp under an empty filesystem), no other X program will work.
Perhaps, in your case, try to launch geany or leafpad inside the container, and then, while geany is running, run "netstat -apn". Here is my output:

Code: Select all
# netstat -apn
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address  Foreign Address  State  PID/Program name
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags    Type    State      I-Node  PID/Program name     Path
unix  2     [ ACC ]  STREAM  LISTENING  62089   39/geany             /tmp/geany_socket.e32d8f79
unix  3     [     ]  STREAM  CONNECTED  59235   -                    /tmp/.X11-unix/X0
unix  3     [     ]  STREAM  CONNECTED  61057   45/gnome-pty-helper
unix  3     [     ]  STREAM  CONNECTED  62087   39/geany
unix  3     [     ]  STREAM  CONNECTED  61056   39/geany

As you can see, geany is talking to the X server via the X0 socket. Perhaps in your case the socket is located elsewhere?
Running "netstat -apn" I get lots of lines with something like this:
Code: Select all
unix 2 [ ACC ] STREAM LISTENING 14424 - @/tmp/.X11-unix/X0
I quit geany, then mounted a tmpfs on /tmp (inside the container):
Code: Select all
sh-4.3# busybox mount -t tmpfs tmpfs /tmp
- BarryK
- Puppy Master
- Posts: 9392
- Joined: Mon 09 May 2005, 09:23
- Location: Perth, Western Australia
- Contact:
Ah ha, someone asked the same question and got an answer here:

BarryK wrote: Have to make time to respond to this now though, it is too intriguing!

jamesbond wrote: I still don't know why you can still connect to the host. In my example above I can do it because basically I'm sharing the entire / of the host to the container. As soon as I do "mount -t tmpfs tmpfs /tmp" inside the container (effectively hiding the host's /tmp under an empty filesystem), no other X program will work.
Perhaps, in your case, try to launch geany or leafpad inside the container, and then, while geany is running, run "netstat -apn". Here is my output:

Code: Select all
# netstat -apn
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address  Foreign Address  State  PID/Program name
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags    Type    State      I-Node  PID/Program name     Path
unix  2     [ ACC ]  STREAM  LISTENING  62089   39/geany             /tmp/geany_socket.e32d8f79
unix  3     [     ]  STREAM  CONNECTED  59235   -                    /tmp/.X11-unix/X0
unix  3     [     ]  STREAM  CONNECTED  61057   45/gnome-pty-helper
unix  3     [     ]  STREAM  CONNECTED  62087   39/geany
unix  3     [     ]  STREAM  CONNECTED  61056   39/geany

As you can see, geany is talking to the X server via the X0 socket. Perhaps in your case the socket is located elsewhere?

Running "netstat -apn" I get lots of lines with something like this:

Code: Select all
unix 2 [ ACC ] STREAM LISTENING 14424 - @/tmp/.X11-unix/X0

...so what does the "@" mean?

I quit geany, then mounted a tmpfs on /tmp (inside the container):

Code: Select all
sh-4.3# busybox mount -t tmpfs tmpfs /tmp

Then ran geany again, and geany still works. Still get the same stuff from "netstat -apn".
http://unix.stackexchange.com/questions ... -in-chroot
...my Xorg is using an "abstract socket" in the chroot. The host system does have /tmp/.X11-unix/X0 though.
- BarryK
- Puppy Master
- Posts: 9392
- Joined: Mon 09 May 2005, 09:23
- Location: Perth, Western Australia
- Contact:
Quoting from here:
http://unix.stackexchange.com/questions ... act-socket

On Linux (in recent versions), Xorg listens on both a Unix domain socket on the filesystem (/tmp/.X11-unix/X<n>) and in the abstract domain (shown as @/tmp/.X11-unix/X<n> in netstat output).
It also listens on TCP (port 6000 + <n>).
One can stop it from listening on TCP by adding "-nolisten tcp".

A useful read on abstract sockets, though it is unclear to me why they should be a security threat:
http://tstarling.com/blog/2016/06/x11-s ... isolation/
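Each of those three channels can be checked from a shell. A sketch, assuming a Linux host (each check prints a fallback line when that channel is absent, e.g. on a headless box):

```shell
# 1. Filesystem socket: an ordinary unix socket, with owner and mode bits.
ls -l /tmp/.X11-unix/ 2>/dev/null || echo "no filesystem X socket"

# 2. Abstract socket: no inode; shows up with a leading "@" in the
#    kernel's socket table.
grep '@/tmp/.X11-unix' /proc/net/unix || echo "no abstract X socket"

# 3. TCP: port 6000+n, unless the server was started with "-nolisten tcp".
netstat -tln 2>/dev/null | grep ':600[0-9]' || echo "no X TCP listener"
```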
[url]https://bkhome.org/news/[/url]
BarryK wrote: A useful read on abstract sockets, though it is unclear to me why they should be a security threat:
http://tstarling.com/blog/2016/06/x11-s ... isolation/

Interesting find. I found out that my Xorg has been listening on the abstract socket too - it's just that I didn't notice until you brought it up!
In my previous test I used "unshare -piumUrfn --mount-proc" - almost identical to yours, except that the extra "-n" enables network isolation as well, which, according to the articles you linked, is the only way to "hide" an abstract socket from within the container. No wonder my test didn't work (as I expected it wouldn't). When I dropped the "-n", I got the same result as you - X apps start even when /tmp/.X11-unix/X0 is hidden.
I think it's a security threat because:
a) you can't prevent access from within a standard chroot (you need network namespaces to disable it)
b) you can't control the permissions of the abstract socket
Which basically means that if you know the name of the socket, then **everybody** can connect.
Very bad. I should disable this immediately.
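Point (b) can be seen directly: a filesystem socket is a directory entry whose mode bits chmod can tighten, while an abstract socket has no inode at all, so there is nothing to chmod. A sketch, assuming a Linux /proc (the X socket path may not exist on a headless box, hence the fallback):

```shell
# A filesystem socket carries owner and permission bits:
ls -l /tmp/.X11-unix/X0 2>/dev/null || echo "no /tmp/.X11-unix/X0 here"

# An abstract socket exists only in the kernel's socket table; note the
# listing has no owner or mode column for the kernel to enforce:
awk '$NF ~ /^@/ { print $NF }' /proc/net/unix | head -5
```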
Fatdog64 forum links: [url=http://murga-linux.com/puppy/viewtopic.php?t=117546]Latest version[/url] | [url=https://cutt.ly/ke8sn5H]Contributed packages[/url] | [url=https://cutt.ly/se8scrb]ISO builder[/url]
- technosaurus
- Posts: 4853
- Joined: Mon 19 May 2008, 01:24
- Location: Blue Springs, MO
- Contact:
I wrote a whole analysis paper on Linux containers several years back (I can upload it if anyone is interested). Apart from isolation and security, they seem pretty nice for thin-client computing, because hundreds of clients can run the same executable without significantly increasing the memory or disk load (with BTRFS) due to copy-on-write.
When I was researching it, one of the things that seemed problematic was how to selectively share /dev/* among multiple users, and I don't know if that ever got sufficiently resolved. I wanted to be able to let thin clients mount their local USB drive on a remote server inside a container. Then there was XDMCP and the Network Audio System - AFAIK, there is still no ability to map /dev/audio (etc.) to a network-based alternative within a container (for seamlessly running older apps without having to modify the code)... basically it's really good for hosting providers, but not so much for end users who actually do stuff, aside from security and the portability of the container, but there are other ways to do that.
There is a really useful tool for packaging an executable with all its necessary files, called Magic Ermine. It is proprietary, but the developer (I think her name is Valery?) was really supportive and even offered a free license and hosting for open source projects. She has a similar open source project called statifier on SourceForge. This is similar to flatpak (formerly xdg-app), snappy or AppImage... also similar to ROX apps, except one file instead of one directory. I don't see any reason why these cannot be combined with containers, or even extended further.
Going back to thin clients (now "cloud computing") is starting to make sense again, because network speeds have started to rival disk speeds, and RAM is getting much cheaper, to the point that a fairly basic server can keep all applications loaded in RAM and serve them up faster than a local disk (which on some newer ARM-based machines is as little as 512MB of flash). Now that there are computers for under $10 with sufficient processing power, it is probably better to create/modify a simple caching network filesystem, using something like zram swap over TCP to a precompressed, completely installed distro. That way the client distro only has to have enough infrastructure to connect to the internet and handle the caching filesystem, so it will appear to have every single {debian,arch,fedora,...} package installed, but only needs about a floppy's worth of storage to boot... as a bonus, updates are handled automatically by the caching filesystem, just like web-page caching (only the server needs to update them). Doing this kind of system with containers, or single-file binaries like snappy or flatpaks, makes even more sense because there are fewer, more compressible files to manage...
Check out my [url=https://github.com/technosaurus]github repositories[/url]. I may eventually get around to updating my [url=http://bashismal.blogspot.com]blogspot[/url].
Barry, anything that goes through X in any way is a security threat, because the server runs suid root. Anyone who can crash that server with malformed input, or who uses a known bug, can get root access.
Really, containers are just a variation on the idea of chroot. And overlayfs is simply a Linus Torvalds-approved way of implementing a filesystem 'union'. He always said that the only way the concept would be accepted into the mainline kernel was as a 'stackable' filesystem.
- BarryK
- Puppy Master
- Posts: 9392
- Joined: Mon 09 May 2005, 09:23
- Location: Perth, Western Australia
- Contact:
I have created Easy Linux, version 0.2 pre-alpha, a first play at a "container friendly" OS:
http://murga-linux.com/puppy/viewtopic.php?t=109958
Have been traveling, and doing other things, so haven't had any time to think further about the issues raised in this thread about containers. That shm Xorg crash is still unresolved.
technosaurus wrote: Apart from isolation and security, they seem pretty nice for thin-client computing, because hundreds of clients can run the same executable without significantly increasing the memory or disk load (with BTRFS) due to copy-on-write.

Apparently (my knowledge is near zero) with BTRFS you can create/use subvolumes.
Over on the Debian forum:

A subvolume in btrfs can be accessed in two ways:
like any other directory that is accessible to the user
like a separately mounted filesystem (options subvol or subvolid)
In the latter case the parent directory is not visible or accessible. This is similar to a bind mount, and in fact the subvolume mount does exactly that.
I can test any Linux distro installation in the same partition, making use of subvolumes. You won't need VirtualBox again to test new distros.
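A sketch of that subvolume workflow (device and path names are illustrative; the actual create/mount steps need root, btrfs-progs and a btrfs-formatted partition, so the runnable part only reports whether the tool is present):

```shell
# Each distro under test lives in its own subvolume of one partition:
#   btrfs subvolume create /mnt/btr/distro-test       # acts like a directory
#   mount -o subvol=distro-test /dev/sda3 /mnt/test   # ...or a separate fs
# The subvol= mount hides the parent volume, much like a bind mount
# of that directory alone.
if command -v btrfs >/dev/null 2>&1; then
  echo "btrfs-progs found: $(btrfs --version 2>/dev/null)"
else
  echo "btrfs-progs not installed"
fi
```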
- BarryK
- Puppy Master
- Posts: 9392
- Joined: Mon 09 May 2005, 09:23
- Location: Perth, Western Australia
- Contact:
Easy Containers is continuing to evolve, see blog post:
http://bkhome.org/news/201805/easyos-py ... n-091.html
Now supporting Linux Capabilities, for improved security.
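For anyone wanting to experiment with the same idea from a shell: util-linux's setpriv can drop the entire capability bounding set before exec'ing a program, similar in spirit to what a container manager does (this is an illustration, not how Easy Containers implements it). Dropping bounding-set entries needs CAP_SETPCAP, so the sketch prints a fallback when run unprivileged or where setpriv is missing:

```shell
if command -v setpriv >/dev/null 2>&1; then
  # run "id -u" with every capability removed from the bounding set
  setpriv --bounding-set -all id -u 2>/dev/null \
    || echo "need CAP_SETPCAP to drop the bounding set"
else
  echo "setpriv not available"
fi
```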
- BarryK
- Puppy Master
- Posts: 9392
- Joined: Mon 09 May 2005, 09:23
- Location: Perth, Western Australia
- Contact:
I am playing with 'Pflask', which is a single C executable, a kind of "secure chroot". It defaults to isolating all the namespaces. I ran into this same problem: X apps crashed the first time, after that they work.

jamesbond wrote: I can reproduce the BadShmSeg error. This is how I did it:
1. I run "unshare -piumUrfn --mount-proc" in a terminal (this launches the "container", but sharing the filesystem with the host).
2. Inside that terminal then I launch geany (or anything else)
3. Then I got this:

Code: Select all
The program 'geany' received an X Window System error.
This probably reflects a bug in the program.
The error was 'BadShmSeg (invalid shared segment parameter)'.
  (Details: serial 2165 error_code 128 request_code 130 minor_code 3)
  (Note to programmers: normally, X errors are reported asynchronously;
   that is, you will receive the error a while after causing it. To debug
   your program, run it with the --sync command line option to change
   this behavior. You can then get a meaningful backtrace from your
   debugger if you break on the gdk_x_error() function.)

Subsequent invocations of geany (or any other X program) will work without error.
I have an explanation: the modern X server tries to enable shared memory support for X clients, for performance. Since we have specified the "-i" switch (= don't share IPC, including shared memory segments), shared memory created by the X server on the host cannot be accessed by X clients in the container. Thus it fails. Now my guess: when this fails, some flag is set (perhaps in the X server itself?), so that subsequent X clients will access the server without using shared memory anymore. To see the list of shared memory segments, run "ipcs". Run this on the host, and in the container, to see the difference.
However, the error message was not specifically about "BadShmSeg"; it was "BadAccess (attempt to access private resource denied)". But the same fix - not unsharing IPC - works.
I posted about this to the pflask github site:
https://github.com/ghedo/pflask/issues/26