Puppy Linux Discussion Forum Forum Index Puppy Linux Discussion Forum
Puppy HOME page : puppylinux.com
"THE" alternative forum : puppylinux.info
 
All times are UTC - 4
 Forum index » Advanced Topics » Cutting edge
Easy Containers for Puppy Linux
Moderators: Flash, Ian, JohnMurga
Page 2 of 2 [25 Posts]
Burn_IT


Joined: 12 Aug 2006
Posts: 2913
Location: Tamworth UK

PostPosted: Fri 27 Jan 2017, 11:23    Post subject:  

Surely one of the reasons for using a separate "thing" for testing is that anything nasty is isolated - and that includes all resources and device drivers.
Isn't it devices and drivers that much modern malware attacks?

_________________
"Just think of it as leaving early to avoid the rush" - T Pratchett
BarryK
Puppy Master


Joined: 09 May 2005
Posts: 8312
Location: Perth, Western Australia

PostPosted: Sat 28 Jan 2017, 07:50    Post subject:  

jamesbond wrote:
I can reproduce the BadShmSeg error. This is how I did it:
1. I ran "unshare -piumUrfn --mount-proc" in a terminal (this launches the "container", but shares the filesystem with the host).
2. Inside that terminal I then launched geany (or anything else).
3. Then I got this:
Code:
The program 'geany' received an X Window System error.
This probably reflects a bug in the program.
The error was 'BadShmSeg (invalid shared segment parameter)'.
  (Details: serial 2165 error_code 128 request_code 130 minor_code 3)
  (Note to programmers: normally, X errors are reported asynchronously;
   that is, you will receive the error a while after causing it.
   To debug your program, run it with the --sync command line
   option to change this behavior. You can then get a meaningful
   backtrace from your debugger if you break on the gdk_x_error() function.)

Subsequent invocations of geany (or any other X programs) will work without error.

I have an explanation: modern X servers try to enable shared memory support for X clients, for performance. Since we specified the "-i" switch (= don't share IPC, including shared memory segments), shared memory created by the X server on the host cannot be accessed by X clients in the container, so it fails. Now my guess: when this fails, some flag is set (perhaps in the X server itself?), so that subsequent X clients access the server without using shared memory anymore. To see the list of shared memory segments, run "ipcs". Run this on the host, and in the container, to see the difference.
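To make the "ipcs" comparison concrete, here is a minimal sketch that can be tried without setting up a container first. The unshare flags in the comments are an assumption about an unprivileged setup and need user namespaces enabled; the first command works on any stock Linux kernel:

```shell
# "ipcs -m" reads /proc/sysvipc/shm, and that file is per IPC namespace, so a
# container entered with unshare's "-i" switch starts with an empty list.
head -n 1 /proc/sysvipc/shm      # column header: key, shmid, perms, size, ...

# Hypothetical comparison (assumed flags; requires user namespaces):
#   ipcs -m                    # host: may list the X server's SHM segments
#   unshare -i -U -r ipcs -m   # fresh IPC namespace: nothing listed
```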

I still don't know why you can still connect to the host. In my example above I can do it because basically I'm sharing the entire / of the host with the container. As soon as I do "mount -t tmpfs tmpfs /tmp" inside the container (effectively hiding the host's /tmp under an empty filesystem), no other X program will work.

Perhaps, in your case, try to launch geany or leafpad inside the container, and then, while geany is running, run "netstat -apn". Here is my output:
Code:
# netstat -apn
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   PID/Program name    Path
unix  2      [ ACC ]     STREAM     LISTENING     62089    39/geany            /tmp/geany_socket.e32d8f79
unix  3      [ ]         STREAM     CONNECTED     59235    -                   /tmp/.X11-unix/X0
unix  3      [ ]         STREAM     CONNECTED     61057    45/gnome-pty-helper
unix  3      [ ]         STREAM     CONNECTED     62087    39/geany           
unix  3      [ ]         STREAM     CONNECTED     61056    39/geany

As you can see, geany is talking to the X server via the X0 socket. Perhaps in your case the socket is located elsewhere?

@prehistoric: the standard chroot is easy to get out of. It's not even considered a bug; it is a "feature". This "container" stuff is another approach to the problem, and it has been an ongoing effort, started as early as Linux 2.6.24. They finally got user_ns working in Linux 3.9 (not that long ago, right?) but of course it was still full of bugs - they even admitted that the implementation is not complete and is still an ongoing effort. I'm sure if you try hard enough, you'll find a way to break out. For better isolation, I would still suggest using something like qemu; and even that has its share of weaknesses.

The problem with "detecting hostile activity" is that it requires intelligence that silicon doesn't have. That doesn't stop people from selling AV and IDS though, and making tons of money along the way.


Just a quick reply, will get back to looking at containers very soon.

When inside my container, I only mount /proc and /dev/shm; I don't mount a tmpfs on /tmp.

Right now, I'm busy designing a "container friendly" infrastructure for Quirky. At first, I thought that I needed to be able to create overlay filesystems inside an overlay filesystem -- but found that it doesn't work -- well, it was fixed back in kernel 4.2, but it seems to have become broken again.
But I figured out another way to do it, details will be posted soon.

_________________
http://barryk.org/news/
BarryK
Puppy Master


Joined: 09 May 2005
Posts: 8312
Location: Perth, Western Australia

PostPosted: Sat 28 Jan 2017, 08:08    Post subject:  

jamesbond wrote:

I still don't know why you can still connect to the host. In my example above I can do it because basically I'm sharing the entire / of the host with the container. As soon as I do "mount -t tmpfs tmpfs /tmp" inside the container (effectively hiding the host's /tmp under an empty filesystem), no other X program will work.

Perhaps, in your case, try to launch geany or leafpad inside the container, and then, while geany is running, run "netstat -apn". Here is my output:
Code:
# netstat -apn
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   PID/Program name    Path
unix  2      [ ACC ]     STREAM     LISTENING     62089    39/geany            /tmp/geany_socket.e32d8f79
unix  3      [ ]         STREAM     CONNECTED     59235    -                   /tmp/.X11-unix/X0
unix  3      [ ]         STREAM     CONNECTED     61057    45/gnome-pty-helper
unix  3      [ ]         STREAM     CONNECTED     62087    39/geany           
unix  3      [ ]         STREAM     CONNECTED     61056    39/geany

As you can see, geany is talking to the X server via the X0 socket. Perhaps in your case the socket is located elsewhere?


I have to make time to respond to this now though, it is too intriguing!

Running "netstat -apn" I get lots of lines with something like this:

Code:
unix  2      [ ACC ]     STREAM     LISTENING     14424    -                   @/tmp/.X11-unix/X0


...so what does the "@" mean?

I quit geany, then mounted a tmpfs on /tmp (inside the container):

Code:
sh-4.3# busybox mount -t tmpfs tmpfs /tmp


Then I ran geany again, and geany still works. I still get the same stuff with "netstat -apn".

_________________
http://barryk.org/news/
BarryK
Puppy Master


Joined: 09 May 2005
Posts: 8312
Location: Perth, Western Australia

PostPosted: Sat 28 Jan 2017, 08:47    Post subject:  

BarryK wrote:
Running "netstat -apn" I get lots of lines with something like this:

Code:
unix  2      [ ACC ]     STREAM     LISTENING     14424    -                   @/tmp/.X11-unix/X0

...so what does the "@" mean?

Ah ha, someone asked the same question and got an answer here:

http://unix.stackexchange.com/questions/317319/x-org-working-with-no-socket-in-chroot

...my Xorg is using an "abstract socket" in the chroot. The host system does have /tmp/.X11-unix/X0 though.

_________________
http://barryk.org/news/
BarryK
Puppy Master


Joined: 09 May 2005
Posts: 8312
Location: Perth, Western Australia

PostPosted: Sat 28 Jan 2017, 08:53    Post subject:  

Quoting from here:
http://unix.stackexchange.com/questions/112316/is-it-possible-to-tell-xorg-not-to-listen-on-the-abstract-socket

Quote:
On Linux (in recent versions), Xorg listens on both a Unix domain socket on the filesystem (/tmp/.X11-unix/X<n>) and in the abstract domain (shown as @/tmp/.X11-unix/X<n> in netstat output).

It also listens on TCP (port 6000 + <n>).

One can stop it from listening on TCP by adding a -nolisten tcp,


A useful read on abstract sockets, though it is unclear to me why they should be a security threat:

http://tstarling.com/blog/2016/06/x11-security-isolation/
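For anyone who wants to poke at this directly: abstract sockets appear in /proc/net/unix with a leading "@" (that is how the kernel prints the NUL first byte), they never exist as files, and any user can bind one. A small sketch; it assumes python3 is installed, and the socket name "demo-abstract" is made up:

```shell
# Abstract sockets have no inode and no file permissions, so mount tricks
# (like the tmpfs over /tmp discussed above) cannot hide them; only a
# separate network namespace can.
grep '@' /proc/net/unix || true    # e.g. "@/tmp/.X11-unix/X0" when Xorg runs

# Bind one ourselves and confirm it is visible system-wide, no file on disk:
python3 - <<'EOF'
import socket
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.bind("\0demo-abstract")   # a leading NUL byte selects the abstract namespace
with open("/proc/net/unix") as f:
    print("visible:", any(line.rstrip().endswith("@demo-abstract") for line in f))
EOF
```

The second command prints "visible: True" while no new file appears anywhere, which is exactly why file permissions cannot protect such a socket.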

_________________
http://barryk.org/news/
jamesbond

Joined: 26 Feb 2007
Posts: 3074
Location: The Blue Marble

PostPosted: Sat 28 Jan 2017, 11:09    Post subject:  

BarryK wrote:
A useful read on abstract sockets, though it is unclear to me why they should be a security threat:

http://tstarling.com/blog/2016/06/x11-security-isolation/


Interesting find. I discovered that my Xorg has been listening on the abstract socket too - it's just that I didn't notice until you brought it up!

In my previous test I used "unshare -piumUrfn --mount-proc" - which is almost identical to yours, except that my extra "-n" switch enables network isolation as well, and according to the articles you linked, that is the only way to "hide" an abstract socket from within the container. No wonder my test didn't work (which I expected it not to). When I dropped that "-n", I got the same result as you - X apps start even when /tmp/.X11-unix/X0 is hidden.

I think it's a security threat because:
a) you can't prevent access from within a standard chroot (you need network namespaces to disable it)
b) you can't control the permissions of the abstract socket
Which basically means that if you know the name of the socket, then **everybody** can connect.
Very bad. I should disable this immediately.

_________________
Fatdog64, Slacko and Puppeee user. Puppy user since 2.13.
Contributed Fatdog64 packages thread.
technosaurus


Joined: 18 May 2008
Posts: 4756
Location: Kingwood, TX

PostPosted: Sun 29 Jan 2017, 04:34    Post subject:  

I wrote a whole analysis paper on Linux containers several years back (I can upload it if anyone is interested). Apart from isolation and security, they seem pretty nice for thin client computing, because hundreds of clients can run the same executable without significantly increasing the memory load or disk load (with BTRFS) due to copy on write.

When I was researching it, one of the things that seemed problematic was how to selectively share /dev/* among multiple users, and I don't know if that ever really got sufficiently resolved. I wanted to be able to let thin clients mount their local USB drive on a remote server inside a container. Then there was XDMCP and the Network Audio System - AFAIK, there is still no ability to map /dev/audio (etc.) to a network based alternative within a container (for seamlessly running older apps without having to modify the code)... basically it's really good for hosting providers, but not so much for end users who actually do stuff, aside from security and the portability of the container - and there are other ways to achieve those.

There is a really useful tool for packaging an executable with all its necessary files, called Magic Ermine (http://www.magicermine.com/). It is proprietary, but the developer (I think her name is Valery?) was really supportive and even offered a free license and hosting for open source projects. She has a similar open source project called statifier on SourceForge. This is similar to flatpak (formerly xdg-app), snappy or AppImage... also similar to ROX apps, except 1 file instead of 1 directory. I don't see any reason why these cannot be combined with containers or even extended further.

Going back to thin clients (now cloud computing) is starting to make sense again, because network speeds have started to rival disk speeds, and RAM is getting much cheaper, to the point that a fairly basic server can keep all applications loaded in RAM and serve them up faster than a local disk (which on some newer ARM based machines is as little as 512Mb of flash). Now that there are computers for under $10 with sufficient processing power, it is probably better to create/modify a simple caching network filesystem, using something like zram swap over TCP to a precompressed, completely installed distro. That way the client distro only has to have enough infrastructure to connect to the internet and handle the caching filesystem, so it will appear to have every single {debian,arch,fedora,...} package installed, but will only need about a floppy's worth of storage to boot... As a bonus, updates are automatically handled by the caching filesystem, just like web page caching (only the server needs to update them). Doing this kind of system with containers, or single file format binaries like snappy or flatpaks, makes even more sense, because there are fewer, more compressible files to manage...

_________________
Check out my github repositories. I may eventually get around to updating my blogspot.
amigo

Joined: 02 Apr 2007
Posts: 2612

PostPosted: Mon 30 Jan 2017, 07:48    Post subject:  

Barry, anything that goes through X in any way is a security threat, because the server runs suid. Anyone who can crash that server with malformed code, or who uses a known bug, can get root access.

Really, containers are just a variation on the idea of chroot. And overlayfs is simply a Linus Torvalds-approved way of implementing a filesystem 'union'. He always said that the only way the concept would be accepted into the mainline kernel was as a 'stackable' filesystem.
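For readers who haven't met overlayfs, a hedged sketch of what such a 'stackable' union looks like in practice; the paths here are hypothetical:

```shell
# Hypothetical /etc/fstab entry: a union of a read-only lower layer (e.g. a
# squashfs SFS) and a writable upper layer; "workdir" is overlayfs scratch space.
overlay  /mnt/union  overlay  lowerdir=/mnt/base,upperdir=/mnt/rw,workdir=/mnt/work  0 0
```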
BarryK
Puppy Master


Joined: 09 May 2005
Posts: 8312
Location: Perth, Western Australia

PostPosted: Fri 17 Mar 2017, 21:07    Post subject:  

I have created Easy Linux, version 0.2 pre-alpha, a first play at a "container friendly" OS:

http://murga-linux.com/puppy/viewtopic.php?t=109958

I have been traveling, among other things, and haven't had any time to think further about the issues raised in this thread about containers. That shm Xorg crash is still unresolved.

_________________
http://barryk.org/news/
rufwoof

Joined: 24 Feb 2014
Posts: 2163

PostPosted: Mon 24 Apr 2017, 20:49    Post subject:  

technosaurus wrote:
I wrote a whole analysis paper on Linux containers several years back (I can upload it if anyone is interested). Apart from isolation and security, they seem pretty nice for thin client computing, because hundreds of clients can run the same executable without significantly increasing the memory load or disk load (with BTRFS) due to copy on write.

Apparently (my knowledge is near zero), with BTRFS you can create/use subvolumes:
Quote:
A subvolume in btrfs can be accessed in two ways:

like any other directory that is accessible to the user

like a separately mounted filesystem (options subvol or subvolid)

In the latter case the parent directory is not visible and accessible. This is similar to a bind mount, and in fact the subvolume mount does exactly that.

Over on the Debian forum:
Quote:
I can test any Linux distro installation in the same partition, making use of subvolumes. You won't need VirtualBox again to test new distros.
Powered by phpBB © 2001, 2005 phpBB Group