Puppy Linux Discussion Forum Forum Index Puppy Linux Discussion Forum
Puppy HOME page : puppylinux.com
"THE" alternative forum : puppylinux.info
 

Bind mounts do the same job as links. Links are better.!
Page 5 of 6 [84 Posts]   Goto page: Previous 1, 2, 3, 4, 5, 6 Next
Author Message
amigo

Joined: 02 Apr 2007
Posts: 2232

PostPosted: Wed 30 Jan 2013, 05:49    Post subject:  

Of course I can see the case for combining several executables into one bundle/package -lots of packages contain more than one program. In the case of FOX there's a limited number of progs available anyway -even outside the official 'suite'. There are several ways to offer the option of which prog to run. The simplest for use with ROX is to have the menu in the right-click of the AppDir -the menu entries are created in AppInfo.xml. Then you create matching code in the AppRun which handles the chosen option. Incidentally, running the AppRun script from any terminal, script or using another program to start it (run-box of your WM, etc.) -these will all respond to the same options. Here's a snip from an AppInfo.xml which does this:
<AppMenu>
<Item label="List main packages" option="--list-main"/>
<Item label="Search Installed" option="--search"/>
</AppMenu>
The option '--list-main' or '--search' gets passed to the AppRun, which then takes whatever action:
Code:

if [[ $1 == '--list-main' ]] ; then
   PKG_LIST=/tmp/pkg-list.$$
   ls -1 /var/lib/tpkg/packages |grep -v -E '(\-devel\-|\-docs\-|\-i18n\-)' > $PKG_LIST
   xterm -title 'Pkg-Tools Main Packages' -fn $XTERM_FONT -geometry 45x40 -e "cat $PKG_LIST |less"
   rm -f $PKG_LIST
elif [[ $1 == '--search' ]] ; then
   SEARCHFILE=`greq -e"Search for file: "`
   cd /var/lib/tpkg/packages ;
   RESULT="`grep -H "$SEARCHFILE\$" * 2> /dev/null`"
   if [[ -n $RESULT ]] ; then
      exec greq -t"Search Results" -p "$RESULT"
   else
      exec greq -t"Search Results" -p "No match found!"
   fi
fi


Did my chrome AppDir run on your system? If it did, I would still like to see the output when running the AppRun from the terminal.

You have a lotta questions -I'll try to be helpful...
# Do all libs. load to ram?
Yes, every library and program gets loaded to RAM. First, understand that the kernel does not execute any programs itself. It calls /lib/ld-linux.so.2 to do that. ld-linux is the dynamic loader. When asked to start a program for the first time, ld-linux first loads the program into RAM at an appropriate address, then looks at the prog to see what symbols are in there -names of libraries the program is linked to. It then loads each of the libraries into RAM -examining each one of them for symbols and acting accordingly. It then passes these locations to the executable as it starts it.
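You can watch this dependency resolution yourself with ldd, which asks the loader to list the libraries it would load for a given program (a sketch; /bin/ls stands in for any dynamically-linked binary):

```shell
#!/bin/bash
# Ask the dynamic loader which libraries /bin/ls needs and where it
# resolves them -- the same lookup ld-linux performs before the
# program actually starts.
ldd /bin/ls

# The loader itself is an executable and can launch a program directly
# (path is /lib64/ld-linux-x86-64.so.2 on most 64-bit systems):
#   /lib/ld-linux.so.2 /bin/ls
```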

Now, each of the libs and the program are in the RAM cache. If you stop the program, all the libs and prog remain in the cache. If you *restart* the program it does not get loaded again -everything is still available in the cache. This is why programs usually start faster the second time you run them.

In Puppy and other Live distros which 'run from RAM', *any running program is in two locations in RAM*. The 'run-from-RAM' has nothing to do with the above. run-from-RAM means that the main '/' file system is located in a *reserved* portion of RAM which is being treated like a hard disk. This portion of RAM is *unavailable for other use*.

When ld-linux tries to read in a file it must access the device it is on -'normally' a spinning hard disk. The file contents must be transported out of the device, over its cable to the main bus and then sent into RAM. Of course this takes time. The reason run-from-RAM is faster is that the files are right in the RAM -where that section is being used as a ramdisk. The contents must only be accessed at their RAM address, transported over the bus (to CPU) and then loaded (over the bus again) into the cache area for use. In theory you could unload the ramdisk once everything needed was cached -but there are lots of implications and it would be messy anyway.
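The reserved-RAM-as-disk idea can be tried without rebooting: tmpfs is the same mechanism, and most distros already mount one at /dev/shm, so no root is needed for the demonstration (the mount command at the end is the root-only version):

```shell
#!/bin/bash
# Files under a tmpfs live entirely in RAM -- the mechanism behind
# 'run-from-RAM'. /dev/shm is a tmpfs most Linux systems already have:
echo "in RAM" > /dev/shm/demo.$$
cat /dev/shm/demo.$$           # read back -- no disk was touched
rm -f /dev/shm/demo.$$

# With root, a dedicated ramdisk can be reserved the same way
# (that RAM is then unavailable for other use until unmounted):
#   mount -t tmpfs -o size=64m tmpfs /mnt/ramdisk
```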

Version-ed dirs: Any app that *writes* to /usr/share as part of its running is doing the Wrong Thing. $HOME of course -and many times /tmp, /var/run, /var/lock. Using these 'mini-chroots' to run programs lets you completely sequester the app from your normal $HOME structure -or not. But sequestering means you can run separate versions completely apart. Everything can be written normally to $HOME, /var, wherever -or you can sequester part or all of it -using mount --bind.
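mount --bind needs root, but the sequestering idea for $HOME can be sketched without it by simply pointing $HOME at a private dir for one run (a non-root stand-in, not the bind-mount method itself; ".appconfig" is a made-up dotfile for illustration):

```shell
#!/bin/bash
# Give an app run its own private $HOME so versions never share
# config -- the non-root cousin of mount --bind sequestering.
FAKE_HOME=$(mktemp -d)

# The child writes its dotfiles into the private $HOME:
HOME="$FAKE_HOME" bash -c 'echo data > "$HOME/.appconfig"; cat "$HOME/.appconfig"'

# The real $HOME is untouched; everything landed in $FAKE_HOME:
ls -A "$FAKE_HOME"
rm -rf "$FAKE_HOME"
```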

iso vs squashfs:
Accessing any file system will require support for that FS in the kernel -whether hard-linked in or as a module. The point is that nearly *every system* (except for embedded, maybe) will support CDs, right? But every other file system may or may not be there -very few systems run with support for every possible FS you could use. FAT might be just as ubiquitous or more. But we can't use that for OS files anyway. One disadvantage of squashfs or other compressed FS is that even more RAM will be used temporarily to decompress the file. So, while loading the file you have one copy (compressed) in the RAM disk, a partial copy in RAM (being decompressed) and a partial copy (uncompressed) in the cache!
Using a compressed FS requires more total RAM than using a non-compressed FS. Plus, decompression takes time -another trip down the bus and back...
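The both-copies-at-once cost is easy to see with gzip standing in for the squashfs decompressor (mksquashfs plus a loop mount would need root, so this is only the compressed-copy-plus-uncompressed-copy arithmetic, not squashfs itself):

```shell
#!/bin/bash
# While a compressed file is being read, the compressed copy AND the
# decompressed data both exist at once -- gzip as a squashfs stand-in.
dd if=/dev/urandom of=/tmp/file.raw bs=1024 count=64 2>/dev/null
gzip -c /tmp/file.raw > /tmp/file.raw.gz      # the 'squashfs' copy
zcat /tmp/file.raw.gz > /tmp/file.out         # decompression: extra CPU + RAM
cmp -s /tmp/file.raw /tmp/file.out && echo "contents identical"
rm -f /tmp/file.raw /tmp/file.raw.gz /tmp/file.out
```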

"If we could only configure apps" -Of course we can! Roll your own, configured as needed -patching where needed. There is *no* getting around this, sooner or later.

"Any tools for working with AppPkgs would also be AppPkgs" chicken vs. egg -again. The first one still has to be built using some other method. I think you need to separate the product from the process -if only to be more clear to your users. If the product is to be AppPkg, then let the software used to create them be called AppPkgCreator or whatever. How else will you tell your users... You must install AppPkg in order to create AppPkgs. But you can use AppPkgs without having AppPkg installed. But you need AppPkg installed in order to install AppPkg -wasn't this supposed to be all about using software but not having to install it?

The directions to the user which I like:
1. Download the archive of the AppDir you want to use.
2. Unpack it anywhere you like.
3. Click on the icon to start the program -or first choose from startup options by right-clicking the icon.
# 3 assumes the user uses ROX-Filer (or another AppRun-aware file mgr.) Otherwise the user must start the app using the path to the AppRun -and any options you were providing in the ROX right-click must be offered or handled using some other interface. BTW, gtkdialog is a bad choice because it is rarely available on non-Puppy distros.

The product should be that easy to use -most of the time anyway. But that implies a huge body of work behind the scenes to create those AppDirs.
amigo

Joined: 02 Apr 2007
Posts: 2232

PostPosted: Wed 30 Jan 2013, 05:51    Post subject:  

"LD_LIBRARY_PATH problem" Do you mean trying to use LD_LIBRARY_PATH inside a fakeroot chroot?
sunburnt


Joined: 08 Jun 2005
Posts: 5016
Location: Arizona, U.S.A.

PostPosted: Wed 30 Jan 2013, 06:21    Post subject:  

Hey amigo; I had tested earlier today, watched movies tonight with my girl friend, Love Potion #9 ( not too bad...).

What I found was the same that jrb ( I think ) did with ChoicePup. LD_LIB stops working ( not in chroot ).
I had to add the path to /etc/ld.so.conf and run ldconfig for the app. to run. I`ve not seen this before.
Some kind of Puppy / Linux flaw that pops up. But what are the controlling circumstances?

It`s 3:00 am here, I`m ready for some shut eye. Tomorrow I`ll have the time to run and test your build.
I didn`t even read your upper long post, that I`ll do with my morning coffee.

Girl friends day off, so... I`ll let you know how it goes.
greengeek

Joined: 20 Jul 2010
Posts: 2476
Location: New Zealand

PostPosted: Wed 30 Jan 2013, 06:53    Post subject:  

Quote:
Of course I can see the case for combining several executables into one bundle/package

Is there any value in trying to group apps according to which libs they require? e.g. creating a "group-static" that is clustered around common libs. Any ram/cpu saving in such an idea? (No point NOT having an app on your system if it creates no extra overhead to have it there...)
amigo

Joined: 02 Apr 2007
Posts: 2232

PostPosted: Wed 30 Jan 2013, 09:49    Post subject:  

"Is there any value in trying to group apps according to which libs they require?" See the package structure here:
http://distro.ibiblio.org/amigolinux/distro/kiss-4.0/PKGS/i586/
The source tree looks similar:
http://distro.ibiblio.org/amigolinux/distro/kiss-4.0/SOURCE/
There the source tarballs are right inside the dirs with the build scripts, etc. But, for KISS-5 I have the sources all in a separate directory -so you can also get a quick alphabetical list of the sources. It's what you call the filesystem as a database -after all filesystems *are* databases.

That said, it doesn't resolve things at all, as far as run-time dependency resolution. And neither will some web-page listing, master-list or any probability statistics. The only thing that matters is making exactly the right library available. This is nearly always the *same* one that was used when the program was configured, compiled and linked. If you are using pre-compiled binaries, the only way to be sure is to provide *exactly* the libs the binary was linked against.
Again, a master list will not really do the job -each package should contain the information which leads you to those exact libs. Usually the package does not contain the repo info. Any pkg mgr will combine your settings for repo URL, plus the path/name of the needed lib/prog -of course real integrity here implies that the library package comes from the same repo where the program package came from -or from some assumed availability base. The very same libs from the very same tool-chain with the same original configuration, options and mix of software installed at the time the program was built. That is the One and Only way to be absolutely sure that a program will run.

How you can assure that is the whole problem. And every choice you make about how you do that has advantages and disadvantages or limitations. Using such a chroot method one could access a completely different OS to run that program with -only the running kernel would be shared, bypassing *all* system libs.
Of course, you can also statically link all the libs and proggie together -but that makes creating deliverables really, really tedious. Lots of things really won't compile completely static without *lots* of work. The chroot method allows you to use stuff normally -if you need extra libs or newer versions of something you simply include the normal lib in your bundle -the layering order of the union takes care of that.

"no extra overhead" -there's always the overhead of having something somewhere -on disk or whatever. Of course, you could even create something really tiny, which connects to the net to an iso image somewhere and over-the-net mounts that iso and makes them ther' libs seem like they really are on your machine -and they don't even use any RAM until ld-linux loads *straight into the cache*.

You could make this tiny thing be completely self-contained so that it would not even need unpacking in order to use it: an executable self-extractor stub with a payload which is unpacked and a script/command executed. Just download it, click it (from *any file mgr*) and the program runs -fully sequestered from anything else on your system in its own 'sandbox'. I *have* created such executables. But it is a rather complex job to do and *they are still mostly architecture dependent* -a bundle made for an ARM machine won't run on an Intel machine, etc.

Now, maybe you begin to see how big the problems are -the easier you want to make it for your user, the more complex is your job. Maintaining even a handful of packages/bundles can mean *lots* of work. Doing them manually becomes impossible -you can only create more by using a very good system for creating them.
sunburnt


Joined: 08 Jun 2005
Posts: 5016
Location: Arizona, U.S.A.

PostPosted: Wed 30 Jan 2013, 17:06    Post subject:  

Hi greengeek; My thinking exactly. As amigo said, the Fox libraries are not very commonly used.
So my Xfe + Fox suite AppPkg bundle has the libs. inside it and shares them. This is good bundling.
I would think similar apps. like media would have many specialized libs. that they commonly use.
So putting them inside a media bundled AppPkg saves space and the apps. have the correct libs.

I thought about a "cloud" based O.S., but currently there`s no "web boot" method that I know of.
Currently the user downloads kernel and initrd.gz to boot from, and it`d connect to the "web drive".
This is a "cloud O.S.", not just "cloud apps.". Not the best setup, relies on fast internet to work at all.


amigo; Sorry about the delay, partnerships must be maintained or lost. Cool
I was able to post when my girl friend stopped browsing Russian news ( she`s from St. Petersburg ).

Build 2 does the same thing, VT output:
Code:
sh-4.1# ./AppRun
getopt: unrecognized option '--version'
BusyBox v1.17.2 (2011-05-01 08:45:38 GMT-8) multi-call binary.

Usage: getopt [OPTIONS]

Options:
   -a,--alternative      Allow long options starting with single -
   -l,--longoptions=longopts   Long options to be recognized
   -n,--name=progname      The name under which errors are reported
   -o,--options=optstring      Short options to be recognized
   -q,--quiet         Disable error reporting by getopt(3)
   -Q,--quiet-output      No normal output
   -s,--shell=shell      Set shell quoting conventions
   -T,--test         Test for getopt(1) version
   -u,--unquoted         Don't quote the output

getopt: invalid option -- '-'
getopt: invalid option -- 'n'
getopt: invalid option -- 'o'
getopt: invalid option -- '-'
getopt: invalid option -- 'a'
getopt: invalid option -- 'n'
getopt: invalid option -- 'd'
getopt: invalid option -- 'b'
getopt: invalid option -- 'o'
getopt: invalid option -- 'x'
Usage:
    fakechroot [-l|--lib fakechrootlib] [-s|--use-system-libs]
               [-e|--environment type] [-c|--config-dir directory]
               [--] [command]
    fakechroot -v|--version
    fakechroot -h|--help

###  $? = 1

sh-4.1#

It`s the same with or without: 2>/dev/null at the end of the false-chroot command.

Thanks for the lib. tutorial, I knew much of it but didn`t know if some libs. were loaded differently.
I noticed quite a few folks are following this thread, so I`m sure many find the info. interesting.
I`ve said for a long time that loading any Puppy Sq. files into ram just doubles the ram used.
And copying Puppy files to ram only to have them swapped back to the HD is just plain stupid.

I didn`t know extra ram`s used for the extraction, I figured it was directly read as it`s extracted.
This would be a "live image file" if the kernel extracted each instruction into its cache and ran it.
But just dumping the file from inside the Sq. file into ram anyway, then it`s not really a "live" file.

An interesting experiment I did was to copy a large file from inside a Sq. file to a partition, and
then copy the same file from partition to partition, the Sq. file copy is much faster ( slow HD ).
A ram copy test goes so fast it`s hard to tell any difference, but Sq. should be quicker here also.
Tests for mount and link speed on a partition vs using them in ram ( /tmp ) ran too fast also.

Versions: If the r-w union dir. is unique for $HOME and for the app. version being run, that`ll fix it.
I`d only contemplated using $HOME for labeling the r-w dir. So like: Chrome-24_HomerS.rw

### Thoughts on differences between having mounts and links on a partition or in ram ( /tmp ).?
Ram is faster as no drives are accessed I think.? And no wear and tear on the physical drives too.

A lot of my Qs I already have an idea about, but I value your input.
These Qs usually are base engineering concepts for designing O.S.s and apps.
I have ideas for "big industry" items that I`ll probably never be able to realize.
Automobiles, Housing, Aircraft, and Computers. All beyond my abilities to do much about.
amigo

Joined: 02 Apr 2007
Posts: 2232

PostPosted: Thu 31 Jan 2013, 04:15    Post subject:  

Damned BusyBox! I don't like it a bit. So, you need to go through each command that's being run by the AppRun, run something similar manually and see which one is crapping out. Possible clue:
getopt: invalid option -- 'n'
getopt: invalid option -- 'o'
getopt: invalid option -- '-'
getopt: invalid option -- 'a'
getopt: invalid option -- 'n'
getopt: invalid option -- 'd'
getopt: invalid option -- 'b'
getopt: invalid option -- 'o'
getopt: invalid option -- 'x'
See no-andbox? Perhaps you have accidentally deleted an 's' there -check the option after google-chrome. But, I'm puzzled as to why busybox is complaining. Have you changed the shebang to /bin/sh perhaps -Bad Boy, if so. When the shebang says /bin/bash then that means it is written for bash and 'sh' will not work -especially if it's busybox's sh.

Accessing anything in RAM will always be faster than accessing a spinning disk or anything connected by a cable -even if only reading or checking the existence or perms of something. For a single access you'd have trouble measuring the difference. Only when doing lots of accesses can you see/feel the diff. Of course actually reading a file will involve lots of transactions -ask for a file, get one block, confirm you received the block, ask for another... and so on.

For me, speed is not everything and neither is size -I want stuff that works like it's supposed to every time and can be built/maintained easily.

When you use a compressed file system, the files are transported to the CPU in a compressed state and must be de-compressed to make them usable. For compressed filesystems this may occur in a fairly direct fashion, but each chunk must be de-compressed -causing extra work. In the case of executables compressed using upx, the file is actually de-compressed to a file in /tmp, then executed/loaded and after waiting a bit is removed.

Making things small does not necessarily make them faster -in fact when you enable faster routines it usually means extra code in the executable.
sunburnt


Joined: 08 Jun 2005
Posts: 5016
Location: Arizona, U.S.A.

PostPosted: Thu 31 Jan 2013, 04:40    Post subject:  

I`m aware enough not to change things I don`t understand... Very Happy
I`ve used: #!/root/bacon and even used my own scripts for shebang.
The parts you asked about are intact, no reason to change them at all.
Code:
#!/bin/bash

### The only thing I did to your script ( --no-sandbox is correct ):
#fakechroot chroot $UNION /usr/bin/google-chrome --no-sandbox 2> /dev/null
fakechroot chroot $UNION /usr/bin/google-chrome --no-sandbox
echo "###  $?"

I did test the unionfs-fuse and fusermount and found they`re working.
From fusermount on it really doesn`t matter, it fails before that.
And these lines it`s hard to see how they could go wrong.
Code:
READ_WRITE=$HOME/Choices/GoogleChrome/write
UNION=$HOME/Choices/GoogleChrome/union
# Both the write directory and the mount-point for the union need
# to be writable by the user and, at best,  should be unique to this app
mkdir -p $READ_WRITE
mkdir -p $UNION

# we need to know where this AppDir is to find the 'app' directory
HERE=$(dirname $0)

That`s the whole script, there`s only the union mount before it fails. It works except the fakechroot command. Can`t figure it out.
I`m not sure why it shows getopt help, other than it`s in fakechroot. The only variable for fakechroot is $UNION, and it`s good.


I don`t like BusyBox, it`s a legacy item and not needed anymore. Puppy already uses the real binaries for many /bin commands.
This is because it was never compiled for Puppy ( tool chain again...). Why not just replace BusyBox commands with the real ones?


LD_LIBRARY_PATH update. After messing with it I started using ld.so.conf It worked of course, as jrb had found out.
Then I tried the old LD_LIBRARY_PATH version again and it was working.!!! ( now it works, and now it doesn`t ).
When ld.so.conf updates the cache with ldconfig it fixes something. ldconfig didn`t help LD_LIBRARY_PATH.
There`s something very fishy with LD_LIBRARY_PATH that the web warns about. I`ll probably stick with ld.so.conf.
BUT... I`m really curious as to what the conditions are that make LD_LIBRARY_PATH fail and then work again.
amigo

Joined: 02 Apr 2007
Posts: 2232

PostPosted: Thu 31 Jan 2013, 11:26    Post subject:  

I think I see where the fakechroot problem is:
Code:
#!/bin/sh

# fakechroot
#
# Script which sets fake chroot environment


Try changing that to /bin/bash. Then, if you wanna find out where/why the error is happening, look further down (line 47):
Code:
getopttest=`getopt --version`
case $getopttest in
    getopt*)
        # GNU getopt
        opts=`getopt -q -l lib: -l use-system-libs -l config-dir: -l environment -l version -l help -- +l:sc:e:vh "$@"`
        ;;
    *)
        # POSIX getopt ?
        opts=`getopt l:sc:e:vh "$@"`
        ;;
esac

From the original error message:
getopt: unrecognized option '--version'
See, it's being run by the busybox shell. This is why it's such a bad idea to have anything besides bash as /bin/sh -unless you want to always be sanitizing/re-writing other people's shell scripts. I don't use getopts at all, but just because I hate it... Anyway, you could play with that code there to adapt it to busybox's getopt (and any other syntax problems). In this case, fakechroot is a debian program, so /bin/sh will be pointing to */bin/dash*, not /bin/bash. BTW, busybox 'ash' shell doesn't always match other shell syntax for 'ash' -in fact, there are 3 or 4 distinct versions of ash around...

Or, you could change your /bin/sh link to point to /bin/bash -although it would not surprise me if this didn't break a few things in Puppy. Still, there's lots of code out there which uses /bin/sh shebang, but actually contains bashisms.

Okay, about losing LD_LIBRARY_PATH. You are probably not getting it exported/passed to the end binary program. If you have a script which sets LD_LIBRARY_PATH, but does not export it, then it is not even available to whatever your script calls. And if, in fact, your script is actually calling another script or wrapper, then it will also need to be exporting it -it has to be available at the time ld-linux.so.2 is called to run the program.

About LD_LIBRARY_PATH being 'evil'. There is nothing wrong with it at all. It's just a handy way to override the normal library paths.
"something very fishy with LD_LIBRARY_PATH" I'm pretty sure you mean LD_PRELOAD which is different. It lets you override not library paths, but functions themselves. Again, of itself it does no harm. Any Bad Things which could come about, would come from the library which is preloaded. It is easy to use, so it comes with some safety features -setuid programs will not be executed at all, etc. To disable LD_PRELOAD, I think you'd need to patch/recompile glibc, which I know you wouldn't want to do.

Anyway, LD_PRELOAD is used by lots of handy tools, like src2pkg, install/checkinstall, fakechroot, fakeroot and nearly every other sandbox/jail implementation.
sunburnt


Joined: 08 Jun 2005
Posts: 5016
Location: Arizona, U.S.A.

PostPosted: Fri 01 Feb 2013, 02:07    Post subject:  

Some success... After fiddling with it awhile I got the errors to change.
Code:
/usr/bin/google-chrome: line 17: /dev/null: Permission denied
[25890:25890:0131/222056:ERROR:resource_bundle.cc(543)] Failed to load /mnt/sda3/apps/bin/chrome.pak
Some features may not be available.
[25890:25890:0131/222056:ERROR:resource_bundle.cc(543)] Failed to load /mnt/sda3/apps/bin/chrome_100_percent.pak
Some features may not be available.
[25890:25890:0131/222057:ERROR:chrome_browser_main_extra_parts_gtk.cc(51)] Startup refusing to run as root.
###  1

It`s trying to find the .pak files in the dir. where fakechroot was at. So I moved it to the chrome run files dir.
Also the same error refusing to run as root. I fixed it with the proper command argument for chrome.
And the error for permission denied for /dev/null. Puppy`s /dev/null has permissions: Read=All, Write=All, Exec=None.
I know I could change it, but couldn`t we just provide our own in the package ( self-supported ) so there`s no problem?

The next run got these errors:
Code:
/usr/bin/google-chrome: line 17: /dev/null: Permission denied
Failed to open /dev/null
[25611:25633:0201/000224:ERROR:bus.cc(307)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: Connection refused
[25611:25633:0201/000224:ERROR:bus.cc(307)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: Connection refused
[25611:25611:0201/000225:ERROR:resource_bundle.cc(543)] Failed to load /mnt/sda3/apps/bin/chrome.pak
Some features may not be available.
[25611:25611:0201/000225:ERROR:resource_bundle.cc(543)] Failed to load /mnt/sda3/apps/bin/chrome_100_percent.pak
Some features may not be available.
###  6

Same null error of course, but chrome`s not refusing to run as root. Don`t know why the .pak file errors now.
The /var/run/dbus/system_bus_socket is there, maybe a permission problem also?
And now it ends with an error code of: 6. I think I`m close, but it`s gone into territory I`m unfamiliar with.
Except for the null permission thing., this last set of errors look to be all fakechroot problems.

Here`s what I did to your AppRun script to get it this far:
Code:
# Make dir. for Chrome user settings.
PROFILE=$HOME/.google/chrome/profile
mkdir -p $PROFILE
 
# Change dir. into the union and the Chrome files dir. ( fakechroot is there ).
cd $UNION/opt/google/chrome

# chroot into the union mount and execute the desired program
#fakechroot chroot $UNION /usr/bin/google-chrome --no-sandbox 2> /dev/null
fakechroot chroot $UNION /usr/bin/google-chrome --no-sandbox --user-data-dir=$PROFILE --disk-cache-size=20971520
echo "###  $?"

The rest of the script is the original. So close, yet so far.

# I hope this can give a solid reliable method for us to base AppPkg on, it`d getting complex.
.
amigo

Joined: 02 Apr 2007
Posts: 2232

PostPosted: Fri 01 Feb 2013, 11:26    Post subject:  

"/dev/null: Permission denied" Sounds like the original problem when trying to use real chroot. 'mount -o bind...' could fix that, but for me simply using fakechroot took care of all those access problems.

"Failed to load /mnt/sda3/apps/bin/chrome.pak" I haven't seen that at all. I do, however, get the same dbus errors 'system_bus_socket: Connection refused' as you reported before.

I'm afraid this is all wrong:
Code:
# Make dir. for Chrome user settings.
PROFILE=$HOME/.google/chrome/profile
mkdir -p $PROFILE
 
# Change dir. into the union and the Chrome files dir. ( fakechroot is there ).
cd $UNION/opt/google/chrome

# chroot into the union mount and execute the desired program
#fakechroot chroot $UNION /usr/bin/google-chrome --no-sandbox 2> /dev/null
fakechroot chroot $UNION /usr/bin/google-chrome --no-sandbox --user-data-dir=$PROFILE --disk-cache-size=20971520


Do not CD into the union directory, nor should fakechroot be there. fakechroot must run from the 'real' system. And it takes care of cd'ing into the union directory to start the command you want. You should not cd into a dir and then run fakechroot $PWD. As you see, my AppRun provides a separate location for the union mount. If you try to use '.' or $PWD in the unionfs-fuse or fakechroot commands, then you can get a hung umount when leaving the chroot. When you start fakechroot don't use the current directory. You actually are starting fakechroot from a subdir of the union.
"cd $UNION/opt/google/chrome"
This means you have 'installed' the chrome stuff into your union already?? The whole idea is to use the unionfs layering to set that up -there really should be nothing in the union mount dir before you start the union. You need four locations to work with: the RW dir, the app dir -a subdir of the toplevel AppDir is the logical place for this (where chrome should be)- then you need '/' (or parts of it) and the union dir itself. The RW and union dirs should be in $HOME for users to be able to use the setup.
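The four locations can be laid out as a dry-run script. The unionfs-fuse/fakechroot invocations are only echoed here, not executed, since both tools must be installed and FUSE configured -this just shows the layering order and keeps the union mount-point empty before mounting:

```shell
#!/bin/bash
# Dry-run sketch of the four locations: RW layer, the app's RO files,
# the real '/' as the bottom layer, and an EMPTY union mount-point.
APPDIR=$(mktemp -d)          # stands in for the unpacked AppDir
RW=$APPDIR/write             # 1. per-user read-write layer
APP=$APPDIR/app              # 2. the program's files (read-only)
UNION=$APPDIR/union          # 3. mount-point -- nothing in it beforehand
mkdir -p "$RW" "$APP" "$UNION"
                             # 4. the real '/' supplies everything else

echo "unionfs-fuse -o cow $RW=RW:$APP=RO:/=RO $UNION"
echo "fakechroot chroot $UNION /usr/bin/google-chrome"
echo "fusermount -u $UNION"
rm -rf "$APPDIR"
```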


I guess you are doing this because you want to include fakechroot right inside your App. That's fine too, both it and unionfs-fuse could be located inside -but place them in the toplevel next to the AppRun script and call them from there (from the AppDir):
Code:
./unionfs-fuse ...
./fakechroot ...


"it`d(sic) getting complex" Hmmm, my src2pkg has well over 10,000 lines of code, so a few (dozen) commands don't scare me a bit... we've not done any error checking/handling yet, either, and as I said, misuse of either unionfs-fuse or fakechroot can leave hanging mounts/umounts which can sometimes only be undone by re-booting!

For me, it's useful to tackle a 'hard customer' like chrome first, but you might do better to take on some simpler-running programs at first so you can see how things should and do work. Create some chroots manually, then fakechroot into them manually and run some commands, like 'pwd', 'which google-chrome', etc. And, at the same time examine some of the sub-mount points which Puppy has lots of. BTW, the existing aufs mount and link structure used by Puppy would, expectedly, cause you some extra grief. Basically, any sub-mount of the real '/' needs to be mount-binded under the same locations under the union dir before starting fakechroot.

Also, if you do not want complete sequestration of any writes, then you need to bind your $HOME to $union/home/user -but this will destroy the ability to have every single file sequestered which is created under the chroot (useful for multi versions of programs). Course, as part of cleanup after the union is unmounted, you could move sequestered items into your $HOME. You will also need to clean up some temp items from the RW area, like files under /tmp and /var -leaving them in place could prevent startup of the program next time.
sunburnt


Joined: 08 Jun 2005
Posts: 5016
Location: Arizona, U.S.A.

PostPosted: Fri 01 Feb 2013, 13:45    Post subject:  

I thought that fakechroot had to be with the .pak files, but putting unionfs-fuse there did the trick.
I moved fusermount and fakechroot to /GoogleChromeApp, and unionfs-fuse to /GoogleChromeApp/app/opt/google/chrome.
Code:
HERE=$(dirname $0)
$HERE/app/opt/google/chrome/unionfs-fuse -o nonempty -o allow_root -o cow $READ_WRITE=RW:$HERE/app=RO:/=RO $UNION

# Make dir. for Chrome user settings.
PROFILE=$HOME/.google/chrome/profile
mkdir -p $PROFILE
 
$HERE/fakechroot chroot $UNION /usr/bin/google-chrome --no-sandbox --user-data-dir=$PROFILE --disk-cache-size=20971520
echo "###  $?"
$HERE/fusermount -u -z $UNION

No errors for it trying to find the "fuses" and fakechroot. # And now the .pak file errors are gone.
Code:
[12800:12800:0201/105242:ERROR:master_preferences.cc(104)] Failed to read master_preferences file at /mnt/sda3/AppPkg/build/CreateChromeAppDir-0.2/GoogleChromeApp/app/opt/google/chrome/master_preferences. Falling back to default preferences.
/mnt/sda3/AppPkg/build/CreateChromeAppDir-0.2/GoogleChromeApp/fakechroot: line 144: 12800 Aborted                 env LD_LIBRARY_PATH="$paths" LD_PRELOAD="$lib" "$@"
###  134

Only one file not found. Same as the .pak files, it`s looking in the AppPkg dir. and not in the union where it should be looking.
But it falls back to the default preferences file, so it doesn`t seem like it would stop it from running.
And I tried changing the permissions for /dev/null, it didn`t make any difference, still the same error.
It looks like fakechroot is still struggling to work properly. I`m beginning to think that Puppy`s to blame for some of this.
amigo

Joined: 02 Apr 2007
Posts: 2232

PostPosted: Sat 02 Feb 2013, 03:36    Post subject:  

Quote:
fakechroot: line 144: 12800 Aborted env LD_LIBRARY_PATH="$paths" LD_PRELOAD="$lib" "$@"

That doesn't look good at all. I still think that all the accessories should be outside of the union.
sunburnt


Joined: 08 Jun 2005
Posts: 5016
Location: Arizona, U.S.A.

PostPosted: Sun 03 Feb 2013, 01:34    Post subject:  

Okay, took all the exec. files out of any app., write, or union dirs.

I`ve been trying to get Geeqie to work this way.
Code:
sh-4.1# env: can't execute 'geeqie_1.0-10_i386/geeqie_1.0-10_i386.u': Permission denied

Unionfs-fuse command ( $uFS = lib. and app. dirs. ):
Code:
./unionfs-fuse -o cow,allow_other $Pkg/$Pkg.w=RW:$uFS:/=RO $Pkg/$Pkg.u

The union dir. has the write, lib, and app. dirs. all appearing in it.
Failure comes at fakechroot, the union won`t allow it to exec.
So damn odd because xMahjongg has no problem with chroot.

I tried union mounting on / ( bad idea ), there`s a nonempty option.
But all it does is to mount over its contents just like regular mount does.
# If only mount had the option: mount -o transparent

# You know that unionfs-fuse has its own chroot option don`t you?
Tried using it but no go. There`s so few examples to follow.

This seems to be close to working, I spent the night Googling but no help.
sunburnt


Joined: 08 Jun 2005
Posts: 5016
Location: Arizona, U.S.A.

PostPosted: Sun 03 Feb 2013, 13:18    Post subject:  

Update: Finally found a web page with the same error type. The reply said the /etc/fuse.conf file needed a line in it.
Code:
# Set the maximum number of FUSE mounts allowed to non-root users.
# The default is 1000.
#
#mount_max = 1000

# Allow non-root users to specify the 'allow_other' or 'allow_root' mount options.
#

user_allow_other

I uncommented the bottom line as specified. No change, same error...
.



Powered by phpBB © 2001, 2005 phpBB Group