Bind mounts do the same job as links. Links are better!

For discussions about programming, programming questions/advice, and projects that don't really have anything to do with Puppy.
sunburnt
Posts: 5090
Joined: Wed 08 Jun 2005, 23:11
Location: Arizona, U.S.A.

#46 Post by sunburnt »

Ibidem: I think your script example was for chroot into an empty dir?

My script was including / in the union stack, so everything should work. Right?
This explains the error "/proc is already mounted": it was already working.

So why's it fail to find the google-chrome file, which I can see in the union?

Ibidem
Posts: 549
Joined: Wed 26 May 2010, 03:31
Location: State of Jefferson

#47 Post by Ibidem »

sunburnt wrote:Ibidem: I think your script example was for chroot into an empty dir?

My script was including / in the union stack, so everything should work. Right?
This explains the error "/proc is already mounted": it was already working.

So why's it fail to find the google-chrome file, which I can see in the union?
Is chrome invoked in a command invoked by chroot?
For reference, this:

Code:

chroot "/path/to/"  something
setup.sh
changes the root directory to /path/to, searches the PATH relative to the new root, runs the command "something" from within the new root, and then exits the chroot and runs setup.sh.

sunburnt

#48 Post by sunburnt »

That's pretty much what my run script does.
The run script is at the bottom.

Code:

======= Script: chrome 
#!/bin/sh
cd ${0%/*}/.AppPkg
export Pkg="chrome-24_i386"
./setup google-chrome --user-data-dir=profile --disk-cache-size=20971520 &

======= Script: setup 
#!/bin/sh 
#>>>>>>>>>>>>>>>>>>>>>>>>>>>>>   Mnt. Sq. Files & Union, Run App. 
if [ "$1" ];then
	echo -e "\n\n###  Run:  $Pkg\n"
	App="$Pkg/$Pkg.app"									### Mount App. Sq. File
	Sq="$Pkg/$Pkg.sq"
	[ ! -d $App ]&& echo -e "###  ERROR:  NO App. Dir.:  $Pkg.app\n" && exit
	[ ! "`ls $App`" ]&& if [ -f "$Sq" ];then				# IF app dir. empty
		mount -t squashfs -o loop $Sq $App					# IF Sq. file, mnt.
		[ $? -gt 0 ]&&
			echo -e "###  ERROR:  Fail Mount App. File:  $Pkg.sq\n" && exit
		echo -e "#  Mount App. Sq. File:  $Pkg.sq\n"
	fi
### Mount union fs
	unionfs-fuse $Pkg/$Pkg.rw=RW:$Pkg/$Pkg.app=RO:/=RO $Pkg/$Pkg.u
	[ $? -gt 0 ]&&
		echo -e "###  ERROR:  Fail Mount Union:  $Pkg.u\n" && setup &
	echo -e "#  Mount Union:  $Pkg.u\n"
	./run $@ &												### Run hook file

else #>>>>>>>>>>>>>>>>>>>>>>>>>>>>	Delete Dependency Link, Unmount Sq. Files.
	echo -e "###  End:  $Pkg\n"
	umount -f $Pkg/$Pkg.u				    				# Unmount union fs
	[ $? -eq 0 ]&& echo -e "#  UnMount Union:  $Pkg.u\n" ||
		echo -e "###  ERROR:  Fail UnMount Union:  $Pkg.u\n"
	Mnt=`mount`
	App=`echo "$Mnt" |grep $Pkg.app`
	if [ "$App" ];then umount -d $Pkg/$Pkg.app				# Unmount App. file
		[ $? -eq 0 ]&& echo -e "#  UnMount App. Sq. File:  $Pkg.sq\n" ||
			echo -e "###  ERROR:  Fail UnMount App. File:  $Pkg.sq\n"
	fi
fi

======= Script: run 
chroot $Pkg/$Pkg.u $@
./setup &
I tried this setup with avidemux, and it can't find the exec. file either.
Chrome's wrapper script errors oddly when it can find it.
But xMahjongg runs great with the exact same setup. Really weird...

Ibidem

#49 Post by Ibidem »

I've modified the shell scripts in ways that should be equivalent to your version.
sunburnt wrote:That's pretty much what my run script does.
The run script is at the bottom.

Code:

======= Script: chrome 
#!/bin/sh
cd ${0%/*}/.AppPkg
export Pkg="chrome-24_i386"
./setup google-chrome --user-data-dir=profile --disk-cache-size=20971520 &

======= Script: setup 
#!/bin/sh 
#>>>>>>>>>>>>>>>>>>>>>>>>>>>>>   Mnt. Sq. Files & Union, Run App. 
if [ "$1" ];then
	echo -e "\n\n###  Run:  $Pkg\n"
	App="$Pkg/$Pkg.app"									### Mount App. Sq. File
	Sq="$Pkg/$Pkg.sq"
	[ ! -d $App ]&& echo -e "###  ERROR:  NO App. Dir.:  $Pkg.app\n" && exit
	[ ! "`ls $App`" ]&& if [ -f "$Sq" ];then				# IF app dir. empty
		mount -t squashfs -o loop $Sq $App ||
			{ echo -e "###  ERROR:  Fail Mount App. File:  $Pkg.sq\n"; exit; }
		echo -e "#  Mount App. Sq. File:  $Pkg.sq\n"
	fi
### Mount union fs
#this ought to exit on failure, surely?
	unionfs-fuse $Pkg/$Pkg.rw=RW:$Pkg/$Pkg.app=RO:/=RO $Pkg/$Pkg.u  || { echo -e "###  ERROR:  Fail Mount Union:  $Pkg.u\n" && setup & }
	echo -e "#  Mount Union:  $Pkg.u\n"
	./run $@ &					### Run hook file

else 
	echo -e "###  End:  $Pkg\n"
	umount -f $Pkg/$Pkg.u && echo -e "#  UnMount Union:  $Pkg.u\n" ||
		echo -e "###  ERROR:  Fail UnMount Union:  $Pkg.u\n"
	Mnt=`mount`
	App=`echo "$Mnt" |grep $Pkg.app`
	if [ "$App" ];then umount -d $Pkg/$Pkg.app && \
               echo -e "#  UnMount App. Sq. File:  $Pkg.sq\n" || \
               echo -e "###  ERROR:  Fail UnMount App. File:  $Pkg.sq\n"
	fi
fi

======= Script: run 
chroot $Pkg/$Pkg.u $@
./setup &
I tried this setup with avidemux, and it can't find the exec. file either.
Chrome's wrapper script errors oddly when it can find it.
But xMahjongg runs great with the exact same setup. Really weird...
OK, so the "setup &" is unmounting everything...
Let's see if I'm following the logic (BTW, it seems clearer with a code block per file, and your commenting style somehow seems to obscure it):

Code:

chrome
#cd .AppPkg; mount union
# run google-chrome --user-data-dir=profile --disk-cache-size=20971520 &
# == chroot $Pkg/$Pkg.u google-chrome --user-data-dir=profile --disk-cache-size=20971520 
# when this exits we run:
# setup &
Which suggests that your code will not work with wrappers that execute things in the background...
Also, you might check the return values from unionfs-fuse: it looks like a nonzero return would cause that sort of issue.
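That return-value check might look something like this minimal sketch (the helper name and messages are invented; the unionfs-fuse line appears only in a comment, and stand-in commands are used for the runnable demo):

```shell
#!/bin/sh
# try_mount: run a mount-style command and report; a nonzero status
# propagates so the caller can decide to abort.  Illustrative helper,
# not code from the thread.
try_mount() {
	desc="$1"; shift
	if "$@"; then
		echo "#  Mounted: $desc"
	else
		echo "###  ERROR: failed to mount $desc" >&2
		return 1
	fi
}

# In setup it could be used like (hypothetical paths):
#   try_mount "$Pkg.u" unionfs-fuse "$Pkg/$Pkg.rw=RW:$Pkg/$Pkg.app=RO:/=RO" "$Pkg/$Pkg.u" || exit 1

# Demonstration with stand-in commands (true/false) instead of real mounts:
try_mount "demo-ok" true
if ! try_mount "demo-fail" false 2>/dev/null; then
	echo "caught the failure; a real script would clean up and exit here"
fi
```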

amigo
Posts: 2629
Joined: Mon 02 Apr 2007, 06:52

#50 Post by amigo »

I agree with Ibidem about backgrounding the ./setup command.
I hadn't forgotten about this thread. Instead I had been experimenting here on my system as a normal user. Actually, using chroot by itself is impossible as a normal user -even if using sudo! (The same will actually apply to mount itself.)

So I had been looking for other solutions and came across 'schroot' -but it needs configuring for individual chroots -although it claims to be safer than real chroot. But then I found fakechroot, which seems to solve the problem nicely. Both schroot and fakechroot are debian projects, so they should be in the repos you are using.

I hadn't asked before, but what system exactly are you trying this all on? If you are running as root or running some puppy derivative, then your results are not gonna reflect any sort of reality for people using other systems.

Anyway, I just got a chroot working here using unionfs-fuse and fakechroot without any problems at all. I had downloaded the chrome browser a couple of days ago and first got it working normally to avoid any mix-ups. Apparently the chrome builds have fewer dependencies than the real chromium. I'll try to get an example chrome-in-union working tomorrow and post the results.

sunburnt

#51 Post by sunburnt »

Ibidem: I hadn't thought of the effects of it running a backgrounded script.
If the wrapper backgrounds critical stuff, then it's not ready for my script.
I'd thought of adding "sleep 2" after unioning to give it time to be ready.
Not the best way to do it. Know a better way to overcome this behavior?
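One alternative to a fixed "sleep 2" is to poll for readiness with a timeout. This is only a sketch (the helper and the grep on the mount table are illustrative, not code from the thread):

```shell
#!/bin/sh
# wait_for: retry a test command once per second, up to a limit,
# instead of guessing a delay with "sleep 2".  Illustrative helper.
wait_for() {
	tries="$1"; shift
	while [ "$tries" -gt 0 ]; do
		"$@" && return 0
		tries=$((tries - 1))
		sleep 1
	done
	return 1
}

# For the union it might be (hypothetical path):
#   wait_for 10 sh -c "mount | grep -q \"$Pkg/$Pkg.u\"" || exit 1

# Demonstration: a background job creates a file; we wait until it appears.
f=$(mktemp -u)
( sleep 1; touch "$f" ) &
wait_for 5 test -e "$f" && echo "ready"
rm -f "$f"; wait
```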

Comments: yeah, HTML doesn't like tab characters, so best to leave them out.

I see what you've done to fix the union command's error trapping.
And I realized the "unsetup" part needs to test whether there's a mounted union.
Testing with "time" shows that checking first is usually quicker than running the command and handling the error.


amigo: I didn't think you'd given up that easily... :wink:

If normal users lack permission, then even Squash files won't mount?

Yes, Puppy528 and Puppy54. And I know Puppy's an abnormal O.S.
I've looked for a Ubuntu or Debian setup that's like Puppy ( Squash file ).
I've used Debian before; I'm not sure how it differs from Ubuntu.
Tiny Core's so easy to modify, but what is it based on? Debian maybe?

I'll look at fakechroot. A fix for running wrappers that background tasks?

Ibidem

#52 Post by Ibidem »

My only guesses are:
1: postpone umount; let shutdown or a cleanup script handle that

2: use the cgroup API (kernel 2.6.24+ only!) http://www.kernel.org/doc/Documentation ... groups.txt
My guess would be to use something like

Code:

#sorry, no error checking
mount |grep /sys/fs/cgroup || mount -t tmpfs cgroup /sys/fs/cgroup
Pkgcgroup=/sys/fs/cgroup/Apppkg/$Pkg
[[ -d $Pkgcgroup ]] || { 
[[ -d /sys/fs/cgroup/Apppkg ]] || \
      mount -t cgroup -o cpuset,memory cgroup /sys/fs/cgroup/Apppkg 

mkdir $Pkgcgroup
/bin/echo $$ >$Pkgcgroup/cgroup.procs
#union mount here!
}
/bin/echo $$ >$Pkgcgroup/cgroup.procs
#now chroot 
#for cleanup:
rmdir $Pkgcgroup && umount $Pkg/$Pkg.u
Note, this is not a full script, just notes on where things go.

sunburnt

#53 Post by sunburnt »

Ibidem: I see the concept of cgroups and its purpose, but it adds complexity.

amigo: I tried a few fakeroot arrangements, but they didn't fix the wrapper problem.


I was packaging a few AppPkgs to host, and LD_LIBRARY_PATH didn't work!
Ldconfig didn't make any difference. A RoxApp Avidemux doesn't use it.
Lib. not found; then after some messing, the error switched to its lib. dep.
It couldn't have known about the dep. if it hadn't found the original lib.
libSDL-mixer + dep. libmikmod are in the same dir. that's in LD_LIBRARY_PATH.
But it's said that LD_LIBRARY_PATH is a bad hack ( kernel code? ), do not use.
I know about ld.so.cache, but making the libs. globally available is overkill.

SO... I did what works. I just made links to the libs. in /usr/lib ( no ldconfig ).
More cleanup, but there are already the other links in: /opt, /etc, /var, /usr/share.
And it worked of course, following my attitude of using "Links, Dirs., and Files".
And... links work with wrappers that background tasks, unlike changing root.

# How to link $HOME/( ConfigDirs.) => AppPkg/home/$HOME/( ConfigDirs.)?
The dir's name is unique, and quite often it isn't in the package; it's made at run time.
# So at AppPkg run, find new $HOME/( Config.) dirs. and test whether they exist in AppPkg/home/$HOME/( Config.):
if a dir doesn't exist there, move $HOME/( Config.) to AppPkg/home/$HOME/( Config.); if it does exist, delete $HOME/( Config.).
Then make the link: $HOME/( Config.) => AppPkg/home/$HOME/( Config.).
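That move-and-link sequence might be sketched like this (all names are hypothetical; `pkghome` stands for AppPkg/home/$HOME, and the sandbox at the end only demonstrates the logic):

```shell
#!/bin/sh
# Sketch of the idea above: for a config dir the app keeps under $HOME,
# move it into the package's home tree and leave a symlink behind.
migrate_config() {
	home="$1"       # the real $HOME
	pkghome="$2"    # the AppPkg/home/$HOME equivalent
	dir="$3"        # config dir name, e.g. ".appconf"
	mkdir -p "$pkghome"
	if [ -e "$home/$dir" ] && [ ! -L "$home/$dir" ]; then
		if [ -e "$pkghome/$dir" ]; then
			rm -rf "$home/$dir"             # package already has a copy
		else
			mv "$home/$dir" "$pkghome/$dir" # first run: move it in
		fi
		ln -s "$pkghome/$dir" "$home/$dir"  # link back into $HOME
	fi
}

# Demonstration in a throwaway sandbox:
sb=$(mktemp -d)
mkdir -p "$sb/home/.appconf" "$sb/pkghome"
echo data > "$sb/home/.appconf/rc"
migrate_config "$sb/home" "$sb/pkghome" ".appconf"
# $sb/home/.appconf is now a symlink into $sb/pkghome
rm -rf "$sb"
```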

Maybe only one / union like Puppy; the links & dirs. layer in RAM dies at shutdown.
Still need to unmount the Squash files at app. exit, but cleanup's not needed.
The host O.S. would always remain completely pristine and free of clutter.
BUT... the kernel needs a union module :? ( aufs is better than unionfs-fuse ).

My premise for AppPkg was one reliable method ( a standard ) for a GUI builder.

amigo

#54 Post by amigo »

First, I am attaching what is working, so far, for google-chrome. Download the attached archive and unpack it somewhere where you have a bit of space. Inside you'll see a README file which explains how to create an AppDir using the included script. The archive also contains all the files needed except for the google-chrome package.
The resulting AppDir can be used by normal users. The 'meat' of how the AppDir creates the unionfs chroot is in the AppRun script.

"premise for AppPkg was 1 reliable method ( standard ) for a GUI builder" Huh?? A GUI builder? I thought you were trying to implement an easy way for users to run non-installed software bundles, as if they were really installed... My understanding was that AppPkg was supposed to be some sort of framework for users to manage and use bundles that you create.

You need to succinctly define what feature you want to implement. Then separate the idea into two sides: 1) what the user does and how, and 2) what you, the developer, do.

Put yourself first in the user's seat. What should the user do? How should he use your end product? What actions will be required for them to use it? Formulate your concept with these questions. When you think you have the right idea, then manually create one, or more, deliverable bundles. Don't worry just yet about how to automate the process of creating them. Do a couple manually and then try them out -thinking as (or better, running as) a normal user.
Work the kinks out of the implementation -usually after the second or third revision it is time to start over from scratch. Not all of your ideas will work the way you hoped or thought.

Once you have worked out the flaws in the implementation concept, then you can start working out how to automate the process of creating the bundles. The matter of creating the usable bundles is no different than creating software packages of any sort -you still need to keep in mind a couple of hundred things (at least!) -just in order to create a sane and reliable product. The matter of creating the actual content is complex -even if you are relying on pre-compiled content from others.

There is no 'One Way' to create package content of any format! Many bundles/packages will always have some special needs which can't be simply intuited -needing specific instructions. My 'src2pkg' program comes closer than anything else to being able to automatically build packages from many, many types of content -including converting from pre-packaged content- often without needing *any* special instructions from the command-line or as script fragments written into a build script. Otherwise, each distro has its own way of building packages, but they all will require some kind of recipe/script/spell specific to each package.

As far as the 'wrapper' problem, that's right up-front on the implementation end. Instead of being stubborn about finding a way to make it work, ask yourself why you think it needs to be backgrounded anyway! Backgrounding the process means that the script will continue on to the next line:
chroot $Pkg/$Pkg.u $@
./setup &
means that setup will be run before the chroot has been exited. Surely, that is not what you want. No, you want your wrapper to wait until the program has finished so that cleanup and unmounting can be done.
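The ordering point can be seen in a tiny sketch (the function names and log file are invented for illustration):

```shell
#!/bin/sh
# Run the app in the foreground so cleanup is guaranteed to happen
# after it exits; backgrounding would let cleanup race ahead.
log=$(mktemp)

run_app() { sleep 1; echo "app finished" >> "$log"; }   # stand-in for chroot ... app
cleanup() { echo "cleanup done" >> "$log"; }            # stand-in for umount/cleanup

run_app     # blocks until the "app" exits
cleanup     # only now is it safe to unmount

# If backgrounding really is needed, keep the PID and wait for it:
#   run_app & pid=$!; wait "$pid"; cleanup
cat "$log"; rm -f "$log"
```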

There is nothing wrong with LD_LIBRARY_PATH, and it has nothing to do with kernel code. LD_LIBRARY_PATH is used by the run-time linker (from glibc), which the kernel calls to start all programs. Besides, you were probably thinking of LD_PRELOAD anyway. True, it can be used in malicious ways, but it is still widely used. It is not malicious in and of itself. LD_PRELOAD is also used by the run-time linker, not the kernel, not the shell.

As you have seen, modifying the PATH and LD_LIBRARY_PATH can allow your programs to run from anywhere -but that doesn't solve the problem of local/dynamic configuration files, shared items, icons, etc. That's why you need a union. And each app should create its own union -otherwise they will not be dynamically available. The problem with the puppy SFS system is that all the SFS's are made available at boot-time. The SFS-loader idea was meant to be a way to dynamically load/unload SFS images. The problem with that is that it still requires creating/destroying links on the writable system, and that it must either poll for newly added SFS's or be run each time to load new or unload running apps -my understanding of your AppPkg was that it would do something similar to SFS-loader.

Either way, relying on PATH, LD_LIBRARY_PATH and system links will not solve the problem completely. The complete concept here is that a user be able to run the program quite normally -at will. And only a chroot will allow the user to run two versions of the same program which may also use two different versions of some underlying library (which may be a library which is already on the main system, but of still a third version). And only the chroot will overcome the 'other minor paths'.

We're using fakechroot here. fakeroot is something else.

Ubuntu is a derivative of debian. It uses debian packages from both 'stable' and 'testing' branches, plus others which it re-compiles and packages, plus still more stuff which Canonical authors and maintains -stuff like 'unity', 'wayland' and others.
Attachments
CreateChromeAppDir.tar.gz
Kit for creating a unionfs/chroot AppDir of google-chrome.
(4.99 KiB) Downloaded 275 times

sunburnt

#55 Post by sunburnt »

Always a fountain of info, amigo! :D Thanks for the Chrome setup; it's tonight's job...

My hope for a GUI builder, I realized, would probably have to rely on app. build scripts.
There are just too many exceptions to the way apps. are made for Linux. It's anarchy!
The AppPkgs I make are proof-of-concept experiments; a GUI builder would be for users.
I agree that without build scripts for some apps., it'd make a skeleton AppPkg at best.

# AppPkg will do the following, I hope:

Able to contain many apps. in one AppPkg. To make an AppPkg from other AppPkg apps.,
drag each app's inner dir. and exec. link to the target AppPkg, and check /.AppPkg/lib for deps.
Other AppPkg files like setup, run, unionfs-fuse and the menu exec. are generic.
And then edit a new menu.lst file, which is a standard colon-delimited list-type menu file.

Able to run different versions of apps. ( say Chrome 20 & 24 ), even in the same AppPkg.
The problem here is that the user's $HOME config. and /share dep. dirs. don't have the versions.
But separate app. unions correct this. So then configs. are in the AppPkg, not in $HOME.

A good idea, I think, is an AppPkg /lib dir.: add missing libs. easily, as loose files or Sq.
So a "not complete" AppPkg ( what's complete? ) could be easily user-"patched".
This was the problem experiment, with LD_LIB not being able to find libs. in its path.
It works most all the time, I have RoxApps that use it, but it broke down this time.


I have started over from scratch several times now... :wink: I'm getting close, I think.
Using unionfs-fuse and then changing root is such a clean way to implement this.
And you know my dislike for unions, so this is saying a lot.

# Current AppPkg layout ( Chrome-24 example; files are listed under their dir. ):

/Chrome-24.AppPkg/ : AppRun, .DirIcon, Chrome-24
    .AppPkg/ : setup, run, U-fuse, AppPkg.mnu, menu.lst
        chrome-24/ : desktop, icon, lib, chrome-24.sq, chrome-24, Union Dirs.: .app, .rw, .u

sunburnt

#56 Post by sunburnt »

Hey amigo; I extracted the tarball, moved the .deb into the dir., and ran: bash prepare-Chrome-AppDir
It made the AppDir, and I clicked it in Rox to run it, and nothing happened.
So I typed in RoxTerm: /mnt/sda3/AppPkg/build/CreateChromeAppDir/GoogleChromeApp/AppRun
Again nothing. I ran ldd on the chrome file in it, and all the libs. are there.

There is no error reported in RoxTerm; it just ends immediately.
So I don't know what's wrong, and I'm not sure how to diagnose it.
A failed union mount would show an error, but the fake-chroot is: 2>/dev/null

I'll continue probing it to see if I can make sense of it.

PC is a Pentium-D 802, 1 GB RAM, onboard Intel GPU.
I'll try another Puppy version.
This one is Puppy Lupu-005-PolarPup, the one I do all my work on.

# UPDATE: I put "exit" after the union command, and it works well, as I figured.
The fusermount command works, as mount shows no union left running.
Then I put "echo $?" after the fake-chroot command and it returned a 1.

amigo

#57 Post by amigo »

Here's an update attached. The directions are the same. But, do edit the AppRun file to comment or remove the '2> /dev/null' part of the fakechroot command. Then run it from the terminal so we can see what is happening.

This version does some further cleanup of stuff left behind by chrome. I still get some errors here, but it runs alright. I have a feeling that chrome is providing lots of extra script steps which won't be required by most programs. And I'm pretty sure that we are going to need to 'mount --bind' some dirs after creating the union in order to access some stuff properly.

I had to do a few things to get chrome working here from a normal installation of it. I run without dbus normally, but have it packaged, of course. So, I needed to install that. You must have it, or it would show 'not found' in the ldd output. I also had to make /dev/shm world-writable and include this in the init script for udev, so this would be done at boot-time.

If you get it to start, you may want to remove the '--no-sandbox' for google-chrome. I disabled that because 'chrome-sandbox' was failing to start -that may be related to a further dbus warning. But, chrome will start even with dbus and/or the sandbox not working.

I am afraid your AppPkg doesn't sound very easy to use. I'm still not clear on what it consists of or should do. Do you mean a GUI tool which any user can use to build their own AppPkg bundles?

What do you mean by 'new menu.lst'? Do you mean you want to be able to choose which apps to start at boot-time?

Is there supposed to be some AppPkg *program* or is it just a location where you put apps? I understand putting apps together somewhere.

"dislike for unions" != "love LiveCD". What speaks against union mounts? They don't tie up any fixed amount of RAM (as using a RAM disk does). They don't even need space in tmpfs.
Of course there are drawbacks or limitations, as with everything.

Have you read all the notes in the AppDir? Have you seen where you could place and mount FS images instead of using a dir with loose files? I say File System image because of your seeming misunderstanding of SFS vs. sqf (or whatever).

An SFS is simply a filesystem image, with the filesystem being 'squashfs'. If you are using mksquashfs then you are doing the same thing -it doesn't make any difference what the file suffix is. The fact that Puppy does something 'special' with any files it finds with the '.SFS' suffix does nothing to change the file type or content.

The point is that you could -and maybe should- use any other valid filesystem image the same way. The image can even be a writable image. The loop-mount turns any filesystem image into a mountable device. So, you could use an ext2, ext4, btrfs *or iso* filesystem for the image. In fact, using *.iso images would actually be the most universally available. Not every linux is gonna have support for *all possible* fs's. Even a normal iso will give you about 40-45% compression. zisofs will get about 55% and squashfs around 60-65%. cromfs will compress *even more*. The problem with most special filesystems which are compressed/shrunken is that they do lots of that by not supporting/including many file attributes.
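As a sketch of the loop-mount idea (sizes and paths are illustrative, the mount/umount steps need root so they are guarded, and mksquashfs/mkisofs must be installed for the read-only variants):

```shell
#!/bin/sh
# Make an ext2 image file and loop-mount it; the same pattern applies
# to squashfs or iso images.  app-files/ is a hypothetical source dir.
dd if=/dev/zero of=app.img bs=1M count=32 2>/dev/null   # blank 32 MB file
mkfs.ext2 -F -q app.img                                 # format the file itself
mkdir -p mnt
if [ "$(id -u)" = 0 ]; then
	mount -o loop app.img mnt    # the loop device makes the file act as a block device
	cp -a app-files/. mnt/       # populate the image
	umount mnt
fi

# Read-only alternatives:
#   mksquashfs app-files app.sq       # then: mount -t squashfs -o loop app.sq mnt
#   mkisofs -o app.iso app-files      # then: mount -o loop app.iso mnt
```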

The problem with using a 'universal' extra libdir or sfs is that you almost always are overkilling or still missing something. But using multi-versions cleanly -and at the same time- is also possible. And having a version-specific write dir is really a matter of simply naming the AppDir with a version number and using a version-specific location under $HOME for that name-and-version.

Here's an example case. Say you are running a system with glibc-2.13 and chrome-20 works fine when normally installed. But now you wanna try chrome-22 and chrome-24 at the same time, but chrome-22 requires glibc-2.15 and chrome-24 requires glibc-2.17. What 'cha gonna do there? LOL. The problem winds up being the same as with any software packaging solution.

Your idea of using stats about how often each lib gets used and being able to build a really useful extra libs image also would not resolve the problem. Knowing how often a lib gets used (or the probability of it being used) has no relation to how much it is needed when it is required.

You can't have a GUI to 'something' if 'something' doesn't exist yet. Whether you wish to distribute bundles yourself, or distribute software for creating them, you're gonna need solid scripts which do the complete job -as easily as possible.

Be assured, it wouldn't take a very long script used with src2pkg to create these things on-the-fly. The AppRun script for this chrome AppDir looks nothing like my normal ones which I create using my template. And the little prepare-Chrome-App script is also just a proof of concept. The really hardest part is creating and sanitizing the content. You can't rely 100% on using other people's packages -sooner or later you'll want/need a variation. But, I have src2pkg which already does that. Have you not thought yet that the content under the 'app' directory could just as well be the result of 'make DESTDIR=app install' in your favorite sources?
But you also have to inspect and sanitize that content and add/remove stuff *every time*.

Anyway, it'll just take a few lines in an add-on module to src2pkg to create these things -as raw files or an fs image of any sort wanted.
src2pkg already does the whole download/configure/compile/sanitize/package steps very well. It just takes a snippet to create an image of the DESTDIR and generate an AppDir of the thing or use a pre-written one.
Attachments
CreateChromeAppDir-0.2.tar.gz
(5.28 KiB) Downloaded 244 times
Last edited by amigo on Tue 29 Jan 2013, 11:25, edited 1 time in total.

amigo

#58 Post by amigo »

Duplicate.

sunburnt

#59 Post by sunburnt »

Hi again amigo; this is my experience: it works great, then I take it next door to show my friend and it bombs.
We need a standardized "known" base for the O.S. so everyone is on the same page. No more assumptions...
Apple has it much easier than M$; Apple's hardware is known and fixed, while M$ has to support everything.

I'll try your update when I shut off this running AppPkg of Chrome. Testing while it's on does odd things.

A GUI tool will have to be fairly simple. Developing AppPkg will produce many AppPkgs to host for users.
So far I've downloaded pkgs. manually; a GUI interfaces this part of AppPkg building, plus build choices.
The background code resolves deps. and downloads them, assembles the AppPkg, and makes the scripts.
There is always odd stuff to fix; some of it can be automated, and some of it will always be manual labor.
Any tools for working with AppPkgs would also be AppPkgs, of course.
A link: /opt/AppPkg => /mnt/(partition)/AppPkg gives a reliable path to AppPkg apps. and any tools.

If src2pkg will download binary app. packages and get their deps., then there's no need to parse web pages.
I assume it will do this for Debian, Slackware, etc.? So then AppPkg would have a bigger base of O.S.s.
I have already made a GUI in BaCon like the Debian packages page: an App-Groups list and an Apps. list.
A second tab shows the description for the selected app. A third tab is build control, options, and paths.

For AppPkgs that have more than one app., I wrote a popup menu in BaCon; it uses a std. menu format.
This is part of the menu file for my AppPkg of the Xfe and Fox suites of apps. ( Name Desc.:Exec.:Icon ).

Code:

Xfe  File Manager:xfe:xfe
Xfi  Image Viewer:xfi:xfi
Xfp  Package Manager:xfp:xfp
Xfv  Text Viewer:xfv:xfv
Xfw  Text Editor:xfw:xfw
When you click on Xfe_Fox.AppPkg or run AppRun, it pops up the AppPkg menu to select from. ( See pic.)
It was a nice experiment, as it shares the FOX libraries in a /lib Sq. separate from the apps'. Sq. files.
It's infuriating, because in this AppPkg LD_LIBRARY_PATH works just fine ( it works, then it doesn't...).
This is the extra /lib dir. I spoke of: not system-wide, but shared in an AppPkg with >1 app. needing it.
End-users can also use this same /lib dir. to easily patch the AppPkg with any libs. their O.S. is missing.

When I say SFS, I mean a Sq. file made for Puppy that's to be unioned. Some SFS will work non-union.
A Sq. file is a normal Squash file of any kind; it can be a Bash tutorial, install app., union image, anything.
AppPkg can use Sq. files, or the mount dir. can just contain the app's files ( good for developing the app.).

I thought of using the original app. files r-w, but this allows corruption; best to use the app's files r-o.
Hummm... Use an iso image, interesting idea, amigo. Sq. files use a kernel module; does an iso file too?

Versions: Most apps. I've seen create dirs. in $HOME and /usr/share, and these dirs. aren't versioned.
If the apps. could be made to put versions on their dirs., that'd be nice. If we could only configure apps...
Like modifying the paths within binary exec. files: some apps. will work, and all too many of them won't.

Lib. stats would be useful for building an O.S. ( what to put in it...). It could also help in making apps. too.

### A couple of Qs I've been meaning to ask:
# Do all libs. load to RAM? So a lib. "file" is read once? So can I unmount a lib. Sq. file after app. startup?
# Is a mount dir. on a partition slower than one in RAM ( say: /tmp )? Like a link, it must be resolved.
....... So to access a link or mount point that's on a HD, USB, or CD, the device is accessed continually?
....... Web page clicks cause the HD light to flash. The /etc/resolv file should be in /tmp. Bad O.S. design.


### Again, many thanks for hanging in there on this, amigo. Now I think it'll become a reality! Terry B.
Attachments
AppPkg_Menu.png
AppPkg popup menu for Xfe + Fox suites of apps.
(31.18 KiB) Downloaded 216 times

sunburnt

#60 Post by sunburnt »

# NOTE: Testing the LD_LIBRARY_PATH problem. Previously working apps. using LD_LIBRARY_PATH have stopped working!

I saw something on the web saying LD_LIBRARY_PATH was disabled in their kernel. I tried Puppy5.4.X.5 and got the same error.

amigo

#61 Post by amigo »

Of course I can see the case for combining several executables into one bundle/package -lots of packages contain more than one program. In the case of FOX there's a limited number of progs available anyway -even outside the official 'suite'. There are several ways to offer the option of which prog to run. The simplest for use with ROX is to have the menu in the right-click of the AppDir -the menu entries are created in AppInfo.xml. Then you create matching code in the AppRun which handles the chosen option. Incidentally, running the AppRun script from any terminal or script, or using another program to start it (run-box of your WM, etc.) -these will all respond to the same options. Here's a snip from an AppInfo.xml which does this:
<AppMenu>
<Item label="List main packages" option="--list-main"/>
<Item label="Search Installed" option="--search"/>
</AppMenu>
The option '--list-main' or '--search' gets passed to the AppRun, which then takes whatever action:

Code:

if [[ $1 == '--list-main' ]] ; then
	PKG_LIST=/tmp/pkg-list.$$
	ls -1 /var/lib/tpkg/packages |grep -v -E '(*\-devel\-*|*\-docs\-*|*\-i18n\-*)' > $PKG_LIST
	xterm -title 'Pkg-Tools Main Packages' -fn $XTERM_FONT -geometry 45x40 -e "cat $PKG_LIST |less"
	rm -f $PKG_LIST
elif [[ $1 == '--search' ]] ; then
	SEARCHFILE=`greq -e"Search for file: "`
	cd /var/lib/tpkg/packages ;
	RESULT="`grep -H $SEARCHFILE$ * 2> /dev/null`"
	if [[ -n $RESULT ]] ; then
		exec greq -t"Search Results" -p "$RESULT"
	else
		exec greq -t"Search Results" -p "No match found!"
	fi
fi
Did my chrome AppDir run on your system? If it did, I would still like to see the output when running the AppRun from the terminal.

You have a lotta questions -I'll try to be helpful...
# Do all libs. load to RAM?
Yes, every library and program gets loaded to RAM. First, understand that the kernel does not execute dynamically linked programs itself. It calls /lib/ld-linux.so.2 to do that. ld-linux is the dynamic loader. When asked to start a program for the first time, ld-linux first loads the program into RAM at an appropriate address, then looks through the prog to see what symbols are in there -the names of libraries the program is linked to. It then loads each of those libraries into RAM -examining each one of them for symbols and acting accordingly. It then passes these locations to the executable as it starts it.

Now, each of the libs and the program are in the RAM cache. If you stop the program, all the libs and the prog remain in the cache. If you *restart* the program it does not get loaded again -everything is still available in the cache. This is why programs usually start faster the second time you run them.
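You can watch ld-linux do this from a shell: `ldd` asks the loader to resolve a binary's libraries without actually running it (the exact paths printed will vary per system).

```shell
# List the libraries ld-linux would map in for /bin/sh (read-only, safe):
ldd /bin/sh
# Same thing, asking the loader directly to trace instead of executing:
LD_TRACE_LOADED_OBJECTS=1 /bin/sh
```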

In Puppy and other Live distros which 'run from RAM', *any running program is in two locations in RAM*. The 'run-from-RAM' has nothing to do with the above. run-from-RAM means that the main '/' file system is located in a *reserved* portion of RAM which is being treated like a hard disk. This portion of RAM is *unavailable for other use*.

When ld-linux tries to read in a file it must access the device it is on -'normally' a spinning hard disk. The file contents must be transported out of the device, over its cable to the main bus and then sent into RAM. Of course this takes time. The reason run-from-RAM is faster is that the files are already in RAM -where that section is being used as a ramdisk. The contents must only be accessed at their RAM address, transported over the bus (to the CPU) and then loaded (over the bus again) into the cache area for use. In theory you could unload the ramdisk once everything needed was cached -but there are lots of implications and it would be messy anyway.

Version-ed dirs: Any app that *writes* to /usr/share as part of its running is doing the Wrong Thing. $HOME of course -and many times /tmp, /var/run, /var/lock. Using these 'mini-chroots' to run programs lets you completely sequester the app from your normal $HOME structure -or not. But sequestering means you can run separate versions completely apart. Everything can be written normally to $HOME, /var, wherever -or you can sequester part or all of it -using mount --bind.
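A minimal sketch of that mount --bind sequestering -the paths here are illustrative, it needs root, and it quietly skips itself where mounts aren't permitted:

```shell
#!/bin/sh
# Give an app a private view of a directory via a bind mount (needs root).
SANDBOX=/tmp/app-home           # illustrative private dir
mkdir -p "$SANDBOX"
if [ "$(id -u)" -eq 0 ] && mount --bind "$SANDBOX" /mnt 2>/dev/null; then
    # Anything looking at /mnt now sees $SANDBOX instead.
    ls /mnt
    umount /mnt                 # restore the normal view
else
    echo "bind mounts need root and mount permission; skipping" >&2
fi
```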

iso vs squashfs:
Accessing any file system will require support for that FS in the kernel -whether hard-linked in or as a module. The point is that nearly *every system* (except for embedded, maybe) will support CD's, right? But every other file system may or may not be there -very few systems run with support for every possible FS you could use. FAT might be just as ubiquitous or more. But we can't use that for OS files anyway. One disadvantage of squashfs or any other compressed FS is that even more RAM is used temporarily to decompress the file. So, while loading the file you have one copy (compressed) in the RAM disk, a partial copy in RAM (being decompressed) and a partial copy (uncompressed) in the cache!
Using a compressed FS requires more total RAM than using a non-compressed FS. Plus, decompression takes time -another trip down the bus and back...

"If we could only configure apps" -Of course we can! Roll your own, configured as needed -patching where needed. There is *no* getting around this, sooner or later.

"Any tools for working with AppPkgs would also be AppPkgs" chicken vs. egg -again. The first one still has to be built using some other method. I think you need to separate the product from the process -if only to be more clear to your users. If the product is to be AppPkg, then let the software used to create them be called AppPkgCreator or whatever. How else will you tell your users... You must install AppPkg in order to create AppPkgs. But you can use AppPkgs without having AppPkg installed. But you need AppPkg installed in order to install AppPkg -wasn't this supposed to be all about using software but not having to install it?

The directions to the user which I like:
1. Download the archive of the AppDir you want to use.
2. Unpack it anywhere you like.
3. Click on the icon to start the program -or first choose from startup options by right-clicking the icon.
# 3 assumes the user uses ROX-Filer (or another AppRun-aware file mgr.) Otherwise the user must start the app using the path to the AppRun -and any options you were providing in the ROX right-click must be offered or handled using some other interface. BTW, gtkdialog is a bad choice because it is rarely available on non-Puppy distros.

The product should be that easy to use -most of the time anyway. But that implies a huge body of work behind the scenes to create those AppDirs.

amigo
Posts: 2629
Joined: Mon 02 Apr 2007, 06:52

#62 Post by amigo »

"LD_LIBRARY_PATH problem" Do you mean trying to use LD_LIBRARY_PATH inside a fakeroot chroot?

User avatar
sunburnt
Posts: 5090
Joined: Wed 08 Jun 2005, 23:11
Location: Arizona, U.S.A.

#63 Post by sunburnt »

Hey amigo; I had tested earlier today, watched movies tonight with my girl friend, Love Potion #9 ( not too bad...).

What I found was the same thing that jrb (I think) found with ChoicePup: LD_LIBRARY_PATH stops working (not in chroot).
I had to add the path to /etc/ld.so.conf and run ldconfig for the app. to run. I`ve not seen this before.
Some kind of Puppy / Linux flaw that pops up. But what are the controlling circumstances?
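One way to debug that sort of failure (glibc-specific, and /tmp/ldpath-demo is just an illustrative path): LD_DEBUG=libs makes ld-linux print every directory it searches, so you can see whether LD_LIBRARY_PATH is being consulted at all.

```shell
# Ask glibc's loader to show its library search for a trivial program:
mkdir -p /tmp/ldpath-demo
LD_LIBRARY_PATH=/tmp/ldpath-demo LD_DEBUG=libs /bin/true 2>&1 \
    | grep ldpath-demo || echo "loader never looked in /tmp/ldpath-demo"
```

If the directory never appears in the trace, the loader is ignoring the variable, and adding the path to /etc/ld.so.conf and running ldconfig is the usual workaround.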

It`s 3:00 am here, I`m ready for some shut eye. Tomorrow I`ll have the time to run and test your build.
I didn`t even read your upper long post, that I`ll do with my morning coffee.

Girl friends day off, so... I`ll let you know how it goes.

User avatar
greengeek
Posts: 5789
Joined: Tue 20 Jul 2010, 09:34
Location: Republic of Novo Zelande

#64 Post by greengeek »

Of course I can see the case for combining several executables into one bundle/package
Is there any value in trying to group apps according to which libs they require? eg: creating a "group-static" that is clustered around common libs. Any ram/cpu saving in such an idea? (No point NOT having an app on your system if it creates no extra overhead to have it there...)

amigo
Posts: 2629
Joined: Mon 02 Apr 2007, 06:52

#65 Post by amigo »

"Is there any value in trying to group apps according to which libs they require?" See the package structure here:
http://distro.ibiblio.org/amigolinux/di ... PKGS/i586/
The source tree looks similar:
http://distro.ibiblio.org/amigolinux/di ... .0/SOURCE/
There the source tarballs are right inside the dirs with the build scripts, etc. But, for KISS-5 I have the sources all in a separate directory -so you can also get a quick alphabetical list of the sources. It's what you call the filesystem as a database -after all filesystems *are* databases.

That said, it doesn't resolve things at all, as far as run-time dependency resolution. And neither will some web-page listing, master-list or any probability statistics. The only thing that matters is making exactly the right library available. This is nearly always the *same* one that was used when the program was configured, compiled and linked. If you are using pre-compiled binaries, the only way to be sure is to know *exactly* which libs were present when the binary was built.
Again, a master list will not really do the job -each package should contain the information which leads you to those exact libs. Usually the package does not contain the repo info. Any pkg mgr will combine your settings for repo URL, plus the path/name of the needed lib/prog -of course real integrity here implies that the library package comes from the same repo where the program package came from -or from some assumed availability base. The very same libs from the very same tool-chain with the same original configuration, options and mix of software installed at the time the program was built. That is the One and Only way to be absolutely sure that a program will run.

How you can assure that is the whole problem. And every choice you make about how you do that has advantages and disadvantages or limitations. Using such a chroot method one could access a completely different OS to run that program with -only the running kernel would be shared, bypassing *all* system libs.
Of course, you can also statically link all the libs and the proggie together -but that makes creating deliverables really, really tedious. Lots of things really won't compile completely static without *lots* of work. The chroot method allows you to use stuff normally -if you need extra libs or newer versions of something you simply include the normal lib in your bundle -the layering order of the union takes care of that.
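A sketch of that layering using overlayfs (one union implementation among several; needs root, paths illustrative, and it skips itself where mounts aren't permitted): files in the bundle's upper layer shadow the same paths in the system layer below.

```shell
#!/bin/sh
# Shadow a system directory with a bundle's files via an overlay union.
B=/tmp/union-demo
mkdir -p "$B/upper" "$B/work" "$B/merged"
echo 'bundle version' > "$B/upper/demo.txt"      # the bundle's own file
if [ "$(id -u)" -eq 0 ] && mount -t overlay overlay \
     -o lowerdir=/usr,upperdir="$B/upper",workdir="$B/work" \
     "$B/merged" 2>/dev/null
then
    cat "$B/merged/demo.txt"    # the upper layer wins over the lower one
    umount "$B/merged"
else
    echo "overlay mount unavailable here; skipping" >&2
fi
```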

"no extra overhead" -there's always the overhead of having something somewhere -on disk or whatever. Of course, you could even create something really tiny, which connects over the net to an iso image somewhere, mounts that iso remotely and makes them ther' libs seem like they really are on your machine -and they don't even use any RAM until ld-linux loads *straight into the cache*.

You could make this tiny thing be completely self-contained so that it would not even need unpacking in order to use it: an executable self-extractor stub with a payload which is unpacked and a script/command executed. Just download it, click it (from *any* file mgr) and the program runs -fully sequestered from anything else on your system in its own 'sandbox'. I *have* created such executables. But it is a rather complex job to do and they are still mostly architecture-dependent -a bundle made for an ARM machine won't run on an intel machine, etc.
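For the curious, the stub idea can be sketched in a few lines of portable shell -this toy builder and its payload are entirely illustrative, not amigo's actual tool:

```shell
#!/bin/sh
# Build a toy self-extracting bundle: a shell stub with a tarball appended.
PAYLOAD=/tmp/sx-payload
OUT=/tmp/selfextract.run
mkdir -p "$PAYLOAD"
printf '#!/bin/sh\necho hello from the bundle\n' > "$PAYLOAD/run.sh"
chmod +x "$PAYLOAD/run.sh"

cat > "$OUT" <<'EOF'
#!/bin/sh
# Stub: everything after the marker line is a tar archive of the payload.
LINE=$(awk '/^__ARCHIVE__$/ {print NR + 1; exit}' "$0")
DEST=$(mktemp -d)
tail -n +"$LINE" "$0" | tar -xf - -C "$DEST"
exec "$DEST/run.sh"
__ARCHIVE__
EOF
tar -cf - -C "$PAYLOAD" run.sh >> "$OUT"
chmod +x "$OUT"
```

Running /tmp/selfextract.run then unpacks itself to a temp dir and prints "hello from the bundle" -all from one click-able file.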

Now, maybe you begin to see how big the problems are -the easier you want to make it for your user, the more complex is your job. Maintaining even a handful of packages/bundles can mean *lots* of work. Doing them manually becomes impossible -you can only create more by using a very good system for creating them.

Post Reply