Puppy Linux Discussion Forum
Puppy HOME page : puppylinux.com
"THE" alternative forum : puppylinux.info
Static programs - isolating library $PATH [SOLVED]
Moderators: Flash, Ian, JohnMurga
Page 2 of 2 [28 Posts]
amigo

Joined: 02 Apr 2007
Posts: 2297

PostPosted: Tue 01 Jan 2013, 07:58    Post subject:  

A statically-linked program will require more RAM simply because it is larger. The size difference can be dramatic, depending on just how many libraries it needs. Of course, using a couple may be alright. But some of the things you might most want to have as statically-linked applications may be very large indeed: Firefox, SeaMonkey, Thunderbird, Chrome, etc.

Still, the best way to keep things small is to know exactly what each thing needs. Only very good dependency information can provide this, and dependency info can only be accurately determined *at compile time*. Since even compile-time options can change the requirements of a lib/program, let alone inter-versional differences, the real requirements can only be determined on the build system, at build-time.
npierce

Joined: 28 Dec 2009
Posts: 858

PostPosted: Tue 01 Jan 2013, 16:22    Post subject:  

greengeek wrote:
Here's an example: Let's say I want to start with a stripped down Puppy (pupngo) and add only two other programs - a word processor and a browser. Imagine that both of those programs are labelled as being “statically linked”, yet both initially fail to run because each is looking for something called libXt.so.6 (which suggests to me that they are not fully statically linked...). I can grab a similarly named lib from another puppy, but I might find one that's not exactly the right name - like libXt.so.6.1.2 - so I might symlink the missing file to the “appropriate” lib and the programs might run.

However, I have no way of knowing if those programs were intended to run with libXt.so.6.1.2 (which is what they are going to actually be running with...) or whether the person who turned them into “statics” was using other slightly different libs.

OK, so you are basically trying to prevent the age-old problem of newly installed packages overwriting shared libraries with versions that are incompatible with previously installed programs. You want each program to be able to find a library that is compatible with it without stepping on the toes of other programs' libraries.

One of the things that attracted me to Linux in a previous century was that I was tired of that very problem, which at the time had reached almost epidemic proportions in the Microsoft world. I read about Linux's system of versioning for shared libraries, and its ability to have multiple versions of a library installed in a way that each application could link with the appropriate version. "That," thought I, "sounds like an O.S. that has been designed with good common sense. That's the O.S. for me!"

While I still find that there is much "common sense" in the way things are done in Linux, it wasn't long after I started working with it that I found out the hard truth that the reality of library versioning in Linux did not always adhere to the good intentions set forth in the documentation. (If it did, we would not be having this discussion today. Smile ) In order for guidelines to work, people have to actually follow them, but people are only human. In many, many cases things really do work as intended, but some library developers never quite grasped the concept of how to maintain compatibility with old programs. Others seemed to act as if they simply couldn't be bothered with maintaining compatibility.

Still, while the implementation of library versioning is imperfect, it might be worth reviewing here how things are SUPPOSED to work.

In your example, libXt.so.6 should be a symlink to a file, not an actual file. That name is what is known as a "soname". The symlink will point to the actual library file, say, libXt.so.6.0.0, or (as in your example) libXt.so.6.1.2.

(If you have your devx*.sfs loaded, there should also be another symlink named "libXt.so". This is used when compiling, and will usually point to the soname with the highest major version (or sometimes the library file itself) unless you have changed it, which you would do if you wanted to link a program you are compiling to an earlier major version. But this symlink is of no concern to this discussion.)
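The chain of symlinks described above is easy to see for yourself. Here's a minimal sketch that just recreates the naming scheme with an empty placeholder file in a scratch directory (no real library involved):

```shell
# Recreate the soname naming scheme with placeholders (scratch dir only).
mkdir -p /tmp/soname-demo && cd /tmp/soname-demo
touch libXt.so.6.1.2                  # stand-in for the actual library file
ln -sf libXt.so.6.1.2 libXt.so.6      # the soname, used by the run-time linker
ln -sf libXt.so.6 libXt.so            # the linker name, used when compiling
ls -l libXt.so*                       # shows both symlinks pointing down the chain
```

On a real system, `ls -l /usr/lib/libXt.so*` (or wherever your distro keeps its X libraries) will show the same pattern.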

Within their list of "NEEDED" objects in the binary file, programs will provide the soname, which indicates the name and major version number of the library that they need, in this case "libXt.so.6".
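You can list those NEEDED sonames yourself with readelf or objdump (both are in the devx). A sketch, using /bin/sh as a stand-in since any dynamically linked binary will do; a program linked against libXt would show libXt.so.6 in the same list:

```shell
# List the sonames recorded in a binary's dynamic section.
readelf -d /bin/sh | grep NEEDED
# The same information via objdump:
objdump -p /bin/sh | grep NEEDED
```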

In theory, if everything works as intended, you should need only one actual libXt.so.6.x.x file in your filesystem. If you have one program that was originally linked to libXt.so.6.0.0, and one that was originally linked to libXt.so.6.1.2, you should only need libXt.so.6.1.2, which should be compatible with both programs, since a minor version change is not supposed to break compatibility with old programs. (And I hastily remind you of the two words that started this paragraph, "In theory" -- I'm painfully aware that things don't always work as intended.)

So, again in theory, a developer could create an application, release a binary executable linked against certain libraries, then retire from this mind-numbing business, perhaps take up newt-husbandry, and her program would keep working forever as long as it was installed on a distro that had the same major versions of the libraries it was compiled with, and the minor versions were the same or newer.

Gee, wouldn't that be swell.

But in reality, although many of those libraries might be well maintained, and continue to be compatible through the years, all it takes is one bad apple to introduce incompatibility and render the program useless. Soon users would start showing up at the former developer's newt ranch, asking her to come out of retirement and modify her program to work with the broken library.

And, sadly, while this problem may be more likely to occur with a binary, a library can also break compatibility at the source code level. In that case, simply recompiling the program won't fix things.

One example that comes to mind is GTK+ 2. A search through these forums will turn up quite a few posts over the years related to incompatibility problems with new "minor" releases of GTK+ 2. I would be among the first to acknowledge that the GTK+ folks have contributed vast amounts of useful code to the GNU/Linux world over the years, but I would not say that compatibility is their top priority. Imagine how much effort could have been saved by developers all over the world if they didn't need to stop, troubleshoot, and adjust their code when a new "minor" version of GTK+ 2 was released. (I think it was mikeb who used to refer to GTK+ 2 as "the source of all bugginess". Smile )

If releases that break compatibility with old programs are correctly identified as major releases, then programs can be linked to the appropriate major version. But when library developers neglect to increment the major version number for a release that breaks compatibility, the only recourse (short of modifying the program code to work with the broken library) is to do what you are considering: either link statically, or link dynamically but segregate the incompatible "minor" versions and use LD_LIBRARY_PATH (or one of the other alternatives) to tell the linker where to find the appropriate one. And in a case like GTK+ 2, that would also require segregating a basket-full of interdependent libraries.

(In defense of GTK+, I should say that I understand that the developers probably want to feel free to make major changes on a frequent basis. And if they incremented the major version number each time they did so, distros would then be forced to include many versions of GTK+ to support the various programs that use GTK+. But this GTK+ user thinks that it sure would be nice if they could give compatibility a little more thought when making changes, and save those things that simply cannot be changed without breaking compatibility for an occasional major release. Of course, it is easy for me to sit in my chair and talk about how things should have been done, and it is another thing to face the compromises that sometimes must be made in a massive project like GTK+.)

Another way of dealing with library compatibility issues is by using "symbol versioning". This was introduced for Linux in the late 1990s to allow a library to add new functions and even make compatibility-breaking changes to existing functions without the need to increment the major version number. This is done by having multiple versions of a function within a single library file, and assigning a version identifier to each. This is what has allowed the major version number of libc to remain at 6 for an eternity, and why at least some programs compiled with version GLIBC_2.1 will still work with recent versions of libc.

With "symbol versioning", the linker/loader checks to ensure that the available library (pointed to by the soname) needed by a program actually supports all of the needed functions, and -- if it has multiple versions of a function -- has the needed version of the function. Before "symbol versioning", the major version number of a library was supposed to be incremented not only when changes broke compatibility with old programs, but also when changes meant that the old version of the library wouldn't work with new programs. In other words, minor revisions were limited to stuff like bug fixes and minor improvements to functions which didn't affect how the program interfaced with it and maintained its behavior in a way consistent with the documentation. So without "symbol versioning", there would be many more major versions of those libraries which now use "symbol versioning".
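If you are curious, the versioned symbols are visible with objdump. This sketch locates the libc that /bin/sh actually uses (the path varies between distros, hence the ldd lookup) and lists a few of its GLIBC_x.y tags:

```shell
# Find the libc used by /bin/sh, then show some of its versioned symbols.
libc=$(ldd /bin/sh | awk '/libc\.so/ {print $3}')
objdump -T "$libc" | grep 'GLIBC_' | head -n 5
```

Each line shows a symbol together with the version tag (e.g. GLIBC_2.1) that the loader matches against a program's requirements.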


Well, I seem to have said a lot more here than I intended to. Perhaps it was helpful to the understanding of why the problem exists, or perhaps not.

As amigo points out, there can be various reasons (in addition to those I've mentioned above) why a library that one would expect to be compatible is not actually compatible. Still, you will find that many will work as expected, so you may not need to segregate every library used by a program. But if you would be more comfortable doing so, go for it. Much as I personally like the efficient use of RAM and disk space (hence my choice of a distro that fits on one sixth of a CD), I realize that we no longer live in a world where RAM is measured in kilobytes and disk drives are measured in megabytes -- if you've got the space, use it as you like.

Either way, I would agree with amigo that your best bet would be to place any needed libraries that do not already exist on the distro, and any that are incompatible with the existing libraries, in a separate directory, and use a wrapper script to set LD_LIBRARY_PATH before launching the program.
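Such a wrapper can be just a few lines. A sketch, assuming the program and its private libraries were placed under a made-up /opt/myapp directory (all names here are illustrative, not from any real package):

```shell
# Write a hypothetical wrapper script; "/opt/myapp" and "myapp" are
# illustrative names only.
cat > /tmp/myapp-wrapper <<'EOF'
#!/bin/sh
# Prepend the app's private lib dir so the loader searches it first,
# preserving any LD_LIBRARY_PATH the user already has set.
LD_LIBRARY_PATH=/opt/myapp/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export LD_LIBRARY_PATH
exec /opt/myapp/bin/myapp "$@"
EOF
chmod +x /tmp/myapp-wrapper
```

The ${VAR:+...} expansion avoids leaving a stray leading colon (an empty path element means "current directory" to the loader, which you usually don't want).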

Happy new year.
sunburnt


Joined: 08 Jun 2005
Posts: 5043
Location: Arizona, U.S.A.

PostPosted: Tue 01 Jan 2013, 17:30    Post subject:  

And that`s exactly what RoxApp or AppDir or AppPkg does.
They set up a run environment for the app. and then run it.
This is $PATH, $LD_LIBRARY_PATH, and links and dirs. needed.
[b]No union[/b] is needed for these apps., they work with or without one.

If Linux and its apps. were set up properly, there`d be no need for the links and dirs.
But the original creators of Unix couldn`t have known of this type of use.

amigo has a RoxApp setup that compiles the app. for the distro ( very nice ).
This is the best arrangement for installing odd apps. into odd Linux distros.

As I`ve said, Ubuntu Lucid-Precise are very compatible with the Puppy versions.
I`m sure there`s a few apps. that would have problems, but not many.
So AppPkg or RoxApps made from these should work very well.

AppPkg is compatible with RoxApps, but it allows multiple apps. in one.
It allows app. specific local libs., and AppPkg shared local libs., and it
uses the Linux O.S. shared libs too of course. So no lib. conflicts...

There are other things that keep an app. from working, but libs. are a big one.
greengeek

Joined: 20 Jul 2010
Posts: 2755
Location: New Zealand

PostPosted: Wed 02 Jan 2013, 04:11    Post subject:  

npierce wrote:
OK, so you are basically trying to prevent the age-old problem of newly installed packages overwriting shared libraries with versions that are incompatible with previously installed programs. You want each program to be able to find a library that is compatible with it without stepping on the toes of other programs' libraries.
Yes you've summarised it nicely.

Quote:
Still, while the implementation of library versioning is imperfect, it might be worth reviewing here how things are SUPPOSED to work.
Yes, thank you for the indepth info. It is very helpful. It seems to me there will be times when I want to try and do the right thing, and other times when I will want to cut corners, and just get something going the quick and dirty way - but it helps a lot to understand the risks and potential solutions.

Quote:
Perhaps it was helpful to the understanding of why the problem exists, or perhaps not.
Very much so. Thanks.

Quote:
I realize that we no longer live in a world where RAM is measured in kilobytes and disk drives are measured in megabytes -- if you've got the space, use it as you like.
Somehow buying a PC that has more than 1 GB of RAM seems morally wrong. And I have heaps of older machines that would benefit from a lean O.S. Apart from that I feel very insecure using software that is bloated and inefficient. Rightly or wrongly it makes me feel that it is potentially full of spyware and other nasties. (ok, it's probably not...)

Quote:
Either way, I would agree with amigo that your best bet would be to place any needed libraries that do not already exist on the distro, and any that are incompatible with the existing libraries, in a separate directory, and use a wrapper script to set LD_LIBRARY_PATH before launching the program.
Excellent summary. Thanks. On that note I shall mark the topic solved, although I am still very keen to hear of other methods of "sandboxing" apps to avoid conflicts.

sunburnt wrote:
And that`s exactly what RoxApp or AppDir or AppPkg does. They set up a run environment for the app. and then run it. This is $PATH, $LD_LIBRARY_PATH, and links and dirs. needed. [b]No union[/b] is needed for these apps., they work with or without one.
Thanks. I will look into this a bit more. How do these apps compare with CDE for usability or practicality? Totally different?
npierce

Joined: 28 Dec 2009
Posts: 858

PostPosted: Wed 02 Jan 2013, 08:41    Post subject:  

You're welcome. I'm glad to hear it was helpful. Good luck with your project.
sunburnt


Joined: 08 Jun 2005
Posts: 5043
Location: Arizona, U.S.A.

PostPosted: Wed 02 Jan 2013, 17:33    Post subject:  

I looked at CDE and I didn`t like how the packages are built ( usage ).
I think it`s messy making them and also messy using them ( try one...).
CDE only shares or includes libs., it doesn`t manage them like AppPkg.
The concept is somewhat similar to RoxApps and AppPkg, but I did not
see that CDE was any better in any regard, and not so good in some.
All of these package types are; No union, and no install or uninstall.

RoxApps and AppPkg are easy to make and work well ( simple is good ).

I`ve designed a small minimal Linux O.S. that only uses AppPkgs.
The O.S. loads into ~ 30 - 40 MB of ram. Apps. run from ram or device.
It only has rxvt, text editor, a few other utilities, and an AppPkg builder.
All other apps. are AppPkgs, so they`re easily added and removed.
Basing it on stable Ubuntu versions means no app. compiling`s needed.
greengeek

Joined: 20 Jul 2010
Posts: 2755
Location: New Zealand

PostPosted: Wed 02 Jan 2013, 17:36    Post subject:  

I'd be keen to have a try of your OS. Any links?
sunburnt


Joined: 08 Jun 2005
Posts: 5043
Location: Arizona, U.S.A.

PostPosted: Wed 02 Jan 2013, 17:53    Post subject:  

The only working setup I made was based on Tiny Core Linux as it`s close.
Puppy`s boot is complex and it uses a union FS, Tiny Core doesn`t.

It`s like John Murga`s version of Puppy, or the Puppy PXE LAN boot setup.
2 files, the kernel and image.gz ( like the old-old Puppy ), the image file
loads to ram and has the root file system and the main Squash file in it.
No save file, it uses a save dir., but uses a save file for non-Linux partitions.
And that`s about it! A very simple setup indeed.

I haven`t made a complete working O.S. of it as I realized the apps. were
the most critical part. So I`ve been working on AppPkg to get it ready.
Then I`ll turn my attention to the O.S. to complete the picture.
greengeek

Joined: 20 Jul 2010
Posts: 2755
Location: New Zealand

PostPosted: Wed 02 Jan 2013, 19:26    Post subject:  

If your OS ends up having a shorter boot time than puppy I think it will be an interesting alternative to trial. Those older puppies are quite intriguing in terms of their ability to do a task or two very quickly and I'd love to see one tailored to run ok on a wide range of hardware (including new gear). I'd probably be looking at having a mix of OS installed - some to handle the normal varied desktop type of environment (like Slacko and Lucid etc), and some cut down OSes to facilitate specific functions that I want to do quickly, without bloat.

What I'm finding interesting with pupngo2012 is that despite its lack of bulk it runs on even my newest gear (admittedly my netbooks are 3 years old now, so hardly "new"). Maybe that is to do with the zdrive setup. Anyway, the idea of a small, fast puppy core with a statically-linked word processor grafted into it is my first goal. So much to learn and so little time...
sunburnt


Joined: 08 Jun 2005
Posts: 5043
Location: Arizona, U.S.A.

PostPosted: Wed 02 Jan 2013, 22:04    Post subject:  

Having to reboot all the time is a pain. One O.S. should be as good as another.
Why a W.P. O.S. ?

Eventually I intend the device modules to be compiled into the kernel.
So each kernel would be for a specific PC ( motherboard ).
This would speed up the boot, no modules to load. Also no union to setup.

Puppy has lots of boot methods, this makes for varying boot times.
Booting from CD - DVD is slow; those devices were never meant for an O.S.
Boot methods would be; HD, USB, PXE.
greengeek

Joined: 20 Jul 2010
Posts: 2755
Location: New Zealand

PostPosted: Thu 03 Jan 2013, 03:16    Post subject:  

sunburnt wrote:
Why a W.P. O.S. ?.
It's just my observation that the more effort goes into grafting multiple programs into a cut-down OS like puppy, the higher the chance of conflicts, and of unnecessary system load. Some of my machines are old and lacking in ram and they run faster with slim OSes. There are many nights where all I am doing is typing up documents and a WP is all I need. The faster the system runs the more productive I am. Plus it is a learning experience for me to build a puppy one step at a time. If I can't successfully remaster a puppy with nothing more than a single WP, there's no point in trying a more complex one Smile
sunburnt


Joined: 08 Jun 2005
Posts: 5043
Location: Arizona, U.S.A.

PostPosted: Thu 03 Jan 2013, 04:50    Post subject:  

I see and understand that. I guess my thought is: Load the SFS you want.
Remastering for apps. is kinda silly considering Puppy`s flexibility.

So... Your stripped Puppy and various SFS files that load-on-the-fly.
There`s How-To pages that talk about removing apps. from the main SFS.
What`s needed is an SFS load-on-the-fly GUI ( maybe one`s been made ).

Puppy`s Boot Manager doesn`t offer Profiles ( I always thought it should ).
But it doesn`t matter, just boot with no SFS files, and load / unload them.

I argued for this type of setup years ago. Many have liked the idea.
Then I realized the union was not needed and it has un-fixable problems.
So my simplified O.S. gradually took shape. AppPkg is a critical part of it.
tallboy


Joined: 21 Sep 2010
Posts: 454
Location: Oslo, Norway

PostPosted: Fri 04 Jan 2013, 08:43    Post subject:  

greengeek, for some info on almost similar ideas, and in case this distro is unfamiliar to you, you may get some inspiration by taking a look at how GoboLinux is made:

On GoboLinux, Wikipedia wrote:
An alternative distribution which redefines the file system hierarchy by installing everything belonging to one application in one folder under /Programs, and using symlinks from /System and its subfolders to point to the proper files.
(my personal view: links suck!)

tallboy

_________________
True freedom is a live Puppy on a multisession CD/DVD.
Powered by phpBB © 2001, 2005 phpBB Group