Puppy Linux Discussion Forum
Puppy HOME page : puppylinux.com
"THE" alternative forum : puppylinux.info
 

All times are UTC - 4
 Forum index » Off-Topic Area » Programming
Static Linking Considered Harmful
Page 1 of 2 [19 Posts]
disciple

Joined: 20 May 2006
Posts: 6427
Location: Auckland, New Zealand

Posted: Sun 19 Feb 2012, 05:49
Subject: Static Linking Considered Harmful
Subject description: "Conclusion: Never use static linking!"
 

I found this on Ulrich Drepper's website - I'm not sure if he wrote it. According to Wikipedia he is the lead contributor and maintainer of glibc.
http://www.akkadia.org/drepper/no_static_linking.html

His site also includes things like tutorials on "Optimizing with gcc and glibc" and "How to Write Shared Libraries".

_________________
DEATH TO SPREADSHEETS
- - -
Classic Puppy quotes
- - -
Beware the demented serfers!
nooby

Joined: 29 Jun 2008
Posts: 10557
Location: SwedenEurope

Posted: Sun 19 Feb 2012, 06:02

Do all Puppies use static linking, and what can it do
in the worst-case scenario?

_________________
I use Google Search on Puppy Forum
not an ideal solution though
Terryphi


Joined: 02 Jul 2008
Posts: 759
Location: West Wales, Britain.

Posted: Sun 19 Feb 2012, 06:06

All perfectly reasonable arguments, but no answer to the library-version mayhem that is the curse of Linux. That is why people choose static linking.
_________________
Classic Opera 12.16 browser SFS package for Precise, Slacko, Racy, Wary, Lucid, Quirky, etc. available here :)
disciple

Joined: 20 May 2006
Posts: 6427
Location: Auckland, New Zealand

Posted: Sun 19 Feb 2012, 06:43

Have you really seen that "mayhem"? I honestly haven't.

I mentioned the other day that I tend to wonder if "dependency hell" on Linux is mostly urban legend.

I do see it on Windows at work all the time. Most programs ship their own versions of all their dependencies, kept in their own folders (just as bad as static linking for wasting space and bandwidth). But the odd application installs things in the main "system" folder, and very often these are obsolete versions that then break a bunch of other programs, because they override the versions those programs keep in their own folders. If there were a standard "repository" for Windows (there is a real package manager, used by cygwin and osgeo4w and various other things), there wouldn't be a problem, because all the programs would be compiled against the same current libs.

disciple

Joined: 20 May 2006
Posts: 6427
Location: Auckland, New Zealand

Posted: Sun 19 Feb 2012, 06:45

I guess what I'm really saying is that if the package management system works acceptably, it is the answer to your "mayhem".
technosaurus


Joined: 18 May 2008
Posts: 4335

Posted: Mon 20 Feb 2012, 00:18

Puppy is mostly dynamically linked --- that's why things can break when you "upgrade" libraries ... not so when statically linked.

go ahead and upgrade libxcb ... and have fun
upgrade from gtk 2.16 to any version past 2.18 and be annoyed

I would take any advice from the maintainer of wontfix-libc with a grain of salt.

The problem isn't really shared libs either - it's the crap GNU tools we use to build them, which unnecessarily link in symbols that are not needed, because pkg-config wrongly says to do so (or auto* thinks it did).

Hint: when you have a properly configured toolchain you can build a nearly unbreakable gtk2 binary with
gcc $CFLAGS `pkg-config gtk+-2.0 --cflags` *.c -o outputbinary $LDFLAGS -lgtk-x11-2.0

But the stupid autotools link in the entire friggin' dependency chain directly, causing every used function to get its own special spot in the global offset table so that it can theoretically start .0000001s faster - so long as nothing _ever_ moves, changes, or gets rebuilt with slightly modified options or compiler flags ... then it loads much, much slower (not to mention creating an unnecessarily larger binary).
Then god forbid you want or need to upgrade to a version with a changed API ... say xcb ... even though only libX11 depends on it directly (plus a few less popular apps), nearly everything built against libX11 will break ... no problem - just recompile libX11 and you're good, right? Nope - the linker listened when you told it to link libxcb directly, so everything you compiled with autotools is now broken.

_________________
Web Programming - Pet Packaging 100 & 101
disciple

Joined: 20 May 2006
Posts: 6427
Location: Auckland, New Zealand

Posted: Mon 20 Feb 2012, 03:19

So are there alternatives to the GNU tools that solve those problems?
technosaurus


Joined: 18 May 2008
Posts: 4335

Posted: Mon 20 Feb 2012, 18:32

disciple wrote:
So are there alternatives to the gnu tools that solve those problems?
Yes - jwm doesn't use them, mtPaint doesn't (they have their own configure scripts), and it is perfectly acceptable to have an editable Makefile (or several) or some other custom build script.

The problem is, we dropped the ball so long ago by not integrating the dev environment with the packaging environment that now it's really too late for a small team to go back, pick up the ball and "do it right" ... but we can at least do it better.

On a separate note - just for reference, I can build (and have built) a fully self-contained 2.6.32 kernel with a statically linked, built-in userland including X, in a single 1 MB kernel image that will run in under 4 MB of RAM ... not really possible with a shared glibc ... but then again I am using multicall binaries built with my own userland build scripts (only because it was easier for me to do it that way ... not something I would want to do for _every_ package by myself).

disciple

Joined: 20 May 2006
Posts: 6427
Location: Auckland, New Zealand

Posted: Mon 20 Feb 2012, 20:48

Quote:
The problem is, we dropped the ball so long ago by not integrating the dev environment with the packaging environment, that now its really too late for a small team to go back and pick up the ball and "do it right" ... but we can at least do it better

I'm struggling to follow you here - have you come straight from the package management thread by any chance?
Who is the "we" you refer to? Puppy packagers? Developers of Linux software in general? Distro builders in general?

technosaurus


Joined: 18 May 2008
Posts: 4335

Posted: Tue 21 Feb 2012, 00:30

disciple wrote:
Quote:
The problem is, we dropped the ball so long ago by not integrating the dev environment with the packaging environment, that now its really too late for a small team to go back and pick up the ball and "do it right" ... but we can at least do it better

I'm struggling to follow you here - have you come straight from the package management thread by any chance?
Who is the "we" you refer to? Puppy packagers? Developers of Linux software in general? Distro builders in general?
Not just Linux - pretty much all *nixes. Have you ever watched what a ./configure script does, or gone through one? OMG, what a disaster. I find it hilarious when I download a 1000-byte program with a 100 KB configure script - Rob Landley says it best:
http://landley.net/notes-2011.html#28-08-2011
*BSD has bsdbuild and others, which do essentially the same thing.

My point with integrating dev and packaging was that all of the garbage the configure script does could already be done by a properly set-up package management system.
For example (not well thought out, just a "for instance"):
if a library (libmyclib) provides snprintf, it could add the following to the <systemconfig_file>:
#define HAS_SNPRINTF -lmyclib
which would not only tell the system that we have snprintf, but also how to link it.
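The idea can be sketched in a few lines of shell (libmyclib, HAS_SNPRINTF and systemconfig.h are the hypothetical names from the example above, not real files):

```shell
# A package manager installing libmyclib could record the capability
# and the link flag together in one place:
cat > systemconfig.h <<'EOF'
#define HAS_SNPRINTF -lmyclib
EOF

# A build script can then recover the flag directly, instead of
# re-probing the system with a ./configure run:
SNPRINTF_LIBS=$(sed -n 's/^#define HAS_SNPRINTF //p' systemconfig.h)
echo "$SNPRINTF_LIBS"   # -lmyclib
```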

... the only plausible way to even attempt this (and get it into the mainstream) is to try to shim it into the autotools caching mechanism, to make it think it has already verified everything.

.....sorry to get off topic, but getting back to shared vs. static

Shared libs are vulnerable to this:
LD_PRELOAD=/tmp/vicious_attacklib.so <binary>

If a shared lib has a vulnerability, ALL of the programs linked against it do too (with a static link, some programs may have been linked against non-vulnerable versions, or the vulnerable code may not even be linked in if it isn't used).

and it is FUD that static binaries are slower (in fact they are ~100-4000% faster)
http://sta.li/faq


Quote:
* fixes (either security or only bug) have to be applied to only one place: the new DSO(s). If various applications are linked statically, all of them would have to be relinked. By the time the problem is discovered the sysadmin usually forgot which apps are built with the problematic library. I consider this alone (together with the next one) to be the killer arguments.

Breakages only have to be applied in one place too :)
If you maintain your source tree, it's pretty easy to figure out what links against what using simple tools (find and grep),
and to verify changes (or the lack of them) using edelta or xdelta.

Quote:
* Security measures like load address randomization cannot be used. With statically linked applications, only the stack and heap address can be randomized. All text has a fixed address in all invocations. With dynamically linked applications, the kernel has the ability to load all DSOs at arbitrary addresses, independent from each other. In case the application is built as a position independent executable (PIE) even this code can be loaded at random addresses. Fixed addresses (or even only fixed offsets) are the dreams of attackers. And no, it is not possible in general to generate PIEs with static linking. On IA-32 it is possible to use code compiled without -fpic and -fpie in PIEs (although with a cost) but this is not true for other architectures, including x86-64.

Yes, because static binaries aren't nearly as vulnerable to the primary vectors for those exploits, such as LD_* attacks and ldd escalations (people put locks on doors, not walls).
You _can_ "statically" link a PIE - just compile your "static" lib(s) with -fpic (you will get the dirty pages and other PIC overhead, but at least the unused code will be removed).

Quote:
* more efficient use of physical memory. All processes share the same physical pages for the code in the DSOs. With prelinking startup times for dynamically linked code is as good as that of statically linked code.

No, they share _some_ pages (only read-only ones), add quite a few extra dirty pages, and if you prelink, load times skyrocket once you change a single shared lib - don't ever do it, it will suck almost immediately.
I have tested this with a plethora of compiler/linker optimizations, hacks and tricks, and the closest I could get to the startup speed of the static binary counterpart was still only half as fast.

Quote:
* all kinds of features in the libc (locale (through iconv), NSS, IDN, ...) require dynamic linking to load the appropriate external code. We have very limited support for doing this in statically linked code. But it requires that the dynamically loaded modules available at runtime must come from the same glibc version as the code linked into the application. And it is completely unsupported to dynamically load DSOs this way which are not part of glibc. Shipping all the dependencies goes completely against the advantage of static linking people site: that shipping one binary is enough to make it work everywhere.

another wontfix glibc bug

Quote:
* Related, trivial NSS modules can be used from statically linked apps directly. If they require extensive dependencies (like the LDAP NSS module, not part of glibc proper) this will likely not work. And since the selection of the NSS modules is up the the person deploying the code (not the developer), it is not possible to make the assumption that these kind of modules are not used.
yet another wontfix glibc bug
Quote:
* no accidental violation of the (L)GPL. Should a program which is statically linked be given to a third party, it is necessary to provide the possibility to regenerate the program code.

Seriously - you don't think it is possible to accidentally violate the LGPL? If you can't remember which library version a program is linked against (because you didn't track it), what magic makes you remember the patching you did to build your code?
If you are statically linking, there is no doubt about whether you need to include the static libs of LGPL libraries.

Quote:
* tools and hacks like ltrace, LD_PRELOAD, LD_PROFILE, LD_AUDIT don't work. These can be effective debugging and profiling, especially for remote debugging where the user cannot be trusted with doing complex debugging work.

Exactly - but there are other tools that do work and _aren't_ a giant gaping security hole ... tools that aren't designed specifically for shared libraries (strace, for instance) work fine ... that's like saying hammers don't make very good screwdrivers.

disciple

Joined: 20 May 2006
Posts: 6427
Location: Auckland, New Zealand

Posted: Tue 21 Feb 2012, 07:22

Quote:
http://sta.li/faq

Thanks, that's a great link, although it would be good to see a lot of real-life numbers, particularly as those guys are focused on small programs. As a user I don't think I generally care about small programs (exceptions would be a few things used by shell scripts, like Pburn, but for most of those you would ideally use busybox anyway). Where I would notice a big performance increase is in big programs like browsers. They say:
Quote:
usually big static executables (which we try to avoid) easily outperform dynamic executables with lots of dependencies

I'm guessing "easily outperform" is a lot less than 4000%... but where are the numbers?
How common are "Good libraries" that
Quote:
implement each library function in separate object (.o) files, this enables the linker (ld) to only extract and link those object files from an archive (.a) that export the symbols that are actually used by a program.
?

================
Off-topic: have you tried Stali or Sabotage Linux or anything like that? I was quite interested in a distro based around uClibc (or similar) and busybox, but it looked like they weren't that viable (alive, and with a good selection of apps). A distro based on static linking would be even more interesting.

technosaurus


Joined: 18 May 2008
Posts: 4335

Posted: Tue 21 Feb 2012, 12:39

disciple wrote:
How common are "Good libraries" that
Quote:
implement each library function in separate object (.o) files, this enables the linker (ld) to only extract and link those object files from an archive (.a) that export the symbols that are actually used by a program.
?
Not very - I can think of one that does it (dietlibc), but it is not really "Good" for other reasons.
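Even without one .o per function, most of the same dead-code stripping can usually be had from the toolchain itself; a sketch (lib.c and main.c are made-up, and this assumes GNU gcc/binutils):

```shell
cat > lib.c <<'EOF'
int used(void)   { return 42; }
int unused(void) { return 7; }
EOF
cat > main.c <<'EOF'
int used(void);
int main(void) { return used() == 42 ? 0 : 1; }
EOF

# -ffunction-sections puts every function in its own section, so the
# linker's --gc-sections can drop unreferenced ones even though both
# functions live in the same archive member.
gcc -ffunction-sections -c lib.c main.c
ar rcs libdemo.a lib.o
gcc main.o -L. -ldemo -Wl,--gc-sections -o demo
nm demo | grep -w unused || echo "unused() was garbage-collected"
```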
Quote:
================
Off-topic: have you tried Stali or Sabotage Linux or anything? I was quite interested in a distro based around uClibc or something and busybox, but it looked like they weren't that viable (alive and with a good selection of apps). A distro based on static linking would be even more interesting.
Goingnuts and I have been trying to marry the best of both worlds by merging the best mix of smaller (but still useful) tools, static-build advantages, small replacement libraries, the multicall binary (mcb) concept, and compiler/linker optimizations.
We try to strike a balance between size, functionality, etc.
For instance, one mcb contains the X11 apps ... xinit, Xvesa, jwm, rxvt
another has the gtk1 apps ... ROX-Filer, Minimum Profit, dillo1, Xdialog, mtPaint, aumix
I did test this with multiple apps open, compared to the same mcb built against my Wary box's shared libs: resource usage in Wary increased at nearly double the rate per app. That is fairly consistent with the firefox and seamonkey builds from lamarelle.org (in case you want a "real world" example), though they only use static mozilla libs (well, mostly).


Last edited by technosaurus on Wed 22 Feb 2012, 14:23; edited 1 time in total
Aitch


Joined: 04 Apr 2007
Posts: 6825
Location: Chatham, Kent, UK

Posted: Tue 21 Feb 2012, 13:31

another good thread

thanks disciple/techno

techno

Is there any light down the BSD tunnel, or are all 'nixes burdened with the packaging/dependency/build problems?

Aitch :)
wjaguar

Joined: 21 Jun 2006
Posts: 254

Posted: Tue 21 Feb 2012, 14:24

technosaurus wrote:
Have you ever watched what a ./configure script does or gone through one? OMG, what a disaster, but I find it hilarious when I download a 1000 byte program with a 100kb config script - Rob Landley says it best

In 2006, when I was choosing which image editor project to contribute my code to, one of my primary criteria was "a configure script which I could read" :) mtPaint fit the bill; GIMP didn't. The rest is history. :)
sunburnt


Joined: 08 Jun 2005
Posts: 5016
Location: Arizona, U.S.A.

Posted: Fri 02 Mar 2012, 13:15

technosaurus: Have you done a statistical analysis of common library usage?
The first thing to do would be making a list of relevant apps, then taking statistics on their libs to use in deciding which should be static and which shared.
The thought being: compile libs statically if they're small and seldom used.
Though even common libs with many different versions would qualify too: the main, most commonly used version would be shared and the others static.