I accidentally deleted glibc

Puppy related raves and general interest that doesn't fit anywhere else
jamesbond
Posts: 3433
Joined: Mon 26 Feb 2007, 05:02
Location: The Blue Marble

I accidentally deleted glibc

#1 Post by jamesbond »

I accidentally removed glibc.

I was running Fatdog build process and I wanted to remove glibc from its chroot.
The correct command was this:

Code: Select all

ROOT=chroot removepkg glibc32 glibc
but I typed in the wrong way:

Code: Select all

removepkg ROOT=chroot glibc32 glibc
This had the unintended effect of attempting to remove a package named ROOT=chroot (which didn't exist), and then glibc32 and glibc - from the running system rather than from the chroot.

Of course it wasn't fully successful, but the dynamic linker
/lib64/ld-linux-x86-64.so.2 was deleted, and that's enough to stop
almost anything.
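For anyone wondering why the argument order matters: in POSIX shells, `NAME=value cmd` sets an environment variable for that one command, while `cmd NAME=value` passes the string as an ordinary argument. A minimal sketch of the difference, using a hypothetical stand-in script instead of the real removepkg:

```shell
# Stand-in for removepkg: prints its ROOT setting and its arguments.
cat > /tmp/fakepkg <<'EOF'
#!/bin/sh
echo "ROOT=${ROOT:-/}"
echo "args: $*"
EOF
chmod +x /tmp/fakepkg

# Correct form: ROOT is an environment variable; only packages are args.
ROOT=chroot /tmp/fakepkg glibc32 glibc
# -> ROOT=chroot
# -> args: glibc32 glibc

# Wrong form: "ROOT=chroot" becomes the first *package name*, and ROOT
# stays at its default - the live system.
/tmp/fakepkg ROOT=chroot glibc32 glibc
# -> ROOT=/
# -> args: ROOT=chroot glibc32 glibc
```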

What to do? Read more: http://lightofdawn.org/blog/?viewDetailed=00179
Fatdog64 forum links: [url=http://murga-linux.com/puppy/viewtopic.php?t=117546]Latest version[/url] | [url=https://cutt.ly/ke8sn5H]Contributed packages[/url] | [url=https://cutt.ly/se8scrb]ISO builder[/url]

amigo
Posts: 2629
Joined: Mon 02 Apr 2007, 06:52

#2 Post by amigo »

Surely you have this:
installpkg glibc32 glibc
although your installpkg & Co. are probably hostage to glibc. So, reboot with some sort of live CD and then:
ROOT=/path/to/mounted/partition installpkg glibc32 glibc
Are your install/removepkg homegrown or from slackware or??

jamesbond
Posts: 3433
Joined: Mon 26 Feb 2007, 05:02
Location: The Blue Marble

#3 Post by jamesbond »

amigo wrote:Surely you have this:
installpkg glibc32 glibc
although your installpkg&Co are probably hostage to glibc.
Exactly. installpkg is a shell script, and /bin/sh requires glibc to run.
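As a side note on the symptom (a simulation, not anything Fatdog-specific): when a script's interpreter - or a binary's dynamic linker - is missing, exec() fails with ENOENT, so the shell reports "No such file or directory" about a file that plainly exists. Deleting ld-linux*.so.2 produces exactly this confusing error for every dynamically linked program:

```shell
# Simulate the symptom with a script whose interpreter doesn't exist;
# a dynamic binary missing its ld-linux.so fails the same way.
cat > /tmp/demo-missing-interp <<'EOF'
#!/no/such/interpreter
echo hello
EOF
chmod +x /tmp/demo-missing-interp

# The file exists and is executable, yet exec() fails:
/tmp/demo-missing-interp 2>&1 || echo "exec failed with status $?"
```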
So, reboot with some sort of live CD and then:
ROOT=/path/to/mounted/partition installpkg glibc32 glibc
The point is to recover without re-booting :)
Are your install/removepkg homegrown or from slackware or??
From Slackware 14.0, modified to support some additional features (e.g. faster operation, support for an uninstall script) but still fully backward-compatible with Slackware's original. I chose to drop improvements that would break compatibility.

battleshooter
Posts: 1378
Joined: Wed 14 May 2008, 05:10
Location: Australia

#4 Post by battleshooter »

That was an interesting read James, very clever. I've done that many times when playing around with glibc, making the whole system unusable but not wanting to lose all the work I'd done. I'm constantly fascinated by the innovations you and the Fatdog team come up with :)
[url=http://www.murga-linux.com/puppy/viewtopic.php?t=94580]LMMS 1.0.2[/url], [url=http://www.murga-linux.com/puppy/viewtopic.php?t=94593]Ardour 3.5.389[/url], [url=http://www.murga-linux.com/puppy/viewtopic.php?t=94629]Kdenlive 0.9.8[/url]

amigo
Posts: 2629
Joined: Mon 02 Apr 2007, 06:52

#5 Post by amigo »

Plus, installpkg will call gzip or xz and tar, and possibly ln and others from doinst.sh.
With glibc gone you may not be able to start *anything* - even statically compiled stuff, because your ld-linux.so is gone.
I would have thought that you had thought about this situation before. You could maybe avoid this by using statically-compiled tools in the installpkg script and /bin/sh.

The other thing to think about here is how does one successfully upgrade glibc or any of the tools used during package installation. Study slackware's doinst.sh for glibc/glibc-solibs and also what upgradepkg does.

jamesbond
Posts: 3433
Joined: Mon 26 Feb 2007, 05:02
Location: The Blue Marble

#6 Post by jamesbond »

@battleshooter: thanks! Necessity is the mother of invention - we use the feature and find it lacking, so we strive to do better, so that next time we all have less pain when *** hits the fan, as my friend likes to say :)

@amigo:
plus, installpkg will call gz or xz and tar and possibly ln and others from doinst.sh
Indeed.
with glibc gone you may not be able to start *anything* - even statically compiled stuff, because your ld-linux.so is gone.
Statically compiled programs don't need ld-linux.so; they can still run. But most of the stuff in the system is indeed dynamically linked, so once ld-linux.so is gone it all stops working.
I would have thought that you had thought about this situation before. You could maybe avoid this by using statically-compiled tools in the installpkg script and /bin/sh.
Indeed. My solution is to always carry a statically compiled busybox with all applets, including ash, compiled with PREFER_APPLET.
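For illustration, a hedged sketch of how a statically linked busybox makes such a recovery possible. The paths, backup location and glibc version below are hypothetical; the point is only that a static binary runs without ld-linux.so:

```shell
# Assumption: /bin/busybox is statically linked, so it still runs even
# though every dynamically linked program (cp, ln, sh...) is dead.
# Restore the dynamic linker from a backup copy (hypothetical path):
/bin/busybox cp /mnt/backup/lib64/ld-linux-x86-64.so.2 /lib64/
# Recreate the symlink if the package shipped one (version hypothetical):
/bin/busybox ln -sf ld-2.17.so /lib64/ld-linux-x86-64.so.2
# From here, dynamically linked binaries work again and installpkg can
# be re-run normally.
```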
The other thing to think about here is how does one successfully upgrade glibc or any of the tools used during package installation. Study slackware's doinst.sh for glibc/glibc-solibs and also what upgradepkg does.
Of all the pkgtools scripts, upgradepkg is the only one I haven't modified. upgradepkg says that what it does is run installpkg and then remove the files not in the new package. That doesn't sound interesting to me, because you can always do that with a removepkg/installpkg combination.
There is a certain value to upgradepkg, but I think in reality it isn't very useful, especially for those of us running a layered filesystem; we can always boot pristine and do any "upgrades" we need using "ROOT=/mnt/savefile removepkg" or installpkg. This is much safer than actually using upgradepkg, because more often than not the binaries/libs cannot be overwritten (="upgraded") while they're locked/mmap-ed by a running process.
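As a concrete illustration of that layered-filesystem workflow (paths and the package file name are made up, and this assumes the savefile is mounted but not currently layered into the running system):

```shell
# Boot pristine (savefile not in the layer stack), then mount it and
# point the pkgtools at it with ROOT= - nothing inside the savefile is
# running, so no binary or library is locked or mmap-ed.
mount -o loop /mnt/sda1/fd64save.ext4 /mnt/savefile   # hypothetical path
ROOT=/mnt/savefile removepkg glibc
ROOT=/mnt/savefile installpkg glibc-2.17-x86_64-1.txz # hypothetical pkg
umount /mnt/savefile
```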
That being said, I probably need to start looking at it for completeness reasons (at least for performance improvement if not for anything else).

A glibc-solibs update is always very risky; you have said so yourself in many of your other posts. I don't do it, and I don't recommend anyone do it; which is why what battleshooter does is commendable :) she's trying, and succeeding, at glibc updates - something that is risky and in theory can cause a lot of breakage :)

amigo
Posts: 2629
Joined: Mon 02 Apr 2007, 06:52

#7 Post by amigo »

upgradepkg != removepkg + installpkg
That procedure won't allow you to update things like tar or xz

upgradepkg:
1. first moves the database file for the old package out of the way,
2. then installs the new package,
3. then removes files from the old package which are not in the new package,
4. installs the new package a second time to 'make sure' (actually to cover an old corner case when doing a major upgrade, from long, long ago).

Again #3, uses the (moved) file list from the old package to cross-reference.

Do you understand the reasoning in 1, 2 and 3? The problem is how to upgrade something which is being used by installpkg/removepkg. Upgrading glibc is the same, except it needs special handling in the doinst.sh in order to have working libc.so for all the other tools.

Of course, having a dedicated set of static-tools can ease that some. But the upgrade process is still the same: overwrite old files with new files, remove any old files which were not overwritten. Note also that link-creation must also be done at the proper point.
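The four steps above can be sketched in shell. This is a deliberate simplification of Slackware's real upgradepkg, and the function and database path below are illustrative, not the actual script:

```shell
# Simplified sketch of upgradepkg's flow - not the real script.
upgradepkg_sketch() {
    old=$1 new=$2
    adm=/var/log/packages                 # Slackware's package database
    # 1. Move the old package's database entry out of the way, so its
    #    file list survives under a temporary name.
    mv "$adm/$old" "$adm/$old-upgraded"
    # 2. Install the new package; new files overwrite old ones in place,
    #    so tools like tar/xz are never missing mid-upgrade.
    installpkg "$new"
    # 3. Remove files listed for the old package that the new package
    #    did not overwrite (cross-referencing the saved file list).
    removepkg "$old-upgraded"
    # 4. Install the new package a second time, to restore anything
    #    step 3 removed by mistake (the historical corner case).
    installpkg "$new"
}
```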

jamesbond
Posts: 3433
Joined: Mon 26 Feb 2007, 05:02
Location: The Blue Marble

#8 Post by jamesbond »

amigo wrote:upgradepkg != removepkg + installpkg
That procedure won't allow you to update things like tar or xz

upgradepkg:
1. first moves the database file for the old package out of the way,
2. then installs the new package,
3. then removes files from the old package which are not in the new package,
4. installs the new package a second time to 'make sure' (actually to cover an old corner case when doing a major upgrade, from long, long ago).

Again #3, uses the (moved) file list from the old package to cross-reference.

Do you understand the reasoning in 1, 2 and 3? The problem is how to upgrade something which is being used by installpkg/removepkg.
Thanks, I've read upgradepkg source and yes I understand both the steps and the reasoning.
Indeed, that's the only way to do it on a non-layered filesystem.
On a layered filesystem there is a much more comfortable way, as I wrote above; although obviously upgradepkg will work as well (if the package is created carefully).
Upgrading glibc is the same, except it needs special handling in the doinst.sh in order to have working libc.so for all the other tools.
I can't recall whether I have read glibc-solibs' doinst.sh or not. But I remember reading one package install script (it could have been Debian's or Slackware's) where the incoming glibc had so many files that were renamed during packaging and needed to be renamed back, in the correct order, to keep the (non-static) tools going until installation was complete. A lot of hassle :)
Of course, having a dedicated set of static-tools can ease that some.
I think it will help a lot. A static busybox is really helpful. But for this to work, the pkgtools must be written to use only features supported by busybox, which they are (I think), because pkgtools is also designed to be used from a Slackware "rescue floppy" (or is it the install disk? I can't recall) that only has busybox.

Even glibc can't help it - when you build glibc, it installs for itself /sbin/sln, a file about 600K in size. It is a statically-compiled "ln" which glibc uses to create its symlinks.
But the upgrade process is still the same: overwrite old files with new files, remove any old files which were not overwritten. Note also that link-creation must also be done at the proper point.
Indeed.
Fatdog64 forum links: [url=http://murga-linux.com/puppy/viewtopic.php?t=117546]Latest version[/url] | [url=https://cutt.ly/ke8sn5H]Contributed packages[/url] | [url=https://cutt.ly/se8scrb]ISO builder[/url]

amigo
Posts: 2629
Joined: Mon 02 Apr 2007, 06:52

#9 Post by amigo »

You also need to understand the nuances of the tar version and options being used, both to create and to unpack the packages. If you've ever wondered why Slackware uses an ancient version of tar (1.13), it's precisely because it did/does something which newer versions of tar did not. Someone did, at tar-1.27, create an option which restores the 'old thing' that tar-1.13 did. The thing is to not overwrite links-to-directories with a real directory when unpacking the archives. The 'admin' should be able to use links to mount-points, for instance, at his will, without the installation of a package un-doing that.

Also, installpkg unpacks the archive first, directly into place, before even running the doinst.sh. I once saw a user who complained that upgrading a package should not overwrite any old files...?? No clue there...

My replacement packaging system (tpkg/tpm) also allows for pre-install, pre-uninstall and post-uninstall scripts. I experimented with first unpacking packages in a separate location before moving them into the right place. But, just as with the slack pkgtools doing an install/remove/re-install sequence when upgrading, it becomes a long, slow process. The same scrutiny can be done by long-listing the tar archive before unpacking - and even unpacking just the install-scripts so that any pre-installation stuff can be done.

Listing the archive before installing is a good idea anyway -because it provides confirmation that the package is well-formed and complete. Anyway, the really critical points of installation and especially upgrade of any critical binaries, is when the links get destroyed by installpkg and then re-created by the doinst.sh.

tpkg/tpm now use links *in the archive*, as the newer tar does the proper thing with them. PatV's decision to use doinst.sh scripts was owing in part to the old tar's faulty behaviour when overwriting existing links. Most packaging systems frown on having package installation run scripts, because of security issues. Instead they use 'triggers', which cause the package installer to carry out the needed tasks - but only the tasks that it knows how to do - no arbitrary commands possible. This is also the way the Android app-installation process works. Each app contains its own installer binary and installation script - but the script language is like the triggers system because it can only do a limited set of things.

jamesbond
Posts: 3433
Joined: Mon 26 Feb 2007, 05:02
Location: The Blue Marble

#10 Post by jamesbond »

Thank you amigo.

In case you don't know it - I value your opinion highly. Over the years I've learnt from many of your educational posts in this forum; many of those I actually apply in practical situations.
People come and go, and I'm happy that 10 years later you're still here in this forum, sharing enlightenment with both the experienced and the inexperienced.
You also need to understand the nuances of the tar version and options being used, both to create and to unpack the packages. If you've ever wondered why Slackware uses an ancient version of tar (1.13), it's precisely because it did/does something which newer versions of tar did not. Someone did, at tar-1.27, create an option which restores the 'old thing' that tar-1.13 did. The thing is to not overwrite links-to-directories with a real directory when unpacking the archives. The 'admin' should be able to use links to mount-points, for instance, at his will, without the installation of a package un-doing that.
Indeed, package management is a complex thing. It's easy on the surface, but the devil is always in the details. I know of the issue you describe - we were bitten more than once by a package overwriting a symlink with a directory, back in the days when we still used the home-brew petget-compatible "fatdog-package-manager"; but I wasn't aware that the older tar preserves the symlink (or that this is the reason PatV sticks to the old tar).
Also, installpkg unpacks the archive first, and directly into place, before running the doinst.sh even. I once saw a user who complained that upgrading a package should not overwrite any old files...?? No clue there...
Indeed. Anything else would be slow. Actually, even the original installpkg is slow because it decompresses the package multiple times. This is fine for a 500 KB package, but when deploying a 100 MB wine package, for example, it is slow. I've modified installpkg so it decompresses only once and keeps the result cached until installation is done.
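A sketch of that decompress-once idea (simplified; the real installpkg does more, and the file names below are made up for the demo):

```shell
# Naive approach: every pass over the package re-runs the decompressor:
#   xz -dc pkg.txz | tar -t ...   (listing)
#   xz -dc pkg.txz | tar -x ...   (extraction)
# Cached approach: decompress once, then reuse the plain tarball.

# Build a tiny demo package first.
pkg=/tmp/demo-pkg.txz
mkdir -p /tmp/demo-tree && echo hello > /tmp/demo-tree/file.txt
tar -C /tmp/demo-tree -cJf "$pkg" file.txt

cache=/tmp/demo-pkg.tar
xz -dc "$pkg" > "$cache"           # decompress exactly once
tar -tf "$cache" > /tmp/demo-list  # listing reads the cached tarball
mkdir -p /tmp/demo-out
tar -C /tmp/demo-out -xf "$cache"  # extraction reads it again, no xz
rm -f "$cache"                     # drop the cache when done
```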
My replacement packaging system (tpkg/tpm) also allows for pre-install, pre-uninstall and post-uninstall scripts. I experimented with first unpacking packages in a separate location before moving them into the right place. But, just as with the slack pkgtools doing an install/remove/re-install sequence when upgrading, it becomes a long, slow process. The same scrutiny can be done by long-listing the tar archive before unpacking - and even unpacking just the install-scripts so that any pre-installation stuff can be done.
Interesting. When we moved from Fatdog 600 to 700 over 3 years ago, I looked for a new package manager that has the following criteria:
a) has separate CLI and GUI tools
b) has ability to pull from remote repo.
c) Repo maintenance is easy
d) In worst case situation when the tools are not available, you can unpack the package manually.

I couldn't find anything else other than pkgtools.
DEB is nice but complex. Same with RPM.
Repo maintenance for these two isn't straightforward either.
paco (now renamed to porg) is nice but it doesn't support remote repo.
pkgtools doesn't have remote capability but fortunately slapt-get handles that.
pkgtools also doesn't have GUI but gslapt fixes that.

I don't remember whether tpm/tpkg existed back then. But I do remember that I was considering src2pkg as the foundation for the Fatdog build system. It didn't allow me to build packages inside a chroot (that is, build against the libraries in a chroot instead of the host libraries), so in the end I wrote our own. I'm glad I did, because I learnt a lot of things along the way.
Listing the archive before installing is a good idea anyway -because it provides confirmation that the package is well-formed and complete. Anyway, the really critical points of installation and especially upgrade of any critical binaries, is when the links get destroyed by installpkg and then re-created by the doinst.sh.
Noted.
tpkg/tpm now use links *in the archive*, as the newer tar does the proper thing with them. PatV's decision to use doinst.sh scripts was owing in part to the old tar's faulty behaviour when overwriting existing links.
pkgtools supports links both inside the package and via doinst.sh. In all my packages, I always put links inside the tarball and not in doinst.sh. I actually don't see the benefit of doing it the other way; perhaps it's only useful when updating stuff like glibc or gcc libraries - the kind of stuff that pulls the rug out from under your feet?
Most packaging systems frown on having package installation run scripts, because of security issues. Instead they use 'triggers', which cause the package installer to carry out the needed tasks - but only the tasks that it knows how to do - no arbitrary commands possible. This is also the way the Android app-installation process works. Each app contains its own installer binary and installation script - but the script language is like the triggers system because it can only do a limited set of things.
Indeed. There is always a balance/trade-off between freedom and security. An unchecked doinst.sh script can easily destroy a working system (or worse). As Uncle Ben said - with great power comes great responsibility. But that's why we run as root :) In Fatdog we have sandbox to mitigate this problem somewhat - if you don't trust a package, install it in the sandbox first.
Fatdog64 forum links: [url=http://murga-linux.com/puppy/viewtopic.php?t=117546]Latest version[/url] | [url=https://cutt.ly/ke8sn5H]Contributed packages[/url] | [url=https://cutt.ly/se8scrb]ISO builder[/url]

amigo
Posts: 2629
Joined: Mon 02 Apr 2007, 06:52

#11 Post by amigo »

Thanks for your kind comments. I also respect your accomplishments, knowing full well what it means to cross-compile and/or bootstrap a new architecture. I guess we've never communicated much, but I enjoy being able to 'cut to the chase' with you.

src2pkg with chroot... From the start, src2pkg was not meant to be a *distro*-building tool, but simply a smart native package builder - one which could also be driven by other tools that could do whatever. I remember one user experimenting with using 'fakeroot' with src2pkg. IIRC, the only change needed in src2pkg might be its use of LD_LIBRARY_PATH. Since fakeroot also uses it, src2pkg would need to *append* to an already-set LD_LIBRARY_PATH.

I've long considered re-writing src2pkg to address the deep structural items which make some things hard, like building several packages from one source - beyond the usual doc, devel and nls sub-packages. But rewriting src2pkg with the same features would take forever. Instead, I'm thinking of writing a new tool just for building tpm packages - one which would use scripts quite similar to archlinux PKGBUILD scripts, and would be able to easily 'swallow' and adapt PKGBUILD scripts. Of course, the new tool would be able to do all the smart stuff with source-builds like src2pkg - just without the mess of outputting so many types of packages.

About the tar versions: tar-1.27 and above include an option, '--keep-directory-symlink', which provides the old tar-1.13 behaviour. Of course, versions of tar newer than 1.13 are easier to compile on modern systems.
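A small demonstration of that option (requires GNU tar 1.27 or newer; paths are made up for the demo). The admin's symlink-to-directory survives extraction, and the package's files land where the link points:

```shell
# Build an archive containing a real directory "data/" with one file.
mkdir -p /tmp/tardemo/src/data
echo payload > /tmp/tardemo/src/data/file
tar -C /tmp/tardemo/src -cf /tmp/tardemo/pkg.tar data

# On the target, "data" is a symlink the admin points at a mount-point.
mkdir -p /tmp/tardemo/target/storage
ln -s storage /tmp/tardemo/target/data

# --keep-directory-symlink follows the symlink instead of replacing it
# with a real directory, so the file lands under storage/ and the link
# survives the package installation.
tar -C /tmp/tardemo/target --keep-directory-symlink -xf /tmp/tardemo/pkg.tar
ls -l /tmp/tardemo/target/data
cat /tmp/tardemo/target/storage/file
```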
