The State of Package Management

What features/apps/bugfixes needed in a future Puppy

Should Puppy's package format be changed?

Yes, without backwards compatibility.
11
28%
Yes, with backwards compatibility.
10
26%
No, but the PET format should be standardized/stricter.
8
21%
No, the PET format works fine.
10
26%
 
Total votes: 39

Message
Author
User avatar
Ray MK
Posts: 774
Joined: Tue 05 Feb 2008, 09:10
Location: UK

#61 Post by Ray MK »

Doesn't anyone see the value of not using loose files in an O.S.?
Squash files can't get viruses (I believe...), and don't corrupt easily.
SFS files require almost no package management, only config. files.
They stay compressed, load to RAM, and can be swapped on-the-fly.
And they can be mounted from anywhere: hd, ram, usb, lan, and web.

One of Puppy's big strengths is its main SFS file holding most of the O.S.
And the devx file ("DEVeloper eXtra", I think...) is its second best setup.

Agreed - absolutely.
Especially now that we have so many good "on-the-fly" loaders/un-loaders.

Also good for those with ram challenged kit.

Best regards - Ray

jpeps
Posts: 3179
Joined: Sat 31 May 2008, 19:00

#62 Post by jpeps »

sunburnt wrote: SFS files require almost no package management, only config. files.
They stay compressed, load to RAM, and can be swapped on-the-fly.
And they can be mounted from anywhere: hd, ram, usb, lan, and web.
I don't know about that. All the configs have to match the current environment perfectly, or it won't load. I recall TC's issues...every time there was some minor change (like a file using "-" vs "_"), it wouldn't boot. It's probably fine if every app is completely self-contained.

What's to keep users from making and combining SFS apps now? I do. I recall having to remove whiteouts to get an SFS to load when there was a previous version in base (maybe something unique).
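For readers unfamiliar with the whiteout issue jpeps mentions: union filesystems like aufs hide files from lower layers with `.wh.`-prefixed marker files, and stray whiteouts baked into an SFS can mask files from layers below it. A rough sketch of finding and stripping them from an unsquashed tree (the /tmp paths are just examples, not real Puppy locations):

```shell
# Scratch directory standing in for an unsquashed SFS tree
mkdir -p /tmp/sfs-root/usr/lib
touch /tmp/sfs-root/usr/lib/libfoo.so.1           # a real file
touch /tmp/sfs-root/usr/lib/.wh.libbar.so.1       # aufs whiteout marker

find /tmp/sfs-root -name '.wh.*' -print             # list whiteouts
find /tmp/sfs-root -name '.wh.*' -exec rm -f {} +   # strip before remastering
```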

noryb009
Posts: 634
Joined: Sat 20 Mar 2010, 22:28

#63 Post by noryb009 »

Status update on makepuppypkg: I've done quite a bit of work on it, and it's working perfectly for me. However, I can't currently commit it to GitHub, but I'll hopefully have it up over the weekend.
but the new format would need to be flexible enough to be changed/extended, rather than replaced, in the future, else people will grumble and eventually go elsewhere with complaints like 'it's too hard to add apps'.
Definitely. Having a pet.specs that is separated by "|" may seem like a good idea, but it becomes hard to add more fields without either breaking backward compatibility or adding the field to the end. Having one field per line would fix this, as outdated package managers would ignore unknown fields, and package managers could have default values for outdated packages.
For example, for the openSUSE package ConsoleKit-0.4.5-6.2.2.i586.rpm, the pet.specs (in XML format) is:
<snip>
and this is for just one package!

As much as I would love to use it, I don't think that most puppy builders would be willing to spend that much time working on the package file for a single app.
That is the pet.specs, but package builders do not have to worry about it. The program 'rpmbuild' reads a much nicer-looking file and converts it into that format.
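The one-field-per-line idea suggested above can be sketched quickly. The key names and the `get_field` helper below are hypothetical, not an official Puppy format — the point is only that an old package manager can skip unknown keys and fall back to defaults for missing ones:

```shell
# Hypothetical one-field-per-line pet.specs (key=value)
cat > /tmp/pet.specs <<'EOF'
name=gnumeric
version=1.10.17
depends=+libgsf,+goffice
EOF

# Read a known key, falling back to a default when the field is absent,
# so new fields and old packages can coexist.
get_field() {                       # usage: get_field FILE KEY DEFAULT
    val=$(grep "^$2=" "$1" | head -n1 | cut -d= -f2-)
    printf '%s\n' "${val:-$3}"
}

get_field /tmp/pet.specs name ""        # -> gnumeric
get_field /tmp/pet.specs arch noarch    # unknown field -> default
```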

My 'makepuppypkg' script reads a file like Arch's PKGBUILD.
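For readers who haven't seen the PKGBUILD style: a recipe is just a shell fragment of variables plus a build function. The recipe below is an illustration only — the variable and function names are borrowed from Arch's makepkg conventions, not makepuppypkg's actual format:

```shell
# Hypothetical PKGBUILD-style recipe (names are assumptions, not the
# real makepuppypkg format)
pkgname=hello
pkgver=2.12
depends="glibc"
build_deps="gcc make"
source="http://ftp.gnu.org/gnu/hello/hello-$pkgver.tar.gz"

build() {
    # $srcdir and $pkgdir are conventions borrowed from Arch's makepkg
    cd "$srcdir/$pkgname-$pkgver"
    ./configure --prefix=/usr
    make
    make DESTDIR="$pkgdir" install
}
```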
Does that mean each and every dependency must be uploaded to a puppy repository?
In other distributions, if a package in a repository has a dependency, it would either be in that repository or an upstream one (in Arch Linux, an "extra" package can use a dependency from "core", but not vice versa).
And the devx file ("DEVeloper eXtra", I think...) is its second best setup.
I agree with you saying that SFS files allow save files to stay small, but this quote also shows the bad part about the current SFS files - you get everything, or nothing.
SFS files would be great if either:
- every package in puppy is in its own SFS file, or
- every package in puppy is in its own PET (or a new format) and users can make their own SFS files from it.

amigo
Posts: 2629
Joined: Mon 02 Apr 2007, 06:52

#64 Post by amigo »

Every file, dir and link on a fresh install (or bootable live) system, should be part of a single accountable 'unit'. Really, an SFS is no different from a package in some ways.
You nearly said it here:
"every package in puppy is in it's own PET"

The thing is that your concept of a single-file delivery doesn't hold up in practice - unless you are composing them from (usually) smaller units. An SFS which guarantees the usability of all its 'members' should be composed of one or more 'packages'. The SFS itself is not really a good unit to use for packages, simply because it's unhandy and CPU-intensive to work with from a creation standpoint.

An SFS is a file-system image, the same as an *.iso file or an ext2/ext3 image. That aspect of it has no bearing on its contents. You could also create and distribute 'mega-packages' which combine the files of one or more 'units' of files.

Because of the very nature of how software/libs are created, there will nearly always be something 'external' needed by your delivered package/FS-image. That means that the method of making your thing available and useful has to take that into account. It *is* possible (usually) to create fully static stuff, but if you go all the way with that, then everything needs *loads* of stuff which winds up being duplicated - maybe thousands of times. How many progs are you running which depend on glibc? Do you have any idea?

The whole concept of shared libs is meant to avoid that, so statically including libs is nearly never the right thing to do.

But the whole idea of including everything necessary in one unit is simply wrong - unless you really want to upgrade, re-compile and re-assemble everything every time just one thing changes. If you have a program which is gonna need libs (for example) which are produced from other sources, you need a way to make sure those libs are available when your program is run - you need dependency-resolution. But resolving depends doesn't mean just having 'that library' on your system. It needs to be 'that library' exactly as the one used when you compiled your program - some stray library with the same name, even the same version, may not work, because of compile-time options and any further libs that get linked in simply because they exist at compilation time. Dependency resolution means you have a way to track the existence and origin of every file on the system.

While it's possible to create a list of all files included in your 'unit', if that unit is really composed of lots of little things from all over, then you have an accounting mess. Processing any given source code is going to produce a limited number of files to be installed. The list of those files (as composed at build time) is a unit - a package, if you will. Of course there can be split packages, if needed. But mega-packages which combine bits from various sources completely mess up the scheme - as a rule. There are exceptions there also, where several small utilities from several sources might be combined into one package. But when you start combining things which are, or are likely to be, needed by any other programs, then your units start overlapping and accounting becomes difficult at best.

I'm tired, but instead of deleting this I'll let it stand just in case... You are going about it the wrong way around by addressing symptoms instead of the problem. The problem is that you have no way of accurately accounting for the necessary attributes of each and every item on your system at a particular time. That should be broken down into intelligible units. No, the user doesn't have to know or handle any of that, but 'devs' who don't do that have an unmaintainable mess.

jpeps
Posts: 3179
Joined: Sat 31 May 2008, 19:00

#65 Post by jpeps »

amigo wrote: You are going about it the wrong way around by addressing symptoms instead of the problem. The problem is that you have no way of accurately accounting for the necessary attributes of each and every item on your system at a particular time. That should be broken down into intelligible units. No, the user doesn't have to know or handle any of that, but 'devs' who don't do that have an unmaintainable mess.
Very true, and that will inevitably require proposed apps to be submitted to some central authority prior to posting... a horrible idea. Multiply the censored disappearing forum posting issue by 1000X. In the present system, there's no need to maintain a huge database replete with source code for every item. I have yet to experience a boot failure in puppy related to a mismatch with an app dependency.

noryb009
Posts: 634
Joined: Sat 20 Mar 2010, 22:28

#66 Post by noryb009 »

The problem is that you have no way of accurately accounting for the necessary attributes of each and every item on your system at a particular time.
With the current PET format, we don't know what each system item's attributes are, but we could with a new format. There could be a "provides" list which has all the provided programs/dependencies and their versions.
that will inevitably require proposed apps to be submitted to some central authority prior to posting....a horrible idea. Multiply the censored disappearing forum posting issue by 1000X.
It would "require" submitting a "build script" to a central authority, however this doesn't mean that the authority would have complete power. Any build script that works is better then nothing, and making a build script better is sometimes better. When a patch is in the gray area between being good or bad, it could be either be added as a comment in the build script for interested users to use, or hosted in a separate area which wouldn't be controlled by anyone (other then removing spam/malicious build scripts).
In the present system, there's no need to maintain a huge database replete with source code for every item.
Barry currently hosts the sources he uses. We wouldn't have to with a new system either, but it would be beneficial for everybody to have patches for the original source to make the program work with puppy. Hosting patches in a semi-centralized spot would make compiling new package versions and packages for new architectures much easier.
I have yet to experience a boot failure in puppy related to a mismatch with an app dependency.
You might not have experienced a boot failure, but many packages don't work right, or don't even start, with a mismatched or missing dependency.

jpeps
Posts: 3179
Joined: Sat 31 May 2008, 19:00

#67 Post by jpeps »

noryb009 wrote: You might not have experienced a boot failure, but many packages don't work right, or don't even start, with a mismatched or not found dependency.
ldd /app

If they were compiled somewhere else, there are no guarantees anyway. There's also no guarantee an updated app will be backward-compatible with everything. Even if it is, there's no guarantee that it will work correctly in every environment.

2byte
Posts: 353
Joined: Mon 09 Oct 2006, 18:10

#68 Post by 2byte »

Setting aside what packager to use for a moment, does this rough outline sound reasonable for a start?

Have a repository of proven build scripts like slackbuilds.org, with no precompiled anything.

Build scripts would list all relevant information, including:
Puppy version
Package version
Architecture: 486, 586, x86_32, x86_64
Packages required (on top of the base system) to build the current one
All configure arguments, listed in the script
General notes needed to successfully compile the application from a base system
Add others here ...

In order for the build script to be accepted as 'approved', it must compile the application on a base system (as it exists in the distro .iso) using only the official devx and any listed dependencies. The dependencies (other than the base system and devx above) would have to have their own approved build scripts. In other words, a developer could get the scripts to compile the dependencies and his app, to be tested on a base system with no other packages installed.

From here a separate repository of applications can be compiled for this distro release. In the package would be a reference to the build script used to create it. Once the dependencies are also in this repository the developer could install the approved dependency when building their package instead of starting with source.

I know there are missing details here but if we had something like this then maintaining our pups and packages would be much easier.

Am I getting warm?


jpeps
Posts: 3179
Joined: Sat 31 May 2008, 19:00

#69 Post by jpeps »

Presently, no devx has to be loaded. Generally, with the exception of needing a lib or two, current pets work across different puppy flavors... so they don't really need special build scripts. Compiling... let's say someone wants the latest mplayer... could take a long time, especially on older computers.

I'm guessing that if there was a great advantage to using build scripts, we'd be seeing more of them used already. Regarding the complexity of monitoring, approving, submitting, etc., I doubt that's going to happen (hopefully).

User avatar
sunburnt
Posts: 5090
Joined: Wed 08 Jun 2005, 23:11
Location: Arizona, U.S.A.

#70 Post by sunburnt »

# Comments on Barry making another add-on SFS of common libraries?

One for each of his major releases, so as to provide the needed base of libs.

noryb009
Posts: 634
Joined: Sat 20 Mar 2010, 22:28

#71 Post by noryb009 »

ldd /app
You're missing a few steps:
1) google one dep and find a .PET
2) install the .PET
3) run ldd /app again
4) go back to step 1
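The tedious loop above can be partly automated. A minimal sketch: it only reports which libraries the linker cannot resolve; mapping each one back to a .PET would still need a file-list database, which is the part the thread is arguing about.

```shell
# List shared libraries a binary needs that the linker cannot find.
missing_libs() {
    ldd "$1" 2>/dev/null | awk '/not found/ {print $1}' | sort -u
}

missing_libs /bin/ls    # on a healthy system this prints nothing
```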

2byte: You are very close to what I'm picturing. slackbuilds.org is close. I'm picturing a two-part repository: an official binary one for popular packages, maintained by a team of volunteers (but the community would still be allowed to submit changes), and a second build-script repository that doesn't have an approval process (but spam packages would still be deleted).

Many of those fields are needed in a build script, along with a few more.
In order for the build script to be accepted as 'approved' it must compile the application on a base system (as it exists in the distro.iso) using only the official devx and any listed dependencies
There would be a list of packages (gcc, make, etc.) that can be used to make programs. If a package needs an extra program to build (like bacon), it would have a separate "build_deps" field.
From here a separate repository of applications can be compiled for this distro release.
This is partly correct - there would be a repository of applications for each puplet, but it would sit on top of a shared repository. The shared repository might include firefox, while the puplet repository would include a firefox build that pops up asking to make firefox the default browser.
Compiling... let's say someone wants the latest mplayer... could take a long time, especially on older computers.
Compiling can take a very long time, which is why binaries are available. The problem with hosting a binary package for every program is that it takes up a lot of space. To balance the two, a combination can be used: popular programs can be binaries on a server, while less popular programs only have a build script, allowing users to compile them themselves. Mplayer is a common program, so there would be a binary for it. Some small projects aren't too popular, so they would have a build script (written by an interested user, not by the main developers) for other interested users to use.
If good packages are picked for the official binary repository, then most users would never have to use a build script.
I'm guessing that if there was a great advantage to using build scripts, we'd be seeing more of them used already.
Almost every distribution uses build scripts.

User avatar
Q5sys
Posts: 1105
Joined: Thu 11 Dec 2008, 19:49
Contact:

#72 Post by Q5sys »

Aitch wrote:Well, I dunno, but slackware has been good so far :D
.....and Arch seems to have the best package management/update process I've come across....

http://www.linuxforums.org/forum/applic ... itory.html

The missing ingredient from puppy's package management seems to be a database

http://en.wikipedia.org/wiki/Package_management_system

Has Fedora/Yum been tried, or gentoo/portage? :lol: :lol:

/aside
jemimah, can I ask for kernel headers.sfs and or kernel source to be made available for saluki - people are still having compile problems occasionally, and these were omitted from both Lupu and Slacko, AFAIK

Aitch :)
The Arch package system is great... but it also has a tendency to break things about once a month. Then you are left with an unusable system and have to spend time trying to figure out how to fix it. These issues are usually sorted within a day, but if you look in the Arch support forums you'll see tons of old threads about pacman breaking the system and people trying to find out how to fix it. It seems that the main devs want you to check the site for possible issues before you update, instead of updating and then searching for problems.

If this debate is ever sorted... I've still got a server ready to go for the repo.
Aitch wrote:
I'm with Jemimah
The problem isn't the ppm, it's the repos.....The only way any of this works at all is if puppy developers compile and test the packages, and specify the dependencies correctly, and if the users stick to installing stuff from the tested repositories only.
That seems simple enough...? - though I don't compile/develop.....

thanks devs! :D

Aitch :)
Can't src2pet and dir2pet simply be altered to run ldd on the binary and sed that output into a file to be used in the package for dependency checks?

And lastly since it relates to this issue... a thread about a package site.

amigo
Posts: 2629
Joined: Mon 02 Apr 2007, 06:52

#73 Post by amigo »

It's not pacman which breaks the system - it's the content of the packages. That's the problem with a rolling-update system. Some things, when rebuilt, require the immediate rebuilding of lots of other things. If the rolling updates were released as a complete 'tranche' of such related upgrades, then it would be more reliable.

You have the right idea about using ldd to figure out the depends, but that info must be cross-referenced against a list which shows which package contains the files you need. Plus, the output from ldd doesn't tell the whole story. Many times a program depends on another binary program in order to function - 'man' is a good example. It needs 'groff' in order to be functional, but groff is not a library, so it doesn't show up as a dependency of 'man'.
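The cross-referencing amigo describes can be sketched with a toy file-list database. Everything here is invented for the example — the /tmp/pkgdb path and the `*.files` naming are not where Puppy really keeps its package lists — but it shows the mapping from a library path to the package that owns it:

```shell
# Fake per-package file lists standing in for a real package database
mkdir -p /tmp/pkgdb
echo "/usr/lib/libgsf-1.so.114" > /tmp/pkgdb/libgsf-1.14.21.files
echo "/usr/lib/libjpeg.so.8"    > /tmp/pkgdb/libjpeg-8.files

# Which package's file list contains a given library?
owner_of() {
    for list in /tmp/pkgdb/*.files; do
        if grep -q "/$(basename "$1")\$" "$list"; then
            basename "$list" .files
            return 0
        fi
    done
    echo "UNKNOWN"
}

owner_of /usr/lib/libjpeg.so.8    # -> libjpeg-8
```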

User avatar
Q5sys
Posts: 1105
Joined: Thu 11 Dec 2008, 19:49
Contact:

#74 Post by Q5sys »

amigo wrote: It's not pacman which breaks the system - it's the content of the packages. That's the problem with a rolling-update system. Some things, when rebuilt, require the immediate rebuilding of lots of other things. If the rolling updates were released as a complete 'tranche' of such related upgrades, then it would be more reliable.
The issues I've had with pacman have all been from updates to the core repo. Changes to pacman itself seem to be nightmarish at times... other times it's seamless. I know about a month ago they required you to create a new pacman.conf file because of some weird change they made. Then a few months before that there was an issue with /etc/mtab. Before that, I know I had an issue with mkinitcpio.
Those types of issues are what cause most of the headaches for the people I know who use Arch. Otherwise they love it.
The things listed above I wouldn't really consider issues with a package. They come from changes to the core system which require special action above and beyond normal updates. However, a user has no idea about them until he gets a failed upgrade... or suddenly finds his system will not load properly after an upgrade. Sometimes I wish pacman had a feature where major changes like that would alert the user before installing.

User avatar
01micko
Posts: 8741
Joined: Sat 11 Oct 2008, 13:39
Location: qld
Contact:

#75 Post by 01micko »

To address amigo's concern, I made a script a while ago which checks deps of deps, and also has a switch to let you know if the dep is in the main puppy-sfs.

here

Probably needs some work..
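01micko's linked script isn't reproduced here, but the general "deps of deps" idea can be sketched as a recursive ldd walk with a seen-list, so each library is reported once. This is my own illustration under those assumptions, not his code:

```shell
# Recursively follow ldd output, printing each resolved library once.
seen=""
walk() {
    for lib in $(ldd "$1" 2>/dev/null | awk '/=> \// {print $3}'); do
        case " $seen " in *" $lib "*) continue ;; esac   # already visited
        seen="$seen $lib"
        echo "$lib"
        walk "$lib"          # the dep's own deps
    done
}
walk /bin/ls
```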
Puppy Linux Blog - contact me for access

User avatar
sunburnt
Posts: 5090
Joined: Wed 08 Jun 2005, 23:11
Location: Arizona, U.S.A.

#76 Post by sunburnt »

01micko; I made a similar utility years back.
It could make dependency tree files that would solve a lot of problems.

It's not just the needed library, but the exact library that's needed.
As amigo points out, change critical stuff and you need to redo everything.

# Gentoo Linux builds everything on the spot, if I understand correctly.
This solves problems, but eventually you'd probably need to redo it also.

jpeps
Posts: 3179
Joined: Sat 31 May 2008, 19:00

#77 Post by jpeps »

sunburnt wrote: It's not just the needed library, but the exact library that's needed.
As amigo points out, change critical stuff and you need to redo everything.
True, and the app isn't going to run if a dep is altered after it's loaded either. So why go through the hassle of moving to self-contained packages or some other cumbersome process? A good repository of libs would be a plus though (although they're generally fairly easy to find).

amigo
Posts: 2629
Joined: Mon 02 Apr 2007, 06:52

#78 Post by amigo »

A central 'dump' of loose files is not very helpful - more often than not you are going to need more than one file - for example, a 'real' library and the link that points to it. Most dependencies are really gonna need several files to work. A central repo of loose files is absolutely no help to the developer who tries to maintain it, either. You need a way to have a system of grouping related things together.
That's why a list of *everything* is also useless - it remains a nightmare for users and for devs. Also, individual filenames don't always tell you anything about the version of that file, and they certainly don't give you any info about compile options, etc.
No order is always gonna equal chaos.

User avatar
sunburnt
Posts: 5090
Joined: Wed 08 Jun 2005, 23:11
Location: Arizona, U.S.A.

#79 Post by sunburnt »

# Q:
But if an sfs of common libraries not in Puppy were built for each standard Puppy type,
and then Puppy's apps. built upon them, they should be reliable and well constructed?

It's short-lived of course; as soon as Barry makes a new Puppy, a new lib sfs is needed.
But this seems to be the best method of providing a solid base for Puppy?

As I said, I think media would be a big part of the needed libs. (but not only)
I've tried gstreamer apps. with little success. And ffmpeg is a real pain!

Thinking in retrospect, my success with binaries has been with simple apps.
My attempts at media with binary and source files have been mostly failures.

amigo
Posts: 2629
Joined: Mon 02 Apr 2007, 06:52

#80 Post by amigo »

The best way to have a solid base is to deal with workable units in an orderly way -just what I've been saying all along. Each source triggers a new unit of installable material. They must be compiled in a certain order so that everything works as expected. In order to know which unit a certain dependency belongs to, you need to account for each file.

Creating and using sfs's or other mountable file-system images is fine, for what it is. But the only rational way to construct them is by using individual packages - you install them into a subdir, then make your sfs out of that. That would at least eliminate some of the disorderliness. But having a single sfs install the material from what is really several units starts causing problems of duplication. And what about this:
1. you create an sfs which includes a certain version of a lib.
2. I want to create an sfs which includes a later version of the same lib, or the same version, but compiled with different options included.

How do you propose to resolve the conflict? If your sfs gets 'loaded' before mine, then my prog is gonna try to use your libs and will fail. Handling duplicate libs or separate versions of the same lib can be done, of course, but it involves other tricks besides simply relying on a certain load order.

If you were to create sfs's of each unit needed by the program you want to work, then it would be exactly the same problem as handling 'packages'. You'd still need a way to resolve the dependencies logically and orderly.

Why not simply 'install' packages on-the-fly? Some LiveCD systems used to handle it that way. Well, that's where your sfs's have an advantage, because they don't need to use any RAM since they are simply mounted. The problem of which sfs's are needed, and in what order they must be loaded, remains though. You need lists of files included in each 'unit', and unit names which include both 'version' and 'build' information - plus architecture info if you are ever to have more than one arch supported. Units are most easily divided along the lines they come as - individual sources. Anything else is a mess.

Why can't you do a real upgrade of Puppy? Simply because the above has not been done. Why can't you easily remove a program or library that you don't need or want? Because the above has not been done. Why can't you 'replace' one program or lib with another which has superceded it? Because the above has not been done.
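The "install packages into a subdir, then squash it" workflow amigo describes can be sketched like this. The package names are invented for the example, and the mksquashfs line is left commented out since squashfs-tools may not be installed:

```shell
# Build a throwaway package tarball, then "install" it into a staging dir.
mkdir -p /tmp/stage /tmp/pkgs/foo-1.0/usr/bin
printf '#!/bin/sh\necho hi\n' > /tmp/pkgs/foo-1.0/usr/bin/foo
tar -C /tmp/pkgs/foo-1.0 -czf /tmp/pkgs/foo-1.0.tar.gz usr

tar -xzf /tmp/pkgs/foo-1.0.tar.gz -C /tmp/stage   # install into subdir, not /
# mksquashfs /tmp/stage /tmp/foo.sfs -noappend    # squash the staged tree
ls /tmp/stage/usr/bin
```

Because each sfs is assembled from discrete package tarballs, the file list of every 'unit' inside it stays known, which is exactly the accounting amigo is asking for.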

Post Reply