Application to Build a Modular Pup -- A Different View

A home for all kinds of Puppy related projects
Post Reply
Message
Author
User avatar
mikeslr
Posts: 3890
Joined: Mon 16 Jun 2008, 21:20
Location: 500 seconds from Sol

Application to Build a Modular Pup -- A Different View

#1 Post by mikeslr »

First Off: I would appreciate it if you would take the time to correct any statements or assumptions I make below that are inaccurate or misleading. I make them in good faith, but they are based on my recollection, my understanding, and in some cases my guesswork.

Spoiler Alert: This will essentially be a scripting challenge. Those familiar with gtkdialog, yad, BASH, etc. are urged to join; or, looked at another way, are provided with the opportunity to use their skills in creating something to include in a community edition.

What if there were a method which would enable any Puppy to operate faster and cooler, while increasing by up to 45% the size of the files it can have open at the same time? What if that method also ameliorated some of Puppy's long-standing problems? Don't get me wrong: Puppy is my choice of computer operating system. That doesn't mean it couldn't be better.

Compressed files are Puppy's strength. Their overuse, however, is also Puppy's limiting factor. The puppy-xxx.sfs which contains a Puppy's kernel, instruction set, and user applications is a compressed file. If a SaveFile is created to preserve settings and applications not provided by a Puppy's Dev, it too is a compressed file. Applications in SFS format are also compressed files.

The problem is that for any of the information and instructions contained in those compressed files to be put to use, they have to be in memory, which requires that the files be decompressed. While file contents and compression systems produce some variance, a useful Rule of Thumb is that decompressing any file into RAM requires roughly three times as much RAM as the compressed size of the file.

If there is insufficient RAM to decompress all files in their entirety when they are needed, Puppys use a paging system: decompressing parts of files as needed and, as other parts are needed with greater urgency, transferring the now less-needed parts out of RAM to storage (which in a Frugally Installed Pup also requires them to be recompressed) so as to make room for decompressing the more urgently needed parts. The paging system itself, of course, requires RAM, and all of the aforementioned activity requires employment of the CPU. Additionally, when a Puppy is not a Full Install, in order to make use of the settings and applications contained in the SaveFile and applications in SFS format, it generates in RAM a “merged file
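As a back-of-the-envelope illustration of that Rule of Thumb (the 3x figure is the post's own estimate, and the file size here is just an example, not a measurement):

```shell
#!/bin/sh
# Rule-of-thumb from the post: decompressing a file into RAM needs
# roughly three times as much RAM as the file's compressed size.
sfs_mb=120                     # example: a 120 MB puppy-xxx.sfs
ram_mb=$((sfs_mb * 3))
echo "A ${sfs_mb} MB SFS needs roughly ${ram_mb} MB of RAM when decompressed"
```

On a real system, `unsquashfs -s puppy-xxx.sfs` (from squashfs-tools) reports an SFS's actual uncompressed size, so the true ratio for a given Pup can be checked rather than estimated.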
Attachments
ProgFoldrCompare.png
Comparative RAM and CPU usage
(52.67 KiB) Downloaded 625 times
Last edited by mikeslr on Sun 08 Dec 2013, 02:33, edited 1 time in total.

User avatar
mikeslr
Posts: 3890
Joined: Mon 16 Jun 2008, 21:20
Location: 500 seconds from Sol

Reserved

#2 Post by mikeslr »

Reserved

mikeslr

dancytron
Posts: 1519
Joined: Wed 18 Jul 2012, 19:20

#3 Post by dancytron »

I don't pretend to understand everything you've posted.

I think that a "core" of puppy, without most user applications as you describe, is a great idea. Not only as a tool for your idea (which I don't really understand) but also for RSH's sfs-based ideas, as a base for remasters, and for a-drive based puppies.

I think for general release, it could be in the form of the core, z-drive for drivers, and a-drive for standard puppy applications. Then, it could be a base for all sorts of different configurations and experiments.

Also, modifying the PPM so that you can always choose to download instead of install is a good idea; it would make creating custom sfs files easier.

I posted about this on RSH's modularity thread earlier.

wanderer
Posts: 1098
Joined: Sat 20 Oct 2007, 23:17

#4 Post by wanderer »

I use symlinked uncompressed folders (on ext2 filesystems) or uncompressed 2fs image files (on fat32 filesystems) on the system I built using an independent ramdrive as a core. I link the folder/image as /user and symlink /etc, /root and other changeable directories into the image from the ramdrive. It works great and has a lot of advantages. I continue to work on my system for fun and look forward to reading your posts. However, my system is not similar to the standard puppy in any way.

I assume from reading your post that this is similar to your program files. I think using a core, with your program files, as well as the other puppy systems, would be a great way to build the CE project because it increases the manageability and flexibility of the basic puppy system.
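The uncompressed 2fs image approach described above might be sketched like this (paths and sizes are purely illustrative, not wanderer's actual setup):

```shell
#!/bin/sh
# Create a small uncompressed ext2 image file (a ".2fs" file in Puppy terms).
IMG=/tmp/user.2fs
dd if=/dev/zero of="$IMG" bs=1M count=64 2>/dev/null   # 64 MB of zeroes
# Format and loop-mount it (formatting needs e2fsprogs; mounting needs root):
#   mke2fs -F -t ext2 "$IMG"
#   mkdir -p /user && mount -o loop "$IMG" /user
# Changeable directories can then be symlinked into the mounted image.
ls -lh "$IMG"
```

Because the image is a plain ext2 filesystem rather than a squashfs, reads and writes skip the decompression step entirely.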

wanderer

amigo
Posts: 2629
Joined: Mon 02 Apr 2007, 06:52

#5 Post by amigo »

The devil is in this: define precisely what 'core' consists of. In the process of finding that out, you'll begin to see that all Linux OSes are built from pretty well-defined units of functionality -each of which is part of a limited set of such tools or libraries -they are commonly called 'packages'. The kernel is one thing and can be interchanged nearly at will.

The rest of the system is composed of a few or thousands of units of functionality -with many alternatives which can be used in place of some other tool -but the traditional usage and design of things has meant that you can boot and run a system from nearly anywhere and in any way you can imagine. Someone had all the cool ideas *decades ago* which make possible all these cool things that get talked about and used here.

If you consider the idea of a small software appliance that only does one thing, then you can build a tiny system which only does that. The Linux kernel only needs two things from the OS which it supports:
1. a special device file at /dev/console, so that it can output its messages as it is booting. Nowadays, the kernel can even supply this device and others for you.
2. A bit of executable code at /sbin/init or /etc/init . If it is not found, then the kernel will look for /bin/sh.
If the kernel doesn't find these things it panics and locks up, and of course you have no OS.

The 'init' program can be anything at all -even a tiny bit of statically-compiled C code which does nothing more than print 'Hello World'. But, after that the OS is gonna lock up. Why? Because init is supposed to keep running until shutdown -at which time it turns control back to the kernel which shuts itself down.
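A minimal stand-in for such an init might look like this in shell (purely illustrative -not Puppy's or any distro's real init, which is normally a static binary; the mounts are guarded so the sketch is harmless on a running system):

```shell
#!/bin/sh
# Toy /sbin/init sketch. The kernel runs this as PID 1;
# if it ever exits, the kernel panics.
mountpoint -q /proc || mount -t proc  proc  /proc
mountpoint -q /sys  || mount -t sysfs sysfs /sys
echo "init: hello from PID $$"
# Hand control to a shell rather than exiting, so PID 1 stays alive.
exec /bin/sh
```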

Of course, nobody here wants such an appliance. So, lets try to get a more practical definition of 'core':
A system which is able to boot from a local medium and let us login. That still means no network, no extendability and certainly no GUI. But we want that because 'core' should be as small and well-defined as possible. Only by considering things from this scale can you get a handle on what modular truly implies.

So, we have an /sbin/init, but what does it need? Well, nothing, since it is -er, should be- a statically compiled binary which handles the init process. What is the init process? That's all the actions which init starts which set up a complete running system -which could be nothing but a shell, if you like. If we take the idea that we are gonna build a big system, then the first thing we need for an extendible system is some shared libraries for the shell and any others to use. That's glibc in the normal case.

Now, how many packages do we have so far?
1. kernel
2. sysvinit
3. glibc
4. /bin/sh (bash or whatever)

We need a few more things for being able to login after checking drives and other basic setup tasks -like mounting any external drives, mounting /proc/ and /sys, etc. For such a system you need just over 20 normal packages. All these 'extras' will depend directly on glibc.

To add extensibility you add in a few archive tools like tar, gzip and your *package manager* -now you can do anything you want that involves packages. Now you can add that network support -even to the early boot process. You can add all those extra libs and programs to have a full GUI and as many applications as you like. gzip and many other programs will need zlib, and other 'early-boot' programs may need libreadline.

Most packages will depend on one or more other packages being installed -since those *packages contain* something this program needs. There is not even one universal dependency -see, even /sbin/init doesn't need it. So, every program has a unique combination of dependencies. They cannot be managed as groups at all. There is no way to have a one-size-fits-all combination of libs or other which will cover the needs of X programs. Sure, you can *do* something like that, but it's not manageable. Only by considering the individual units can dependencies be managed.
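The unique-dependencies point is easy to see with `ldd`, which lists the shared libraries one particular binary needs (the exact output varies from system to system):

```shell
# Each dynamically linked program declares its own library needs:
ldd /bin/ls
# A statically linked binary (as suggested for init above) instead reports
# "not a dynamic executable" -it depends on nothing at runtime.
```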

Of course, packages or 'bundles' of any sort need a package manager which hopefully does intelligent things, with an intelligent package format, using uniform-created packages, created by intelligent software and guided by intelligent package creators.

Only at the level of packages can incremental upgrades, security and bugfixes, additions and deletions be possible. There is no 'core' which falls outside of the package paradigm.

wanderer
Posts: 1098
Joined: Sat 20 Oct 2007, 23:17

#6 Post by wanderer »

hi amigo

thanks for all the info. great explanation of the big picture.

wanderer

amigo
Posts: 2629
Joined: Mon 02 Apr 2007, 06:52

#7 Post by amigo »

Here's how the ~20-package core looks in my distro:
########## base
aaa-base # this is just the directory structure
aaa-etc # a nice set of basic default conf files for /etc
bash # for /bin/sh
coreutils # all the nice utils
e2fsprogs # e2fsck, mke2fs and Co.
file # needed by init scripts for identifying file types
findutils # needed by init scripts for finding files
grep # needed by init scripts for parsing strings
module-init-tools # needed for loading kernel modules
procps # only needed because we use pkill and Co. in our init scripts
sed # needed by init scripts for parsing strings
shadow # needed for login
sysvinit-scripts # the basic init scripts -each service will install its own additions
sysvinit # the real whiz-bang init
udev # for managing devices (and kernel modules)
util-linux(-ng) # all the *other* nice utils expected everywhere
########## libs
glibc # needed by nearly everything
ncurses # needed by util-linux(-ng) and later, many others
libtermcap # needed by bash
zlib # needed by file and later, many others
kernel-kiss # the kernel
kernel-modules

Those ~20 packages need about 40-45MB of space when normally installed on a hard disk -and the packages are not cut-down in any way -except that the kernel modules could be anywhere from 1-100MB alone, depending on how much hardware you need to support.

This 40-50 MB of basic stuff is simply expected to always be there -the full versions and not cut-down busybox stuff. Starting with that will set you on the glorious path to forward-compatibility. Skimping on this base by leaving out what *you think* is not needed or by using cut-down versions will cause you many problems down the road.

Add in whatever tools are needed to add packages and the road is free ahead. Be sure that you create all those packages yourself so that users will never have problems with wrong library versions. Create all those packages yourself so that they can include accurate information about what other packages each package needs. Provide your users with a package manager which can take that dependency info and sanely provide the proper library versions. Once you have detailed info about which other packages a package needs, you can use the concept of meta-packages so that a user can install, for instance, a single meta-package called jwm-desktop which will install all the things that jwm needs, like Xorg, libX11, etc., instead of having to individually choose each package.
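The meta-package idea might be sketched with a toy resolver like this (the package names and their dependencies are made up for illustration -this is not amigo's actual package manager):

```shell
#!/bin/sh
# Toy recursive dependency resolver for a "jwm-desktop" meta-package.
deps() {                       # hypothetical dependency database
  case "$1" in
    jwm-desktop) echo "jwm xorg" ;;
    jwm)         echo "libX11" ;;
    xorg)        echo "libX11 glibc" ;;
    libX11)      echo "glibc" ;;
    *)           echo "" ;;
  esac
}
install() {                    # install dependencies first, each one only once
  for d in $(deps "$1"); do install "$d"; done
  case " $DONE " in
    *" $1 "*) ;;               # already installed -skip
    *) DONE="$DONE $1"; echo "installing $1" ;;
  esac
}
DONE=""
install jwm-desktop
```

Installing the single meta-package pulls in glibc, libX11, jwm and xorg first, then jwm-desktop itself -which is the whole point: the user picks one name and the dependency info does the rest.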

wanderer
Posts: 1098
Joined: Sat 20 Oct 2007, 23:17

#8 Post by wanderer »

thanks again amigo

a lot of great info

wanderer

User avatar
greengeek
Posts: 5789
Joined: Tue 20 Jul 2010, 09:34
Location: Republic of Novo Zelande

#9 Post by greengeek »

Despite the failings of puppy, one of its greatest assets for me personally (non-Linux background) has been the presence of its wizards, enabling me to get somewhere without in-depth knowledge.

If a "puppy" were to be built upon a core structured as amigo has suggested: -

Where would the wizards fit into the picture?
Would Puppy still look and handle the same to a newbie?
Would it install by the same methods?
Would it still qualify to be called a "puppy"?

Post Reply