Application to Build a Modular Pup -- A Different View
mikeslr


Joined: 16 Jun 2008
Posts: 840
Location: Union New Jersey USA

Posted: Sat 07 Dec 2013, 22:18    Post subject: Application to Build a Modular Pup -- A Different View

First off: if any statements or assumptions I make below are inaccurate or misleading, I would appreciate your taking the time to correct them. I make them in good faith, but they are based on my recollection, my understanding, and in some cases my guesswork.

Spoiler alert: this will essentially be a scripting challenge. Those familiar with gtk-dialog, yad, BASH, etc. are urged to join; or, looked at another way, are provided with the opportunity to use their skills in creating something to include in a community edition.

What if there were a method that would enable any Puppy to run faster and cooler, while increasing by up to 45% the size of the files it can have open at the same time? What if that method also ameliorated some of Puppy's long-standing problems? Don't get me wrong: Puppy is my choice of computer operating system. That doesn't mean it couldn't be better.

Compressed files are Puppy's strength. Their overuse, however, is also Puppy's limiting factor. The puppy-xxx.sfs which contains a Puppy's kernel, instruction set, and user applications is a compressed file. If a SaveFile is created to preserve settings and applications not provided by a Puppy's Dev, it also is a compressed file. Applications in SFS format are compressed files as well. The problem is that in order for any of the information and instructions contained in those compressed files to be put to use, they have to be in memory, which requires that the files be decompressed. While file contents and compression systems produce some variance, a useful rule of thumb is that decompressing a file into RAM needs about three times as much RAM as the compressed size of the file.

If there is insufficient RAM to decompress all files in their entirety when they are needed, Puppys use a paging system: decompressing parts of files as needed and, as other parts are needed with greater urgency, transferring the now less-needed parts out of RAM to storage (which in a frugally installed Pup also requires re-compressing them) so as to make room for the decompression of the now more urgently needed parts. The paging system itself, of course, requires RAM, and all of the aforementioned activity requires the CPU. Additionally, when a Puppy is not a Full Install, in order to make use of the settings and applications contained in the SaveFile, and of applications in SFS format, it generates in RAM a "merged file" system, assigning priority to the instructions found in one compressed file over the instructions found in another.
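A rough way to get a feel for these costs yourself -- this is only a sketch, not the method behind the chart discussed below -- is to snapshot used RAM before and after loading an SFS or starting an application:

Code:
#!/bin/sh
# Rough sketch: compare used RAM before and after loading an SFS or starting an app.
before=$(free | awk '/^Mem:/ {print $3}')     # "used" column, in KB
echo "Used RAM before: ${before} KB"
echo "Now load the SFS (or start the program), then press Enter..."
read dummy
after=$(free | awk '/^Mem:/ {print $3}')
echo "Used RAM after:  ${after} KB"
echo "Approximate cost: $((after - before)) KB"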

I've attached a chart comparing the RAM and CPU usage of LibreOffice4 used by Puppy via three different methods: loaded as an SFS, installed as a pet, and linked as an external "Program Folder". If you do the math, you'll find that running the app as a Program Folder can reduce RAM and CPU usage by up to 45%. That translates into more of your resources being available to actually do anything. I'm sorry if you have to magnify the chart to read it; it's been rescaled to comply with the Forum's attachment limitations. It was previously posted at http://murga-linux.com/puppy/viewtopic.php?p=686093&sid=ab8f263a8562def9f355f3c62261f675#686093 where I gave the method I used to obtain the figures. The most important thing to note before examining the chart is that, just to install LibreOffice4 as a pet, I had to increase the SaveFile by 500 MB. And in case you've missed its definition, a Program-folder is a folder anywhere on your drives in which an entire application, including all the libs and associated files --other than platforms such as QT or Java-- exists in uncompressed form. The program's executable is linked to Puppy via a script somewhere "on the path." To provide a menu entry for that program (or for use in an lxpanel launcher), a desktop file in /usr/share/applications identifies that script as its executable, and some icon as its icon.
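For illustration only -- the paths and names below are my own examples, not anything prescribed above -- the two link files for a Program-folder might look like this. First the wrapper script, placed somewhere on the path:

Code:
#!/bin/sh
# /root/my-applications/bin/libreoffice4  (example path and name)
# launch the executable inside the Program-folder, passing along any arguments
exec /mnt/home/program-folders/libreoffice4/program/soffice "$@"

And the matching menu entry, /usr/share/applications/libreoffice4.desktop:

Code:
[Desktop Entry]
Name=LibreOffice 4
Exec=libreoffice4
Icon=/usr/share/pixmaps/libreoffice4.png
Type=Application
Categories=Office;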
If you want to see what a Program-folder looks like, load any of the following SFSes and open your file manager to /opt: LibreOffice, OpenOffice, or any Google application such as Picasa, Google-earth, Chrome or its clones. Within /opt you'll find an entire application contained in a folder, requiring nothing more than the appropriate glibc libraries from your OS -- albeit these applications, being in /opt, are not outside your OS.

What I suggest for the Modular System Model is this: (a) a frugally installed "core" operating system consisting of the structures and applications needed to expand it, such as applications to configure your ethernet and/or wireless adapter, PPM, SFS-downloader, SFS-tools, and a Program-Folder creation tool, and (b) otherwise devoid of "user" applications, with the possible exception of geany. PPM should be modified to enable downloading without installation. Although such a Pup could be traditionally expanded by installing pets or other application containers into a SaveFile, or through the employment of SFSes as RSH proposes, my recommendation is that, with the exception of the installation of platforms such as perl, QT and Java, and the use of SFSes as "accumulators" as discussed below with reference to running without a SaveFile, all applications be run from Program-folders. [I make the exception for platforms as I am uncertain whether they could be employed via Program-folders or, if they could, whether doing so would increase demands on RAM and CPU enough to negate any benefit.] [Folders such as "my-documents", currently found at /root, could be moved to /mnt/home and symlinked back (a simple script can do that; see the sketch below), or if not included in future Pups, simply symlinked from /mnt/home to /root.] /my-applications would remain as the place "on the path" into which would be installed the scripts necessary to link to Program-folders' executables.
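That "simple script" might be no more than the following sketch (paths assumed; it presumes /mnt/home is already mounted, as it is in a frugal install):

Code:
#!/bin/sh
# Sketch: relocate /root/my-documents to /mnt/home and leave a symlink behind.
if [ -d /root/my-documents ] && [ ! -L /root/my-documents ]; then
    mv /root/my-documents /mnt/home/my-documents
    ln -s /mnt/home/my-documents /root/my-documents
fi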

So, yes, this is mikeslr again promoting Program Folders and proposing the creation of an application which would make it easy to obtain them. Consider the following advantages:

Some Fans prefer running Puppys as a Full Install. Full Installs minimize a Pup's usage of RAM and CPU. A Full Install decompresses a Puppy into a partition. When a pet is installed, it also is decompressed into that partition. Without the need to decompress/re-compress files, a Full Install uses less RAM and CPU to perform the same task as its compressed Frugal counterpart. However, a Full Install has its own drawbacks.
To fully install Puppy Linux requires an entire partition. You have to guess right about your future needs when deciding how big or small to make that partition, or hope to resize it successfully later. When you install or upgrade a pet, you had better be certain that it will work and doesn't conflict with applications already installed, because uninstalling it may break one or more applications. And upgrading a Pup involves the unappealing choice of dedicating, at least temporarily, a second partition, or over-writing your current Pup without knowing whether the old Pup was actually more useful than the new.
Experience, however, has shown that unless a new Pup is a radical departure from an old Pup --either because it is built from the binaries of a different distro, or because the source distro has made substantial changes-- many of the applications which ran under the old Pup will also run under the new Pup. [T2 builds don't seem to have any expiration date, and many Lupu apps still function under Precise and some under Raring. I believe the same may be true of Slackware builds.] The exceptions are applications which are kernel-dependent. Upgrading a "Semi-Full" install --a core with Program Folders-- consists of installing the new Pup into a folder, adding it to the boot menu, booting into it and linking it to the Program-folders [and data folders] already in existence. Applications can be upgraded as needed or convenient. If desired, either the old or the new Pup can be removed. Unlike Full Installs, which require a dedicated partition of predefined size, Pups, data folders and Program-folders will only occupy as much of a partition as is needed. Additionally, by installing an application such as auto-mount and modifying the necessary scripts, both Program-folders and data folders can be located on any available hard drive and be accessible immediately on bootup.
Some Fans prefer running Puppys without a SaveFile. In order to use applications which weren't included by that Puppy's Dev, they have two choices: (1) load SFSes on the fly as needed, or (2) load SFSes and remaster. Either of those methods can significantly increase the amount of RAM and/or CPU used by the Pup. Program Folders are linked to a Pup using three files: a script, an icon, and a .desktop file which creates a menu entry by specifying the script as its executable and the icon as its icon. Together those files use only a few kilobytes, mostly depending on the size of the icon. Combining those files into an SFS which can be loaded on the fly would require less RAM and CPU than loading the application itself on the fly. Indeed, an SFS containing the files necessary to link many applications in Program-folder form would use very few resources. SFSes could be customized to link "Usually Needed" apps, "Developer Tool" apps or "Media Suites", or any combination. Fans running without a SaveFile could be up and running quickly, while still preserving their available RAM and CPU for actual work.
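One way such a "linker" SFS might be assembled -- a sketch only; the staging directory, file names and category are illustrative, and it assumes squashfs-tools (mksquashfs) is present, as it is in most Puppies:

Code:
#!/bin/sh
# Sketch: bundle only the link files (wrapper scripts, .desktop entries, icons)
# for several Program-folder apps into one small, loadable SFS.
STAGE=/tmp/usually-needed-links
mkdir -p $STAGE/root/my-applications/bin \
         $STAGE/usr/share/applications \
         $STAGE/usr/share/pixmaps
# copy in previously prepared wrapper scripts, menu entries and icons:
cp /root/my-applications/bin/libreoffice4       $STAGE/root/my-applications/bin/
cp /usr/share/applications/libreoffice4.desktop $STAGE/usr/share/applications/
cp /usr/share/pixmaps/libreoffice4.png          $STAGE/usr/share/pixmaps/
# ...repeat for the other apps in the set, then compress:
mksquashfs $STAGE usually-needed-links.sfs -noappend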

The General Problem with Applications -- Compatibility -- and PPM Inadequacies

PPM, the Puppy Package Manager, while it now checks whether the components a pet developer identified as necessary are within a pet package, is blissfully unaware of anything already "on the system", whether included by the Pup's Dev or installed into the SaveFile, deliberately or coincidentally, by the Pup's user. Install the wrong pet into your SaveFile and you may have to delete the SaveFile and rebuild it from scratch. The installed pet, now part of the "merged file system", may not run when opened because something is missing. Or worse yet, it may run but, having over-written files, some other application may not run; and deleting the "offending" application won't solve the problem, as deletion doesn't restore the component which was over-written.
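The kind of check PPM doesn't make could start out as simple as this sketch, which lists the files inside a pet that already exist on the running system. It relies on a .pet being, essentially, a tar.gz with a checksum appended, so tar can usually still list it:

Code:
#!/bin/sh
# Sketch: warn which files a .pet would overwrite if installed.
PET="$1"
tar -tzf "$PET" 2>/dev/null | while read f; do
    target="/${f#*/}"                       # strip the package's top-level directory
    [ -f "$target" ] && echo "Would overwrite: $target"
done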

Woof's potentially unfulfilled dream:

Woof envisioned a method of building a Puppy that would overcome the problems associated with previous approaches.
“...
PET packages are heavily cut-down binary packages. There is often a lot of work in creating a PET package as we trim out all the fat.
Having a repository of PET binary packages means that we are also legally obliged to maintain a repository of the source packages.
Upgrading all the core infrastructure packages, such as glibc, gcc, gtk, cups, ghostscript and the other system libraries and creating new PET packages and 'devx' file then getting it all to work, takes us a very long time.” http://bkhome.org/woof/

An indirect consequence of Woof is that we now have about a dozen repositories, with multiple sub-repositories, for essentially the same applications, albeit slightly different: for example, an Abiword for Lupu, an Abiword for Slacko, an Abiword for Precise, an Abiword for Raring, an Abiword for Carolina, and so on. I don't think the time Devs spend creating pet packages has been lessened. The web-based storage required certainly hasn't. More time is spent building standard applications, leaving less time to build special ones. And of course, with all these variations floating around and a PPM not designed to handle them, accidents will happen.

Effect on Future Pup Development:
Pup Devs need only create a "core" and a "devx" file, expediting development and testing. Pet and SFS devs need only compile new versions of applications when needed or convenient. Repository needs would be lessened, as would the amount of time users must spend downloading Pups and applications.

Building Program-folders isn't hard, since even I can do it. It mostly consists of decompressing a pet, an SFS, or a tar.gz from such publishers as Opera and Firefox. Building one from a .deb is more difficult, as Ubuntu and Debian have probably built those on the assumption that they will be used dynamically within their own OSes, with dependencies resolved via Synaptic. Slackware packages may pose similar complexity. Building an application which would automate the creation of Program-folders and their links to Pups can vary from easy to hard: easy if it is limited to pets, SFSes and applications downloaded directly from their publishers. Scattered throughout the forum there are already yad and/or gtk-dialog applications that handle some aspects of the manual tasks I perform. The hardest part of building a pet to install links to a Program-folder is locating the application's executable. The worst that can happen is the application won't run; the folder can be deleted, and the pet --which contained a unique executable, .desktop file and perhaps a unique icon-- uninstalled without breaking anything. Or the application can be started via a terminal and an effort made to determine why it wouldn't run.
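For the simple cases, the manual unpacking step might look roughly like this sketch (the destination directory is my own example, and it assumes unsquashfs from squashfs-tools is present):

Code:
#!/bin/sh
# Sketch: unpack a package into a Program-folder.
PKG="$1"
DEST=/mnt/home/program-folders
mkdir -p "$DEST"
case "$PKG" in
    *.sfs)
        unsquashfs -d "$DEST/$(basename "$PKG" .sfs)" "$PKG" ;;
    *.pet)
        # a .pet is essentially a tar.gz with a checksum appended; tar usually copes
        tar -xzf "$PKG" -C "$DEST" 2>/dev/null ;;
    *.tar.gz|*.tgz)
        tar -xzf "$PKG" -C "$DEST" ;;
    *.tar.bz2)
        tar -xjf "$PKG" -C "$DEST" ;;
    *)
        echo "Unsupported package type: $PKG" ;;
esac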
Of course, the foregoing --except the last statement-- presumed that pets and SFSes were statically built.
An automated Program-folder Creator gets harder if .debs from Ubuntu or Debian, and perhaps Slackware packages, are also to be used. Theoretically at least, a Program-folder built from statically built Slackware packages should run under a Pup whose core was built from Ubuntu binaries, and vice-versa. But as PPM isn't built to perform the dependency checking required, in order to successfully create Program-folders from such packages the Program-folder Creator would have to perform that dependency checking itself. Perhaps a project best left for another day.
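If anyone does take it up, a first pass at that dependency checking might be nothing more than an ldd sweep over the folder. A rough sketch (it ignores libraries bundled inside the folder itself, which a real tool would also have to search):

Code:
#!/bin/sh
# Sketch: report shared libraries the Program-folder's binaries need but the system lacks.
FOLDER="$1"
find "$FOLDER" -type f | while read f; do
    file "$f" | grep -q 'ELF' || continue          # only inspect binaries and shared libs
    ldd "$f" 2>/dev/null | grep 'not found' | awk -v bin="$f" '{print bin ": missing " $1}'
done | sort -u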
This is also another reason why I strongly suggest that some enterprising Dev create a new Pup woofed from T2 packages.
There is, of course, a downside to Program Folders: they use hard-drive space. But I don't think that's a significant disadvantage. I have a 12-year-old Dell Latitude with a 20 GB hard drive, which is about twice as much as any Full Installed Pup fleshed out to the max would need. But if it needed more hard-drive space, for about $15 I could install a 40 GB IDE drive; and if it could use either [it can't], for the same $15 I could install a 100 GB SATA drive or plug in a 16 GB USB key. How about that: for about $15, a 12-year-old computer could be adapted to using a Pup built around a modern kernel, because its RAM and CPU could be used to run applications rather than the operating system.
My next post will discuss the specifics of a Program Folder Creator.

mikesLr
Attachment: ProgFoldrCompare.png -- Comparative RAM and CPU usage (52.67 KB)

Last edited by mikeslr on Sat 07 Dec 2013, 22:33; edited 1 time in total
mikeslr


Joined: 16 Jun 2008
Posts: 840
Location: Union New Jersey USA

Posted: Sat 07 Dec 2013, 22:18    Post subject: Reserved

Reserved

mikesLr
dancytron

Joined: 18 Jul 2012
Posts: 298

Posted: Sun 08 Dec 2013, 16:59    Post subject:

I don't pretend to understand everything you've posted.

I think that a "core" of Puppy, without most user applications as you describe, is a great idea -- not only as a tool for your idea (which I don't really understand), but also for RSH's sfs-based ideas, as a base for remasters, and for a-drive based Puppies.

I think for general release, it could be in the form of the core, z-drive for drivers, and a-drive for standard puppy applications. Then, it could be a base for all sorts of different configurations and experiments.

Also, modifying the PPM so that you can always choose to download instead of install is a good idea; it would make creating custom sfs files easier.

I posted about this on RSH's modularity thread earlier.
wanderer

Joined: 20 Oct 2007
Posts: 230

Posted: Wed 11 Dec 2013, 17:24    Post subject:

I use symlinked uncompressed folders (on ext2 filesystems) or uncompressed 2fs image files (on fat32 filesystems) on the system I built, using an independent ramdrive as a core. I link the folder/image as /user and symlink /etc, /root and other changeable directories into the image from the ramdrive. It works great and has a lot of advantages. I continue to work on my system for fun and look forward to reading your posts. However, my system is not similar to the standard puppy in any way.
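For readers unfamiliar with the 2fs-image approach, a very rough sketch of the idea (sizes and paths are my own examples; in a setup like the one described above, the /etc and /root symlinking happens early in boot from the ramdrive, not on a running system):

Code:
#!/bin/sh
# Sketch: create and mount an uncompressed ext2 image kept on a FAT32 partition.
IMG=/mnt/home/user.2fs
MNT=/user
# create a 512 MB image the first time only:
[ -f "$IMG" ] || { dd if=/dev/zero of="$IMG" bs=1M count=512; mke2fs -F "$IMG"; }
mkdir -p "$MNT"
mount -o loop "$IMG" "$MNT"
# changeable directories such as /etc and /root would then be copied in once
# and replaced by symlinks into $MNT by the boot/init scripts.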

I assume from reading your post that this is similar to your program folders. I think using a core, with your program folders as well as the other puppy systems, would be a great way to build the CE project, because it increases the manageability and flexibility of the basic puppy system.

wanderer
amigo

Joined: 02 Apr 2007
Posts: 2288

Posted: Thu 12 Dec 2013, 06:00    Post subject:

The devil is in this: define precisely what 'core' consists of. In the process of finding that out, you'll begin to see that all Linux OSes are built from pretty well-defined units of functionality -each of which is part of a limited set of such tools or libraries -they are commonly called 'packages'. The kernel is one thing and can be interchanged nearly at will.

The rest of the system is composed of a few or thousands of units of functionality -with many alternatives which can be used in place of some other tool -but the traditional usage and design of things has meant that you can boot and run a system from nearly anywhere and in any way you can imagine. Someone had all the cool ideas *decades ago* which make possible all these cool things that get talked about and used here.

If you consider the idea of a small software appliance that only does one thing, then you can build a tiny system which only does that. The linux kernel only needs two things from the OS which it supports:
1. a special device file at /dev/console, so that it can output its messages as it is booting. Nowadays, the kernel can even supply this device and others for you.
2. A bit of executable code at /sbin/init or /etc/init . If it is not found, then the kernel will look for /bin/sh.
If the kernel doesn't find these things it panics and locks up, and of course you have no OS.

The 'init' program can be anything at all -even a tiny bit of statically-compiled C code which does nothing more than print 'Hello World'. But, after that the OS is gonna lock up. Why? Because init is supposed to keep running until shutdown -at which time it turns control back to the kernel which shuts itself down.

Of course, nobody here wants such an appliance. So, let's try to get a more practical definition of 'core':
A system which is able to boot from a local medium and let us login. That still means no network, no extendability and certainly no GUI. But we want that because 'core' should be as small and well-defined as possible. Only by considering things from this scale can you get a handle on what modular truly implies.

So, we have an /sbin/init, but what does it need? Well, nothing, since it is -er, should be- a statically compiled binary which handles the init process. What is the init process? That's all the actions which init starts which set up a complete running system -which could be nothing but a shell, if you like. If we take the idea that we are gonna build a big system, then the first thing we need for an extendible system is some shared libraries for the shell and any others to use. That's glibc in the normal case.
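To make the point concrete, here is a toy /sbin/init written as a shell script -- a sketch for illustration only, not a real init, and it assumes a shell and basic mount/echo utilities are already on the root filesystem:

Code:
#!/bin/sh
# Toy /sbin/init: just enough to illustrate the idea, not something to boot in earnest.
mount -t proc  proc  /proc        # basic kernel interfaces
mount -t sysfs sysfs /sys
mount -o remount,rw /             # make the root filesystem writable
echo "Hello World"                # the "appliance" stage described above
exec /bin/sh                      # keep PID 1 running by handing it a shell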

Now, how many packages do we have so far?
1. kernel
2. sysvinit
3. glibc
4. /bin/sh (bash or whatever)

We need a few more things to be able to log in, after checking drives and other basic setup tasks -like mounting any external drives, mounting /proc and /sys, etc. For such a system you need just over 20 normal packages. All these 'extras' will depend directly on glibc.

To add extensibility you add in a few archive tools like tar, gzip and your *package manager* -now you can do anything you want that involves packages. Now you can add that network support -even to the early boot process. You can add all those extra libs and programs to have a full GUI and as many applications as you like. gzip and many other programs will need zlib, and other 'early-boot' programs may need libreadline.

Most packages will depend on one or more other packages being installed -since those *packages contain* something this program needs. There is not even one universal dependency -see, even /sbin/init doesn't need glibc. So, every program has a unique combination of dependencies. They cannot be managed as groups at all. There is no way to have a one-size-fits-all combination of libs or other things which will cover the needs of X programs. Sure, you can *do* something like that, but it's not manageable. Only by considering the individual units can dependencies be managed.

Of course, packages or 'bundles' of any sort need a package manager which hopefully does intelligent things, with an intelligent package format, using uniformly-created packages, created by intelligent software and guided by intelligent package creators.

Only at the level of packages can incremental upgrades, security and bugfixes, additions and deletions be possible. There is no 'core' which falls outside of the package paradigm.
wanderer

Joined: 20 Oct 2007
Posts: 230

Posted: Thu 12 Dec 2013, 11:44    Post subject:

hi amigo

thanks for all the info. great explanation of the big picture.

wanderer
amigo

Joined: 02 Apr 2007
Posts: 2288

Posted: Thu 12 Dec 2013, 15:31    Post subject:

Here's what the ~20-package core looks like in my distro:
########## base
aaa-base # this is just the directory structure
aaa-etc # a nice set of basic default conf files for /etc
bash # for /bin/sh
coreutils # all the nice utils
e2fsprogs # e2fsck, mke2fs and Co.
file # needed by init scripts for identifying file types
findutils # needed by init scripts for finding files
grep # needed by init scripts for parsing strings
module-init-tools # needed for loading kernel modules
procps # only needed because we use pkill and Co. in our init scripts
sed # needed by init scripts for parsing strings
shadow # needed for login
sysvinit-scripts # the basic init scripts -each service will install its own additions
sysvinit # the real whiz-bang init
udev # for managing devices (and kernel modules)
util-linux(-ng) # all the *other* nice utils expected everywhere
########## libs
glibc # needed by nearly everything
ncurses # needed by util-linux(-ng) and later, many others
libtermcap # needed by bash
zlib # needed by file and later, many others
kernel-kiss # the kernel
kernel-modules

Those ~20 packages need about 40-45MB of space when normally installed on a hard disk -and the packages are not cut-down in any way -except that the kernel modules could be anywhere from 1-100MB alone, depending on how much hardware you need to support.

This 40-50 MB of basic stuff is simply expected to always be there -the full versions, not cut-down busybox stuff. Starting with that will set you on the glorious path to forward-compatibility. Skimping on this base by leaving out what *you think* is not needed, or by using cut-down versions, will cause you many problems down the road.

Add in whatever tools are needed to add packages and the road ahead is free. Be sure that you create all those packages yourself so that users will never have problems with wrong library versions. Create all those packages yourself so that they can include accurate information about which other packages each package needs. Provide your users with a package manager which can take that dependency info and sanely provide the proper library versions. Once you have detailed info about which other packages a package needs, you can use the concept of meta-packages, so that a user can install, for instance, a single meta-package called jwm-desktop which will install all the things that jwm needs, like Xorg, libX11, etc., instead of having to individually choose each package.
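A meta-package can be as little as a dependency list plus a loop that feeds it to the package manager. A hypothetical sketch -- the deps-file location and the installpkg command name are assumptions, not the actual tooling described above:

Code:
#!/bin/sh
# Sketch: "install" a meta-package by walking a one-dependency-per-line list.
META="$1"                                  # e.g. jwm-desktop
DEPSFILE="/var/lib/metapkg/$META.deps"     # hypothetical location of the dependency list
while read pkg; do
    [ -n "$pkg" ] || continue
    installpkg "$pkg"                      # hypothetical package-manager command
done < "$DEPSFILE"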
wanderer

Joined: 20 Oct 2007
Posts: 230

Posted: Thu 12 Dec 2013, 16:18    Post subject:

thanks again amigo

a lot of great info

wanderer
greengeek

Joined: 20 Jul 2010
Posts: 2740
Location: New Zealand

Posted: Fri 13 Dec 2013, 14:59    Post subject:

Despite the failings of Puppy, one of its greatest assets for me personally (non-Linux background) has been the presence of its wizards, enabling me to get somewhere without in-depth knowledge.

If a "puppy" were to be built upon a core structured as amigo has suggested:

Where would the wizards fit into the picture?
Would Puppy still look and handle the same to a newbie?
Would it install by the same methods?
Would it still qualify to be called a "puppy"?