Follow the /opt standard for added software

What features/apps/bugfixes needed in a future Puppy
Bert
Posts: 1103
Joined: Fri 30 Jun 2006, 20:09

#21 Post by Bert »

jemimah wrote: Compiling everything as a portable app is convenient for the user, but not small.
The question I think is: who cares about size nowadays, when the apps live outside of the save file?

How much longer will we try to cram modern apps into an archaic and questionable file system?

In the end only the user's convenience will count. Not some theoretical or historically grown concept.

More importantly, how will Linux ever become more mainstream, if it cannot come up with some sort of apps standard that works for everyone, everywhere? I've seen huge wastes of time and energy in the many linux forums I follow, because of old and stubborn patterns obstructing progress.

I don't pretend to know the answers; these are just my observations after years of patient reading.
[url=http://pupsearch.weebly.com/][img]http://pupsearch.weebly.com/uploads/7/4/6/4/7464374/125791.gif[/img][/url]
[url=https://startpage.com/do/search?q=host%3Awww.murga-linux.com%2F][img]http://i.imgur.com/XJ9Tqc7.png[/img][/url]

Flash
Official Dog Handler
Posts: 13071
Joined: Wed 04 May 2005, 16:04
Location: Arizona USA

#22 Post by Flash »

I agree with Bert. I run Puppy from a multisession DVD in a computer without a hard disk drive but lots of RAM. I keep Puppy small so it boots quickly from the DVD. Puppy comes with a good selection of application programs, so I don't need to install many more. Small apps that I find very useful (such as tree) I install and save on the Puppy DVD, but large programs, or programs I don't use very often, I save as .pets on a separate flash drive and reinstall them each time I need to use them. Since I don't usually save to the DVD when I shut down, these do not cause bloat on the DVD.

So anyway, applications-in-a-directory (ROX-apps?) that are compiled with all their dependencies are far less of a problem for me than trying to figure out how to make a program work that is scattered all over the operating system and leaves its dependencies as exercises for the reader. :x

scsijon
Posts: 1596
Joined: Thu 24 May 2007, 03:59
Location: the australian mallee
Contact:

#23 Post by scsijon »

Personally,

For anything that is source-independent and small, I follow the normal structure; however, for anything large or source-dependent I've always used /opt.

That means if it 'craps out' it's easier to clean up.

Like with my Qt stuff: the lib set is not small, Creator and the developer tools are all big, and the apps are small but Qt-dependent, so they will be in /opt;

but games belong in /var/games and that's where I put them;

the other common base (generic) 'stuff' will go as normal into /usr/local or /usr/share, depending on its structure and size;

also, /opt can be a link to another partition, as mine is; that means I can move and resize it as I need to without affecting my base, or, as I do, have two bases using a common /opt, which is easier to work, test and play with.
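The arrangement described above can be sketched in a few commands (the device, mount point and partition names are hypothetical):

```shell
# /opt kept on its own partition, symlinked into place
mkdir -p /mnt/optpart
mount /dev/sda3 /mnt/optpart        # hypothetical partition holding the apps
ln -sfn /mnt/optpart /opt           # /opt now points at the partition
# A second installed system can make the same symlink and share one /opt,
# and the partition can be moved or resized without touching either base.
```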

scsijon
edit: just realized that I left out copying the last line across "links to the apps in all cases added to /root/my-applications/bin when needed", sorry about that.
Last edited by scsijon on Mon 19 Dec 2011, 00:24, edited 1 time in total.

technosaurus
Posts: 4853
Joined: Mon 19 May 2008, 01:24
Location: Blue Springs, MO
Contact:

#24 Post by technosaurus »

Before anyone starts doing this insanity, echo $PATH and $LD_LIBRARY_PATH

If the place you are putting the binaries/libraries is outside of those... there are several things that need to be considered. I'm not going to get into the details, but trust me: it's more of a hassle than it's worth. And IMHO it's much easier/better to use other methods during build.
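A minimal sketch of the check suggested above (the /opt/myapp path is hypothetical):

```shell
# Will binaries installed under /opt/myapp/bin ever be found?
dir=/opt/myapp/bin
case ":$PATH:" in
  *":$dir:"*) echo "on PATH" ;;
  *)          echo "NOT on PATH - the shell will never find binaries there" ;;
esac
# Library search: LD_LIBRARY_PATH is normally unset; the linker uses its
# built-in directories plus whatever /etc/ld.so.conf lists.
echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH:-<unset>}"
```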
Check out my [url=https://github.com/technosaurus]github repositories[/url]. I may eventually get around to updating my [url=http://bashismal.blogspot.com]blogspot[/url].

Plume
Posts: 34
Joined: Mon 12 May 2008, 19:05

#25 Post by Plume »

My /opt is a symlink to a separate partition which is shared by all the Linuxes installed on my PC. LibreOffice and Opera, for instance, live there. On another PC I have KDE from Slax (hard-installed) in /opt, and that KDE can be used by my Vector Linux as well.

JegasLLC
Posts: 4
Joined: Mon 19 Dec 2011, 18:13

Why the opt standard?

#26 Post by JegasLLC »

/opt has been around forever

it's less wordy (fewer keystrokes) than /home/[username]/my applications/

Whoever started putting spaces in directory paths? It's Micro-Lame in my opinion - sorry, short tangent.

I find cd /opt to get to my master list of OPTIONAL software quite brief, intuitive and fast.

I'm glad the majority are following suit with the groundwork of the best Operating System that was ever available... and what the entire Linux movement is based on.

I will say, along with one of the early posters in this thread, that I find /usr/local and /usr ambiguous and of little value as a standard. /opt makes it really clean to remove all your optional software in one pass (plus the /etc/opt/ data, if present). Even though /usr and /usr/local have always been around, I think two tiers makes things complicated. If /usr alone were the non-OS-specific/user area, I'd probably not even give it a second thought.
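The one-pass cleanup mentioned above is literally a couple of commands (destructive, shown against a hypothetical layout):

```shell
# Remove all optional software in one pass...
rm -rf /opt/*
# ...plus its host-specific configuration, if any packages used /etc/opt
rm -rf /etc/opt/*
```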

Go OPT - It's not an Option (hehe) 8)

sunburnt
Posts: 5090
Joined: Wed 08 Jun 2005, 23:11
Location: Arizona, U.S.A.

#27 Post by sunburnt »

technosaurus; If the libs. are seldom shared with other apps, then statically compile them into the app. If they`re not shared then why share them?
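A quick sketch of what sunburnt suggests, using a throwaway hello.c (assumes a toolchain with a static libc available):

```shell
# Fold the libraries (here just libc) into the executable itself
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello"); return 0; }
EOF
gcc -static -o hello hello.c   # no shared-library lookup needed at run time
./hello                        # prints: hello
file hello                     # reports "statically linked"
```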

/opt is just another place to put stuff, as if Unix didn`t already have enough.

Consolidating all of the Unix dirs. is like losing all of its other legacy crap... GOOD !
Legacy drivers, boot methods, dependence on the initrd.gz file. And probably most of all using a legacy loose file setup for read-only files.

amigo
Posts: 2629
Joined: Mon 02 Apr 2007, 06:52

#28 Post by amigo »

"Consolidating all of the Unix dirs" *requires* the use of an initrd. If it weren't for all the 'legacy crap' no distro would work at all. Only because some basic assumptions are made, can things work together reliably. Imagine if every application had to search your entire system to look *everywhere* for the resources it needed.

This brings up the idea of adding a unique PATH and/or LD_LIBRARY_PATH element for each weirdly-located program or library. What that does is add more work for the system every time it looks for these things. Most systems set up a PATH with 4-5 elements for normal users, LD_LIBRARY_PATH is normally unused, and ld-linux.so.2 (the linker which actually runs your programs) looks in about 4-5 locations for libraries.

So, when you 'run' a program -say, from a terminal window- you tell the shell "run wget". The shell searches each element of the PATH until it finds wget. Then the shell tells the *kernel* to run the program. But the kernel doesn't do this itself. It tells ld-linux.so.2 to run the program (BTW, the kernel fully *expects* to find ld-linux.so in a *known* location; you can change that location and name, of course, by patching the kernel). ld-linux begins reading the ELF header of the program to find out which libraries the thing is gonna need, if any (it still has to look to see). If it needs libs, then ld-linux first loads them into RAM and dynamically links them to specific locations, which it passes to the executable when it finally runs it. So, adding elements to PATH, or LD_LIBRARY_PATH, or /etc/ld.so.conf only lengthens every one of those searches -did I mention that the linker of course will look specifically for /etc/ld.so.conf and not over the whole system?
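The lookup amigo walks through can be watched with standard tools; a sketch (wget is just an example binary, and paths vary by distro):

```shell
# 1. The shell's PATH search: which file will "run wget" actually find?
command -v wget
# 2. The libraries the ELF header requests (what ld-linux must locate):
ldd "$(command -v wget)"
# 3. The linker's extra search directories and its cached library list:
cat /etc/ld.so.conf
ldconfig -p | head
```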

Many of the old conventions' original motivations are no longer important for most users. But meanwhile, innovative folks have found imaginative ways to put these assumptions/conventions to good use. The whole idea of a LiveCD relies on several of these 'crap' ideas which someone made real -the first LiveCD was out in 1993, IIRC.

I went through a phase where I also thought that the Linux VFS should be renamed and reorganized. I even built an example project which used a naming scheme like gobo does (long before gobo existed) -but using truly-modified paths. That approach is very high-maintenance because it requires patching nearly every program and lib you want to use. I came to my senses... Apple OSX took the more reasonable approach -the kernel's VFS is unchanged; only the virtual representation of it that we see changes. Even from the CLI, the file-system is highly virtualized. I mean, we aren't seeing raw disk block numbers there -even those are a virtualization! So, all you really need to make every 'real' underlying structure disappear is a shell and/or GUI which changes the way you 'see' the file system. Simple!

Somehow I find it strange that the same people who promote a dynamic and modular system might want to shun the great existing modularity and flexibility which is already available. Running from RAM is only possible because someone had a great idea a long time ago. The concept of the initrd *followed* the ramdisk concept.

UNIX gave us lots of compartments as standard -each with its own well-thought-out motivation and implementation. They gave us enough that we rarely need to do any inventing -and if we feel we must, they gave us /opt for that. If it needs to look or feel different, you can always change each little aspect in the way that you like -because they thought of that too!

sunburnt
Posts: 5090
Joined: Wed 08 Jun 2005, 23:11
Location: Arizona, U.S.A.

#29 Post by sunburnt »

Yes, we do not want to increase the length of the paths too much. And I don`t suggest removing "needed code" from the kernel ( of course ).
The initrd makes a boot time pre-run environment allowing for many different O.S. setups. How about booting from a Squash file instead?
The kernel already has Squashfs included, it`s just another type of image file. Boot directly to Puppy`s main SFS file ( requires a kernel mod. ).

Unix and Linux work very well the way they were designed. That doesn`t mean they can`t be improved. Simplifying is almost always a good idea.
Boot devices and storage, network and I/O connections, attached user interface devices all have new additions and also the loss of legacy ones.

Choice pup added lib. paths for each Squash app. file it "linked". I told him that this was not necessary, that there were much better ways to do it...

No intent on shunning any of the Linux setup, but deprecating legacy items in favor of better methods is the standard manner of change in Linux.

DocSalvage
Posts: 11
Joined: Sat 30 Jun 2012, 18:59
Location: Tallahassee, FL, USA
Contact:

#30 Post by DocSalvage »

I agree amigo.

Just like a society, any system that works well enough to last as long as *nix is going to carry a lot of "cruft" and remnants of past attempts at standardization. Doesn't mean we give up. Just means we try to understand the history of what was done and why, so we can improve on it instead of making the same mistakes. It's called "progress."

re: /opt
Neither Ubuntu nor Puppy puts anything in /opt as distributed, so I've taken to using it as an "optional filesystem structure" to make administration easier. In keeping with the spirit of several posts like those amigo points to, I use /opt to bring all the disparate pieces of a given software package together by way of symlinks. So /opt looks something like...

Code:

    /opt/gdm
    /opt/ntop
    /opt/picasa
    /opt/rsync
    /opt/samba
    /opt/ssh
    /opt/synergy
    /opt/X11
In each subdirectory will be symlinks to the configuration files in /etc, the binaries (wherever they are), log files, etc. This gives me a 1-stop place for managing daemons and other packages. Shell scripts in particular are simplified.
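One of those per-package directories could be assembled like this (the package name and target paths are illustrative, not a fixed layout):

```shell
# One-stop directory for the ssh daemon: config, binary and log in one place
pkg=/opt/ssh
mkdir -p "$pkg"
ln -sfn /etc/ssh          "$pkg/config"   # configuration files
ln -sfn /usr/sbin/sshd    "$pkg/daemon"   # the binary, wherever it lives
ln -sfn /var/log/auth.log "$pkg/log"      # its log file
ls -l "$pkg"                              # the whole package at a glance
```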

re: Consolidating large directories like /bin and /usr/bin
Though command lines are second nature to most of us, when you just want to get something done without having to remember syntax details, nothing beats a GUI. I've found most file-manager GUIs choke on directories with hundreds of files. This is admittedly an implementation issue, best solved by pagination in the GUI, but the fact remains that most of them don't paginate, and we don't have time to rewrite every tool we use.

File-manager authors need to recognize the move towards consolidation and make their tools handle it better, using background processes and the like so control can be returned to the GUI before every file is read. In the meantime, consolidation proponents need to keep these real-world limitations in mind when dealing with directories that are already massive.

re: Read-Only vs. Read-write filesystems
I very much like the layered filesystem approach of unionfs/aufs that Puppy has adopted. It's one of the main reasons I'm switching. Adding this 3rd dimension to filesystem structure goes a long way in simplifying the locating of resources and thus improving reliability of installs and operation.
[i][color=blue]DocSalvager (a.k.a. DocSalvage)[/color][/i]
[url]http://www.docsalvage.info[/url], [url]http://www.softwarerevisions.net[/url]

technosaurus
Posts: 4853
Joined: Mon 19 May 2008, 01:24
Location: Blue Springs, MO
Contact:

#31 Post by technosaurus »

DocSalvage wrote:This admittedly is an implementation issue, best solved by pagination in the GUI, but the fact remains that most of them don't and we don't have time to rewrite every tool we use.
GTK needs to show the file widgets some love anyway. Aside from file managers themselves (and a few projects that got fed up waiting for GTK to fix their crap - mtpaint for one), most programs wouldn't (or shouldn't) need to change anything.

Your /opt approach would be quite simple in BSD - they can bind a directory onto another (similar to union/aufs) - yet Linux still has no unioning support in mainline, while nearly every single distro maintains some type of unioning patches.

<sidebar>
The LTS kernels were a good step for stability, but backports are essentially forbidden, so supporting new hardware is difficult at best. News flash: when new products come out, they sometimes contain new hardware.

With a simplified file system, boot process, graphics subsystem and stable API, it would make great sense for large software to be made for Linux only - no Mac or Windows port required, since it could be sandboxed in a virtual machine, or run directly by booting straight into the software the way your old PS3/2/1/Sega/Nintendo/Atari/Colecovision/Pong boots right up to a game. They've been trying to make PCs into game consoles for years, but this is the obvious connection that no one is making.
</sidebar>

sunburnt
Posts: 5090
Joined: Wed 08 Jun 2005, 23:11
Location: Arizona, U.S.A.

#32 Post by sunburnt »

technosaurus; Yeah, GTK should be abandoned, except there`s not much else...

DocSalvage; Actually the union is just an unnecessary complexity.
DocSalvage wrote: Adding this 3rd dimension to filesystem structure goes a long way in simplifying the locating of resources and thus improving reliability of installs and operation.
A union causes more file system problems than it solves.
It`s a patch to blend Squash files ( r/o ) with loose files ( r/w ).
So separate the r/o and the r/w F.S. parts and you don`t need the union!

# Perhaps unions aren`t in the kernel code because they`ll be phased out!
Evidently the kernel folks have something of the same thoughts I do...
