Introducing Just-Lighthouse64

A home for all kinds of Puppy related projects
Post Reply
Message
Author
User avatar
Q5sys
Posts: 1105
Joined: Thu 11 Dec 2008, 19:49
Contact:

#121 Post by Q5sys »

Is there any interest in making a dedicated update/package repo for Mariner64?

I could probably whip one up fairly easily, but I'd have a few questions about the target. Are we aiming for Slackware64 14.1 compatibility now... or are we still working with a mashup of 14.0 and 14.1... or are we looking at something more independent, as the FD7xx series is doing?

Also, if we are moving toward the 14.1 branch... should we change our numbering so that people who grab the OEM LHP release don't think our stuff is compatible out of the box with it? Most people will assume that if an SFS is marked 602, it'll work on LHP 602.

The reason I ask is that if I'm going to start working on a repo... version numbers are going to matter.

Dry Falls
Posts: 616
Joined: Tue 16 Dec 2014, 23:37
Location: Upper Columbia

#122 Post by Dry Falls »

Q5sys, I'm very interested in a package repo other than g-drive. There will be two options, as you say: 14.0 and 14.1. So far, save-files need minimal tweaking and all the packages I've compiled/recompiled work with the original lh64-602, so its distro name could remain the same - lighthouse64-602. 14.1 has so much built in, it will need its own. I was waiting for some more testing before making those kinds of changes, but the host name is already Mariner64-Chubby. If it works out, it will probably be L64-603. I'll upload it to g-drive in the next few days, but chubby will be ONLY for testing at this point.

Thanks,
df

User avatar
Q5sys
Posts: 1105
Joined: Thu 11 Dec 2008, 19:49
Contact:

#123 Post by Q5sys »

Dry Falls wrote:Q5sys, I'm very interested in a package repo other than g-drive. There will be two options, as you say: 14.0 and 14.1. So far, save-files need minimal tweaking and all the packages I've compiled/recompiled work with the original lh64-602, so its distro name could remain the same - lighthouse64-602. 14.1 has so much built in, it will need its own. I was waiting for some more testing before making those kinds of changes, but the host name is already Mariner64-Chubby. If it works out, it will probably be L64-603. I'll upload it to g-drive in the next few days, but chubby will be ONLY for testing at this point.

Thanks,
df
The reason I was talking about backward compatibility is that if something is built with the newer glibc 2.17... there's no guarantee it'll work on LHP with its ancient glibc 2.15. We might get lucky and things work for now, but there's no promise that just because Package A compiles and works, Package B, which has Package A as a dependency, will also work on the older glibc.
It's a mess waiting to happen. And if my years in Linux have taught me anything, it's that the mess you might make trying to do things right is always more manageable than the mess you will make if you take the lazy way out. :P

Toolchain incompatibility is the main reason why I've been having issues with LHP the past few months. The software I need requires a newer glibc, and I will openly admit that I hate doing the toolchain upgrade compiling merry-go-round.

Not being entirely sure what you guys' end goal is for this... I'm not sure how best to fit in.

Dry Falls
Posts: 616
Joined: Tue 16 Dec 2014, 23:37
Location: Upper Columbia

#124 Post by Dry Falls »

I see what you mean. If I'm not mistaken, lighthouse64-14.0 has both glibc 2.15 and 2.17 (the old one did not uninstall in the upgrade). I was thinking this would get one out of the incompatibility issue, but admittedly, sheepherding and midwifing are more my areas of expertise than computers. I'm not sure I even know what you mean by toolchain, but I'm aware of compiling nightmares.

14.1 (chubby) has glibc 2.17 upgraded to 2.19. I think the devx will need to be rebuilt (if it's even necessary). I know puppyfrontend has troubles with the udevd upgrade, and PPM and Gslapt are having some arguments.

My goal is to have this thing work with itself, but primarily to run on more modern boxes (younger than 3 years!) as well as older ones. Hence the kernel upgrade. Until I messed with the kernel, I couldn't even keep Lighthouse running on this box (a mere one-year-old) without some catastrophe or other.

thanks
df

By the way, the two versions I'm trying to deliver are not the same as those offered at the g-drive links in the opening post, although p64 was remastered for 14.0. Fathouse has become redundant unless you want to run Chromium or Google Chrome. The more recent browsers (the above, plus Opera, Midori, even Dillo) are the worst offenders at introducing incompatibilities that require even more system, software, and hardware upgrades. It's the sort of thing Bill Gates would approve of, so I don't.

User avatar
Q5sys
Posts: 1105
Joined: Thu 11 Dec 2008, 19:49
Contact:

#125 Post by Q5sys »

Dry Falls wrote:I see what you mean. If I'm not mistaken, lighthouse64-14.0 has both glibc 2.15 and 2.17 (the old one did not uninstall in the upgrade). I was thinking this would get one out of the incompatibility issue, but admittedly, sheepherding and midwifing are more my areas of expertise than computers. I'm not sure I even know what you mean by toolchain, but I'm aware of compiling nightmares.

14.1 (chubby) has glibc 2.17 upgraded to 2.19. I think the devx will need to be rebuilt (if it's even necessary). I know puppyfrontend has troubles with the udevd upgrade, and PPM and Gslapt are having some arguments.

My goal is to have this thing work with itself, but primarily to run on more modern boxes (younger than 3 years!) as well as older ones. Hence the kernel upgrade. Until I messed with the kernel, I couldn't even keep Lighthouse running on this box (a mere one-year-old) without some catastrophe or other.

thanks
df

By the way, the two versions I'm trying to deliver are not the same as those offered at the g-drive links in the opening post, although p64 was remastered for 14.0. Fathouse has become redundant unless you want to run Chromium or Google Chrome. The more recent browsers (the above, plus Opera, Midori, even Dillo) are the worst offenders at introducing incompatibilities that require even more system, software, and hardware upgrades. It's the sort of thing Bill Gates would approve of, so I don't.
Slackware 14.1 currently uses glibc 2.17, so for most things we are probably OK with that, unless we want to future-proof ourselves by going to 2.19 (probably not a bad idea, since it will save us a TON of headaches in the future).

I'll try to explain this as simply as I can, but if you are still confused, please feel free to ask and I'll clarify.
The core of the OS consists of the kernel, the toolchain, and the rest of the userland.

The kernel you know. The toolchain is what programs are built with and how they interface with the kernel; the usual toolchain consists of GNU Binutils, GCC, and glibc.
The toolchain is the second most important thing in the OS, after the kernel. Every program you build uses those three elements.

When building a toolchain you have a chicken-and-egg problem: you can only build binutils with gcc... but you need binutils to be able to build gcc.

So you need to start with an existing system and update to a newer toolchain. The big distros have all this scripted out, so for them it's easy, but we in Puppy land have to do it all manually. For us, updating a system would consist of the following steps.

* We start with the existing binutils, gcc, and glibc.
* We build a new gcc using the older versions.
* That gives us a new gcc, but it's linked against the older binutils (due to how it's built).
* Then you build a new binutils with the new gcc.
* Now you have a binutils based on the new gcc, but the new gcc is still based on the old binutils.
* Then you rebuild gcc (again) with the new binutils.
* Now you have a new binutils and gcc that are linked together.
* The same process is done for building a new glibc.
* Once that's all done, you rebuild your kernel with the new toolchain.
* You then go and rebuild any blobs you need that depend on the kernel and toolchain (like your GPU drivers).
* It's wise at this point to also rebuild your compiling environments as well (C++, ASM, etc.).
* NOTE: Since we are using Busybox, that'd have to get rebuilt as well.
Hence... the Merry-go-round.
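To make the ordering concrete, here's a dry-run sketch of the passes above in plain shell. The `build` function only records the sequence; in a real run it would configure, make, and install each source tree. All names here are placeholders, not actual build commands:

```shell
#!/bin/sh
# Dry-run of the two-pass toolchain bootstrap order described above.
# build() just records the order; a real run would configure/make/install.
ORDER=""
build() { ORDER="$ORDER $1"; }

build gcc-pass1   # new gcc, still linked against the old binutils
build binutils    # new binutils, built with the new gcc
build gcc-pass2   # rebuild gcc so gcc and binutils finally match
build glibc       # the same matched pair then builds the new glibc
build kernel      # rebuild the kernel against the new toolchain
build busybox     # Puppy-specific: busybox has to follow too

echo "Build order:$ORDER"
```

Blobs like GPU drivers and the extra compiling environments would slot in after the kernel step.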

It's not hard per se, it's just tedious, which is why people don't do it often. Now, if you're a wizard at cross-compiling you can sidestep the two-pass dance... but to be honest, I've never bothered to learn how.

I've never looked into the PPM code, just TazOC's update mechanism, so I can handle that side of things. If/when I get time I'll peek into the PPM; it shouldn't be overly difficult. Once I get a simple repo up... we can simply populate it and go from there. If we sync up with the Slackware 14.1 branch we can pretty much import their entire package tree. The only downside is that Puppy will become bloated really quickly as people use it.

The solution to that is to write a script that takes the shipped Slackware txz packages and splits them into a base package, man/info pages/docs, and locales - three Puppy packages per Slackware package. This would greatly cut down on space and decrease the data someone needs to download in most cases. Man/info pages and documentation are all online these days, and I'd venture a guess that most people have never bothered to read the man pages on their system anyway. So it'd be a benefit to the end user.
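A minimal sketch of the splitter's core decision - classifying each path in an unpacked package tree as base, doc, or locale. The path patterns here are my guess at sensible Slackware defaults, not a finished tool:

```shell
#!/bin/sh
# Classify one path from an unpacked Slackware .txz into one of the three
# proposed sub-packages. A real splitter would repack each class into its
# own .txz archive.
classify() {
    case "$1" in
        usr/man/*|usr/info/*|usr/doc/*|usr/share/man/*|usr/share/doc/*)
            echo doc ;;
        usr/share/locale/*)
            echo locale ;;
        *)
            echo base ;;
    esac
}

classify usr/bin/foo                              # -> base
classify usr/man/man1/foo.1.gz                    # -> doc
classify usr/share/locale/de/LC_MESSAGES/foo.mo   # -> locale
```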

Taking this one step further, we can also avoid the whole Gslapt program if we want, since we can tie it through the existing PPM or TazOC's update mechanism.
Gslapt does offer some dependency resolution, though, so it may be wise to simply fork it and tweak it for our use cases.

Taken even one step further, this would also give us the opportunity to ship 'service packs' to users via SFS deltas. So if you wanted to roll out a major update to the base SFS, you could. That way people keep their save files smaller, since they aren't loading in a few hundred MB during an update.
As for logistics, it'd go like...

* Download the update delta for LHP-603.sfs
* List the packages contained in the delta
* Compare against the packages the user has installed - output the duplicates
* Apply the delta (keeping a backup of the SFS in case of failure)
* If packages in the delta were also user-installed, uninstall them from userspace (so they're removed from the save file) while ensuring that user settings in ~/ remain
* The user reboots into the newer environment.
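The duplicate-detection step in that flow is basically a set intersection; here's a sketch using `comm` (the package lists and file names are invented for illustration):

```shell
#!/bin/sh
# Packages the user installed that are now shipped inside the delta'd
# base SFS should be uninstalled from the save file. comm(1) needs
# sorted input; -12 prints only the lines common to both lists.
printf '%s\n' gimp geany mtpaint | sort > /tmp/user-installed.list
printf '%s\n' geany seamonkey    | sort > /tmp/delta-contents.list

comm -12 /tmp/user-installed.list /tmp/delta-contents.list > /tmp/dupes.list
cat /tmp/dupes.list   # -> geany
```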

Newer toolchains could be rolled out in this way, along with entire new versions of the OS.

FWIW, this was all work I was planning to do with Slackbones, but I never had the time or help to get it done. Meeki was going to help me with the SFS delta scripting and the Slackware package splitter, and I was going to build the repo and do all the compiling.

With my 24-core system I can compile like no other. :P That was one of the reasons for building this machine. I'm more than willing to build whatever for whoever... if they provide me the source and/or write a script that'll automate it so I can click it and go.
For reference... I recompiled my kernel today in 11 minutes. :P


Edit: Oh, one other thing I forgot to mention: updating the toolchain doesn't mean that we have to update EVERY package. We can still use older versions, but build them against the newer toolchain for compatibility and system stability. We only have to update packages which are required by newer programs.

I'd venture a guess that some of the older stuff that is failing is probably due to not being rebuilt against the newer toolchain, or the new toolchain not being clean.

stemsee

#126 Post by stemsee »

24 true cores = 48 threads ??

gcmartin

#127 Post by gcmartin »

Hyperthreading: on Intel processors, each single core appears as 2. It's not a feature of any other CPU manufacturer except IBM; IBM doesn't call it Hyperthreading, but the hardware's internal functioning is the same.

What is suggested is that if the toolchain is updated, some/many/most current issues may disappear.
Q5sys also wrote:... willing to build whatever for whoever ...
Wondering if you envision a process that this thread's contributors should follow.

Hope that is helpful
Last edited by gcmartin on Tue 24 Feb 2015, 09:37, edited 1 time in total.

Gobbi
Posts: 255
Joined: Fri 09 Mar 2012, 14:01

#128 Post by Gobbi »

gcmartin wrote:Hyperthreading per core on Intel processors where a single core appears as 2. Not a feature of any other CPU manufacturer excepting IBM's.
AMD's FX CPUs do the same thing. I am using one.

@Q5sys - thanks for your extended explanation :!:

stemsee

#129 Post by stemsee »

gcmartin, my enquiry was not aimed at you. How can you answer for someone else, unless you either have the same machine or the specs from his machine? I basically know what threads are and how they relate to hardware. I also fully understand what Q5sys explains about the toolchain. Do you not feel yourself frequently overstepping the mark?

To clarify, my enquiry is: does your machine have 24 true physical cores, or 12? Not important, just curious - much as my son is curious to see cars with more exhaust pipes.

User avatar
Q5sys
Posts: 1105
Joined: Thu 11 Dec 2008, 19:49
Contact:

#130 Post by Q5sys »

stemsee wrote:24 true cores = 48 threads ??
I have dual hexa-core Xeons. So 2 CPUs with 6 cores each = 12 physical cores; with HT enabled, that shows up as 24 logical cores.

stemsee

#131 Post by stemsee »

How then with so much power and speed, could it be 'tedious' to compile the toolchain? It would be finished just after beginning!

Dry Falls
Posts: 616
Joined: Tue 16 Dec 2014, 23:37
Location: Upper Columbia

tedious?

#132 Post by Dry Falls »

stemsee wrote:How then with so much power and speed, could it be 'tedious' to compile the toolchain? It would be finished just after beginning!
Actually, it was such a good instructional by Q5sys (thanks, btw) that it only looked scary. Lighthouse did all the work. With devx, I extracted the txz files for gcc (three files), binutils, and udisks -- 'extracttxz' does all the processing; deleted the man/info/locale etc.; statically compiled gvs1.12.3.sfs to upgrade to 1.16.3 and renamed it; mksquashed the new directories; loaded, made a save file, and remastered (gvs is still an addon).

Changed distrospecs to 603 in the initrd and rebooted; Lighthouse automatically performed a version upgrade on reboot. Changed back to 602 (base file & distrospecs), and the thing with a new "toolchain" is working better than ever. A few tweaks and this is probably all I'll upload for testing - folks can make their own "chubby". PPM has both 14.0 and 14.1. Gslapt doesn't interfere. Drive icons change correctly "on the fly" when mounting and unmounting, although that process is a bit slower.

Is there a non-tedious way to statically or dynamically merge directories for a documentation SFS, including locales? I've got a start by hand, but that last one is a bugger. It will be a real project to rebuild the devx, but it will be considerably smaller. The new Lighthouse base file is now 260 MB.
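One non-tedious approach to the merging question is to let `cp -a` do the merging into a staging directory, then squash that. A self-contained sketch with made-up package trees (the mksquashfs step is left commented since it needs squashfs-tools installed):

```shell
#!/bin/sh
# Merge the doc/man/info/locale trees from several unpacked packages into
# one staging directory, ready to be squashed into a documentation SFS.
STAGE=/tmp/doc-sfs-root
SRC=/tmp/unpacked-pkgs
rm -rf "$STAGE" "$SRC"

# Fake package trees, just for demonstration:
mkdir -p "$SRC/pkgA/usr/doc/a" "$SRC/pkgB/usr/share/locale/de/LC_MESSAGES"
touch "$SRC/pkgA/usr/doc/a/README" "$SRC/pkgB/usr/share/locale/de/LC_MESSAGES/b.mo"

for pkg in "$SRC"/*; do
    for d in usr/doc usr/man usr/info usr/share/locale; do
        if [ -d "$pkg/$d" ]; then
            mkdir -p "$STAGE/$d"
            cp -a "$pkg/$d/." "$STAGE/$d/"   # cp -a merges trees, preserving perms/links
        fi
    done
done

find "$STAGE" -type f
# Then: mksquashfs "$STAGE" lhp_docs.sfs -comp xz   (needs squashfs-tools)
```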

df

User avatar
Q5sys
Posts: 1105
Joined: Thu 11 Dec 2008, 19:49
Contact:

#133 Post by Q5sys »

gcmartin wrote:
Q5sys also wrote:... willing to build whatever for whoever ...
Wondering if you envision a process that this thread's contributors should follow.
finish reading the sentence I wrote...
Q5sys wrote:...if they provide me the source and/or they write a script that'll automate it so I can click it and go.
process =

Step 1. Someone wants something compiled
Step 2. Someone gives me the source code or a script that automates the download and compiling (Like the kernel-kit)
Step 3. I compile it.
Step 4. I give them the compiled package.

Did I really need to write that out?

gcmartin

#134 Post by gcmartin »

Testy are we???
  • How are they supposed to do this so that it is acceptable to you?
  • Where is it supposed to be delivered so that you know what the need is?
  • What is the timetable?
  • How will it be sent back?
  • What specifics will they be required to provide so that the whole operation is smooth?
  • Why does everything with you need to be testy?
This is just a few of the business questions most professionals ask. I remember you sharing what you do professionally at one point.

This was NOT meant to be a "testy" question. Hope you are clear on that.

Further, let's stop this behavior now and NOT pollute the thread as we try to move this testament to TazOC's work forward.

Anything I say or do from this point forward is to be understood as NOT aimed at you, if that will make you happy. Either way, send me a PM and allow this thread to move forward!

User avatar
Q5sys
Posts: 1105
Joined: Thu 11 Dec 2008, 19:49
Contact:

#135 Post by Q5sys »

I was able to do some digging today and found some things in the current Mariner64 build that might explain some of the oddities that have come up.

Let's first look at my recently built kernel.
Image
Now let's look at the gcc version.
Image
So far so good.
Now let's look at our main linker.
Image
That's not good: the linker was built with another version of gcc. Let's look at another program.
Image
So one of our base userland utilities was actually compiled with an older glibc version.

I wasn't able to dig into busybox to see what it was built with.
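For anyone who wants to run the same checks on their own build, two quick commands along these lines (output varies per system; /bin/ls is just an example target, and the glibc-string check assumes a glibc-based system):

```shell
#!/bin/sh
# Which gcc built a binary? The ELF .comment section usually records it.
readelf -p .comment /bin/ls 2>/dev/null || true

# Which glibc symbol versions does it require? On a glibc system the
# versioned symbols are embedded as plain strings in the binary.
grep -ao 'GLIBC_2\.[0-9.]*' /bin/ls | sort -u
```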

User avatar
Q5sys
Posts: 1105
Joined: Thu 11 Dec 2008, 19:49
Contact:

#136 Post by Q5sys »

gcmartin wrote:
  • How are they supposed to do this so that it is acceptable to you?
  • Where is it supposed to be delivered so that you know what the need is?
  • What is the timetable?
  • How will it be sent back?
  • What specifics will they be required to provide so that the whole operation is smooth?
Uh... they can send me a PM and we can talk about the details. Each person is going to have a different scenario; trying to list them all out in a forum post would be rather silly.
Pretty much everything you asked, I would discuss with each person on a case-by-case basis. All the details would obviously have to be worked out, and since this is a forum with a Private Message function... that seems like the obvious place for someone to start. Or, if it has to do with something for Mariner64, they can post a message in the thread.
gcmartin wrote:
  • why does everything with you need be testy?
It doesn't, but you're asking what seems to be an entirely unneeded question. If people want me to help them... they have to contact me. Seeing as this is simply done through a PM... I see no reason to treat people like children and behave as if they aren't intelligent enough to figure out how to contact me.

This forum is full of very intelligent people. To believe that anyone who has managed to:
A) register here,
B) install Linux,
C) gain even the slightest understanding of what compiling source code means...
is unable to figure out how to contact me is, IMHO, quite insulting.

gcmartin

#137 Post by gcmartin »

As I asked in the prior thread, let's move on. Put that testy behavior behind.

Your explanation can be interpreted to mean "you want it done via PM". Now that wasn't too insulting, now, was it?

Let's move on.

User avatar
Q5sys
Posts: 1105
Joined: Thu 11 Dec 2008, 19:49
Contact:

#138 Post by Q5sys »

gcmartin wrote:Put that testy behavior behind.
Maybe if you stopped trying to give orders and stopped treating people like idiots, people wouldn't take your words to be so rude. You do not have the right to tell me how to behave, and yet here you are... doing just that.
gcmartin wrote:Your explanation can be interpreted to mean "you want it done via PM". Now that wasn't too insulting, now, was it?
To be blunt... I think every person on this forum and on the internet can interpret what I mean on their own. I believe they are smart enough to do so without needing someone else to 'interpret' my words for them. You did the same thing to stemsee earlier in this thread.
gcmartin wrote:What is suggested is that if the toolchain is updated, some/many/most current issues may disappear.
stemsee wrote:I also fully understand what Q5sys explains about the toolchain. Do you not feel yourself frequently overstepping the mark?
Is there a reason you feel you have to 'explain' what others say?

gcmartin wrote:Let's move on.
If you don't want me to reply to a question... then don't ask me one.
If you don't want me to comment... then perhaps you shouldn't direct a comment towards me or try to 'interpret' my words for others.

Shall I give you a reminder... http://q5sys.info/gc-pm-2013-02-08-4.png

As for everyone else, can we return to the technical discussion?

gcmartin

#139 Post by gcmartin »

Q5sys, now that you've got that out of your system... let's move on.

User avatar
Q5sys
Posts: 1105
Joined: Thu 11 Dec 2008, 19:49
Contact:

#140 Post by Q5sys »

Dry Falls, PM coming your way.

Post Reply