pUPnGO - 6Mb ISO - Basic Building Block Puplet

A home for all kinds of Puppy related projects
Message
Author
amigo
Posts: 2629
Joined: Mon 02 Apr 2007, 06:52

#811 Post by amigo »

Yes, be very careful with e2defrag as it is very old. Don't use it on anything you don't have a copy of elsewhere.

Somewhere here I have a gtk1 app which shows the fragmentation status of a drive or file - but I couldn't find it right away - something with 'dav' in the name, IIRC. Ahh, here it is (it's not on my site); it's called davl:
http://davl.sourceforge.net/

goingnuts
Posts: 932
Joined: Sun 07 Dec 2008, 13:33
Contact:

#812 Post by goingnuts »

e2defrag is maintained - I can't say it's safe - but it seems quite up to date.
Thanks for the gdavl link - cool!
Attachments
snap0005.png
(114.37 KiB) Downloaded 878 times

amigo
Posts: 2629
Joined: Mon 02 Apr 2007, 06:52

#813 Post by amigo »

He, he, that davl really reminds me of the Windows tool for the same job. Ummm, are you gonna patch it so that it uses e2defrag to actually do a defrag instead of just showing fragmentation??? Yeah, yeah, yeah?? Just kidding, but I think I read you okay - most of the time.

Nice find there about e2defrag. From the website:
This poor ancient package used to be known as the defrag package but was removed from Debian and hence Ubuntu due to it not having had a maintainer in many years and suffering from bit rot. I am rescuing it from the bit bucket.
Very nice indeed.
There is also e4defrag included with e2fsprogs, but nothing for ext3. I still use ext3 for my daily use as ext4 still 'hits a bump' every now and then.

Ibidem
Posts: 549
Joined: Wed 26 May 2010, 03:31
Location: State of Jefferson

#814 Post by Ibidem »

goingnuts wrote:From dmesg:

Code: Select all

kjournald starting.  Commit interval 5 seconds
EXT3 FS on sdc4, internal journal
EXT3-fs: mounted filesystem with ordered data mode.

Code: Select all

# find /mnt/sdc4 -iname '*.jpg' -o -iname '*.jpeg' | wc -l
1204
and foremost running on unmounted sdc4:

Code: Select all

Foremost started at Thu Dec 12 05:35:32 2013
Invocation: foremost -v -T -t jpg /dev/sdc4 
...
1512 FILES EXTRACTED
	
jpg:= 1512
Drive holds mostly source packages, unpacked/packed. Quite a lot of deletion and unpacking/compiling/packaging is done on an everyday basis...

The jpgs found are lots of small icon images and background images, where some seem to come from webpages or manpages...
That's about 300 (or 20%) that aren't from jpg files.
data=ordered appears to not result in the file contents getting saved in the journal...as far as I can tell.
I'd say that's reasonable....

goingnuts
Posts: 932
Joined: Sun 07 Dec 2008, 13:33
Contact:

#815 Post by goingnuts »

amigo: :)
Ibidem: I don't catch your point: "find" finds files not deleted, foremost finds deleted files...

To speed up testing and avoid using a drive with precious content, I created a smaller (6GB) partition by resizing two NTFS partitions and then creating the new one in between with GParted.

To start out it's ext2.

Code: Select all

# find /mnt/sdc9 -iname '*.jpg' -o -iname '*.jpeg' | wc -l
0
Then I run foremost on the unmounted partition

Code: Select all

# foremost -v -T -w -t jpg /dev/sdc9
...
526 FILES EXTRACTED
        
jpg:= 526
So jpg leftovers from the NTFS days can be found...
Now I try to wipe with

Code: Select all

dd if=/dev/zero of=zero.small.file bs=1024 count=102400
dd if=/dev/zero of=zero.file bs=1024
df reports

Code: Select all

/dev/sdc9              6048132   6048132         0 100% /mnt/sdc9
I leave the files in place and umount the partition. So now I expect foremost to find nothing there (drive is full):

Code: Select all

0 FILES EXTRACTED
:)
Then I delete the two files created with dd, run foremost again, and

Code: Select all

0 FILES EXTRACTED
Good! This is the expected behavior - now I need to verify that if the drive is fragmented the above won't wipe free space...later today...

Ibidem
Posts: 549
Joined: Wed 26 May 2010, 03:31
Location: State of Jefferson

#816 Post by Ibidem »

goingnuts wrote:amigo: :)
Ibidem: I don't catch your point: "find" finds files not deleted, foremost finds deleted files...
From what I understand, foremost finds all files having that signature, whether deleted or not.

BTW, there are a few files that contain embedded jpegs...some mp3 files, for example. But if it's mainly source code, that's irrelevant.
[goingnuts' full test procedure, quoted from the post above - snipped]
My suspicion had been that it was something to do with the journalling. But it looks like it probably isn't, so I don't have any ideas.
My suspicion had been that it was something to do with the journalling. But it looks like it probably isn't, so I don't have any ideas.
Last edited by Ibidem on Fri 13 Dec 2013, 19:39, edited 1 time in total.

goingnuts
Posts: 932
Joined: Sun 07 Dec 2008, 13:33
Contact:

#817 Post by goingnuts »

Ibidem: thanks for the explanation. I might have had a few mp3 files there as well - but the images found did not look like mp3 artwork.

The continuation of the journey comes here:

Now I fill partition with unpacked/extracted source files and large amounts of videos.

fsck reports 145 non-contiguous inodes (0.2%) - and foremost says:

Code: Select all

19 FILES EXTRACTED
        
jpg:= 19
??? I recognize a scrambled video cover among them - so my best guess is that they entered via the copying of files to the partition...

Now I delete files until the drive is still approx. 90% full.
fsck reports the same 0.2% fragmentation - and foremost finds the same things as before...
I fill the partition with dd, delete the created dd files - and foremost finds - the same as before.

OK - time to exercise e2defrag. It goes without problems. fsck reports 18 non-contiguous inodes (0.0%) and gdavl reports 23 fragmented files.
Foremost finds - the same 19 jpg files. I do the dd filling again - hoping - but no luck. The files that foremost finds are resistant.

One last trial: I delete things down to a 55% filled partition, run e2defrag, fill the partition with dd and run foremost - now only 18 files are found - only 1 is gone.

Well - one more: delete everything - fill with dd - foremost finds nothing now. So whenever creating a new partition it might be good practice to do the dd thing before starting to use it - just to get rid of all the old stuff.
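The dd routine used in these tests can be wrapped in a small script. A rough sketch (wipe_free is a made-up name; the optional cap argument is only there for dry runs, and sync calls are added so the zeros actually hit the disk before the fillers are deleted):

```shell
#!/bin/sh
# wipe_free MOUNTPOINT [CAP_KB] [SMALL_KB]
# Zero-fill the free space of a mounted filesystem with two filler
# files, then delete them. Without CAP_KB the second dd simply runs
# until the disk is full. Sketch only - don't point it at anything
# you can't afford to lose.
wipe_free() {
    mnt=$1
    cap=${2:-}            # optional KB cap for the big pass (dry runs)
    small=${3:-102400}    # size of the first, small filler file in KB
    dd if=/dev/zero of="$mnt/zero.small.file" bs=1024 count="$small" 2>/dev/null
    dd if=/dev/zero of="$mnt/zero.file" bs=1024 ${cap:+count=$cap} 2>/dev/null
    sync                  # flush the zeros to disk before deleting
    rm -f "$mnt/zero.small.file" "$mnt/zero.file"
    sync
}
```

Run `wipe_free /mnt/sdc9`, unmount, and foremost should then come up empty, matching the results above.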

Now that was a lot of testing and unfortunately with a poor outcome concerning a simple privacy app. But it seems that e2defrag works, and gdavl is a nice tool too. And foremost finds things quite well - so that's a good tool for undeleting files...

User avatar
technosaurus
Posts: 4853
Joined: Mon 19 May 2008, 01:24
Location: Blue Springs, MO
Contact:

#818 Post by technosaurus »

In case anyone is building jwm with translucency support...
I tracked down steam's patched xcompmgr:
http://repo.steampowered.com/steamos/po ... .14.tar.gz

it may also need:
http://repo.steampowered.com/steamos/po ... 1.8.tar.gz

NOTE: They left a bunch of debugging code lying around.
Check out my [url=https://github.com/technosaurus]github repositories[/url]. I may eventually get around to updating my [url=http://bashismal.blogspot.com]blogspot[/url].

goingnuts
Posts: 932
Joined: Sun 07 Dec 2008, 13:33
Contact:

#819 Post by goingnuts »

technosaurus: Thanks for the links!

I realize that no pUPnGO2013 is going to be published - maybe a 2014...working on it.

Having fun with the basic core at the moment - converting various original Puppies to squashfs 3.1 format and loading them after boot - that makes kernel switching easy - if you like 2.6.25.16.
Attachments
snap0011.png
wary on top of pupngo
(91.34 KiB) Downloaded 579 times

User avatar
technosaurus
Posts: 4853
Joined: Mon 19 May 2008, 01:24
Location: Blue Springs, MO
Contact:

#820 Post by technosaurus »

I've been messing with reimplementing hotplug here if anyone is interested in playing with it. Currently it does about the same as mdev, but since it is written in shell, it can easily be modified.
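For anyone curious what a shell hotplug handler looks like structurally, here is a minimal sketch (this is not technosaurus's actual script; the helper names are made up). The kernel hands the event over in environment variables such as ACTION and DEVPATH, and a device's major:minor pair is readable from /sys$DEVPATH/dev:

```shell
#!/bin/sh
# Minimal mdev-style hotplug sketch - illustration only.
# The kernel invokes the helper named in /proc/sys/kernel/hotplug with
# the event described in env vars: ACTION (add/remove), DEVPATH, ...

# split_dev "MAJOR:MINOR" -> "MAJOR MINOR"
split_dev() {
    IFS=: read -r major minor <<EOF
$1
EOF
    echo "$major $minor"
}

handle_event() {
    name=${DEVPATH##*/}                     # e.g. sdc4
    case "$ACTION" in
        add)
            [ -r "/sys$DEVPATH/dev" ] || return 0
            set -- $(split_dev "$(cat "/sys$DEVPATH/dev")")
            # assumes a char device; a real handler would check SUBSYSTEM
            mknod "/dev/$name" c "$1" "$2"
            ;;
        remove)
            rm -f "/dev/$name"
            ;;
    esac
}
handle_event
```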

Re: kernel... It would be nice to have some of the new syscalls (rfkill, finit_module) backported to 2.6.32 (the oldest maintained LTS kernel) and use that for a 586+mmx kernel. I suggest this because anything less doesn't run many things efficiently (486 only got to ~133MHz, with a few exceptions), and there are still mainstream CPUs that are not 686 (technically they are, but they are missing CMOV), though AFAIK they all have mmx (but not necessarily 3dnow and others).
We should use 3.10 (the newest LTS) for other architectures (basically what musl-libc and Aboriginal Linux support)... for non-x86 architectures it is essential to use a newer kernel, since much of the work on these has come out of Android and has accelerated over the last few years.

goingnuts
Posts: 932
Joined: Sun 07 Dec 2008, 13:33
Contact:

#821 Post by goingnuts »

Might be a stupid question - if such exist... but I have been wondering why it is so important for Puppy to be built for anything other than i486? Is the speed/feature gain from i586/i686/64-bit really noticeable?

I run an AMD Athlon 64 3000+ with 1GB RAM and have no issues with speed/features - even though I am stuck on P412. I even build all stuff for pupngo with -mtune=i386...

starhawk
Posts: 4906
Joined: Mon 22 Nov 2010, 06:04
Location: Everybody knows this is nowhere...

#822 Post by starhawk »

techno, goingnuts, et al., pardon the n00b question, but why can't a new kernel be compiled for i486? Are the 3 series kernels totally incompatible with that processor?

(I don't know a thing about kernel level stuff, so please be gentle :shock: )

User avatar
technosaurus
Posts: 4853
Joined: Mon 19 May 2008, 01:24
Location: Blue Springs, MO
Contact:

#823 Post by technosaurus »

starhawk wrote:techno, goingnuts, et al., pardon the n00b question, but why can't a new kernel be compiled for i486? Are the 3 series kernels totally incompatible with that processor?

(I don't know a thing about kernel level stuff, so please be gentle :shock: )
486 is possible but pointless; even dillo-0.8x runs slowly on these systems, which are now 20+ yrs old and likely close to component failure. AFAIK there is only 1 manufacturer still making an i486 (the Vortex86SX) and the price is not better than other things on the market (they even make a 586 with mmx). If some Chinese firm starts mass-producing a high-speed, SMP 486 in bulk due to patent expiration this logic would change, but for now the best option is i586+mmx, since the rest of the low-end CPUs (via, vortex*MX, cyrix, geode, ...) in current production and the mainstream CPUs produced over the last 20 years (PentiumMMX+, amd-k7+, ...) will perform best with this configuration.

amigo
Posts: 2629
Joined: Mon 02 Apr 2007, 06:52

#824 Post by amigo »

'-mtune=i386' is superfluous, because glibc hasn't supported i386 for about 10 years now.
The jump from i386 to i486 brings the most significant performance benefit of any single jump in kernel arch. i486 is still the best arch to choose if you want to support really old hardware - I'm thinking particularly of Geode processors. My current KISS-linux is using i586, which is actually not a good choice. Historically, i586 has had less use, and certain combinations of glibc/gcc/binutils will not work for i586 - whereas they work(ed) fine for i486 or i686.

I'm going to be upgrading KISS soon and am going to change to i686 - I figure that there are so few Pentium I's out there that they can be ignored. i686 means a minimum of Pentium II, which still offers a way-back reach. And, I still use a non-SMP kernel configuration and non-SMP kernel headers for glibc. I figure that SMP/PAE belongs to 64-bit systems. Yes, there are SMP processors which are not 64-bit, but even though an SMP-enabled kernel will run on *some* non-SMP machines, it will not run on all of them. So, you'd still need both SMP and non-SMP kernels for 32-bit systems.

3-series kernels can still be compiled for 486, but 386 is no longer supported at all. And since i586 is an 'iffy' choice, one should either stick with i486 or jump to i686.

All of these choices are made much easier when working with a truly modular system where things are thought-out and built with flexibility in mind. And having a good system for building packages makes it possible to maintain more than one package tree for different arches from the same sources and build scripts.

Looking forward, 64-bit is the only way to go. But if you want to look back at the same time, then it is time to have builds for multiple arches. Of course, you have that now with fatdog & Co., but the common build system is not there. A rethink is needed, because trying to build robust systems by robbing packages from here and there is simply not maintainable.

goingnuts
Posts: 932
Joined: Sun 07 Dec 2008, 13:33
Contact:

#825 Post by goingnuts »

technosaurus & amigo: Thanks for the detailed explanations! Still not sure whether there is any advantage to compiling applications for i486 versus i686 in respect to size/speed/features when running Puppy. It seems that if we use i486 we make sure they will run on old CPUs as well as new ones. If i686 is chosen, only "newer" CPUs will be supported.

For the kernel version, my experience is that P216 will run (boot) on a Pentium 1 but P412 will not. Is it possible to compile the P412 kernel to run (boot) on a Pentium 1 as well, or is there a clear break at a certain kernel version where the Pentium 1 was excluded? That could explain the need for the P412retro version...

I have 2 old notebooks which refuse to boot P412-based pupngo but happily boot the P216-based one. P216 & P412 can share squashfs files if version 3.0 is used. It only has a minor influence on the squashfs file size, as shown below by compressing the main sfs files of Puppy-3.00:

org_P300_main.sfs unpacked: 217M
squashfs3.0 (gzip): 77M
squashfs3.1 (gzip): 76M
squashfs4.0 (gzip): 76M
squashfs4.2 (gzip): 76M
squashfs4.2 (xz): 64M

If possible, this opens the door to a shared core-pupngo sfs for kernels P216 & P412, as well as shared application squashfs files. It might even allow a shared initrd. That could mean a quite compact 2-kernel version is possible...

amigo
Posts: 2629
Joined: Mon 02 Apr 2007, 06:52

#826 Post by amigo »

Note that -mtune on its own does not set a hardware baseline at all - it only tunes instruction scheduling; it is -march that sets the minimum hardware supported (and implies -mtune). To have things support a range of hardware, you should use both together, like this:
-march=i486 -mtune=i686
This means that i486 will still be supported, but the code will be scheduled to run best on i686. In theory, you could use an even newer arch for mtune, but this is probably not recommended - no one else does it that way.
For my system, I use -march=i586 -mtune=i686, but this is done for everything on the system - and the kernels are compiled with i586 as the arch.
It never makes any sense to compile software for use on an arch which is less than that supported by glibc. If you have a glibc compiled for i686, then the system(OS) will not work on a lesser system -even if you use a kernel which is compiled for i486, for example.

Also, the --build=??, --host=?? and --target=?? directives are meant to help configure routines find a *toolchain* - they don't directly enable any features at all. Using them keeps configure from having to guess, and possibly guess wrong, about which toolchain to use. The build directive tells configure which machine the software is being compiled on. The host directive tells configure which machine the software will be running on. Of course, when compiling natively, build will be equal to host, and none of these directives are needed (target only ever makes sense in the context of building a compiler or binutils).

As I mentioned before, the biggest single jump in hardware support is that from i386 to i486. Later jumps provide less benefits for each jump.

To build a system, one should decide from the outset what the minimum system to support should be, and then use the appropriate (and same) flags for compiling *everything*. Repeating: the flags used to compile glibc set the lower limit for everything (except anything which is statically compiled). None of these limit the upper range of what the system can run on. When you use -march=i486 -mtune=i686, the system will still run on an i786 - it just won't use any of the extra features available on the i786.

Finally, 99% of all software is completely indifferent to these directives. The exceptions are things like video players, sound and video editing programs and high-performance libs like boost. Some of these will detect and use features automatically, but most require that options be specified. For instance, to use SSE, a lib or program will provide the --enable-sse option to configure. Some things, like mplayer, will try to detect which kernel is being used and whether such hardware features are available. In such cases, you'll want to definitely specify the hardware level to support. Otherwise, the software will be compiled to support the machine on which it is being built. For instance, if you don't turn off automatic CPU-detection when building mplayer, then it will pick up that you are building on an i786 machine and only support that arch -instead of your bottom limit of i486 or whatever.
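To make the flag pairing concrete, a build script might centralize it in a tiny helper like this (pick_cflags is a made-up name, just illustrating the recipe above):

```shell
#!/bin/sh
# pick_cflags BASELINE [TUNE]
# -march sets the oldest CPU the binaries must still run on;
# -mtune sets the CPU the code is scheduled for.
# Hypothetical helper, not part of any real build system.
pick_cflags() {
    baseline=$1
    tune=${2:-i686}    # tune for i686 by default
    echo "-march=$baseline -mtune=$tune"
}
```

So `CFLAGS=$(pick_cflags i486)` yields `-march=i486 -mtune=i686`: the binaries run on anything from a 486 up, but are scheduled for a 686 pipeline.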

User avatar
technosaurus
Posts: 4853
Joined: Mon 19 May 2008, 01:24
Location: Blue Springs, MO
Contact:

#827 Post by technosaurus »

486 is the minimum; there are a couple of non-FPU boards we don't support (the Vortex86SX ... any 486SX processor, for that matter)
586 is not a significant jump as far as instructions go - only cmpxchg8b (it was mostly architectural improvement)
586mmx _is_ a significant jump and is almost required to get decent video playback
686 adds cmov, which is 1) technically wrong 2) not widely supported 3) not very useful. See Linus's comments on the subject

Current CPUs should work with 586mmx (including the Vortex86MX in Barry's Gecko) all the way back to 2nd-gen Pentiums and the AMD K6, circa 1997 ... so if we want to support 20+ yr old computers and the Vortex86DX, then we would need to stay with 486 for a few more years; otherwise I am all for 586mmx
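Whether a given box actually has mmx or cmov can be read straight from /proc/cpuinfo. A sketch (has_flag is a made-up helper; the second argument just lets you point it at a saved copy of cpuinfo):

```shell
#!/bin/sh
# has_flag FLAG [CPUINFO_FILE]
# Succeeds if FLAG appears on the "flags" line of /proc/cpuinfo
# (or of a file passed in for testing).
has_flag() {
    flag=$1
    cpuinfo=${2:-/proc/cpuinfo}
    grep '^flags' "$cpuinfo" | grep -qw "$flag"
}
```

Usage: `has_flag mmx && echo "a 586mmx build will run here"`.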

goingnuts
Posts: 932
Joined: Sun 07 Dec 2008, 13:33
Contact:

#828 Post by goingnuts »

Thank you so much guys! I think I got it now :)

amigo
Posts: 2629
Joined: Mon 02 Apr 2007, 06:52

#829 Post by amigo »

One more detail I didn't fill in. When using --build=??, --host=?? or --target=??, configure is expecting host-triplet style input. In other words, they should be valid targets, like i486-slackware-linux or i486-t2-linux. You can find the (build) triplet for your gcc with the command:
gcc -dumpmachine

In src2pkg I have an option which lets you pass build & Co. when desired. But, since very few sources need or use this, it is not used by default. Slackware's SlackBuild scripts always pass the option, but in a different way. Instead of
"--build=i486-slackware-linux --host=i486-slackware-linux"
a single option is passed at the end of the other options, like this:
"i486-slackware-linux"
This is interpreted to mean --host=i486-slackware-linux, but it also triggers warnings about improper use of --host=??.

In any case, --build is the one you'd only need to specify when cross-compiling or using an alternate compiler. Otherwise the configure system will simply find your compiler anyway.

To make clear the meanings of build, host and target, it's helpful to consider (strangely enough) the most complex cross-build - the so-called canadian cross. For a canadian-cross build, build, host and target are all different. Consider this case:
I have an ix86 machine which I am using to build stuff with. But I have an old PPC iMac which I wish to use to build binaries for an ARM system. I need to build a compiler which runs on PPC but produces machine code for ARM. And I need to build that compiler on my ix86 machine, because the PPC machine has no compiler. In this case, I would configure gcc using:
--build=i486-slackware-linux --host=ppc-slackware-linux --target=arm-slackware-linux
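The triplet layout and the three invocations can be sketched like this (the helper names are made up, and the configure lines are shown as comments since they need a real source tree):

```shell
#!/bin/sh
# A GNU triplet is cpu-vendor-os, e.g. i486-slackware-linux.
triplet_cpu()    { echo "${1%%-*}"; }              # -> i486
triplet_vendor() { v=${1#*-}; echo "${v%%-*}"; }   # -> slackware
triplet_os()     { echo "${1#*-*-}"; }             # -> linux

# Native build - configure guesses the toolchain itself:
#   ./configure --prefix=/usr
# Cross build for ARM, done on an x86 box:
#   ./configure --build=i486-slackware-linux --host=arm-slackware-linux
# Canadian cross (build on x86, run on PPC, emit ARM code):
#   ./configure --build=i486-slackware-linux \
#               --host=ppc-slackware-linux \
#               --target=arm-slackware-linux
```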

User avatar
technosaurus
Posts: 4853
Joined: Mon 19 May 2008, 01:24
Location: Blue Springs, MO
Contact:

#830 Post by technosaurus »

amigo wrote:I have an ix86 machine which I am using to build stuff with. But I have an old PPC iMac which I wish to use to build binaries for an ARM system. I need to build a compiler which runs on PPC but produces machine code for ARM. And I need to build that compiler on my ix86 machine, because the PPC machine has no compiler. In this case, I would configure gcc using:
--build=i486-slackware-linux --host=ppc-slackware-linux --target=arm-slackware-linux
So you are the one they invented canadian cross compilation for...
