Copying files takes longer if there's a Save file
Dear All,
I notice that copying files takes longer in the presence of a save file and SFS files than on a pure RAM boot. I compared two Fatdog 611 systems: on one, I have a save file and 5-6 SFS files loaded; on the other, the same Fatdog 611 but booted with no save file and no SFS. A normal file copy generally takes much longer on the first system than on the second.
A long time back, I observed similar behavior... http://murga-linux.com/puppy/viewtopic.php?t=106611
It is a regular observation, which means it is irrespective of the current session. The CPU is mostly idle, and plenty of RAM is free. The copy is also not to the save file (/root); it is mostly either from one hard disk partition to another, or between a USB drive and the disk.
Similar observation on Wary 530 too...
Have you ever observed anything similar?
Is it the presence of SFS layers or the save file that affects this? I can't even convince myself why this would matter...
Any guess? Kindly check on your systems once...
All these systems run on bare metal, not in VirtualBox. I have several Puppy OSes frugally installed on my disk.
The question is: why is a pure RAM boot faster for file operations than a system with SFS + save file?
Sincerely,
Srinivas Nayak
[Precise 571 on AMD Athlon XP 2000+ with 512MB RAM]
[Fatdog 720 on Intel Pentium B960 with 4GB RAM]
[url]http://srinivas-nayak.blogspot.com/[/url]
What specific device is Puppy installed on?
What specific size is the file you are copying?
How much actual difference in speed are you seeing?
Is there a Linux swap?
Are you sure the save file is not being auto-saved while you are doing this copy?
It could be some program in the save file that gets loaded and is affecting this.
You are using some older versions of the Puppies. They could have some core Puppy files that have been greatly improved in newer versions of Puppy.
SFS files loaded do take up space in RAM. Just what are those SFS packages?
Copying from a device back to the same device does require more RAM usage. You cannot use the read/write head in a hard drive to read and write at the same time; the head has to read first and then write. The read data goes into memory, then it is written.
I do notice a speed difference: going from a device back to the same device is slower; from one device to another is faster.
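One way to make the comparison concrete is a small, repeatable timing script run identically on both setups. A minimal sketch: the /tmp paths are placeholders, and for a real test SRC and DST would point at the partitions being compared.

```shell
#!/bin/sh
# Rough copy-speed check. SRC/DST here are placeholders under /tmp;
# for a real test, point them at the partitions in question and run
# the same script on the RAM-boot and the save-file systems.
SRC=/tmp/copytest_src.bin
DST=/tmp/copytest_dst.bin

# Create a 10 MB test file (contents do not matter for timing).
dd if=/dev/zero of="$SRC" bs=1M count=10 2>/dev/null

sync                         # flush pending writes so earlier I/O does not skew timing
start=$(date +%s)
cp "$SRC" "$DST"
sync                         # force the copied data onto disk before stopping the clock
end=$(date +%s)
echo "copy took $((end - start)) seconds"

ls -l "$SRC" "$DST"          # both files should show the same size
```

A 10 MB file is only enough to demonstrate the mechanics; to defeat the page cache and see real disk throughput, the test file needs to be far larger than the free RAM can absorb.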
The things they do not tell you are usually the clue to solving the problem.
When I was a kid I wanted to be older.... This is not what I expected
YaPI(any iso installer)
Dear bigpup,
Thanks for your reply.
> What specific device is Puppy installed on?
On my Lenovo G570 laptop: Fatdog 611 on an Intel Pentium B960 with 4 GB RAM.
> What specific size is the file you are copying?
Same observation for many small files as well as a single big file.
> How much actual difference in speed are you seeing?
It is quite a big difference. Once I measured almost a 10-fold slowdown, but that is not always the case; maybe that time I copied a big file? Even so, I feel the slowness.
> Is there a Linux swap?
Yes, 2 GB.
> Are you sure the save file is not being auto-saved while you are doing this copy?
Not sure; what is auto save?
> It could be some program in the save file that gets loaded and is affecting this.
I think this is unlikely, since I could see only familiar processes in htop.
> You are using some older versions of the Puppies. They could have some core Puppy files that have been greatly improved in newer versions of Puppy.
True, but then why is the same OS without a save file so fast?
> SFS files loaded do take up space in RAM. Just what are those SFS packages?
jre
chrome
devx
virtualbox
teamviewer
32bit-slacko-2x
> Copying from a device back to the same device does require more RAM usage; the head has to read first and then write. The read data goes into memory, then it is written. Going from a device back to the same device is slower; from one device to another is faster.
Perfect observation. I have seen this too.
But one thing worries me: why is a partition-to-partition copy faster in the OS without a save file?
[Precise 571 on AMD Athlon XP 2000+ with 512MB RAM]
[Fatdog 720 on Intel Pentium B960 with 4GB RAM]
[url]http://srinivas-nayak.blogspot.com/[/url]
The one difference I can think of is aufs.
Also, as bigpup mentioned, copy to same device can be slower. Be mindful that copying partition to partition may still be the same device, and can even be slower since the HD head has to travel farther.
I don't have FD611 and don't plan to load it, but since you've done lots of testing already, perhaps you can clarify a bit more.
In your other post, you quantified the speed as 6 MB/s being normal and 0.6 MB/s as slow for flash (both are very slow).
Can you specify exactly where you are copying from/to and the speed observed, like if save/sfs loaded:
copy /aufs/devbase to /mnt/sda2 - 6MB/s
or no save/sfs loaded:
copy /mnt/sda1 to /mnt/sda2 - 12MB/s
Also, is this GUI copy or command line?
And what is your free memory (from the free command) with save/SFS loaded, and then without save/SFS loaded: before, during, and after?
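One way to capture those numbers without depending on a particular free build (busybox and procps format their output differently) is to snapshot /proc/meminfo around the copy. A sketch; the sleep is a placeholder standing in for the actual copy:

```shell
#!/bin/sh
# Snapshot memory state before, during, and after the copy.
# /proc/meminfo is present on any Linux kernel, so this works the same
# whether 'free' comes from busybox or procps.
LOG=/tmp/meminfo.log
: > "$LOG"                       # start with an empty log
for phase in before during after; do
    echo "== $phase ==" >> "$LOG"
    grep -E '^(MemFree|Buffers|Cached):' /proc/meminfo >> "$LOG"
    sleep 1                      # placeholder: start/stop the copy here
done
cat "$LOG"
```

Comparing the three snapshots on the save-file boot and the pure RAM boot would show whether the copy is eating into free memory differently on each.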
Sorry for being a little late, but yesterday there was a need to do a massive copy.
1. I used GUI copy:
from /mnt/sda7/bigfolder_with_cd_dvd_isos
to /mnt/sda8
So it is a GUI copy on the same disk, but to a different partition.
This folder contains more than 20 GB of data; all the files are 700 MB or 4 GB ISO images.
2. Output of the iostat -m command, run 4 times during the copy:
~# iostat -m
Linux 3.4.18 (fatdog611) 08/14/2017 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
2.01 0.00 1.00 4.76 0.00 92.23
Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 9.12 0.78 0.78 10749 10775
~# iostat -m
Linux 3.4.18 (fatdog611) 08/14/2017 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
1.99 0.00 1.02 5.49 0.00 91.49
Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 11.01 0.97 0.94 13441 13040
~# iostat -m
Linux 3.4.18 (fatdog611) 08/14/2017 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
1.93 0.00 1.11 8.22 0.00 88.74
Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 18.02 1.64 1.62 23797 23433
~# iostat -m
Linux 3.4.18 (fatdog611) 08/14/2017 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
1.85 0.00 1.20 11.44 0.00 85.51
Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 25.93 2.40 2.41 36958 37061
This shows only about 2 MB/s read and write speed! I never imagined such a low speed from Linux and a 7200 RPM, 500 GB laptop hard disk.
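A caveat about reading these numbers: iostat -m with no interval argument reports averages since boot, so the MB_read/s column blends the copy with everything since power-on, and the rising values across the four samples just reflect the copy pulling that long-run average up. The cumulative MB_wrtn column is more telling; subtracting the first sample from the last gives the data actually written in between:

```shell
# Cumulative MB_wrtn from the first and last iostat samples above.
first=10775
last=37061
echo "MB written between samples: $((last - first))"   # 26286 MB, i.e. ~26 GB
```

That is consistent with the 20 GB+ copy plus other writes. For an instantaneous rate instead, `iostat -m 10 2` prints a second report covering only the last 10 seconds.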
3. Details of the mounted file systems, save file, and SFS layers, as seen from the mount command:
~# mount
rootfs on / type rootfs (rw)
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
devtmpfs on /dev type devtmpfs (rw,relatime,size=1890764k,nr_inodes=472691,mode=755)
tmpfs on /aufs/pup_init type tmpfs (ro,relatime)
aufs on / type aufs (rw,relatime,si=1030ce7bbf407180)
devpts on /dev/pts type devpts (rw,relatime,gid=3,mode=620)
tmpfs on /dev/shm type tmpfs (rw,relatime,mode=777)
tmpfs on /tmp type tmpfs (rw,relatime)
aufs on /usr/lib type aufs (rw,relatime,si=1030ce7bbf407180)
aufs on /usr/X11R7/lib type aufs (rw,relatime,si=1030ce7bbf407180)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/loop0 on /aufs/kernel-modules type squashfs (ro,relatime)
/dev/loop1 on /aufs/pup_ro type squashfs (ro,relatime)
/dev/loop2 on /aufs/pup_save type ext4 (rw,relatime,data=ordered)
/dev/loop3 on /mnt/data type ext3 (rw,relatime,errors=continue,user_xattr,barrier=1,data=ordered)
/dev/loop10 on /aufs/pup_ro10 type squashfs (ro,relatime)
/dev/loop11 on /aufs/pup_ro11 type squashfs (ro,relatime)
/dev/loop12 on /aufs/pup_ro12 type squashfs (ro,relatime)
/dev/loop13 on /aufs/pup_ro13 type squashfs (ro,relatime)
/dev/loop14 on /aufs/pup_ro14 type squashfs (ro,relatime)
/dev/loop15 on /aufs/pup_ro15 type squashfs (ro,relatime)
/dev/loop16 on /aufs/pup_ro16 type squashfs (ro,relatime)
/dev/sda2 on /aufs/devsave type ext3 (rw,relatime,errors=continue,user_xattr,barrier=1,data=ordered)
/dev/sda8 on /mnt/sda8 type ext3 (rw,relatime,errors=continue,user_xattr,barrier=1,data=ordered)
/dev/sda7 on /mnt/sda7 type ext3 (rw,relatime,errors=continue,user_xattr,barrier=1,data=ordered)
/dev/sda6 on /mnt/sda6 type ext3 (rw,relatime,errors=continue,user_xattr,barrier=1,data=ordered)
~#
4. Output of top:
# top
Mem: 3739076K used, 248824K free, 0K shrd, 78500K buff, 3466536K cached
CPU: 4.5% usr 4.5% sys 0.0% nic 45.4% idle 45.4% io 0.0% irq 0.0% sirq
Load average: 2.54 2.85 2.65 2/109 20796
5. Output of free:
~# free
total used free shared buffers
Mem: 3987900 3744076 243824 0 78648
-/+ buffers: 3665428 322472
Swap: 0 0 0
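A note on interpreting that free output: most of the "used" RAM is page cache, which the kernel reclaims on demand, so the machine is not actually short of memory. Adding the free, buffers, and cached figures (cached taken from the top output above, since this free build does not show that column) gives the roughly reclaimable total:

```shell
# Figures in KB, taken from the free and top outputs above.
free_kb=243824
buffers_kb=78648
cached_kb=3466536
echo "roughly available: $((free_kb + buffers_kb + cached_kb)) KB"   # 3789008 KB, nearly all of the 4 GB
```

So the slowdown here is unlikely to be a memory shortage.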
Hope this gives some clue to my agony.
Sincerely,
Srinivas Nayak
[Precise 571 on AMD Athlon XP 2000+ with 512MB RAM]
[Fatdog 720 on Intel Pentium B960 with 4GB RAM]
[url]http://srinivas-nayak.blogspot.com/[/url]
Dear 6502coder,
Thanks for having a look at my problem.
Having two systems with me, I always forget their settings.
I am ashamed of my mistake: on my laptop with Fatdog 611, I have no swap.
On this machine, I did my last "copy" exercise.
Sincerely,
Srinivas Nayak
Last edited by snayak on Wed 16 Aug 2017, 06:57, edited 1 time in total.
[Precise 571 on AMD Athlon XP 2000+ with 512MB RAM]
[Fatdog 720 on Intel Pentium B960 with 4GB RAM]
[url]http://srinivas-nayak.blogspot.com/[/url]
To be honest, yesterday I was surprised even by my own free output. I always thought I had swap. But when I saw this, I switched on both of my machines and reconfirmed: my laptop with Fatdog 611 does not have swap; since it has more RAM, I never added one. On my Athlon/Wary 530 machine I used to have 2 GB of swap, which, seeing no benefit, I reduced to 1 GB when I installed Precise 571 in place of Wary 530.
[Precise 571 on AMD Athlon XP 2000+ with 512MB RAM]
[Fatdog 720 on Intel Pentium B960 with 4GB RAM]
[url]http://srinivas-nayak.blogspot.com/[/url]
It looks like loading more SFS files is the cause of the slowness, since aufs adds some overhead.
from net:
The AUFS storage driver can introduce significant latencies into container write performance. This is because the first time a container writes to any file, the file has to be located and copied into the containers top writable layer. These latencies increase and are compounded when these files exist below many image layers and the files themselves are large.
source: https://docs.docker.com/v17.09/engine/u ... erformance
The granularity of AUFS is the file, not the byte/record/extent/whatever. This means, that the layers are traversed when a file is open. Once you hold a file descriptor, the performance difference should be unnoticeable.
However, there can be two adverse effects if you have many layers, especially with deep-nested directory hierarchies:
each stat/open/... will traverse all layers, and require a directory entry look-up for each path component, on each layer (until the component is found); i.e., when you try to open /a/b/c/d with layers L1 L2 L3, it will actually look for /a on L3, L2, and L1 (until it finds one); then it will look for /a/b on L3, L2, and L1 again; and so on;
source: https://github.com/moby/moby/issues/11
Adding too many layers to AUFS reduces performance and slows down saving.
source: http://puppylinux.org/wikka/HowtoMakeSFSPackage
Adding more layers makes it slower because each whiteout file needs to be checked on each layer.
source: http://murga-linux.com/puppy/viewtopic.php?t=64570
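The lookup cost described above can be sketched in a few lines of shell: for each path component, aufs consults the branches top-down until the entry is found. This is only a toy model; the directory names (L1..L3) and the path are made up for illustration.

```shell
#!/bin/sh
# Toy model of aufs lookup order: the file exists only in the bottom
# layer (L1), so every path component is searched through all three layers.
base=/tmp/aufs_demo
mkdir -p "$base/L1/a/b/c"            # bottom layer holds the real tree
mkdir -p "$base/L2" "$base/L3"       # upper layers are empty

lookups=0
partial=""
for comp in a b c; do
    partial="${partial:+$partial/}$comp"
    for layer in L3 L2 L1; do        # top branch is consulted first
        lookups=$((lookups + 1))
        [ -e "$base/$layer/$partial" ] && break
    done
done
echo "directory lookups for /a/b/c: $lookups"   # 9 = 3 components x 3 layers
```

With a single layer the same walk would need only 3 lookups, which is why a deep stack of SFS layers makes every stat/open slightly more expensive, even when the data itself is already on disk.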
[Precise 571 on AMD Athlon XP 2000+ with 512MB RAM]
[Fatdog 720 on Intel Pentium B960 with 4GB RAM]
[url]http://srinivas-nayak.blogspot.com/[/url]