Puppy Linux Discussion Forum - puppylinux.com
Speeding up the SnapMerge
Page 1 of 6 [90 Posts]
jemimah


Joined: 26 Aug 2009
Posts: 4309
Location: Tampa, FL

Posted: Sat 05 Feb 2011, 15:38    Post subject: Speeding up the SnapMerge

This topic started in the sfs-on-the-fly thread but really needs its own thread so here it is.

jemimah wrote:
Here is some code from the snapmerge script. Adding more layers makes it slower, because each whiteout file needs to be checked on each layer. This script is already painfully slow, and that's the main reason I don't want to add more layers.

I suppose it's worth experimenting to see how much difference it makes.

Code:
while read N
do
 BN="`basename "$N"`"
 DN="`dirname "$N"`"
 [ "$BN" = ".wh.aufs" ] && continue #w003 aufs has file .wh..wh.aufs in /initrd/pup_rw.
 [ "$DN" = "." ] && continue
 if [ "$BN" = "__dir_opaque" ];then #w003
  #'.wh.__dir_opaque' marks ignore all contents in lower layers...
  rm -rf "${BASE}/${DN}/*" 2>/dev/null #wipe anything in save layer.
  #also need to save the whiteout file to block all lower layers (may be readonly)...
  touch "${BASE}/${DN}/.wh.__dir_opaque" 2>/dev/null
  rm -f "$SNAP/$DN/.wh.__dir_opaque" #should force aufs layer "reval".
  continue
 fi
 #comes in here with the '.wh.' prefix stripped off, leaving actual filename...
 rm -rf "$BASE/$N"
 #if file exists on a lower layer, have to save the whiteout file...
 BLOCKFILE=""
 [ -e "/initrd/pup_ro1/$N" ] && BLOCKFILE="yes"
 [ -e "/initrd/pup_ro2/$N" ] && BLOCKFILE="yes"
 [ -e "/initrd/pup_ro3/$N" ] && BLOCKFILE="yes"
 [ -e "/initrd/pup_ro4/$N" ] && BLOCKFILE="yes"
 [ -e "/initrd/pup_ro5/$N" ] && BLOCKFILE="yes"
 [ -e "/initrd/pup_ro6/$N" ] && BLOCKFILE="yes"
 [ -e "/initrd/pup_ro7/$N" ] && BLOCKFILE="yes" #v424
 [ -e "/initrd/pup_ro8/$N" ] && BLOCKFILE="yes" #v424
 [ -e "/initrd/pup_ro9/$N" ] && BLOCKFILE="yes" #v424
 [ "$BLOCKFILE" = "yes" ] && touch "${BASE}/${DN}/.wh.${BN}"
 rm -f "$SNAP/$DN/.wh.$BN" #remove whiteout file. should force aufs layer "reval".
done


shinobar wrote:
jemimah wrote:
Here is some code from the snapmerge script. Adding more layers makes it slower because each whiteout file needs to be checked on each layer. This script is already painfully slow

Yes, jemimah. It is so slow.

Code:
#also need to save the whiteout file to block all lower layers

I wonder why we need to check them. Why not unconditionally copy all the files in pup_rw...?
I also wonder what the rc.update does...? Rolling Eyes


jemimah wrote:
shinobar wrote:

Code:
#also need to save the whiteout file to block all lower layers

I wonder why we need to check them. Why not unconditionally copy all the files in pup_rw...?
I also wonder what the rc.update does...? Rolling Eyes


Say I create a new file, then delete it, and a whiteout file gets saved to the save file. Later I add an SFS containing a file of the same name. The file will not appear, because the whiteout file is there blocking it. I believe there is code in the init script to check for this condition and delete the interfering whiteouts when you add an SFS, but I know from experience that even that doesn't always work.

But that's an interesting thought - maybe the whiteout-checking code in snapmerge is redundant and can be removed. However, it may be an error condition in AUFS to have a whiteout file with no file below it. I know for sure UnionFS is really picky about that, but I think AUFS is more tolerant.

However, I think the real bottleneck in the script is checking for free space in the save file for every single file copied down. That could be omitted in the case where your save file has more free space than the size of the files in RAM - but otherwise I think you need to do it.
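The one-shot test described here could be sketched as follows. This is a hedged illustration, not the actual snapmergepuppy code: on Puppy the two directories would be /initrd/pup_rw and the mounted save layer, while the demo below runs against /tmp so it works anywhere.

```shell
#!/bin/sh
# Sketch of the one-shot free-space test described above: if the save
# layer has more free space than the whole tmpfs layer occupies, the
# per-file check can be skipped for the session.
can_skip_space_check() {
 # $1 = tmpfs layer, $2 = save layer
 snap_used_k=$(du -sk "$1" 2>/dev/null | cut -f1)                 # KB used in tmpfs
 base_free_k=$(df -Pk "$2" 2>/dev/null | awk 'NR==2 {print $4}')  # KB free on save layer
 [ "${base_free_k:-0}" -gt "${snap_used_k:-0}" ]
}

# Demo against /tmp so the sketch runs anywhere; on Puppy the arguments
# would be /initrd/pup_rw and the save layer mount point.
if can_skip_space_check /tmp /tmp; then
 CHECK_SPACE=no   # plenty of room: skip the per-file check
else
 CHECK_SPACE=yes  # tight: keep checking before each copy
fi
echo "per-file space check: $CHECK_SPACE"
```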


jamesbond wrote:
shinobar, sorry to hijack your thread. I'll move off your lawn very quickly after this.

Been thinking about it too ... I'm comparing the situation that requires snapmergepuppy with the one where /pup_rw is mounted directly on the pupsave file. In that case, no management of whiteout files is done (as shinobar said) - and yet things work correctly.

In the specific PUPMODE where merge script is required, these are the conditions:
a) there are, effectively, two pupsave files - the tmpfs layer and the real pupsave (mounted ro by aufs)
b) we want to create the impression that these two pupsaves work as one
c) we don't want to duplicate items from pupsave to tmpfs
d) optionally, tmpfs and pupsave are allowed to have different sizes

a) & b) are rather easy to accomplish; it's c) & d) that cause the most headache and the need for the merge script. Actually, c) is also the cause of a problem if your real pupsave file is almost full yet the tmpfs is empty (i.e. a fresh boot): one can keep adding things without knowing that one cannot save the stuff anymore. Kinda like VMware thin provisioning, but without enough backing storage Shocked

If it's only a) & b) - easy - just load the pupsave into tmpfs at start, and then rsync everything to the pupsave during shutdown (or during a merge). The real pupsave doesn't even need to be part of the branch.

But we need to do c) and d) since those are the agreed design criteria for now. Based on the above, I think the only checks needed are as follows, for a combination of a "real file" and its corresponding whiteout file:

1. Whiteout file exists in tmpfs, real file exists in pupsave.
Cause ==> the file has just been deleted during the user session.
Action ==> delete the real file in pupsave & create the whiteout file (to prevent any file from a lower layer getting exposed).
Then delete the whiteout in tmpfs.

2. Whiteout file exists in tmpfs, real file doesn't exist in pupsave.
Cause ==> the whiteout is for a file in a lower layer.
Action ==> create the whiteout file in pupsave.
Then delete the whiteout in tmpfs.

3. Real file exists in tmpfs, whiteout exists in pupsave.
Cause ==> a new file was created over a previously deleted file (from a previous session).
Action ==> copy the file from tmpfs to pupsave, and delete the whiteout in pupsave.
Then delete the real file in tmpfs.

4. Real file exists in tmpfs, whiteout doesn't exist in pupsave.
Cause ==> a new file was created in this session.
Action ==> copy the file from tmpfs to pupsave.
Then delete the real file in tmpfs.

5. Real file exists in tmpfs, real file also exists in pupsave.
Cause ==> the file was updated in this session.
Action ==> copy the file from tmpfs to pupsave.
Then delete the real file in tmpfs.

Of course when I say "file" it also applies to directories.
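The five cases above can be sketched as a small shell function over two plain directories. This is an illustration of the logic only - `$RW` and `$SAVE` are stand-ins for the tmpfs and pupsave layers, and it is not a drop-in replacement for snapmergepuppy.

```shell
#!/bin/sh
# Sketch of the five-case table above, using two plain directories as
# stand-ins for the tmpfs layer ($RW) and the pupsave layer ($SAVE).
merge_one() {
 N="$1"                       # path relative to both layers, no '.wh.' prefix
 DN=$(dirname "$N"); BN=$(basename "$N")
 WH_RW="$RW/$DN/.wh.$BN"      # whiteout in tmpfs
 WH_SAVE="$SAVE/$DN/.wh.$BN"  # whiteout in pupsave
 if [ -e "$WH_RW" ]; then
  # Cases 1 & 2: the whiteout moves down; any real file in pupsave goes (case 1).
  rm -rf "$SAVE/$N"
  mkdir -p "$SAVE/$DN" && touch "$WH_SAVE"
  rm -f "$WH_RW"
 elif [ -e "$RW/$N" ]; then
  # Cases 3, 4 & 5: the real file moves down; a stale whiteout goes (case 3).
  mkdir -p "$SAVE/$DN"
  cp -a "$RW/$N" "$SAVE/$DN/"
  rm -f "$WH_SAVE" "$RW/$N"
 fi
}

# Demo: case 4 (new file this session) and case 2 (whiteout for a lower-layer file).
RW=$(mktemp -d); SAVE=$(mktemp -d)
mkdir -p "$RW/etc"
echo new > "$RW/etc/newfile"
touch "$RW/etc/.wh.deleted"
merge_one etc/newfile
merge_one etc/deleted
ls "$SAVE/etc"
```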

I think that should handle 90% of the cases. We skip the corner case of "only save the whiteout file if a lower-layer SFS has the real file" - I don't really see why that check is necessary.

If the slowness comes from checking all those files in the SFS layers, then dealing only with tmpfs and pupsave should greatly reduce the delay. If it doesn't, then the above may not help. In fact, I'm doubting the need for c) and d) in the first place ... I mean, if you have a very important big file you need to save, you can always save it in /mnt/home (i.e. the real storage).

Ok, I'm off - jemimah we can start another thread on this if you want to.

Shinobar, thanks for the update, I'll test it and get back to you.
jemimah
Posted: Sat 05 Feb 2011, 15:43

I think what happens if you copy down unneeded whiteout files is that you get an I/O error when you try to create the file later. I will verify this and get back.
jamesbond

Joined: 26 Feb 2007
Posts: 2045
Location: The Blue Marble

Posted: Sat 05 Feb 2011, 20:31

jemimah wrote:
I think what happens if you copy down unneeded whiteout files is that you get an I/O error when you try to create the file later. I will verify this and get back.


Tested with aufs 2.1-standalone.tree-35-20100920 (the one that comes with FD64). No I/O error - everything works as expected. A non-needed whiteout file on the second layer will be obscured by a file of the same name on the higher branch.

_________________
Fatdog64, Slacko and Puppeee user. Puppy user since 2.13
jpeps

Joined: 31 May 2008
Posts: 3220

Posted: Sat 05 Feb 2011, 23:23

Thanks for this great discussion....it's unearthing some mysteries (at least for me).
technosaurus


Joined: 18 May 2008
Posts: 4279

Posted: Sun 06 Feb 2011, 02:16    Post subject: Re: Speeding up the SnapMerge

jemimah wrote:
[full post quoted above]


Could stop some unnecessary checks by combining them:

Code:
[ -e "/initrd/pup_ro1/$N" -o -e ".... ] && BLOCKFILE="yes"

And try moving some stuff outside the loops.
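That combination can also be written as a loop that stops at the first hit, which avoids probing the remaining layers at all. A sketch only: the demo builds a throwaway tree instead of touching the real /initrd layout.

```shell
#!/bin/sh
# Sketch of the short-circuit idea: stop probing layers at the first hit
# instead of testing all nine every time.
has_in_lower() {
 # $1 = path relative to the layers, $2 = directory holding pup_ro1..pup_ro9
 for LAYER in 1 2 3 4 5 6 7 8 9; do
  [ -e "$2/pup_ro$LAYER/$1" ] && return 0  # found: skip the rest
 done
 return 1
}

# Demo tree standing in for /initrd.
D=$(mktemp -d)
mkdir -p "$D/pup_ro3/etc"
touch "$D/pup_ro3/etc/hostname"

BLOCKFILE=""
has_in_lower etc/hostname "$D" && BLOCKFILE="yes"
echo "BLOCKFILE=$BLOCKFILE"
```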

_________________
Web Programming - Pet Packaging 100 & 101
Q5sys


Joined: 11 Dec 2008
Posts: 1047

Posted: Sun 06 Feb 2011, 12:04    Post subject: Re: Speeding up the SnapMerge

jemimah wrote:
Say I create a new file, then delete it, and a whiteout file gets saved to the save file. Later I add an SFS containing a file of the same name. The file will not appear, because the whiteout file is there blocking it. I believe there is code in the init script to check for this condition and delete the interfering whiteouts when you add an SFS, but I know from experience that even that doesn't always work.

OK, I'm trying to understand this as I've never looked into this before. So let me ask a few questions to better understand you.
1) You create a new file and delete it (I'm assuming you mean in your save file? If so, how does this change when you're using a full install?)
2) After you add an SFS containing that file, you say "the file will not appear". Where will the file not appear? In the SFS or in the save file? Or is it "in" the save file but the system won't read it? If I'm understanding, this has to do with the layering and the fact that higher layers can't be overwritten by lower layers, i.e. in descending order: the main SFS file, then the pupsave file, then extra SFS files.
3) When/how/under what condition do these whiteout files get deleted?
4) At the risk of asking a dumb question, why are "whiteout files" even being made? Why not have a single file, say 'whiteoutfile.txt', listing the files which have been "whiteout'd" (for lack of a better term)? Instead of checking each layer for a file, you could just grep whiteoutfile.txt to see if what you are looking for has been "whiteout'd".


jemimah wrote:
Adding more layers makes it slower because each whiteout file needs to be checked on each layer.

Last question.
When are these whiteout files checked? Is it only when loading an SFS, at startup, at shutdown... or every time the filesystem is accessed?
I'm trying to wrap my head around it, but I'm not quite understanding the order of events here. Any way someone can bullet-point summarize it for me? Razz

_________________



My PC is for sale
jemimah
Posted: Sun 06 Feb 2011, 13:19

Ok here is the situation that definitely gets you I/O errors.

File exists in Read-Only layer
Both file and its whiteout exist in Read-Write layer.


Code:
# pwd
/initrd/pup_rw/root/puppy-reference
# ls -al
total 8
drwxr-xr-x  2 root root 4096 2011-02-06 12:18 .
drwxr-xr-x 44 root root 4096 2011-02-06 12:17 ..
-rw-r--r--  1 root root    0 2011-02-06 12:18 audio
-r--r--r-- 22 root root    0 2011-01-31 06:52 .wh.audio



Code:
pwd
/root/puppy-reference
# ls
ls: cannot access audio: Input/output error
audio        doc         mini-icons  ps-pdf        text           video
backgrounds  midi-icons  pixmaps     spreadsheets  vector-images  xml


This does happen on occasion unintentionally with Puppy, and the only fix is to delete the offending whiteouts by hand.

So at the very least, it is necessary to check the save file.
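The hand-cleanup described above could be sketched as a find over the rw branch: any '.wh.' file sitting next to a real file of the same name is in the broken state shown, and can be removed. An illustration with a throwaway demo tree, not a tested Puppy tool - deleting whiteouts blindly on a live system is risky.

```shell
#!/bin/sh
# Sketch of cleaning up the broken state shown above: a whiteout and a
# real file of the same name side by side in the same rw branch.
# RW is a throwaway demo tree here; on Puppy it would be /initrd/pup_rw.
RW=$(mktemp -d)
mkdir -p "$RW/root/puppy-reference"
touch "$RW/root/puppy-reference/audio" "$RW/root/puppy-reference/.wh.audio"

# Skip aufs's own .wh..wh.* bookkeeping files.
find "$RW" -name '.wh.*' ! -name '.wh..wh.*' | while read -r W; do
 REAL="$(dirname "$W")/$(basename "$W" | sed 's/^\.wh\.//')"
 # a whiteout plus a real file in the same branch is the error condition
 [ -e "$REAL" ] && rm -f "$W" && echo "removed $W"
done
```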
jemimah
Posted: Sun 06 Feb 2011, 13:32    Post subject: Re: Speeding up the SnapMerge

Q5sys wrote:
[questions quoted in full above]

1) None of this applies to a full install, as AUFS is not used at all in that case. The main topic of interest here is shutting down in Pupmode 13. It's pretty annoying to wait 10 minutes for the snapmerge script to save your files before letting you shut down.

2) The file will not appear for the user. The user shouldn't have to care about the layers. The whiteouts in the top layers (the save file and the RAM layer) override the existence of files in lower layers.

3) If you delete a file that exists in a read-only layer a whiteout file is created in the read-write layer. After that, there's no way to get the original file back (without munging the layers yourself). You can create a new file with the same name - in which case the whiteout file will be deleted but now your new file is there blocking the original version.

4) I assume the AUFS developers thought of that and rejected it for some good reason - but I don't know what that reason might be.

5) Whiteout files are checked in the init script and the snapmerge script - and maybe also in newer versions of the pet install script that write directly to the save file in Pupmode 13, but it's been a while since I looked at that. I think the checks in the init script can be moved to shinobar's sfs-loader script, totally obviating the need for a reboot - I don't believe there's a compelling reason the filesystem needs to be unmounted for a layer update.
jpeps
Posted: Sun 06 Feb 2011, 14:16

I've also had to manually remove whiteout files that get into the pupsave and prevent subsequent loading of files. For example, picpuz was separated out in an old remaster, although it is there in the present lupu-sfs. However, there's

/initrd/pup_rw/usr/share/pixmaps/.wh.picpuz.png
/initrd/pup_rw/usr/local/.wh.picpuz

...so files are missing, and it won't run. Delete the whiteouts, reboot, and all is well.
technosaurus
Posted: Sun 06 Feb 2011, 17:35

Just changing from bash to busybox ash gives ~2000% improvement, because ash can use applets as builtins ... saving 0.02s per call on average just by not having to "locate" a separate file (this is usually the bulk of the time used, since the actual actions are very basic).

Use stat -c <format> instead of stat --format=<format>
And similar changes (the df wrapper is slow - use busybox df)
Fork child processes for in-loop actions that don't affect the loop (adding a & to the end of chown, chgrp, chmod, cp... will let the loop continue)
Use $((2 + 2)) instead of expr... it is way faster, at least in ash, by over 2000%

I did significant testing for bashbox to determine which methods were fastest.
Pulling the loop operations out into a function that you can spawn with

Firstloopfxn $params &

is one of the fastest and easiest ways to speed things up (it _will_ still use lots of CPU).

Just remember that scripts don't get optimized by a compiler, so we have to do it ourselves, especially inside loops.
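A minimal sketch of two of those tips together - builtin arithmetic instead of expr, and backgrounding per-file work so the loop isn't serialized on it. The function and file names are invented for the demo.

```shell
#!/bin/sh
# Sketch of two of the tips above: $((...)) instead of expr, and
# backgrounding per-file work that doesn't affect the loop itself.
fixup() {
 chmod 644 "$1" 2>/dev/null   # stand-in for per-file chown/chmod/cp work
}

D=$(mktemp -d)
touch "$D/a" "$D/b" "$D/c"

COUNT=0
for F in "$D"/*; do
 COUNT=$((COUNT + 1))  # builtin arithmetic: no fork, unlike `expr $COUNT + 1`
 fixup "$F" &          # backgrounded: the loop moves on immediately
done
wait                    # collect the children before relying on the results
echo "processed $COUNT files"
```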

jpeps
Posted: Sun 06 Feb 2011, 17:49

technosaurus wrote:
Use stat -c <format> instead of stat --format=<format>
And similar changes (df wrapper is slow - use busybox df)
Think that's busybox stat, otherwise it's the same.
technosaurus
Posted: Sun 06 Feb 2011, 19:18

jpeps wrote:
technosaurus wrote:
Use stat -c <format> instead of stat --format=<format>
And similar changes (df wrapper is slow - use busybox df)

Think that's busybox stat, otherwise it's the same.

That really depends. On really old busybox versions, that is true. Newer versions have an option to prefer applets: in ash, this feature first looks for an applet with the same name, so busybox stat is used automatically (just by changing the shebang to #!/bin/ash). To use the "full" version you also need to give the path.

jemimah
Posted: Sun 06 Feb 2011, 19:22

Here is a patch from Dougal that implements some of Technosaurus' suggestions and some other stuff.

I've also added /root/.cache and /var/cache to the excludes, and put ampersands at the end of file-modifying operations per Technosaurus' suggestion.

I'm going to bump up the number of SFS layers as well for the next Fluppy and we'll see what the performance is like and if saving still works correctly.
Attachment: snapmergepuppy_dougal.diff.gz (2.02 KB, downloaded 491 times)
scsijon

Joined: 23 May 2007
Posts: 1023
Location: the australian mallee

Posted: Sun 06 Feb 2011, 19:55

Not sure if this should be in this topic, but I do know it shouldn't be in its previous source topic!

Looking at the code and comments in both:

I wonder if there is the ability to control which "layer" user sfs's are located in - something like: those you know you intend to leave permanently (or semi-permanently) go into one group and are managed one way, while those you use on an intermittent basis or just wish to "try out" go into another group.
jpeps
Posted: Sun 06 Feb 2011, 22:43

technosaurus wrote:
[previous exchange quoted in full above]


I tried with bash/sh/ash in Lucid. All use GNU coreutils unless I specifically invoke "busybox":

Code:

BusyBox v1.16.2 (2010-06-19 18:02:46 GMT-8) multi-call binary.

Usage: stat [OPTIONS] FILE...

Display file (default) or filesystem status

Options:
   -c fmt   Use the specified format



Code:

Usage: stat [OPTION]... FILE...
Display file or file system status.

  -L, --dereference     follow links
  -f, --file-system     display file system status instead of file status
  -c  --format=FORMAT   use the specified FORMAT instead of the default;
                          output a newline after each use of FORMAT