I have a directory with 244 files with names like m12331246123468911531238951802368109467.mlog, which I want to rename to names like C038.123312.mlog
time for u in m*mlog; do B=$(echo $u | cut -dm -f2 | cut -d. -f1); echo $u C${#B}.$(echo $B | cut -c1-6).mlog; done
takes 17 seconds
time for u in m*mlog; do B=$(echo $u | cut -dm -f2 | cut -d. -f1); echo $u C${#B}.${B:0:6}.mlog; done
takes eight seconds
time for u in m*mlog; do B=${u:1}; B=${B%.mlog}; echo $u C${#B}.${B:0:6}.mlog; done
takes 0.2 seconds.
Of course, when I replace 'echo' with 'mv' it still takes fourteen seconds, but I am not that shocked that mv over NFS might be slow.
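A complete version of the fastest variant, with the actual mv, might look like this (a sketch assuming bash and filenames of the form m&lt;digits&gt;.mlog; it runs in a throwaway directory so it is safe to try as-is):

```shell
# Sketch of the fastest variant with the actual rename (assumptions:
# bash, filenames of the form m<digits>.mlog). Demonstrated in a
# throwaway directory created just for this run.
tmp=$(mktemp -d)
cd "$tmp" || exit 1
touch m12331246123468911531238951802368109467.mlog

for u in m*.mlog; do
  B=${u:1}        # drop the leading 'm' -- pure parameter expansion, no fork
  B=${B%.mlog}    # drop the trailing '.mlog'
  mv -- "$u" "C${#B}.${B:0:6}.mlog"
done
ls    # -> C38.123312.mlog
```

Note that ${#B} gives the bare digit count (so C38 here); to get the zero-padded C038 form of the target name you could format it with printf, e.g. printf -v new 'C%03d.%s.mlog' "${#B}" "${B:0:6}".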
Which suggests that doing $() to start a new shell is taking something like a hundredth of a second on a one-year-old PC. I didn't know that. On the other hand, if I start writing code this dense in unclear bashisms, my colleagues at work will disembowel me with spoons.
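One way to see the per-fork cost directly is to time the two approaches on the same string many times (a rough sketch, bash assumed; the absolute numbers will vary with machine load, which is rather the point):

```shell
# Compare 1000 command substitutions (each forking a subshell plus two
# cut processes) against 1000 pure parameter expansions on the same string.
s=m12331246123468911531238951802368109467.mlog

time for ((i = 0; i < 1000; i++)); do
  B=$(echo "$s" | cut -dm -f2 | cut -d. -f1)   # forks every iteration
done

time for ((i = 0; i < 1000; i++)); do
  B=${s:1}; B=${B%.mlog}                        # no processes spawned
done
```

Both loops leave the same value in B; only the second stays entirely inside the shell.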
PS: if I stop running a CPU-intensive program on each of my eight cores, starting new processes gets about fifteen times faster. I can understand if it got twice as fast, but I really don't understand fifteen.
no subject
Date: 2010-07-21 08:50 am (UTC)
tcsh would be well and truly stupid enough to reparse all of .cshrc etc, but I don't feel like testing it (why oh why aren't astronomers brave enough to move on from 30 year old evil history?). It definitely does parse all of that crap when you have a #!/bin/csh script - fortunately bash doesn't do that unless you also supply -i.
No, you probably replaced the programs because most OSes have traditionally been very slow at fork() (and that goes for non-shell programs too). Solaris is called Slowaris for a reason :)
Linux has always had lower overheads at fork. The other OSes still did copy-on-write and everything, but just did it... badly.
The slowness of fork in this case when the CPUs are busy is surprising. Possibly it's just a scheduler issue: the forking process is held too long on the wait queue and starved of the resources it needs to fork.