|Steve Carlson||Jul 31, 2000 12:32 pm|
|Zhihui Zhang||Jul 31, 2000 12:44 pm|
|Alfred Perlstein||Jul 31, 2000 12:46 pm|
|Terry Lambert||Jul 31, 2000 5:31 pm|
|Kris Kennaway||Jul 31, 2000 5:45 pm|
|Zhihui Zhang||Jul 31, 2000 5:46 pm|
|Terry Lambert||Aug 2, 2000 1:49 pm|
|Terry Lambert||Aug 2, 2000 1:53 pm|
|Marius Bendiksen||Aug 2, 2000 2:05 pm|
|Marius Bendiksen||Aug 2, 2000 2:11 pm|
|Subject:||Re: FFS performance for large directories?|
|From:||Zhihui Zhang (zzh...@cs.binghamton.edu)|
|Date:||Jul 31, 2000 12:44:35 pm|
On Mon, 31 Jul 2000, Steve Carlson wrote:
> First off, my apologies in advance if this is not the type of technical question expected in this forum - I checked the charter and archives to get a feel for the theme, but still wasn't sure if this would be inappropriate. -questions was no help, either...
>
> I'm trying to figure out at what point I can expect performance issues with an FFS filesystem if I have directories with a massive number of small files or symlinks. As far as I understand it, there are a number of inodes located within a cylinder group, and the inodes for files are ideally placed in the same cylinder group as their parent directory. But if I were to have a massive number of small files or symlinks in a directory, wouldn't I run out of local inodes and thus start to see a performance issue when working in that directory?
>
> How can I determine the maximum number of files I should safely place in a directory without my performance suffering? I've been unable to find commentary on this in print or on the web - everything I've read centers only on performance issues when the disk becomes full.
Since nobody has answered your question, let me try:
FFS is not very good at large directories because it has to search each directory linearly. If you have 1000 entries in a directory, then on average you have to scan 1000/2 = 500 entries before you find the one you want. Other solutions exist, such as B*-trees or hash tables, which speed up directory lookup time considerably.
Having said this, you can try to keep all of a directory's data in memory. This is the idea behind Matt Dillon's VMIO-backed directory caching. You can definitely find discussions of this in the mailing list archives.
A third thing is that FFS performs poorly when traversing large trees like /usr/ports. This has to do with how FFS lays out directory inodes (as opposed to file inodes). The book "The Design and Implementation of the 4.4BSD Operating System" explains this well. In fact, if you read that book carefully, you will get a better picture than you can from a mailing list. Good luck!
To Unsubscribe: send mail to majo...@FreeBSD.org with "unsubscribe freebsd-fs" in the body of the message