you want. Other solutions exist, such as a B*-tree or a hash table; either
would speed up directory lookup time.
On a side note, as to B-trees or such, I think this could be done as a
hack to UFS, but I'm not very intrigued by the prospect of looking into
it, nor the idea of hacking it up even further.
Having said this, you can try to keep all directory files in memory. This
is the idea behind Matt's VMIO directory work. You can definitely find
discussions of this in the mailing list archives.
This will improve efficiency for situations where you reuse names or at
least access stuff in a somewhat non-random manner. You may not want to
use this in cases where the expected lifetime of a cache entry is low.
It does not, of course, help the seek issue. Striping your RAID array
correctly, to keep the disks from tending to be bound to each other, would
probably alleviate some of the problem, but not much, I think.
A third thing is that FFS performs poorly when accessing /usr/ports.
(Actually, it performs poorly on any complex directory hierarchy,
especially when traversed depth-first.) This has something to do with how
FFS lays out directory inodes (not file inodes): it (IIRC) tries to spread
the directory inodes evenly across the cylinder groups, while trying to
keep the file inodes in the same cylinder group as the directory inode.