Problems with the number of files in a folder
Problems do seem to exist.
A rule of thumb is to aim for a directory size of <= 20k files, although I’ve seen relatively decent performance with up to 100k files per directory.
One option is to restructure the directory so that it does not contain that many files. I ran some tests, and on a default (untuned) ext3 partition each subsequent write degrades horribly past roughly 2000 files. So keeping a directory to within 2000 files should be fine.
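That restructuring can be sketched as a small script that shards a flat directory into hash-named buckets. The paths and the two-character bucket scheme below are illustrative assumptions, not from the original setup:

```shell
# Shard a flat directory into up to 256 subdirectories keyed by the
# first two hex characters of each filename's md5, keeping every
# directory well under the ~2000-file comfort zone.
SRC=uploads_flat      # hypothetical source directory
DST=uploads_sharded   # hypothetical destination
mkdir -p "$SRC" "$DST"
touch "$SRC/a.jpg" "$SRC/b.jpg" "$SRC/c.jpg"   # demo files
for f in "$SRC"/*; do
  name=$(basename "$f")
  bucket=$(printf '%s' "$name" | md5sum | cut -c1-2)
  mkdir -p "$DST/$bucket"
  mv "$f" "$DST/$bucket/$name"
done
```

Hashing the filename (rather than, say, taking its first letters) keeps the buckets evenly filled even when filenames share long common prefixes.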
I checked my filesystem with:
tune2fs -l /dev/sda3 | grep features
and discovered that I already have dir_index enabled, so there is no need to enable it manually.
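Had dir_index been missing, it could be enabled on an existing ext3/ext4 filesystem. A sketch, reusing the partition checked above (run it on an unmounted filesystem, and back up first):

```shell
# Turn on the dir_index feature flag, then rebuild the existing
# directory indexes; /dev/sda3 is the partition checked above.
tune2fs -O dir_index /dev/sda3
e2fsck -fD /dev/sda3   # -D optimizes (reindexes) directories
```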
I also checked whether my directory is indexed with:
lsattr -d /var/www/gerne.ch/web/wp-content/uploads/
and noticed the capital letter I in the output (the I attribute means the directory is indexed with hashed trees), so everything is OK. There should be no problem.
Folders that grow beyond a single filesystem block (normally 4k) will be indexed once dir_index is enabled.
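To know where that one-block threshold sits on a given system, the filesystem block size can be queried. A minimal check (the `.` path is an assumption, substitute any path on the filesystem of interest):

```shell
# Print the block size of the filesystem containing the current
# directory; stat -f reports filesystem (not file) metadata.
stat -fc %s .
```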
Should I set noatime?
No, not really, anymore. Today, relatime is part of the default mount options unless overridden, and it gives much of the same speed benefit as noatime.
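To confirm which of these options is actually active on a mount, the options column of /proc/mounts can be inspected. A minimal check for the root filesystem (adjust the mount point as needed):

```shell
# Print the mount options of the root filesystem; look for
# relatime or noatime in the output (field 4 of /proc/mounts).
awk '$2 == "/" { print $4; exit }' /proc/mounts
```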
Still, the number of writes to disk under a relatime mount is close to double that of a noatime one. See more in these resources:
Change I/O scheduler?
To see which I/O scheduler is currently in use, read it from sysfs.
No need to change anything here, as mine is already the default.
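A sketch of that check, assuming a Linux system that exposes schedulers under /sys/block (device names are illustrative):

```shell
# Print the I/O scheduler for each block device; the active
# scheduler is the one shown in square brackets, e.g. [mq-deadline].
for f in /sys/block/*/queue/scheduler; do
  if [ -e "$f" ]; then
    dev=${f#/sys/block/}; dev=${dev%/queue/scheduler}
    printf '%s: %s\n' "$dev" "$(cat "$f")"
  fi
done
# To switch schedulers (as root), write the name back, e.g.:
#   echo mq-deadline > /sys/block/sda/queue/scheduler
```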
SSD benchmark of I/O schedulers