The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious, or whatever) will result in your account being removed.
Don't rush him. It's only been a little more than 9 years. He probably needs time to think of one...
Anything that is unrelated to elephants is irrelephant. - Anonymous | The problem with quotes on the internet is that you can never tell if they're genuine. - Winston Churchill, 1944 | Never argue with a fool. Onlookers may not be able to tell the difference. - Mark Twain
Why would the size of the files matter? -- Because if the "files" are small enough, sticking them in some other cataloging system might be a better idea. Maybe a database, maybe a custom archiving system. Think of things like version control systems.
File access is far more infrequent. -- Then just do whatever you want; it won't matter.
IIRC, NTFS uses a B-tree variant to store file names in a directory. This guarantees fast access to a single file, but may slow down access if you are trying e.g. to enumerate all files in the directory.
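A minimal sketch of what that difference looks like in practice, assuming Python on an NTFS volume (the directory path and file name below are hypothetical, and actual timings will depend on caching):

```python
import os
import time

BIG_DIR = r"D:\data\bigdir"                        # hypothetical huge directory
KNOWN_FILE = os.path.join(BIG_DIR, "sample_000001.dat")  # hypothetical file name

# Looking up one file by name can use the directory index directly.
t0 = time.perf_counter()
st = os.stat(KNOWN_FILE)
print(f"stat of one file: {time.perf_counter() - t0:.6f} s, size {st.st_size}")

# Enumerating everything has to walk every entry, so it scales with the
# number of files no matter how the directory is indexed.
t0 = time.perf_counter()
with os.scandir(BIG_DIR) as it:
    count = sum(1 for _ in it)
print(f"enumerating {count} entries: {time.perf_counter() - t0:.3f} s")
```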
FAT32 has a limit of just under 64K entries. The search is linear. Note that a long filename takes at least two entries - one for the short name and one for the long name.
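To make the "at least two entries" point concrete, here is a rough back-of-the-envelope sketch (my own arithmetic, assuming I remember the format right: one 8.3 short entry per file plus one long-name slot per 13 characters; the example file name is made up):

```python
import math

FAT32_DIR_ENTRY_LIMIT = 65_534   # just under 64K usable entries per directory

def entries_for_name(name: str) -> int:
    """Directory slots one file with a long name consumes:
    one 8.3 short entry plus one LFN slot per 13 characters.
    (A name that fits plain 8.3 would need only the single short entry.)"""
    lfn_slots = math.ceil(len(name) / 13)
    return 1 + lfn_slots

name = "measurement_2020-08-11_06-14-00.dat"   # hypothetical 35-character name
per_file = entries_for_name(name)
print(f"{name!r} uses {per_file} directory entries")
print(f"max files with names like this: {FAT32_DIR_ENTRY_LIMIT // per_file}")
```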
I don't know how exFAT stores directories.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
If there are reasons to distribute the files over a series of subdirectories, what are those reasons, and why exactly would it be an advantage?
If performance degrades with the number of files in a directory, there is only one explanation:
The directory is organized as a flat, unsorted list of files.
This implies that to find a file, you have to scan the list/directory sequentially, in O(n).
If the OS keeps the directory sorted by the key you search on (the file name), the cost of finding a file is O(log(n)).
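Here is a minimal sketch of that difference in plain Python (nothing filesystem-specific, the file names are made up): a sequential scan over an unsorted list versus a bisect lookup over a sorted one.

```python
import bisect
import random
import time

# Build a directory-like list of n hypothetical file names.
n = 1_000_000
names = [f"file_{i:07d}.dat" for i in range(n)]
random.shuffle(names)            # the unsorted "flat list" directory
sorted_names = sorted(names)     # a directory kept sorted by name

target = "file_0999999.dat"

t0 = time.perf_counter()
found_linear = target in names                     # O(n) sequential scan
t_linear = time.perf_counter() - t0

t0 = time.perf_counter()
i = bisect.bisect_left(sorted_names, target)       # O(log n) lookup
found_sorted = i < n and sorted_names[i] == target
t_sorted = time.perf_counter() - t0

print(f"linear scan: {t_linear:.6f} s, sorted lookup: {t_sorted:.6f} s")
```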
“Everything should be made as simple as possible, but no simpler.” Albert Einstein
As already mentioned, the problems will start when you try to browse the disk in question with pretty much any existing application.
A better option would be to put the files in a database as blobs. At that point, you'll only have one file on the disk for the database itself. It would also be easier to organize and manage than a complex folder hierarchy.
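A minimal sketch of the blob idea, using SQLite via Python's sqlite3 module as one concrete example (the database, table, and file names below are made up):

```python
import sqlite3
from pathlib import Path

# One database file on disk instead of hundreds of thousands of small files.
con = sqlite3.connect("archive.db")   # hypothetical database name
con.execute("""
    CREATE TABLE IF NOT EXISTS files (
        name    TEXT PRIMARY KEY,   -- indexed, so lookups by name stay fast
        created TEXT,
        data    BLOB
    )
""")

def store(path: Path) -> None:
    """Insert one file's contents as a blob, keyed by its name."""
    con.execute(
        "INSERT OR REPLACE INTO files (name, created, data) "
        "VALUES (?, datetime('now'), ?)",
        (path.name, path.read_bytes()),
    )

def load(name: str) -> bytes:
    """Fetch a file's contents back by name."""
    row = con.execute("SELECT data FROM files WHERE name = ?", (name,)).fetchone()
    return row[0] if row else b""

store(Path("measurement_0001.dat"))   # hypothetical source file
con.commit()
```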
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
We have some directories that contain that kind of file count; the record I can remember right now is around 450k files in a single folder.
They come from long-term measurements that generate between 3 and 5 data files per minute, each between 1 and 5 MB.
Accessing the directory is slow, changing the sort order from name to timestamp is slow, moving the directory to another place is slow, getting the properties of the folder is slow, and deleting the folder once it is no longer needed is slow.
Windows 10 is even slower, especially the "folder properties" dialog: it needs over 15 minutes to count the files and report the size of the folder.
Windows 7 did it in 30 or 40 seconds.
We can't move that to FAT drives, due to the entry-count limitations others mentioned. It needs to be NTFS.
If something has a solution... why do we have to worry about it? If it has no solution... for what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
Neither Windows nor Linux does well when you put too many files in a single folder. I've tried it with a million files, and it is very painful. Some operations, like simply listing the directory or even trying to delete the files, take absurdly long.
Those operations simply aren't designed for large numbers of files.
As already said, around 10,000 files in a folder is a reasonable maximum. I simply make it 1,000. So for a million files, spread them across 1,000 folders. There is a nice symmetry here, and it works like a charm.
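A minimal sketch of that 1,000-folder split, assuming Python and a hash of the file name to pick the bucket so any file can be located again without a lookup table (the directory names below are made up):

```python
import hashlib
import shutil
from pathlib import Path

SRC = Path("flat_dir")    # hypothetical flat directory with ~1M files
DST = Path("bucketed")    # hypothetical destination root
N_BUCKETS = 1_000         # ~1,000 files per folder for a million files

def bucket_for(name: str) -> str:
    """Stable bucket '000'..'999' derived from the file name."""
    h = int(hashlib.md5(name.encode("utf-8")).hexdigest(), 16)
    return f"{h % N_BUCKETS:03d}"

for f in SRC.iterdir():
    if f.is_file():
        target_dir = DST / bucket_for(f.name)
        target_dir.mkdir(parents=True, exist_ok=True)
        shutil.move(str(f), target_dir / f.name)
```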