Algorithms for searching and sorting recursively defined data structures (such as trees) nearly always use recursion. The problem with recursive algorithms is that their stack requirements are bounded only by the size of the data structure being searched, which cannot be known until runtime. It is not uncommon for a search of a large directory tree to consume enough memory to degrade system performance, or even to cause a stack overflow.
The example source code illustrates an algorithm for searching the NetWare file system that minimizes stack requirements. The algorithm uses no function-level recursion, so the only stack usage is for passing parameters to functions one level deep. It stores the handle/DTA for each subdirectory as it descends into the tree, so that on returning it can continue from where it left off.
However, the algorithm does not store the complete subdirectory listing of each subdirectory along the way. This means a given directory can contain any number of subdirectories, and the search algorithm will handle them. This approach comes at some cost in speed, however, since each directory is scanned twice (once to count its subdirectories and once to display each subdirectory), and the second scan will probably not use sequential reads.
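The descend-and-resume idea can be sketched against a small in-memory stand-in for the file system. The names below (Dir, Frame, walk, demo) are illustrative, not from the example source, and a plain integer index models the saved handle/DTA that lets each level continue where it left off:

```c
/* Hypothetical in-memory directory tree standing in for the
 * NetWare file system; structure and names are illustrative only. */
#define MAX_CHILDREN 8
#define MAX_DEPTH    16

typedef struct Dir {
    const char *name;
    struct Dir *child[MAX_CHILDREN];
    int         nchildren;
} Dir;

/* Per-level state: the analogue of a saved directory handle/DTA
 * plus a resume position, one record per level of depth. */
typedef struct {
    Dir *dir;
    int  next;   /* index of next child to visit (models the saved DTA) */
} Frame;

/* Visit every directory depth-first with no function-level recursion.
 * Memory per level is one Frame, regardless of how many subdirectories
 * a directory contains. Returns the number of directories visited. */
int walk(Dir *root)
{
    Frame stack[MAX_DEPTH];
    int top = 0, visited = 0;

    stack[top].dir = root;
    stack[top].next = 0;
    visited++;                      /* "display" the root */

    while (top >= 0) {
        Frame *f = &stack[top];
        if (f->next < f->dir->nchildren) {
            Dir *sub = f->dir->child[f->next++];
            visited++;              /* "display" this subdirectory */
            top++;                  /* descend, leaving resume state behind */
            stack[top].dir = sub;
            stack[top].next = 0;
        } else {
            top--;                  /* done at this level; pop to parent */
        }
    }
    return visited;
}

/* A tiny fixed tree: root -> { a -> { c }, b } = 4 directories. */
int demo(void)
{
    Dir c = { "c", { 0 }, 0 };
    Dir a = { "a", { 0 }, 0 };
    Dir b = { "b", { 0 }, 0 };
    Dir root = { "root", { 0 }, 0 };
    a.child[0] = &c;  a.nchildren = 1;
    root.child[0] = &a;  root.child[1] = &b;  root.nchildren = 2;
    return walk(&root);
}
```

In the real algorithm the `next` index would be the saved search handle/DTA, so returning to a level does not require re-listing the children already visited.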
The algorithm can descend as many subdirectory levels as there is memory to store the subdirinfo structures. Each structure is potentially MAX_DIR_PATH + sizeof(long) + sizeof(DIRHANDLE) bytes. The actual implementation has a depth limit defined by the maximum path length. The algorithm can handle any number of files in a directory. If filenames are displayed when each subdirectory is first entered (so that only a single additional search of the directory is needed to show files), the directory is searched and all files matching the search-attribute criteria are shown.
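The per-level cost can be made concrete with a hypothetical layout. The field names and the MAX_DIR_PATH value below are assumptions for illustration, since the excerpt does not show the real subdirinfo definition:

```c
#include <stddef.h>

#define MAX_DIR_PATH 255           /* assumed value, for illustration */
typedef unsigned long DIRHANDLE;   /* stand-in for the NetWare type */

/* One of these is kept per directory level as the search descends. */
typedef struct subdirinfo {
    char      path[MAX_DIR_PATH];  /* subdirectory name at this level */
    long      dta;                 /* saved search position (DTA) */
    DIRHANDLE handle;              /* saved directory handle */
} subdirinfo;

/* sizeof(subdirinfo) is at least MAX_DIR_PATH + sizeof(long) +
 * sizeof(DIRHANDLE) bytes, plus any alignment padding. */
```

The maximum search depth is therefore available memory divided by roughly this structure size, which is why the cost per level, not per subdirectory, is what matters.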
Note: A faster approach would be to add all the subdirectories of a given directory when we first process it, then pop the top one off, display it (and possibly its files), and repeat the process of scanning for subdirectories and adding any found. However, this limits the search depth to the number of subdirectory names that can be stored in memory: a directory with many subdirectories would consume a lot of space, reducing the maximum depth.
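This faster alternative can be sketched the same way, against a small in-memory tree; Node, walk_eager, and the array sizes are illustrative assumptions, not part of the example source:

```c
#define MAX_KIDS    8
#define MAX_PENDING 64

typedef struct Node {
    const char  *name;
    struct Node *child[MAX_KIDS];
    int          nchildren;
} Node;

/* Pop a directory, "display" it, and in a single scan push every
 * subdirectory found. Each directory is scanned only once, but memory
 * now grows with the number of pending entries, not with the depth. */
int walk_eager(Node *root)
{
    Node *pending[MAX_PENDING];
    int top = 0, visited = 0;

    pending[top++] = root;
    while (top > 0) {
        Node *d = pending[--top];
        visited++;                         /* display d (and its files) */
        for (int i = d->nchildren - 1; i >= 0; i--)
            pending[top++] = d->child[i];  /* assumes MAX_PENDING suffices */
    }
    return visited;
}

/* A tiny fixed tree: root -> { a -> { c }, b } = 4 directories. */
int demo_eager(void)
{
    Node c = { "c", { 0 }, 0 };
    Node a = { "a", { 0 }, 0 };
    Node b = { "b", { 0 }, 0 };
    Node root = { "root", { 0 }, 0 };
    a.child[0] = &c;  a.nchildren = 1;
    root.child[0] = &a;  root.child[1] = &b;  root.nchildren = 2;
    return walk_eager(&root);
}
```

The trade-off the note describes is visible here: the `pending` array holds one entry per not-yet-visited subdirectory, so a single wide directory can exhaust it, whereas the two-scan algorithm's stack grows only with depth.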