Re: what is necessary for directory hard links

On 7/24/06, [email protected] <[email protected]> wrote:
> On Mon, 24 Jul 2006 09:21:04 PDT, Joshua Hudson said:
> > On 7/24/06, [email protected] <[email protected]> wrote:

> > Actually, I walk from the source inode down to try to find the
> > target inode. If not found, this is not attempting to create a loop.

> The problem is that the "target inode" may not be the one obviously causing the
> loop - you may be trying to link a directory into a/b/c, while the loop
> is caused by a link from a/b/c/d/e/f/g back up to someplace.

> Consider:
> [case of 3^4 directories]
> Yes, you have to search *every* directory under x, y, and z.
With a mesh graph like that one, you are right.  Rather like my
test cases to see if the  loop-detection algorithm is working.
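
To make that walk concrete, here is a rough user-space sketch of the kind of
check being discussed (a toy in-memory model with made-up names and a
fixed-size child array, nothing like the real inode code): before linking a
directory under a new parent, walk downward from that directory, and if the
new parent turns up anywhere below it, the link would close a loop and is
refused. The generation mark is just one way of keeping shared subtrees (the
mesh case) from being walked more than once per check.

#include <stdio.h>

#define MAX_CHILDREN 16

struct dir {
    const char *name;
    struct dir *children[MAX_CHILDREN];  /* may be shared via extra links */
    int nchildren;
    unsigned gen;                        /* last search that visited this dir */
};

static unsigned cur_gen;

static int reachable(struct dir *from, struct dir *target)
{
    if (from == target)
        return 1;
    if (from->gen == cur_gen)
        return 0;                        /* already searched via another link */
    from->gen = cur_gen;
    for (int i = 0; i < from->nchildren; i++)
        if (reachable(from->children[i], target))
            return 1;
    return 0;
}

/* Add "victim" as a child of "new_parent" unless that would create a loop. */
static int link_dir(struct dir *new_parent, struct dir *victim)
{
    cur_gen++;                           /* fresh search generation */
    if (reachable(victim, new_parent)) {
        fprintf(stderr, "refusing: %s under %s would create a loop\n",
                victim->name, new_parent->name);
        return -1;
    }
    new_parent->children[new_parent->nchildren++] = victim;
    return 0;
}

int main(void)
{
    struct dir a = { "a" }, b = { "b" }, c = { "c" };

    link_dir(&a, &b);    /* a/b   - allowed */
    link_dir(&b, &c);    /* a/b/c - allowed */
    link_dir(&c, &a);    /* c/a   - refused: a would end up below itself */
    return 0;
}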

> And this is an artificially small
> directory tree.  Think about a /usr/src/ that has 4 or 5 linux-kernel
> trees in it, with some 1,650 directories per tree...
Interesting case.


> > Should be obvious that the average case is much less than the
> > whole tree.

"The average case" is the one where the feature isn't used.  When you
actually *use* it, you get "not average case" behavior - not a good sign.

My use cases were generally about creating links at the twig level. As I
envisioned it, there would be two or more topical trees arranged by
different criteria, and they would bottom out at the topics: directories
that hold a number of documents.

A document might be a single file, a few files together, or a directory
that contains a few files that should always be together. I suppose that an
entire source tree could be considered one document, and therein lies the
flaw.

This system is an attempt to replace a system of hard links to files
that didn't work because certain MS applications use rename and create
when saving files. The filesystem is intended to be
exported over smbfs. It is still very much a single-user scheme,
which is why the bad worst case didn't really concern me.

> >
> > mv /a/b/c/d ../../w/z/b is implemented as this in the filesystem:
> > ln /a/b/c/d ../../w/z/b && rm /a/b/c/d
> >
> > So what it's going to do is try to find z under /a/b/c/d.

> Even if that's sufficient (which it isn't), it's going to be painful to lock
> the filesystem for 20 or 30 seconds while you walk everything to make sure
> there's no problem.

Maybe someday I'll work out a scheme in which much less is locked.
Conceptually, the only operations that need to be locked for the algorithm
to work are creating hard links to directories and renaming a directory
into a different parent directory.
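
As a sketch of that narrower locking (hypothetical helper names, reusing the
toy link_dir from the earlier sketch, and a single pthread mutex standing in
for whatever the real filesystem would actually use), only the two loop-prone
entry points take the lock, so ordinary file operations never wait on the
walk; a cross-directory rename is just the ln-then-rm pair from the mv
example above, done under the same lock:

#include <pthread.h>

struct dir;                                                    /* toy model from the earlier sketch */
int  link_dir(struct dir *new_parent, struct dir *victim);     /* loop check + link, as sketched above */
void unlink_dir(struct dir *old_parent, struct dir *victim);   /* hypothetical helper: drop one name */

static pthread_mutex_t dir_topology_lock = PTHREAD_MUTEX_INITIALIZER;

int link_dir_locked(struct dir *new_parent, struct dir *victim)
{
    pthread_mutex_lock(&dir_topology_lock);
    int err = link_dir(new_parent, victim);
    pthread_mutex_unlock(&dir_topology_lock);
    return err;
}

int rename_dir_locked(struct dir *old_parent, struct dir *new_parent,
                      struct dir *victim)
{
    pthread_mutex_lock(&dir_topology_lock);
    int err = link_dir(new_parent, victim);      /* "ln" the new name  */
    if (!err)
        unlink_dir(old_parent, victim);          /* then "rm" the old  */
    pthread_mutex_unlock(&dir_topology_lock);
    return err;
}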

> (which it isn't)
Counterexample? I would swear that any cycle created by a rename must pass
through the new parent into the victim and back to the new parent.
