On Tue, 2005-08-02 at 15:18 +0800, David Teigland wrote:
> Hi, GFS (Global File System) is a cluster file system that we'd like to
> see added to the kernel. The 14 patches total about 900K so I won't send
> them to the list unless that's requested. Comments and suggestions are
> welcome. Thanks
>
> http://redhat.com/~teigland/gfs2/20050801/gfs2-full.patch
> http://redhat.com/~teigland/gfs2/20050801/broken-out/
* The on-disk structures are defined in terms of uint32_t and friends,
which are NOT endian neutral. Why are they not le32/be32 and thus
endian-defined? Did you run bitwise sparse on GFS yet?
* None of your on-disk structures are packed. Are you sure? (See the sketch below, which also covers the endianness point.)
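For illustration, a minimal sketch (hypothetical names, not from the GFS
patch) of an endian-annotated, packed on-disk structure; sparse run with
-D__CHECK_ENDIAN__ will then warn on any raw access to the fields:

    #include <linux/types.h>
    #include <asm/byteorder.h>

    /* hypothetical example: on-disk fields use the sparse-checked __be*
     * types and the struct is explicitly packed */
    struct example_ondisk_header {
            __be32 eh_magic;        /* fixed big-endian byte order on disk */
            __be32 eh_flags;
            __be64 eh_blkno;
    } __attribute__((packed));

    static inline u64 example_blkno(const struct example_ondisk_header *eh)
    {
            /* be64_to_cpu() is a no-op on big-endian hosts and a swab
             * elsewhere; touching eh->eh_blkno directly would trip sparse */
            return be64_to_cpu(eh->eh_blkno);
    }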
*
+#define gfs2_16_to_cpu be16_to_cpu
+#define gfs2_32_to_cpu be32_to_cpu
+#define gfs2_64_to_cpu be64_to_cpu
why this pointless abstraction?
* +static const uint32_t crc_32_tab[] = .....
why do you duplicate this? The kernel already has a perfectly good set of generic crc32 tables/functions in lib/crc32.
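For reference, a sketch of what leaning on the generic lib/crc32 code
(pulled in with "select CRC32" in Kconfig) could look like; the ~0 seed
and final inversion here are just the common CRC-32 convention, not
necessarily what GFS wants:

    #include <linux/crc32.h>

    /* illustrative only: table-driven CRC32 from lib/crc32 instead of a
     * private crc_32_tab[] copy */
    static u32 example_checksum(const void *data, unsigned int len)
    {
            return crc32_le(~0, data, len) ^ ~0;
    }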
* Why are you using bufferheads extensively in a new filesystem?
* + if (create)
+ down_write(&ip->i_rw_mutex);
+ else
+ down_read(&ip->i_rw_mutex);
why do you use an rwsem and not a regular semaphore? You are aware that rwsems are far more expensive than regular semaphores, right?
How skewed is the read/write ratio?
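If the ratio turns out not to justify an rwsem, the plain semaphore
shape is cheaper on the fast path; a sketch with hypothetical names,
not the actual GFS structures:

    #include <asm/semaphore.h>

    /* illustrative only: readers and writers take the same lock, so the
     * create flag no longer selects a lock flavour */
    struct example_inode {
            struct semaphore i_sem;         /* init_MUTEX() at inode setup */
    };

    static void example_block_map(struct example_inode *ip, int create)
    {
            down(&ip->i_sem);
            /* ... look up or build the mapping ... */
            up(&ip->i_sem);
    }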
* Why use your own journalling layer and not, say, jbd?
* + while (!kthread_should_stop()) {
+ gfs2_scand_internal(sdp);
+
+ set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout(gfs2_tune_get(sdp, gt_scand_secs) * HZ);
+ }
you probably really want to check for signals if you do interruptible sleeps
(multiple places)
* why not use msleep() and friends instead of schedule_timeout()? You're not using the complex variants anyway (a sketch follows below).
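A minimal rewrite of the quoted loop along those lines, keeping
gfs2_scand_internal()/gfs2_tune_get() exactly as in the patch;
msleep_interruptible() wraps the set_current_state()/schedule_timeout()
pair and returns early (reporting the time left) if a signal shows up:

    while (!kthread_should_stop()) {
            gfs2_scand_internal(sdp);
            /* returns early if a signal is pending, which ties in with
             * the signal-checking point above */
            msleep_interruptible(gfs2_tune_get(sdp, gt_scand_secs) * 1000);
    }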
* +++ b/fs/gfs2/fixed_div64.h 2005-08-01 14:13:08.009808200 +0800
ehhhh why? The kernel already provides do_div() for 64-bit division; see the sketch below.
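A hedged sketch of the stock helper, for comparison; do_div() already
handles 64/32 division on 32-bit hosts:

    #include <linux/types.h>
    #include <asm/div64.h>

    /* illustrative only: do_div() replaces n with the quotient in place
     * and returns the remainder */
    static u64 example_quotient(u64 n, u32 base)
    {
            do_div(n, base);
            return n;
    }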
* +int gfs2_copy2user(struct buffer_head *bh, char **buf, unsigned int offset,
+ unsigned int size)
+{
+ int error;
+
+ if (bh)
+ error = copy_to_user(*buf, bh->b_data + offset, size);
+ else
+ error = clear_user(*buf, size);
that looks to be missing a few kmaps... what's the guarantee that b_data is actually in lowmem?
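A minimal sketch of the highmem-safe version of that branch, assuming
the offset is relative to the start of b_data; kmap()/kunmap() and
bh_offset() are the stock helpers, the rest mirrors the quoted code:

    if (bh) {
            /* b_data may point into a highmem page; map it for the copy */
            void *kaddr = kmap(bh->b_page);

            error = copy_to_user(*buf, kaddr + bh_offset(bh) + offset,
                                 size);
            kunmap(bh->b_page);
    } else
            error = clear_user(*buf, size);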
* [PATCH 08/14] GFS: diaper device
The diaper device is a block device within gfs that gets transparently
inserted between the real device and the rest of the filesystem.
hmmmm why not use device mapper or something? Is this really needed? Should it live in drivers/block? Doesn't
this wrapper just increase the risk of memory deadlocks?
* [PATCH 06/14] GFS: logging and recovery
quoting the Ren and Stimpy show is nice... but did the Ren and Stimpy authors agree to license their stuff under the GPL?
* do_lock_wait
that almost screams for wait_event() and related APIs
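A sketch of that shape with made-up names; wait_event() hides the
prepare_to_wait()/schedule()/finish_wait() boilerplate, and the waker
pairs the bit clear with a wake_up():

    #include <linux/wait.h>
    #include <linux/bitops.h>

    /* illustrative only; the names are hypothetical */
    #define EXAMPLE_LOCKED  0

    struct example_lock {
            unsigned long flags;
            wait_queue_head_t wait;         /* init_waitqueue_head() at setup */
    };

    static void example_lock_wait(struct example_lock *lk)
    {
            wait_event(lk->wait, !test_bit(EXAMPLE_LOCKED, &lk->flags));
    }

    static void example_unlock(struct example_lock *lk)
    {
            clear_bit(EXAMPLE_LOCKED, &lk->flags);
            smp_mb__after_clear_bit();      /* order the clear before the wake */
            wake_up(&lk->wait);
    }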
*
+static inline void gfs2_log_lock(struct gfs2_sbd *sdp)
+{
+ spin_lock(&sdp->sd_log_lock);
+}
why the abstraction?