Re: Please help me understand ->writepage. Was Re: segfault mdadm --write-behind, 2.6.14-mm2 (was: Re: RAID1 ramdisk patch)

Neil Brown wrote (among other things):
> On Monday November 21, [email protected] wrote:
> > bitmap->filemap_attr can be allocated with kzalloc() now.
> Yes, thanks.
> 
> So Sander, could you try this patch for main against reiser4?  It
> seems to work on ext3 and tmpfs and has some chance of not mucking up
> on reiser4.

It doesn't crash or segfault anymore. It works with the bitmap file on
tmpfs, but not yet on reiser4.

This is kernel 2.6.15-rc1-mm2 with your (Neil Brown's) patch.


The setup: loop0 is backed by a file on tmpfs, loop1 by a file on reiser4,
and the bitmap file /storage/raid1.bitmap is also on reiser4.
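
For reference, a sketch of how loop devices like these can be attached;
the backing-file names and sizes below are only examples, not the exact
ones used here:

# dd if=/dev/zero of=/tmp/loop0.img bs=1M count=500
# dd if=/dev/zero of=/storage/loop1.img bs=1M count=500
# losetup /dev/loop0 /tmp/loop0.img
# losetup /dev/loop1 /storage/loop1.img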

# mdadm -C /dev/md1 --bitmap=/storage/raid1.bitmap -l1 -n2 /dev/loop0 --write-behind /dev/loop1
mdadm: RUN_ARRAY failed: No such file or directory

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]
md0 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      1003904 blocks [4/4] [UUUU]

unused devices: <none>

# mdadm -C /dev/md1 --bitmap=/storage/raid1.bitmap -l1 -n2 /dev/loop0 --write-behind /dev/loop1
mdadm: /dev/loop0 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Tue Nov 22 11:09:15 2005
mdadm: /dev/loop1 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Tue Nov 22 11:09:15 2005
Continue creating array? yes
mdadm: bitmap file /storage/raid1.bitmap already exists, use --force to overwrite
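
mdadm prompts here because the first, failed attempt already wrote
superblocks to loop0 and loop1 and created the bitmap file. Instead of
answering the prompt and re-running with --force, a cleanup along these
lines should also work (shown for illustration only):

# mdadm --zero-superblock /dev/loop0
# mdadm --zero-superblock /dev/loop1
# rm /storage/raid1.bitmap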

# mdadm -C /dev/md1 -f --bitmap=/storage/raid1.bitmap -l1 -n2 /dev/loop0 --write-behind /dev/loop1
mdadm: /dev/loop0 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Tue Nov 22 11:09:15 2005
mdadm: /dev/loop1 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Tue Nov 22 11:09:15 2005
Continue creating array? yes
mdadm: RUN_ARRAY failed: Success

# cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10] 
md0 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      1003904 blocks [4/4] [UUUU]
      
unused devices: <none>


dmesg:
[42949583.660000] loop: loaded (max 8 devices)
[42949655.110000] md: bind<loop0>
[42949655.110000] md: bind<loop1>
[42949655.110000] md: md1: raid array is not clean -- starting background reconstruction
[42949655.110000] md1: bitmap file is out of date (0 < 1) -- forcing full recovery
[42949655.110000] md1: bitmap file is out of date, doing full recovery
[42949655.680000] md1: bitmap initialized from disk: read 0/4 pages, set 0 bits, status: 1
[42949655.680000] md1: failed to create bitmap (1)
[42949655.680000] md: pers->run() failed ...
[42949655.680000] md: md1 stopped.
[42949655.680000] md: unbind<loop1>
[42949655.680000] md: export_rdev(loop1)
[42949655.680000] md: unbind<loop0>
[42949655.680000] md: export_rdev(loop0)
[42949671.480000] md: bind<loop0>
[42949671.480000] md: bind<loop1>
[42949671.480000] md: md1: raid array is not clean -- starting background reconstruction
[42949671.480000] md1: bitmap file is out of date (0 < 1) -- forcing full recovery
[42949671.480000] md1: bitmap file is out of date, doing full recovery
[42949671.770000] md1: bitmap initialized from disk: read 0/4 pages, set 0 bits, status: 1
[42949671.770000] md1: failed to create bitmap (1)
[42949671.770000] md: pers->run() failed ...
[42949671.770000] md: md1 stopped.
[42949671.770000] md: unbind<loop1>
[42949671.770000] md: export_rdev(loop1)
[42949671.770000] md: unbind<loop0>
[42949671.770000] md: export_rdev(loop0)


It does work with the bitmap file on tmpfs:

# mdadm -C /dev/md1 -f --bitmap=/tmp/raid1.bitmap -l1 -n2 /dev/loop0 --write-behind /dev/loop1
mdadm: /dev/loop0 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Tue Nov 22 11:20:48 2005
mdadm: /dev/loop1 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Tue Nov 22 11:20:48 2005
Continue creating array? yes
mdadm: array /dev/md1 started.

silo1:~# cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10] 
md1 : active raid1 loop1[1] loop0[0]
      509056 blocks [2/2] [UU]
      [>....................]  resync =  1.2% (6528/509056) finish=2.5min speed=3264K/sec
      bitmap: 63/63 pages [252KB], 4KB chunk, file: /tmp/raid1.bitmap

md0 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      1003904 blocks [4/4] [UUUU]
      
unused devices: <none>


Is there anything you need me to test further?

Thanks for the patch!

	Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net