Hi Matt, Tony and Andy,
I have a better characterization of the problem. It seems like Matt was
correct about the block size and now I can successfully mount the drive.
The real problem I'm seeing is that the machine crashes as the ramdisk
fills, even though there is clearly memory free. I reproduced this on a
local machine rather than a remote one, and the message that appears on
the console is an "Out of Memory" error.
Here I create a 2.5GB ramdisk and write into it 512MB at a time. After
the third write, the machine crashes and must be power-cycled.
I've also taken Tony's suggestion into account: it's possible that perl
itself is crashing, so I've reverted to dd, which does no user-space
buffering of its output.
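To separate what the write really costs from transient writeback, one option is to flush around each dd and compare free(1) before and after. A minimal sketch, assuming the same /mnt/r0 mount as in the session below (conv=fsync is GNU dd; it makes dd fsync the output file before exiting):

```shell
# Flush outstanding writeback, note memory in use, write 512MB with an
# fsync at the end, then look at memory again. Growth well beyond the
# 512MB of file data would suggest the pages are held more than once.
sync
free -k | awk 'NR==2 {print "used before:", $3, "KB"}'
dd if=/dev/zero of=/mnt/r0/file0 bs=1M count=512 conv=fsync
sync
free -k | awk 'NR==2 {print "used after:", $3, "KB"}'
```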
[root@stupid mnt]# mke2fs -b 1024 -vm0 /dev/ram0
mke2fs 1.37 (21-Mar-2005)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
327680 inodes, 2621440 blocks
0 blocks (0.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=69730304
320 block groups
8192 blocks per group, 8192 fragments per group
1024 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409, 663553,
1024001, 1990657
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
[root@stupid mnt]# mount /dev/ram0 /mnt/r0
[root@stupid mnt]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 5.9G 798M 5.1G 14% /
/dev/sda1 99M 8.8M 85M 10% /boot
/dev/shm 1.5G 0 1.5G 0% /dev/shm
/dev/sda6 61G 33M 61G 1% /hot
/dev/sda3 4.0G 33M 3.9G 1% /tmp
/dev/ram0 2.5G 3.1M 2.5G 1% /mnt/r0
[root@stupid r0]# free
total used free shared buffers cached
Mem: 3115052 105952 3009100 0 50892 23988
-/+ buffers/cache: 31072 3083980
Swap: 4096532 0 4096532
[root@stupid r0]# dd if=/dev/zero of=/mnt/r0/file0 bs=1024 count=524288
524288+0 records in
524288+0 records out
[root@stupid r0]# free
total used free shared buffers cached
Mem: 3115052 876364 2238688 0 267960 548180
-/+ buffers/cache: 60224 3054828
Swap: 4096532 0 4096532
[root@stupid r0]# dd if=/dev/zero of=/mnt/r0/file1 bs=1024 count=524288
524288+0 records in
524288+0 records out
[root@stupid r0]# free
total used free shared buffers cached
Mem: 3115052 1958340 1156712 0 796196 1072684
-/+ buffers/cache: 89460 3025592
Swap: 4096532 0 4096532
[root@stupid r0]# dd if=/dev/zero of=/mnt/r0/file2 bs=1024 count=524288
Connection to 192.168.1.81 closed by remote host.
Connection to 192.168.1.81 closed.
As you can see, I've written less than 1.5GB of files into the ramdisk,
yet used memory grows by considerably more than the amount written, and
I'm guessing the machine is genuinely running out of memory. Does
anybody know why this would happen? Could it be that the small block
size is detrimental to a larger filesystem?
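The free(1) deltas above quantify this. Each 512MB write grows "used" by far more than 512MB, which would be consistent with the data being held twice, once as the ramdisk's backing pages (showing up in buffers) and once in the page cache (showing up in cached); that interpretation is my assumption, but the arithmetic on the numbers copied from the session is straightforward:

```shell
# "used" KB figures copied from the free(1) runs above
before_any=105952     # before the first dd
after_first=876364    # after writing 512MB
after_second=1958340  # after writing 1GB total
echo "first write cost:  $(( (after_first - before_any) / 1024 )) MB"
echo "second write cost: $(( (after_second - after_first) / 1024 )) MB"
# Each ~512MB file costs roughly 750-1050MB of RAM, and the ramdisk's
# pages cannot be swapped out, so three such writes plus the base
# system can plausibly exhaust ~3GB.
```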
Thanks!
Brandon
Matt Roth wrote:
Brandon,
Notice that the block and fragment sizes for the 512+MB RAM disk are
4KB (4096 bytes), but they are 1KB (1024 bytes) for the 512MB one. I
believe the default RAM disk block size is 1KB, which is why the mount
fails for the larger RAM disk.
You have two choices to resolve this:
1) Use the "ramdisk_blocksize" kernel parameter to increase the RAM
disk block size to 4K:
ramdisk_blocksize=4 # I *believe* this size is in KB, but you
may want to double check
2) Use the "-b" parameter to mke2fs to format the RAM disk with a 1K
block size:
mke2fs -b 1024 -vm0 /dev/ram0 530000
Sincerely,
Matthew Roth
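For concreteness, option 1 above would be applied on the kernel command line (e.g. the kernel line in grub.conf). On the kernels I have seen, ramdisk_blocksize takes a value in bytes and ramdisk_size a value in KB, but as Matt notes, the units are worth double-checking against the documentation for your kernel version; the kernel image name and root device here are illustrative:

```
kernel /vmlinuz-2.6.9-EL ro root=/dev/sda2 ramdisk_size=2621440 ramdisk_blocksize=4096
```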