adaptive readahead overheads

I'd like to show some numbers on the pure software overheads that come
with adaptive readahead in day-to-day operation.

SEQUENTIAL ACCESS OVERHEADS, or: PER-PAGE OVERHEADS
===================================================

SUMMARY (averages of the 5 runs below)
                      user       sys       cpu         total
	ARA           0.12       5.34      91.8%       5.91
	STOCK         0.13       5.30      92.0%       5.87

        It shows about 0.7% total-time overhead for ARA (5.91 vs. 5.87).

	The overhead mainly comes from:
	- debug/accounting code
	- smooth aging accounting for the stateful method
	- readahead hit feedback

	However, if necessary, each of the above can be separated out
	and made a Kconfig option (see the sketch below). They won't
	affect the main functionality of ARA.
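
	As a rough illustration only -- CONFIG_READAHEAD_STATS,
	ra_account() and the RA_EVENT_* names below are made up for
	this sketch, not taken from the patch -- the accounting could
	be guarded like this:

enum ra_event { RA_EVENT_HIT, RA_EVENT_MISS, RA_EVENT_MAX };

#ifdef CONFIG_READAHEAD_STATS
static atomic_t ra_events[RA_EVENT_MAX];

static inline void ra_account(enum ra_event e)
{
	atomic_inc(&ra_events[e]);	/* one counter bump per event */
}
#else
static inline void ra_account(enum ra_event e)
{
	/* stats compiled out: no per-page/per-file cost */
}
#endif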

DETAILS

% time cp work/sparse /dev/null   # repeated 5 times for each setting

/proc/sys/vm/readahead_ratio = 1   (stock readahead logic)

cp work/sparse /dev/null  0.13s user 5.30s system 93% cpu 5.780 total
cp work/sparse /dev/null  0.13s user 5.29s system 92% cpu 5.847 total
cp work/sparse /dev/null  0.13s user 5.27s system 92% cpu 5.860 total
cp work/sparse /dev/null  0.13s user 5.32s system 91% cpu 5.961 total
cp work/sparse /dev/null  0.12s user 5.32s system 92% cpu 5.902 total

/proc/sys/vm/readahead_ratio = 100   (adaptive readahead)

cp work/sparse /dev/null  0.12s user 5.39s system 93% cpu 5.888 total
cp work/sparse /dev/null  0.13s user 5.32s system 92% cpu 5.923 total
cp work/sparse /dev/null  0.11s user 5.32s system 91% cpu 5.901 total
cp work/sparse /dev/null  0.12s user 5.35s system 92% cpu 5.952 total
cp work/sparse /dev/null  0.12s user 5.30s system 91% cpu 5.900 total
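
(Aside: a rough sketch of how the readahead_ratio sysctl is assumed to
select between the two code paths measured above. stock_readahead()
and adaptive_readahead() are illustrative names, not the patch's
actual functions; only the page_cache_readahead() signature follows
the 2.6-era kernel.)

/* Sketch only: a low readahead_ratio keeps the stock logic,
 * a high one runs the adaptive state machine. */
unsigned long
page_cache_readahead(struct address_space *mapping,
		     struct file_ra_state *ra, struct file *filp,
		     pgoff_t offset, unsigned long req_size)
{
	if (readahead_ratio <= 1)	/* low ratio: stock logic */
		return stock_readahead(mapping, ra, filp,
				       offset, req_size);
	/* high ratio (e.g. 100): adaptive state machine */
	return adaptive_readahead(mapping, ra, filp, offset, req_size);
}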


SMALL-FILE OVERHEADS, or: PER-FILE OVERHEADS
============================================

SUMMARY (averages of the 8 runs below; times in seconds, total in m:ss)
                    user        sys        cpu       total
	ARA         405.90      326.54      97%       12:27.59
	STOCK       407.53      322.02      97%       12:27.52

	No obvious overhead.

	There is some cost in calling readahead_close() on each file.
	However, the cases where small I/Os fall all the way through
	to the low-level readahead code are successfully reduced,
	which makes up for that cost.
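
	(Sketch of the assumed call site, for illustration only;
	readahead_close() is the ARA hook, the surrounding release
	function is made up:)

/* Assumed placement: feed the readahead state back when a
 * read-mode file is released -- the per-file cost noted above. */
static int sketch_file_release(struct inode *inode, struct file *filp)
{
	if (filp->f_mode & FMODE_READ)
		readahead_close(filp);
	return 0;
}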

DETAILS
	In a qemu guest with 156M of memory on a host with 4G of
	memory, traverse the whole 434M /usr tree, and repeat, so
	that the second and later runs involve no true disk I/O.

# time find /usr -type f -exec md5sum {} \; >/dev/null

ARA

406.00s user 325.16s system 97% cpu 12:28.17 total
403.35s user 325.15s system 97% cpu 12:23.86 total
403.00s user 325.03s system 97% cpu 12:23.61 total
406.43s user 327.64s system 97% cpu 12:30.64 total
407.03s user 325.46s system 97% cpu 12:28.17 total
406.46s user 326.89s system 98% cpu 12:26.31 total
405.96s user 328.45s system 98% cpu 12:27.08 total
409.05s user 328.55s system 97% cpu 12:32.85 total

STOCK (vanilla kernel)

408.64s user 321.55s system 97% cpu 12:27.68 total
406.39s user 320.55s system 97% cpu 12:24.58 total
408.41s user 321.22s system 97% cpu 12:27.91 total
409.10s user 324.72s system 97% cpu 12:33.16 total
406.60s user 321.81s system 97% cpu 12:26.48 total
405.60s user 322.49s system 97% cpu 12:25.29 total
408.13s user 322.19s system 97% cpu 12:27.92 total
407.35s user 321.66s system 97% cpu 12:27.11 total

Thanks,
Wu