Subject: Re: Areca vs. ZFS performance testing.
From: Jeremy Chadwick (koi...@FreeBSD.org)
Date: Oct 30, 2008 8:31:45 pm
Cross-posting this to freebsd-fs, as I'm sure people there will have other recommendations. (This is one of those rare cross-posting situations.....)
On Fri, Oct 31, 2008 at 01:14:55PM +1000, Danny Carroll wrote:
> I've just become the proud new owner of an Areca 1231-ML which I plan to use to set up an office server.
> I'm very curious how ZFS compares to a hardware solution, so I plan to run some tests before I put this thing to work. The purpose of this email is to find out whether anyone would like to see specific things tested, and perhaps to get some advice on how to get the most information out of the tests.
>
> My setup:
> - Supermicro X7SBE board with 2GB RAM and an E6550 Core 2 Duo
> - FreeBSD 7.0-STABLE, compiled from amd64 sources from mid-August
> - 1 x ST9120822AS 120GB disk (for the OS)
> - For the array(s): 9 x ST31000340AS 1TB disks, plus 1 x ST31000333AS 1TB disk (trying to swap this for a ST31000340AS)
>
> My thoughts are to do the following tests with bonnie++:
> 1. 5-disk Areca RAID5
> 2. 5-disk ZFS RaidZ1 (connected to the Areca in JBOD mode)
> 3. 5-disk ZFS RaidZ1 (connected to the on-board ICH9 SATA controller)
> 4. 5-disk Areca RAID6
> 5. 5-disk ZFS RaidZ2 (connected to the Areca in JBOD mode)
> 6. 5-disk ZFS RaidZ2 (connected to the on-board ICH9 SATA controller)
> 7. 10-disk Areca RAID5
> 8. 10-disk ZFS RaidZ1 (connected to the Areca in JBOD mode)
> 9. 10-disk Areca RAID6
> 10. 10-disk ZFS RaidZ2 (connected to the Areca in JBOD mode)
>
> My aim is to see what sort of performance gain you get by buying an Areca card for use in JBOD mode, as well as how ZFS compares to a hardware solution that offers write caching etc. I'm really only interested in testing ZFS's volume-management performance, so for that reason I will also put ZFS on the Areca RAID drives. I'm not sure whether it's better to create two RAID volumes and stripe them, or simply to present one large disk to ZFS.
>
> Any thoughts on this setup, as well as advice on what options to give to bonnie++ (or suggestions for another disk-testing package), are very welcome.
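As a concrete illustration of one row in the test matrix above, a minimal sketch of how such a run might be scripted. The device names (da0-da4), pool name, and bonnie++ options are my own assumptions, not from the thread; the script only prints the commands rather than executing them, so it can be reviewed before touching real disks.

```shell
#!/bin/sh
# Sketch for test case 2: a 5-disk RaidZ1 pool on drives exported by the
# Areca in JBOD mode. Device names are hypothetical.
DISKS="da0 da1 da2 da3 da4"
echo "zpool create tank raidz1 ${DISKS}"

# Size the bonnie++ working set at twice physical RAM (2GB box -> 4096MB)
# so results are not dominated by the filesystem cache.
# -d: directory to test in, -s: file size in MB, -u: user to run as
echo "bonnie++ -d /tank -s 4096 -u nobody"
```

Running bonnie++ with a file size of at least 2x RAM is the usual way to keep the sequential and seek phases honest on a caching system.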
I think these sets of tests are good. There are some others I'd like to see, but they'd only be applicable if the 1231-ML has hardware cache. I can mention what those are if the card does have hardware caching.
> I do have some concern about the size of the eventual array and ZFS's use of system memory. Are there guidelines available that give advice on how much memory a box should have with large ZFS arrays?
The general concept is: "the more RAM the better". However, if you're using RELENG_7, there's not much point (speaking solely about ZFS) in getting more than maybe 3 or 4GB; you're still limited to a 2GB kmem map maximum.
Regarding size of the array vs. memory usage: as long as you tune kmem and the ZFS ARC, you shouldn't have much trouble. Several key people have reported recently that, with proper tuning, they run very large ZFS arrays without issue.
Also, a reminder: do not pick a value of 2048M for kmem_size or kmem_size_max; the machine won't boot/work. You shouldn't go above something like 1536M, although some have tuned slightly above that with success. (Remember that there is more to kernel memory allocation than just ZFS, so you don't want to exhaust the kmem map by assigning it all to ZFS. Hope that makes sense...)
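To make that tuning advice concrete, a hypothetical /boot/loader.conf fragment for a 2GB amd64 box running RELENG_7. The exact values are illustrative assumptions on my part, not recommendations from this post; they simply stay under the 1536M ceiling discussed above and cap the ARC well below kmem_size.

```
# /boot/loader.conf -- hypothetical ZFS tuning sketch (values are examples)
vm.kmem_size="1536M"
vm.kmem_size_max="1536M"
vfs.zfs.arc_max="512M"           # cap the ARC well below kmem_size
vfs.zfs.prefetch_disable="1"     # often suggested on low-RAM 7.x boxes
```

After editing loader.conf a reboot is required for the kmem settings to take effect.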
> Can an amd64 kernel make use of memory above 2GB?
Only on CURRENT; 7.x cannot and, AFAIK, never will be able to, as the engineering effort required to fix it is too great.
I look forward to seeing your numbers. Someone here might be able to compile them into some graphs and other whatnots to make things easier for future readers.
Thanks for doing all of this!
--
| Jeremy Chadwick                                jdc at parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |
_______________________________________________
free...@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-fs
To unsubscribe, send any mail to "free...@freebsd.org"