D-Link Forums

The Graveyard - Products No Longer Supported => D-Link Storage => DNS-343 => Topic started by: fluxen on February 05, 2009, 04:15:50 PM

Title: The Performance Optimization Thread
Post by: fluxen on February 05, 2009, 04:15:50 PM
I imagine I'm not the only one trying to squeeze as much performance as possible out of the 343, so I figured a thread where people share their throughput numbers and setups would be of benefit to many of us. I posted some numbers earlier in the 1.5TB drive thread, which I'll quote here FYI:

Sure, but it's nothing special. The DNS-343 has 4x1.5TB Seagate 7200RPM drives. They're formatted as one RAID5 volume of 4096GB, plus a JBOD volume of the leftover 400-something GB. It's attached to the network via gigabit Ethernet.

The systems I'm using are as follows: There's a Windows Server 2008 box, quad core, 4GB RAM, with 4TB of SATA HDs on a Promise 6850ex RAID controller in RAID5. The Mac is a MacBook Pro 17" 2.5GHz, 4GB, with a 500GB internal SATA drive, running OS X 10.5.6.

The following copies were done with an 849MB Access DB I had handy. The Windows data rates are taken from 2008's "more information" speedometer, which shows how fast a single transfer is happening. The Mac rates are taken by timing how long the transfer took and dividing 849MB by the elapsed time.
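The Mac-side arithmetic is simple enough to sketch in a couple of lines of Python. Note the ~61 second elapsed time below isn't stated in the post; it's back-calculated from the 13.9MB/sec JBOD figure, purely for illustration:

```python
def throughput_mb_s(size_mb: float, elapsed_s: float) -> float:
    """Sustained transfer rate in MB/s: file size divided by elapsed time."""
    return size_mb / elapsed_s

# e.g. if the 849MB test file took about 61 seconds to copy, that works
# out to roughly 13.9MB/sec, matching the Mac-to-JBOD number.
print(round(throughput_mb_s(849, 61), 1))  # 13.9
```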

Copy from Win Server 2008 to JBOD on the DNS: 11.2MB/sec sustained
Copy from Win Server 2008 to RAID5 on the DNS: 8.5MB/sec sustained
Copy from Mac to JBOD on the DNS: 13.9MB/sec sustained
Copy from Mac to RAID5 on the DNS: 10.2MB/sec sustained

Copy from the RAID5 on the DNS to Mac: 21MB/sec sustained

Copy from Win Server 2008 to Mac: 27.4MB/sec sustained

I'll note a couple of observations. When I have a single transfer going, on Windows at least, I don't get as much throughput as I do with multiple transfers going; I can usually get 2-4 more MB/s if I'm doing, say, 4 different copies at once. I haven't tried that with OS X to see if it's consistent. Another thing: I cannot tell you why, but over the terabytes of data I've transferred to date to the DNS, I noticed my transfers from 2008 to the DNS were very slow, and the best I could get was about 5MB/s for a single transfer. But if I had 5 transfers going, they would total something like 12MB/sec. Then today I do the little test for you above, and suddenly Windows performs great for single transfers. There's no obvious explanation for why, but I thought I'd mention it.

One more note, which you're probably aware of, but some might not be: transferring 1x1GB file is almost always significantly faster than transferring 1000x1MB files, so clearly your data rates will vary depending on what files you're using.

Well, an update to those numbers: I'm now running a second 343 here at the office. It's using 4x1TB WD drives in JBOD with the "performance" format (the numbers I quoted above were using the "more stable" formatting). This 343 is currently copying files from the Windows 2008 server mentioned above at a sustained 20.9MB/sec according to 2008's copy dialog. Nice numbers for this class of box! One contributing factor was turning OFF jumbo frames on the 343; I was seeing 15MB/sec with 4K frames enabled. A bit puzzling, but there you have it.

Anyone else have any numbers?
Title: Re: The Performance Optimization Thread
Post by: nataku316 on February 27, 2009, 09:43:10 PM
I remember running some benchmarks when I first got my DNS-343; the numbers were not so great, not as good as I had hoped for. Some of my co-workers also got one and found the speeds less than acceptable.

I'll try to post some numbers in the future.
Title: Re: The Performance Optimization Thread
Post by: CyberTron on March 17, 2009, 10:35:11 AM
When copying a single huge file from Vista to the DNS-343 (4x1.5TB Seagate, RAID5), I get around 11.2MB/sec, with jumbo frames disabled.

When I had just a single 1TB WD inside, I could do 15+MB/sec.

Title: Re: The Performance Optimization Thread
Post by: fordem on March 17, 2009, 07:49:22 PM
Quote from: CyberTron on March 17, 2009, 10:35:11 AM
When copying a single huge file from Vista to the DNS-343 (4x1.5TB Seagate, RAID5), I get around 11.2MB/sec, with jumbo frames disabled.

When I had just a single 1TB WD inside, I could do 15+MB/sec.

This is due to the additional processing required for a RAID5 array.

RAID5 will always be slower than a single drive (also slower than RAID1) on the same hardware - especially when it's software RAID as I believe it is on the DNS-343.

With a single disk, you simply write the data to the disk, and with RAID1 you write the data to both disks, but with RAID5 you split the data into chunks, write one chunk to each disk (see below), calculate the parity information, and then write it to the remaining disk.

A four disk RAID5 array (n disks) will split the data three ways (n-1).
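The chunk-plus-parity write described above can be sketched in a few lines of Python. This is a deliberately simplified illustration of XOR parity on one stripe; real implementations rotate the parity chunk across disks and work in fixed stripe sizes:

```python
from functools import reduce

def raid5_stripe(data: bytes, n_disks: int) -> list:
    """Split one stripe of data across n_disks-1 data chunks, plus one
    XOR parity chunk written to the remaining disk (simplified sketch)."""
    chunk = len(data) // (n_disks - 1)
    chunks = [data[i * chunk:(i + 1) * chunk] for i in range(n_disks - 1)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))
    return chunks + [parity]

disks = raid5_stripe(b"abcdef", 4)   # 4 disks: 3 data chunks + 1 parity
# Any single lost chunk can be rebuilt by XOR-ing the remaining three --
# that's the redundancy you pay for with the extra parity computation.
rebuilt = bytes(a ^ b ^ c for a, b, c in zip(disks[0], disks[2], disks[3]))
assert rebuilt == disks[1]
```

The extra read-modify-write work on parity is exactly the "slicing & dicing" overhead that makes RAID5 writes slower than a single disk or RAID1 on the same hardware.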
Title: Re: The Performance Optimization Thread
Post by: Bhavik on March 18, 2009, 02:41:08 AM
I have 4x1.5TB Seagate drives (CC1G firmware) running in RAID5 with 1.0.2 on my NAS. The RAID5 volume is only 4TB.

From Windows 7 on my AMD64 3500+ with 1GB RAM, I was seeing transfer speeds of 15MB/s through my D-Link DIR-655.

I've posted before on how to calculate IOs between different raid configurations in a different thread.
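For anyone who doesn't want to dig up that thread: the usual back-of-envelope version of such calculations uses the standard per-level write penalties. This is the generic textbook formula, not necessarily the exact one from the other post:

```python
def backend_iops(read_iops: float, write_iops: float, raid_level: int) -> float:
    """Backend disk I/Os needed for a front-end workload, using standard
    RAID write penalties: RAID0 = 1, RAID1 = 2 (mirror write), RAID5 = 4
    (read old data, read old parity, write new data, write new parity)."""
    penalty = {0: 1, 1: 2, 5: 4}[raid_level]
    return read_iops + write_iops * penalty

# 100 front-end writes cost 400 disk I/Os on RAID5 but only 200 on RAID1.
print(backend_iops(0, 100, 5))  # 400
print(backend_iops(0, 100, 1))  # 200
```

Reads carry no penalty in this model, which matches the pattern reported later in the thread: little change in read performance, noticeable degradation in writes.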
Title: Re: The Performance Optimization Thread
Post by: fordem on March 18, 2009, 07:33:15 AM
Now that you've drawn my attention to those calculations - and I'm not saying they are incorrect - they don't seem to consider anything other than the disks' I/O operations. What about the RAID processing required?

Correct me if I'm wrong here - a three disk RAID5 configuration running on a hardware RAID controller (dedicated processor doing the slicing & dicing) will outperform the exact same set of disks running software RAID (system processor doing the slicing & dicing).

Now going a bit off topic here but, this is of particular interest to me right now as I just switched one of my servers from a RAID1 config using 2x250 GB drives using Adaptec HostRAID (system processor) and the system board's integrated Intel ICH5 SATA controller to a RAID5 config using 3x250 GB drives using an Adaptec 2410SA RAID card (dedicated Intel processor).

It's not exactly an apples-to-apples comparison because of the different processors involved, but if I ignore that (as does your post) - using before and after numbers, my test results are pretty close to where those calculations have them - little or no change in read performance, and about a 33% degradation in write performance.
Title: Re: The Performance Optimization Thread
Post by: Bhavik on March 19, 2009, 12:12:28 AM
My other post talks about I/Os (IOPS), which is different from actual throughput over the network.

Those calculations are just for the raw disks and were sent to me by our system admin as he was trying to get more performance from our enterprise SAN for a particular purpose.

There are also differing views on what counts as hardware RAID and what counts as software RAID. But I agree software RAID is generally seen as slower.

What do you see as hardware RAID? Because the DNS-343 has a dedicated processor, and it is the firmware/OS which does the RAID processing. Does true hardware RAID have a dedicated XOR processor?

I'm not an expert on RAID.
Title: Re: The Performance Optimization Thread
Post by: fordem on March 19, 2009, 08:18:37 AM
What do I see as hardware RAID?  That's not so easy to define - and whilst I wouldn't consider myself an expert on RAID, other people do (it's a case of the one eyed man in the land of the blind).

I've taken a somewhat unorthodox view and now split RAID into three categories ...

 - hardware RAID will have a processor dedicated to the RAID processing, which I believe is an XOR processor, so the absence of such a processor would point to software RAID.
 - software RAID will use the system processor to do the RAID processing.

I then further split software RAID into two categories, OS driven RAID (where the OS is aware of the physical disks and manages the array) and driver based RAID, where all of the RAID processing is done by the driver and is completely transparent to the operating system.

I took this position (which may be subject to revision ;) ) after coming across what many people term "FakeRAID" (which is what I am calling driver based RAID) a few years back.  I dislike the term FakeRAID and find it somewhat derogatory, and in my opinion, there is nothing fake about it. 

I have a couple of PC servers that have what Adaptec calls HostRAID integrated into the system BIOS. I am limited to RAID0 & RAID1 but, in every other respect, these behave like any other integrated hardware RAID: the arrays are created either through a software utility or through the BIOS firmware; the operating system never sees the physical disks, only the logical ones; any disk management must be done through a management tool running within the OS; when a disk fails I can remove & replace it and it automatically rebuilds; and the system will boot from a single disk (either disk) - try doing that when the OS is managing the array.

In case you're wondering why I call it driver based RAID and not HostRAID - HostRAID is trademarked by Adaptec, and the same technology is available from other vendors - for example SiliconImage.

You can buy RAID controller cards that plug into the system board (shouldn't that make it hardware RAID?) but have no processor (so that means it's not hardware RAID), and yet, through the use of this driver based RAID, they behave exactly as I described above. I have in my possession a pair of Siig SATA host adapters that, apart from the firmware, are identical in every respect: one does RAID0/1 and the other does not. How do I know the only difference is the firmware? Because I was the one who changed it.

To the best of my knowledge there is one processor in the 343; it runs Linux and is therefore the "system processor", and that would make the RAID on the 343 software RAID. If I am correct, the OS is aware of the different physical drives and manages the array through mdadm, making it what I term OS driven RAID.
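As a rough illustration of what "OS driven" means here: with mdadm, the kernel lists every member disk of the array in /proc/mdstat, so the OS plainly sees the physical drives. A small Python sketch that parses a sample of that output (the sample text below is illustrative, not captured from an actual DNS-343):

```python
import re

# Illustrative /proc/mdstat contents; NOT captured from a real DNS-343.
MDSTAT = """\
Personalities : [raid5]
md0 : active raid5 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      4395407808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
"""

def parse_mdstat(text: str) -> dict:
    """Pull the array name, RAID level, and member devices out of a
    /proc/mdstat dump - the OS-visible view of the physical disks."""
    m = re.search(r"^(md\d+) : active (raid\d+) (.+)$", text, re.M)
    name, level, members = m.group(1), m.group(2), m.group(3)
    devices = [d.split("[")[0] for d in members.split()]
    return {"array": name, "level": level, "devices": devices}

info = parse_mdstat(MDSTAT)
print(info["level"], info["devices"])  # raid5 ['sdd1', 'sdc1', 'sdb1', 'sda1']
```

A hardware RAID controller, by contrast, would present only the logical volume; nothing like this per-disk view would exist at the OS level.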

If it were not for the fact that the 343 boots its OS from flash memory (and not the array), swapping a failed drive would be significantly more complex than it is now, and that would put an end to the discussions on whether or not it should be considered hardware RAID.

One last thing - the term FakeRAID appears to have been coined by the Linux crowd, who shun the technology, perhaps because there appear to be no drivers allowing them to implement it.