Performance testing an external USB drive array
This is a somewhat more technical post than my usual fare, so my apologies to my non-geek friends.
Last night, I started doing some performance testing of my new external drive array. The array is an Addonics Storage Tower USB/JBOD, which provides a single USB-to-IDE adapter supporting four drives; it shows up to the system as a single USB device with four targets. USB 2.0 has a nominal speed of 60 MB/s, although command overhead usually caps real-world throughput at around 50-55 MB/s, and typical USB drives top out between 15 and 20 MB/s.
I wrote a small benchmarking tool to test sustained sequential write performance. I chose write performance as my benchmark for a couple of reasons. First, it is easier to eliminate the effects of caching: an fsync() call before closing the file ensures that all data has been committed to disk. Second, it highlights any performance drop associated with calculating and writing parity on RAID volumes. The following table gives the write throughput of each configuration at several transfer sizes; a minimal sketch of the benchmark approach appears after the table. I included an internal SCSI drive in the tests as a comparison between a server-class drive and the external USB drives.
Sustained sequential write throughput (MB/s) by transfer size:

Configuration | 8K | 64K | 128K
--- | --- | --- | ---
UFS single drive, SCSI | 36.57 | 35.07 | 35.80
UFS single drive, USB | 12.80 | 12.59 | 12.00
ZFS single drive, USB | 10.94 | 12.58 | 13.03
ZFS dual-drive stripe (master-master), USB | 10.34 | 12.25 | 12.86
ZFS dual-drive stripe (master-slave), USB | 10.34 | 12.49 | 12.90
ZFS RAID-Z, four drives, USB | 7.12 | 7.83 | 7.93
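For context, here is a minimal sketch of the kind of write benchmark described above. It is not the actual tool; the file path, transfer size, and total size are hypothetical placeholders. It just shows the basic loop of fixed-size sequential writes followed by fsync() before close, so that the elapsed time includes committing the data to disk.

```c
/* Minimal sketch of a sequential-write benchmark (not the actual tool).
 * Writes TOTAL_BYTES to a file in fixed-size chunks, then calls fsync()
 * before close() so the timing includes flushing cached data to disk.
 * The path and sizes below are placeholders.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define XFER_SIZE   (64 * 1024)              /* bytes per write() call */
#define TOTAL_BYTES (256LL * 1024 * 1024)    /* total amount to write */

int main(void)
{
    char *buf = malloc(XFER_SIZE);
    struct timeval start, end;
    long long written = 0;

    if (buf == NULL) {
        perror("malloc");
        return 1;
    }
    memset(buf, 0xA5, XFER_SIZE);

    /* placeholder path on the filesystem under test */
    int fd = open("/testpool/bench.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    gettimeofday(&start, NULL);
    while (written < TOTAL_BYTES) {
        ssize_t n = write(fd, buf, XFER_SIZE);
        if (n < 0) {
            perror("write");
            return 1;
        }
        written += n;
    }
    fsync(fd);                               /* force data to disk */
    gettimeofday(&end, NULL);
    close(fd);

    double secs = (end.tv_sec - start.tv_sec) +
                  (end.tv_usec - start.tv_usec) / 1e6;
    printf("%.2f MB/s\n", (written / (1024.0 * 1024.0)) / secs);
    free(buf);
    return 0;
}
```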
One interesting result of the tests was that striping two drives together did not improve performance at all. This indicates that the performance is limited somewhere between the USB controller card and the USB/IDE adapter. Given that a single USB/IDE adapter is used to connect all four drives, it is likely that the adapter is the point of contention. If there were multiple adapters, the host system might be able to queue writes to multiple drives simultaneously.
The ZFS numbers above are with default settings. I have started testing a variety of ZFS block sizes (the recordsize property); setting it to 8K brought small-write performance up to roughly the same level as the UFS configuration.
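For reference, recordsize is a per-dataset property, so the tuning is a one-line change; the pool and dataset names below are hypothetical placeholders:

```
# hypothetical pool/dataset name; adjust to match your setup
zfs set recordsize=8K testpool/data
# verify the setting
zfs get recordsize testpool/data
```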
Overall, the write performance of the storage array in a RAID-Z configuration is not great, but it is tolerable for a USB setup. Since most of my writes to the array will come across the network, the array is fast enough to keep up with the network anyway. I may experiment with adding a second USB/IDE adapter inside the enclosure to see if I can take advantage of parallel writes.