jgillman's Liquid Web Update Unofficial tips, tricks, and happenings from a Liquid Web sales engineer

21 Nov 2012

Initial HPBS Performance Numbers

As you may be aware, Liquid Web has released its block storage product, High Performance Block Storage (HPBS).

As such, I've gone ahead and run some simple benchmarks of my own, which I present below. Your mileage may vary, but this is what I was seeing.

Server Information:
AMD FX-6100 processor (3314 MHz)
3831MB RAM
Debian 6.0.3 x64 upgraded to the latest stable packages
HPBS volume formatted ext4
iozone 3_414 compiled with linux-AMD64 as the target
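For reference, preparing a volume like this is a short exercise. A minimal sketch follows; the device name /dev/sdb and mount point /mnt/hpbs are assumptions for illustration, so check `lsblk` for the actual device on your server:

```shell
# Assumed device name and mount point -- verify with lsblk first.
mkfs.ext4 /dev/sdb          # format the attached HPBS volume as ext4
mkdir -p /mnt/hpbs          # create a mount point
mount /dev/sdb /mnt/hpbs    # mount it; add an /etc/fstab entry to persist across reboots
```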

Command run: ~/iozone -l 32 -O -i 0 -i 1 -i 2 -e -+n -r 4K -s 4G > ~/hpbs-iops.txt
*Note: The iozone executable was located in /root; the test itself was still run on the HPBS volume.
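For clarity, here is the same invocation with each flag annotated (my reading of the iozone options; consult the iozone man page to confirm):

```shell
# Flags used in the benchmark:
#   -l 32           run with (at least) 32 parallel processes
#   -O              report results in operations per second rather than KB/s
#   -i 0 -i 1 -i 2  select tests: 0 = write/rewrite, 1 = read/reread,
#                   2 = random read/write
#   -e              include fsync/flush time in the write timings
#   -+n             no retests (skip the rewrite/reread passes)
#   -r 4K           4 KB record (request) size
#   -s 4G           4 GB test file per process
~/iozone -l 32 -O -i 0 -i 1 -i 2 -e -+n -r 4K -s 4G > ~/hpbs-iops.txt
```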

Here is the output. The first result is for 4K sequential writes; the second is for sequential reads.

Iozone: Performance Test of File I/O
Version $Revision: 3.414 $
Compiled for 64 bit mode.
Build: linux-AMD64

Contributors: William Norcott, Don Capps, Isom Crawford, Kirby Collins
Al Slater, Scott Rhine, Mike Wisner, Ken Goss
Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
Vangel Bojaxhi, Ben England.

Run began: Wed Nov 21 10:52:22 2012

OPS Mode. Output is in operations per second.
Include fsync in write timing
No retest option selected
Record Size 4 KB
File size set to 4194304 KB
Command line used: /root/iozone -l 32 -O -i 0 -i 1 -i 2 -e -+n -r 4K -s 4G
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Min process = 32
Max process = 32
Throughput test with 32 processes
Each process writes a 4194304 Kbyte file in 4 Kbyte records

Children see throughput for 32 initial writers = 10377.15 ops/sec
Parent sees throughput for 32 initial writers = 9485.68 ops/sec
Min throughput per process = 305.25 ops/sec
Max throughput per process = 348.35 ops/sec
Avg throughput per process = 324.29 ops/sec
Min xfer = 916573.00 ops

Children see throughput for 32 readers = 12519.84 ops/sec
Parent sees throughput for 32 readers = 12519.41 ops/sec
Min throughput per process = 363.21 ops/sec
Max throughput per process = 412.03 ops/sec
Avg throughput per process = 391.24 ops/sec
Min xfer = 924357.00 ops
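For context, these per-second operation counts can be converted to approximate throughput: at a 4 KB record size, ops/sec × 4 / 1024 gives MB/s. A quick sketch using the aggregate "children see" figures above:

```shell
# Convert iozone's 4 KB ops/sec figures to approximate MB/s.
to_mbs() {
    # $1 = ops/sec at a 4 KB record size; 1 MB = 1024 KB
    awk -v ops="$1" 'BEGIN { printf "%.1f\n", ops * 4 / 1024 }'
}

to_mbs 10377.15   # sequential write: ~40.5 MB/s
to_mbs 12519.84   # sequential read:  ~48.9 MB/s
```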

As of this writing, the random read and write tests are still running. I will update this article once they are complete.

With that said, what I'm seeing so far is pretty decent compared to AWS storage (not EBS, mind you).
I'm basing this on the chart Liquid Web supplied when we benchmarked our Storm SSD servers against AWS.

As you'll note, I ran the same command that was used to bench Storm SSD, and so far AWS has been beaten (pending the random read/write results). A customer with EBS experience tells me the HPBS numbers will look even better in comparison to EBS.

Stay tuned!

   