jgillman's Liquid Web Update Unofficial tips, tricks, and happenings from a Liquid Web sales engineer

25Jan/13

Rough Gluster Benchmark

So, I've decided to benchmark GlusterFS.

The volume configuration was distributed-replicated in a 2x2 layout across four 1GB Storm Instances. Each instance was connected to a 150GB HPBS volume formatted XFS.

I then created 2 more 1GB Storm instances for the "clients". From there, I concurrently ran iozone with the same parameters as my HPBS benchmark.

All instances ran the latest version of Debian Squeeze, and the iozone compile target was the same as in the HPBS test.

I'll just toss the final numbers down. Again - both of these client nodes were running iozone concurrently.

IOPS Test

Client 1

Children see throughput for 32 initial writers = 4446.52 ops/sec
Children see throughput for 32 readers = 11907.26 ops/sec
Children see throughput for 32 random readers = 1142.15 ops/sec
Children see throughput for 32 random writers = 373.25 ops/sec

Client 2

Children see throughput for 32 initial writers = 4507.93 ops/sec
Children see throughput for 32 readers = 11926.79 ops/sec
Children see throughput for 32 random readers = 1212.09 ops/sec
Children see throughput for 32 random writers = 407.77 ops/sec

Comparison

SSD Disk Test

Throughput Test

Client 1

Children see throughput for 32 initial writers = 19018.32 KB/sec
Children see throughput for 32 readers = 45684.51 KB/sec
Children see throughput for 32 random readers = 4636.80 KB/sec
Children see throughput for 32 random writers = 1550.62 KB/sec

Client 2

Children see throughput for 32 initial writers = 19118.81 KB/sec
Children see throughput for 32 readers = 45607.64 KB/sec
Children see throughput for 32 random readers = 4797.18 KB/sec
Children see throughput for 32 random writers = 1613.63 KB/sec

Comparison

SSD Throughput Test

26Nov/12

My Finalized HPBS Benchmark

Well, the numbers are in. Over the weekend, the iozone IOPS test completed for HPBS. In addition, I also ran a throughput test using iozone. Instead of updating the previous article, I figured I would just write a fresh one.

As a recap, here is the test environment that I used:

AMD FX-6100 Proc (3314 MHz) Storm Server
3831MB RAM
Debian 6.0.3 x64 upgraded to the latest stable packages as of 21 November 12 (brings it up to Debian 6.0.6)
150GB HPBS volume formatted ext4
iozone 3_414 compiled with linux-AMD64 as the target

For the sake of simplicity, I passed the exact same parameters as the Storm SSD tests:

One thing I would like to point out is that the Amazon numbers are NOT for their Elastic Block Storage (EBS) product, but rather for whatever methodology they use for their instance storage. For Liquid Web, our Storm Servers utilize local storage. One customer I have talked to believes that EBS numbers would fare even worse than the Amazon numbers in the tables above.

So on to the numbers! Our first specimen is the finalized output of the first test I ran: IOPS.

Iozone: Performance Test of File I/O
Version $Revision: 3.414 $
Compiled for 64 bit mode.
Build: linux-AMD64

Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
Al Slater, Scott Rhine, Mike Wisner, Ken Goss
Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
Vangel Bojaxhi, Ben England.

Run began: Wed Nov 21 10:52:22 2012

OPS Mode. Output is in operations per second.
Include fsync in write timing
No retest option selected
Record Size 4 KB
File size set to 4194304 KB
Command line used: /root/iozone -l 32 -O -i 0 -i 1 -i 2 -e -+n -r 4K -s 4G
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Min process = 32
Max process = 32
Throughput test with 32 processes
Each process writes a 4194304 Kbyte file in 4 Kbyte records

Children see throughput for 32 initial writers = 10377.15 ops/sec
Parent sees throughput for 32 initial writers = 9485.68 ops/sec
Min throughput per process = 305.25 ops/sec
Max throughput per process = 348.35 ops/sec
Avg throughput per process = 324.29 ops/sec
Min xfer = 916573.00 ops

Children see throughput for 32 readers = 12519.84 ops/sec
Parent sees throughput for 32 readers = 12519.41 ops/sec
Min throughput per process = 363.21 ops/sec
Max throughput per process = 412.03 ops/sec
Avg throughput per process = 391.24 ops/sec
Min xfer = 924357.00 ops

Children see throughput for 32 random readers = 349.18 ops/sec
Parent sees throughput for 32 random readers = 349.18 ops/sec
Min throughput per process = 10.22 ops/sec
Max throughput per process = 11.50 ops/sec
Avg throughput per process = 10.91 ops/sec
Min xfer = 931745.00 ops

Children see throughput for 32 random writers = 1026.03 ops/sec
Parent sees throughput for 32 random writers = 892.02 ops/sec
Min throughput per process = 28.77 ops/sec
Max throughput per process = 34.68 ops/sec
Avg throughput per process = 32.06 ops/sec
Min xfer = 875945.00 ops

iozone test complete.

I'm honestly not sure of the difference between "Children see throughput" and "Parent sees throughput" (and Google wasn't helping), so for the purposes of this article I'll go with the average of the two numbers. Going with this, we're looking at the following:

4k Sequential Write: 9,931.42 ops/sec
4k Sequential Read: 12,519.63 ops/sec
4k Random Read: 349.18 ops/sec
4k Random Write: 959.03 ops/sec <-- !!

As you'll see, all tests showed higher performance than the Amazon instance previously tested (which was not done by myself). The really interesting thing, though, is how the random write numbers blow away not only the Amazon m1.large instance, but the Storm 8GB instance as well. I believe this is related to how HPBS is set up, which I'm not 100% versed in technically at this time. Still, quite interesting, given that conventional wisdom would indicate reads should be faster - especially when they aren't sequential.
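The averaging described above is simple to reproduce. Here is a quick sketch (the figures are copied straight from the iozone output above):

```php
<?php
// Average the "Children see" and "Parent sees" figures from the iozone
// output above to get a single summary number per test.
function summarize($children, $parent)
{
	return ($children + $parent) / 2;
}

printf("4k Sequential Write: %.2f ops/sec\n", summarize(10377.15, 9485.68));
printf("4k Sequential Read:  %.2f ops/sec\n", summarize(12519.84, 12519.41));
printf("4k Random Read:      %.2f ops/sec\n", summarize(349.18, 349.18));
printf("4k Random Write:     %.2f ops/sec\n", summarize(1026.03, 892.02));
?>
```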

Let's move on to the throughput test:

Iozone: Performance Test of File I/O
Version $Revision: 3.414 $
Compiled for 64 bit mode.
Build: linux-AMD64

Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
Al Slater, Scott Rhine, Mike Wisner, Ken Goss
Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
Vangel Bojaxhi, Ben England.

Run began: Fri Nov 23 20:25:07 2012

Include fsync in write timing
No retest option selected
Record Size 4 KB
File size set to 2097152 KB
Command line used: /root/iozone -l 32 -i 0 -i 1 -i 2 -e -+n -r 4K -s 2G
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Min process = 32
Max process = 32
Throughput test with 32 processes
Each process writes a 2097152 Kbyte file in 4 Kbyte records

Children see throughput for 32 initial writers = 38776.21 KB/sec
Parent sees throughput for 32 initial writers = 35388.11 KB/sec
Min throughput per process = 1149.08 KB/sec
Max throughput per process = 1295.61 KB/sec
Avg throughput per process = 1211.76 KB/sec
Min xfer = 1854184.00 KB

Children see throughput for 32 readers = 41301.22 KB/sec
Parent sees throughput for 32 readers = 41298.75 KB/sec
Min throughput per process = 807.03 KB/sec
Max throughput per process = 1489.52 KB/sec
Avg throughput per process = 1290.66 KB/sec
Min xfer = 1136272.00 KB

Children see throughput for 32 random readers = 2552.88 KB/sec
Parent sees throughput for 32 random readers = 2552.88 KB/sec
Min throughput per process = 74.07 KB/sec
Max throughput per process = 82.81 KB/sec
Avg throughput per process = 79.78 KB/sec
Min xfer = 1875968.00 KB

Children see throughput for 32 random writers = 4564.77 KB/sec
Parent sees throughput for 32 random writers = 3715.49 KB/sec
Min throughput per process = 122.68 KB/sec
Max throughput per process = 165.65 KB/sec
Avg throughput per process = 142.65 KB/sec
Min xfer = 1549772.00 KB

iozone test complete.

Using the methodology as presented above:
4k Sequential Write: 37.08 MB/sec
4k Sequential Read: 41.30 MB/sec
4k Random Read: 2.55 MB/sec
4k Random Write: 4.14 MB/sec
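For reference, these MB/sec figures are just the averaged "Children see" / "Parent sees" KB/sec numbers divided by 1,000. A quick sketch using two of the rows above:

```php
<?php
// Average the KB/sec figures from the throughput run above and convert
// to MB/sec (using 1 MB = 1000 KB, matching the summary in the article).
$avg_kb = (38776.21 + 35388.11) / 2;   // 4k sequential write
printf("4k Sequential Write: %.2f MB/sec\n", $avg_kb / 1000);

$avg_kb = (4564.77 + 3715.49) / 2;     // 4k random write
printf("4k Random Write: %.2f MB/sec\n", $avg_kb / 1000);
?>
```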

Again, so you don't have to scroll up, here are the results from the SSD benchmark previously performed:

Again, HPBS seems to really show its muscle compared to Amazon and even local Storm storage when it comes to random writes.

So, what does all this mean? Well, basically, this is what I'm seeing with a Storm Baremetal Server doing nothing but running iozone. Actual performance on a production system will most likely vary; to what extent, I am not sure.

21Nov/12

Initial HPBS Performance Numbers

As you may be aware, Liquid Web has released its block storage product, High Performance Block Storage.

As such, I've gone ahead and run some simple benchmarks of my own, which I present to you. Your mileage may vary, but this is what I was seeing.

Server Information:
AMD FX-6100 Proc (3314 Mhz)
3831MB RAM
Debian 6.0.3 x64 upgraded to the latest stable packages
HPBS volume formatted ext4
iozone 3_414 compiled with linux-AMD64 as the target

Command run: ~/iozone -l 32 -O -i 0 -i 1 -i 2 -e -+n -r 4K -s 4G > ~/hpbs-iops.txt
*Note: The iozone executable was located in /root; the actual test was still run on the HPBS volume.

Here is the output. The first test result is for 4k Sequential Write; the second is for Sequential Read.

Iozone: Performance Test of File I/O
Version $Revision: 3.414 $
Compiled for 64 bit mode.
Build: linux-AMD64

Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
Al Slater, Scott Rhine, Mike Wisner, Ken Goss
Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
Vangel Bojaxhi, Ben England.

Run began: Wed Nov 21 10:52:22 2012

OPS Mode. Output is in operations per second.
Include fsync in write timing
No retest option selected
Record Size 4 KB
File size set to 4194304 KB
Command line used: /root/iozone -l 32 -O -i 0 -i 1 -i 2 -e -+n -r 4K -s 4G
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Min process = 32
Max process = 32
Throughput test with 32 processes
Each process writes a 4194304 Kbyte file in 4 Kbyte records

Children see throughput for 32 initial writers = 10377.15 ops/sec
Parent sees throughput for 32 initial writers = 9485.68 ops/sec
Min throughput per process = 305.25 ops/sec
Max throughput per process = 348.35 ops/sec
Avg throughput per process = 324.29 ops/sec
Min xfer = 916573.00 ops

Children see throughput for 32 readers = 12519.84 ops/sec
Parent sees throughput for 32 readers = 12519.41 ops/sec
Min throughput per process = 363.21 ops/sec
Max throughput per process = 412.03 ops/sec
Avg throughput per process = 391.24 ops/sec
Min xfer = 924357.00 ops

As of this writing, the random read and write tests are still being run. I will update this article once these tests are complete.

With that said, what I'm seeing so far is pretty decent compared to AWS storage (not EBS mind you).
I'm basing this off of the chart supplied by Liquid Web from when we benched our Storm SSD servers against AWS:

As you can note, I ran the same command that was used to bench Storm SSD. You'll notice that AWS has been beaten so far (pending the updated random read/write tests). A customer who has experience with EBS has indicated that the HPBS numbers will look even better in comparison to EBS.

Stay tuned!

21Aug/12

Utilizing the Stingray API

One of the benefits that a Managed Dedicated Load Balancing cluster provides is the ability to interact with it programmatically via a SOAP compliant API.

1Aug/12

Using GPG to Sign and Encrypt Email

Although you may use SSL encryption to communicate with your mail server, there is an additional level of security that you can use on top of that.

Welcome to the world of GPG.

10Jul/12

Using Liquid Web’s CDN with WordPress via W3 Total Cache

For those who don't know, W3 Total Cache is one of the most used performance-enhancing plugins for WordPress. It offers many different options to speed up your WordPress site, from object caching via memcache to minification of JavaScript and CSS files.

It also offers easy integration with a feature that globally visited sites can benefit from: Content Delivery Networks.

29Jun/12

Managed Dedicated Load Balancing: Right for your cloud?

One of the things that customers of Storm on Demand and Smart Servers (our public cloud offerings) may not realize is that Liquid Web's Managed Dedicated Load Balancing service works great not just for traditional dedicated servers, but for our public cloud offerings as well. In fact, we have a few customers who are utilizing our public cloud technologies with Managed Dedicated Load Balancing.

Why might you want Managed Dedicated Load Balancing?

You might be asking this question if you're currently using, or planning to use, the load balancing features currently available with our public cloud products. Let's take a look at a couple of scenarios.

Customizability and API

Stingray, the software that powers our Managed Dedicated Load Balancing solution, offers many features that aren't accessible with the shared load balancing available with our public cloud.

For example, we recently had a customer who needed to pass along custom headers to the load balanced web nodes running on Storm. This wasn't possible using the shared load balancing cluster.

Another feature that comes with having Managed Dedicated Load Balancing is an API that is native to the software. So just like you can interact with Storm on Demand via the API, you can interact with your load balancing cluster via a SOAP compliant API.

Cost effectiveness

Depending on circumstances, it may also be cost beneficial to go with a Managed Dedicated Load Balancing cluster.

Let's look at a hypothetical:

You are a platform as a service provider that offers your platform in a highly available environment utilizing load balancing between two front end nodes, file replication between these nodes, and highly available Percona technologies. This platform requires SSL.

You are looking to move over your infrastructure to Liquid Web's Smart Server Platform and currently have 20 customers, with expectations of significantly increased growth. You will be averaging throughput of 30Mbps.

In this case, the idea is that you would need 20 Virtual IPs (VIPs) to handle these customers, as each will require its own VIP (yes, you can do SNI; however, it depends on the client browser supporting it, so it's not a bad idea to have each domain on its own IP).

With load balancing available to Smart Servers, you need to create a new "Load Balancer" for every VIP that you want to use. These run $100/month each. Then there is $25/month per node that you have being fed from that VIP.

So based on these figures:

20 VIPS * $100/VIP = $2,000/month
$25/Node * 2 Nodes/VIP * 20 VIPS = $1,000/month
Total: $3,000/month

So $3,000/month *just* in load balancing costs.

In comparison, we could potentially get you into a Managed Dedicated Load Balancing cluster for $1,198/month - a savings of over $1,800/month! And of course, the savings keep growing as the client base increases in this hypothetical scenario.
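The cost comparison above, sketched out (using the prices quoted in this hypothetical):

```php
<?php
// Hypothetical pricing from the scenario above: 20 VIPs at $100/month,
// plus $25/month per node with 2 nodes behind each VIP.
$vips = 20;
$nodes_per_vip = 2;

$shared_lb = ($vips * 100) + ($vips * $nodes_per_vip * 25); // shared cloud LB
$dedicated_lb = 1198;                                       // quoted cluster price

printf("Shared LB:    $%d/month\n", $shared_lb);
printf("Dedicated LB: $%d/month\n", $dedicated_lb);
printf("Savings:      $%d/month\n", $shared_lb - $dedicated_lb);
?>
```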

Conclusion

There are many other benefits that Managed Dedicated Load Balancing has over shared public cloud load balancing, however, the above two are ones that I've seen most frequently cited so far as reasons for going with Managed Dedicated Load Balancing.

If you are interested in finding out if Managed Dedicated Load Balancing is right for your situation, give sales a call at 800-580-4985 today!

30Apr/12

Interactive Storm API Testing Script

What happens when I get bored? I code, apparently.

Here's a piece of PHP code that runs purely command line. What it does is allow you to interactively work with the StormOnDemand API.

Although the output is just a straight print_r() of the returned data set, it provides a nice, easy way to see the returns from various methods (or to feed the same method different parameter/value pairs) in an expedient manner - all without having to edit and rerun a static script.

Enjoy!

<?php
	/*
	 * Author: Jason Gillman Jr.
	 * Description: My attempt at writing a simple interactive CLI script for dumping raw data from Storm API returns.
	 * 				All you are going to get is print_r() of the returned array.
	 * 				Hope it's useful!
	 */

	require_once('StormAPI.class.php');

	// Initial information
	echo "\nAPI Username: "; $api_user = trim(fgets(STDIN));
	echo "Password: "; $api_pass = trim(fgets(STDIN));
	echo "Initial Method: "; $api_method = trim(fgets(STDIN));

	$storm = new StormAPI($api_user, $api_pass, $api_method);

	// Menu
	while(!isset($stop))
	{
		echo "\n\nPick your poison... \n";
		echo "1. Change method (will clear params) \n";
		echo "2. Add parameter \n";
		echo "3. Clear parameters \n";
		echo "4. Execute request and display \n";
		echo "5. Get me out of here \n";
		echo "Enter a number: "; fscanf(STDIN, "%d\n", $choice); // Get the choice

		switch($choice)
		{
			case 1:
				echo "\nEnter your new method: "; $api_method = trim(fgets(STDIN));
				$storm->new_method($api_method);
				break;
			case 2:
				echo "\nEnter the parameter: "; $parameter = trim(fgets(STDIN));
				echo "\nEnter the value: "; $value = trim(fgets(STDIN));
				$storm->add_param($parameter, $value);
				unset($parameter, $value);
				break;
			case 3:
				$storm->clear_params();
				break;
			case 4:
				print_r($storm->request());
				break;
			case 5:
				echo "\n\n";
				$stop = TRUE;
				break;
			default:
				echo "Really? How about you enter a valid value?";
				break;
		}
	}
?>

30Apr/12

StormOnDemand API Class update

I've yet again updated my StormOnDemand API PHP class.

For those who don't want to go to the PHP Classes site (which I highly recommend you do if you code OOP PHP - it's got a great collection of various PHP Classes), here is the source:

<?php
	/*
	 * Author: Jason Gillman Jr.
	 * Description: This is my attempt at writing a PHP wrapper that will ease Storm API calls with PHP
	 * 				It will be designed to use the JSON format for talking with the API server.
	 * 				$api_method is as described in docs (Case doesn't matter)
	 * 				request() method returns an array generated from the API return
	 */
	class StormAPI
	{
		// Let's define attributes
		private $api_user, $api_pass, $base_url, $api_format, $api_full_uri, $api_request;
		private $api_request_body, $api_method, $api_params, $api_return;

		function __construct($api_user, $api_pass, $api_method)
		{
			$this->api_user = $api_user;
			$this->api_pass = $api_pass;
			$this->api_method = $api_method;
			$this->base_url = 'https://api.stormondemand.com/';
			$this->api_format = 'json';

			$this->api_full_uri = $this->base_url . $this->api_method . "." . $this->api_format;
			$this->api_request = curl_init($this->api_full_uri); // Instantiate
			curl_setopt($this->api_request, CURLOPT_RETURNTRANSFER, TRUE); // Don't dump directly to output
			curl_setopt($this->api_request, CURLOPT_SSL_VERIFYPEER, TRUE); // It does look like verification works now.
			curl_setopt($this->api_request, CURLOPT_USERPWD, "$this->api_user:$this->api_pass"); // Pass the creds
		}

		function add_param($parameter, $value)
		{
			$this->api_request_body['params'][$parameter] = $value;
		}

		function clear_params()
		{
			unset($this->api_request_body);
			curl_setopt($this->api_request, CURLOPT_HTTPGET, TRUE); // If the request was previously run with params, this cleans those out. Otherwise they go back with the request
		}

		function new_method($api_method, $clearparams = TRUE) // Clears out parameters by default, since they may not apply now
		{
			if($clearparams === TRUE) // Comparison, not assignment
			{
				unset($this->api_request_body);
				curl_setopt($this->api_request, CURLOPT_HTTPGET, TRUE); // If the request was previously run with params, this cleans those out. Otherwise they go back with the request
			}

			$this->api_method = $api_method; // New method, coming right up!
			$this->api_full_uri = $this->base_url . $this->api_method . "." . $this->api_format; // New URI since method change
			curl_setopt($this->api_request, CURLOPT_URL, $this->api_full_uri);
		}

		function request()
		{
			if(is_array($this->api_request_body)) // We have params
			{
				curl_setopt($this->api_request, CURLOPT_POST, TRUE); // POST method since we'll be feeding params
				curl_setopt($this->api_request, CURLOPT_HTTPHEADER, Array('Content-type: application/json')); // Since we'll be using JSON
				curl_setopt($this->api_request, CURLOPT_POSTFIELDS, json_encode($this->api_request_body)); // Insert the parameters
			}

			// Now send the request and decode the return. Note: curl_exec() doesn't
			// throw exceptions, so no try/catch is needed here; on failure it returns
			// FALSE and json_decode() will return NULL.
			return json_decode(curl_exec($this->api_request), TRUE); // Pull the trigger and get nice pretty arrays of returned data
		}
	}
?>

Essentially, I added two new methods: clear_params() and new_method(). Both do what you might think - clearing any existing parameters and switching to a different API method.
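As a quick illustration of what request() sends when parameters are set: add_param() builds a nested array that gets JSON-encoded into the POST body. Here's a minimal sketch of that internal behavior (the 'page_size' and 'page_num' parameters are just example names):

```php
<?php
// Mimic what add_param() does internally: build the request body array,
// then JSON-encode it the way request() does before POSTing it.
$api_request_body = array();
$api_request_body['params']['page_size'] = '20';
$api_request_body['params']['page_num'] = '1';

echo json_encode($api_request_body); // {"params":{"page_size":"20","page_num":"1"}}
?>
```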

Also, if you're so inclined, I believe you should be able to track updates to the PHP Classes project that I have with this RSS link. I've also added it to the right side of the page.

5Apr/12

Pseudo-VPN for SmartServers

One of the disadvantages that SmartVPSs or SmartServers have compared to traditional dedicated servers is that you can't stick an ASA5505 (or some other hardware firewall) in front of it for VPN connectivity.

Many times, customers are looking to have some sort of internal site that they don't want being visible to the outside world. So how can we accomplish something like this without a VPN? Read below the fold to find out.
