When serving web content, speed is king. Nothing counts more for the user experience than promptly responding to a request. If you already have a system in place close to your customers, the next consideration is the software receiving those requests.

At Sametz we have given serious consideration to which web server will best serve our clients' needs. To find out, we load tested each configuration with a battery of requests on an in-house MacBook with a 2.6 GHz Intel i7 and 16 GB of DDR3 memory.

We used httperf, run on the same system as the servers, as our load-testing tool of choice. Running the client and server on one machine eliminates network latency and congestion as variables: we want to isolate the software, so competing network traffic and general wire time must be taken out of the picture. Further details on the specific httperf settings appear in the list below.


  • httperf --hog --timeout=3 --server=localhost --port=80 --wsesslog=10000,0,wlog
  • --hog

    “This option requests to use up as many TCP ports as necessary. Without this option, httperf is typically limited to using ephemeral ports (in the range from 1024 to 5000). This limited port range can quickly become a bottleneck so it is generally a good idea to specify this option for serious testing. Also, this option must be specified when measuring NT servers since it avoids a TCP incompatibility between NT and UNIX machines.”

    We enable the hog option because we want to test the throughput of the receiver; to do so we must eliminate limitations on the sending side.

  • --timeout=3

    “Specifies the amount of time X that httperf is willing to wait for a server reaction. The timeout is specified in seconds and can be a fractional number (e.g., --timeout 3.5). This timeout value is used when establishing a TCP connection, when sending a request, when waiting for a reply, and when receiving a reply. If during any of those activities a request fails to make forward progress within the allotted time, httperf considers the request to have died, closes the associated connection or session and increases the client-timo error count. The actual timeout value used when waiting for a reply is the sum of this timeout and the think-timeout (see option --think-timeout). By default, the timeout value is infinity.”

    Although in theory the connection time to a local server should be immeasurably small, we set the timeout to three seconds on all load tests to simulate the patience of the average viewer.

  • --server=localhost

    “Specifies the IP hostname of the server. By default, the hostname "localhost" is used. This option should always be specified as it is generally not a good idea to run the client and the server on the same machine.”

    In this case we deliberately chose to run the client and server on the same machine in order to eliminate network time.

  • --port=80

    “This option specifies the port number N on which the web server is listening for HTTP requests. By default, httperf uses port number 80.”

    To run on the same machine, the two web servers must listen on different ports: in this case Nginx served on port 80 and Apache on port 8080 (a rough configuration sketch appears just before the results).

  • --num-conns=10000 (implied)

    “This option is meaningful for request-oriented workloads only. It specifies the total number of connections to create. On each connection, calls are issued as specified by options --num-calls and --burst-length. A test stops as soon as the N connections have either completed or failed. A connection is considered to have failed if any activity on the connection fails to make forward progress for more than the time specified by the timeout options --timeout and --think-timeout. The default value for this option is 1.”

    For our purposes we decided that 10,000 iterations was a fair opportunity to assess each system and to iron out the inconsistencies we saw with smaller runs.
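
For anyone reproducing the setup, here is a minimal sketch of the full invocation. Our actual wlog session file is not shown in this post, so the single-line file below is an assumption based on the httperf documentation, in which each non-blank line of a session log names a URI to request and blank lines separate sessions.

    # Create a minimal session log: every session requests the root URI "/".
    # (Hypothetical reconstruction; the real wlog contents are not shown.)
    echo "/" > wlog

    # Run the test as described: 10,000 sessions, no pause between them,
    # against the local server on port 80.
    httperf --hog --timeout=3 --server=localhost --port=80 \
            --wsesslog=10000,0,wlog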

Based on surveys by Netcraft and W3Techs, we chose to test a few open source systems against each other. For each system we tested serving a series of small files (like a web page).

For the software we chose Apache and Nginx for their market-leading positions. We also wanted to evaluate some advice from HAProxy's creator and understand why he would suggest a dual-server system. With all of that said, a rough sketch of each server's port configuration follows, and then the results.
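
These excerpts show roughly how each server was bound to its port. They are illustrative sketches rather than our production configuration; the document root and file layout are assumptions.

    # nginx.conf (excerpt) -- hypothetical minimal server block
    server {
        listen 80;                # Nginx takes port 80
        root /var/www/html;       # serves the small static test page
    }

    # httpd.conf (excerpt) -- hypothetical minimal Apache counterpart
    Listen 8080                   # Apache takes port 8080
    DocumentRoot "/var/www/html"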

Variant 1.1: Nginx Results

httperf --hog --timeout=3 --client=0/1 --server=localhost --port=80 --uri=/
	--send-buffer=4096 --retry-on-failure --recv-buffer=16384
	--wsesslog=10000,0.000,wlog
httperf: warning: open file limit > FD_SETSIZE; limiting max. # of open files to FD_SETSIZE
Maximum connect burst length: 1

Total: connections 10000 requests 10000 replies 10000 test-duration 1.220 s

Connection rate: 8194.3 conn/s (0.1 ms/conn, <=2 concurrent connections)
Connection time [ms]: min 0.1 avg 0.1 max 26.0 median 0.5 stddev 0.3
Connection time [ms]: connect 0.0
Connection length [replies/conn]: 1.000

Request rate: 8194.3 req/s (0.1 ms/req)
Request size [B]: 72.0

Reply rate [replies/s]: min 0.0 avg 0.0 max 0.0 stddev 0.0 (0 samples)
Reply time [ms]: response 0.1 transfer 0.0
Reply size [B]: header 311.0 content 9389.0 footer 0.0 (total 9700.0)
Reply status: 1xx=0 2xx=10000 3xx=0 4xx=0 5xx=0

CPU time [s]: user 0.18 system 1.04 (user 14.6% system 85.4% total 100.0%)
Net I/O: 78211.0 KB/s (640.7*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

Session rate [sess/s]: min 0.00 avg 8194.29 max 0.00 stddev 0.00 (10000/10000)
Session: avg 1.00 connections/session
Session lifetime [s]: 0.0
Session failtime [s]: 0.0
Session length histogram: 0 10000

Variant 1.2: Apache Results

httperf --hog --timeout=3 --client=0/1 --server=localhost --port=8080 --uri=/
	--send-buffer=4096 --retry-on-failure --recv-buffer=16384
	--wsesslog=10000,0.000,wlog
httperf: warning: open file limit > FD_SETSIZE; limiting max. # of open files to FD_SETSIZE
Maximum connect burst length: 1

Total: connections 10000 requests 10000 replies 10000 test-duration 1.535 s

Connection rate: 6515.3 conn/s (0.2 ms/conn, <=2 concurrent connections)
Connection time [ms]: min 0.1 avg 0.2 max 16.4 median 0.5 stddev 0.2
Connection time [ms]: connect 0.1
Connection length [replies/conn]: 1.000

Request rate: 6515.3 req/s (0.2 ms/req)
Request size [B]: 72.0

Reply rate [replies/s]: min 0.0 avg 0.0 max 0.0 stddev 0.0 (0 samples)
Reply time [ms]: response 0.1 transfer 0.0
Reply size [B]: header 284.0 content 9389.0 footer 0.0 (total 9673.0)
Reply status: 1xx=0 2xx=10000 3xx=0 4xx=0 5xx=0

CPU time [s]: user 0.23 system 1.28 (user 14.7% system 83.5% total 98.2%)
Net I/O: 62014.0 KB/s (508.0*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

Session rate [sess/s]: min 0.00 avg 6515.30 max 0.00 stddev 0.00 (10000/10000)
Session: avg 1.00 connections/session
Session lifetime [s]: 0.0
Session failtime [s]: 0.0
Session length histogram: 0 10000

Network Availability

Nginx:

	Connection time [ms]: min 0.1 avg 0.1 max 26.0 median 0.5 stddev 0.3
	Connection time [ms]: connect 0.0
	Request size [B]: 72.0
	Reply time [ms]: response 0.1 transfer 0.0
	Reply size [B]: header 311.0 content 9389.0 footer 0.0 (total 9700.0)
	Reply status: 1xx=0 2xx=10000 3xx=0 4xx=0 5xx=0

Apache:

	Connection time [ms]: min 0.1 avg 0.2 max 16.4 median 0.5 stddev 0.2
	Connection time [ms]: connect 0.1
	Request size [B]: 72.0
	Reply time [ms]: response 0.1 transfer 0.0
	Reply size [B]: header 284.0 content 9389.0 footer 0.0 (total 9673.0)
	Reply status: 1xx=0 2xx=10000 3xx=0 4xx=0 5xx=0

  1. Picking out the salient details, the first thing to consider is the
    validity of our test. In this case we successfully standardized network
    time and isolated the throughput of the web-server application. The
    average and median connection times deviate only negligibly between the
    two servers, so we can set them aside. The reply time is similarly
    uniform, and thus negligible.
  2. Assumptions are the antithesis of good science, so although we would
    reasonably hope that the load tester sent the same request and got the
    same response in every case, we cannot assume so. httperf confirms that
    it sent a 72 B request to both servers and received the same 9389 B body
    with 100% success from each. Apache and Nginx respond with distinct
    headers, but otherwise the request and reply are uniform; a quick header
    check, sketched below, shows the difference.
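
To see that header difference for yourself, you could compare the response headers directly. This assumes the servers are still listening on ports 80 and 8080 as configured above; it is an illustrative check, not part of the original test run.

    # Fetch only the response headers from each server (HEAD requests).
    # The bodies are byte-identical; the Server and related headers differ.
    curl -sI http://localhost:80/
    curl -sI http://localhost:8080/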

With that established, our test is valid. This raises the question: which is
faster?

Software Throughput

  1. Nginx:
     Total: connections 1000 requests 1000 replies 1000 test-duration 0.128 s

     Apache:
     Total: connections 1000 requests 1000 replies 1000 test-duration 0.151 s


    Starting from the top, our first point of comparison is the test
    duration for each. By this metric Nginx wins, finishing the test ~15%
    faster than Apache. We could stop here and call Nginx the faster of the
    two, but we must dig deeper to learn why.

  2. Nginx:
     Connection rate: 7809.7 conn/s (0.1 ms/conn, <=2 concurrent connections)
     Request rate: 7809.7 req/s (0.1 ms/req)
     Net I/O: 73881.2 KB/s (605.2*10^6 bps)

     Apache:
     Connection rate: 6626.6 conn/s (0.2 ms/conn, <=2 concurrent connections)
     Request rate: 6626.6 req/s (0.2 ms/req)
     Net I/O: 62609.0 KB/s (512.9*10^6 bps)


    Our measured connection rate bears out this observation. Each program
    ran at most two concurrent connections under httperf, yet Nginx remained
    ~15% faster in the number of connections it could handle per second.

  3. Nginx:
     Reply size [B]: header 311.0 content 9303.0 footer 0.0 (total 9614.0)

     Apache:
     Reply size [B]: header 284.0 content 9318.0 footer 0.0 (total 9602.0)


    One interesting comparison is that Nginx actually sends a slightly
    larger reply header than Apache. While the difference may appear
    negligible, nothing is negligible at scale. Here the total reply sizes
    differ by only ~0.12%, which will probably not make a strong argument
    for one technology over the other. It does imply, however, that Nginx's
    extra header overhead is amortized on larger, more sustained transfers
    carried under a single header. The arithmetic behind these percentages
    is spelled out after this list.
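
To keep the percentages above honest, here is a small worked check of the arithmetic, using the numbers httperf reported:

    # Sanity-check the headline percentages from the comparisons above.
    awk 'BEGIN {
        printf "test duration:   Nginx %.1f%% faster\n", (0.151 - 0.128) / 0.151 * 100
        printf "connection rate: Nginx %.1f%% faster\n", (7809.7 - 6626.6) / 7809.7 * 100
        printf "reply size:      Nginx %.2f%% larger\n", (9614 - 9602) / 9614 * 100
    }'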

Summary

An initial comparison of the two web servers shows that Nginx is ~15% faster
than Apache at serving web content. Although nothing in these numbers suggests
why or how Nginx achieves this performance edge, further research may lead to
more complete answers.