Contents

1. The ab tool

2. Environment setup and load testing

2.1 OpenResty load test

2.2 NodeJS load test

2.3 Java load test

2.4 Python load test

3. Results analysis


OpenResty is built on top of the high-performance Nginx, but it is programmed in Lua, a relatively "niche" language, so its community is comparatively small and it is not widely known. In practice, however, OpenResty beats its competitors in both development efficiency and runtime efficiency. Below, a simple test case compares it with today's more popular web development stacks: NodeJS, Java, and Python. Each stack implements a minimal HTTP service that returns the string "Hello World", with no additional tuning, and is load-tested with ab on a CentOS test machine (12G). The benchmark command takes the following form (the port and path vary per service, as shown in each section below):

ab -c 100 -n 10000 http://127.0.0.1:8001/hello
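For reference, here is a small helper sketch (not from the original post) that automates the same comparison: it runs the identical ab command against each of the four services benchmarked below and extracts the headline numbers from ab's output. The script name and the endpoint list are assumptions taken from the ports and paths that appear in the individual sections; ab must already be installed and the services already running.

# bench.py -- a helper sketch (not from the original post) that runs the same
# ab command against each service and pulls out the headline numbers.
# Assumptions: ab is installed and on PATH, and the four services are already
# running on the ports/paths used later in this post.
import re
import subprocess

ENDPOINTS = {
    "OpenResty": "http://127.0.0.1:8001/test",
    "NodeJS":    "http://127.0.0.1:8888/",
    "Java":      "http://127.0.0.1:8080/hello",
    "Python":    "http://127.0.0.1:5000/",
}

def run_ab(url, concurrency=100, requests=10000):
    """Run ab and return (requests per second, mean time per request in ms)."""
    out = subprocess.run(
        ["ab", "-c", str(concurrency), "-n", str(requests), url],
        capture_output=True, text=True, check=True,
    ).stdout
    rps = float(re.search(r"Requests per second:\s+([\d.]+)", out).group(1))
    tpr = float(re.search(r"Time per request:\s+([\d.]+) \[ms\] \(mean\)\n", out).group(1))
    return rps, tpr

if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        rps, tpr = run_ab(url)
        print(f"{name:<10} {rps:>10.2f} req/s  {tpr:>7.2f} ms per request")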

1. The ab tool

For how to install ab and what its performance metrics mean, see the earlier post 超实用压力测试工具-ab工具 (a practical guide to the ab stress-testing tool).

2. Environment setup and load testing

2.1 OpenResty load test

To install and deploy OpenResty, see the post Linux中安装部署OpenResty应用 (installing and deploying an OpenResty application on Linux).

Benchmark output:

[root@VM_0_26_centos ~]# ab -c 100 -n 10000 http://127.0.0.1:8001/test
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        openresty/1.15.8.2
Server Hostname:        127.0.0.1
Server Port:            8001

Document Path:          /test
Document Length:        12 bytes

Concurrency Level:      100
Time taken for tests:   0.568 seconds  <-- time taken to complete all requests
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      1600000 bytes
HTML transferred:       120000 bytes
Requests per second:    17617.85 [#/sec] (mean)  <-- throughput
Time per request:       5.676 [ms] (mean)   <-- mean time per request, user side
Time per request:       0.057 [ms] (mean, across all concurrent requests)  <-- mean time per request, server side (the reciprocal of the throughput)
Transfer rate:          2752.79 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   0.5      1       3
Processing:     2    4   0.9      4      11
Waiting:        0    3   0.7      3       8
Total:          2    6   0.9      5      11
WARNING: The median and mean for the total time are not within a normal deviation
        These results are probably not that reliable.

Percentage of the requests served within a certain time (ms)
  50%      5
  66%      6
  75%      6
  80%      6
  90%      6
  95%      7
  98%      9
  99%     11    <-- 99th percentile: 11 ms
 100%     11 (longest request)

2.2 NodeJS load test

For environment setup, see: https://blog.csdn.net/xzx4959/article/details/103950765

Benchmark output:

[root@VM_0_26_centos ~]# ab -c 100 -n 10000 http://127.0.0.1:8888/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        
Server Hostname:        127.0.0.1
Server Port:            8888

Document Path:          /
Document Length:        11 bytes

Concurrency Level:      100
Time taken for tests:   1.835 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      1120000 bytes
HTML transferred:       110000 bytes
Requests per second:    5449.86 [#/sec] (mean)
Time per request:       18.349 [ms] (mean)
Time per request:       0.183 [ms] (mean, across all concurrent requests)
Transfer rate:          596.08 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   0.5      1       3
Processing:     5   17  10.1     12      53
Waiting:        4   14   7.5     11      43
Total:          5   18  10.1     14      53

Percentage of the requests served within a certain time (ms)
  50%     14
  66%     18
  75%     24
  80%     26
  90%     33
  95%     41
  98%     46
  99%     48
 100%     53 (longest request)

2.3 Java load test

To set up the Spring Boot test environment, see the post Spring Boot之Hello World (Spring Boot Hello World).

Benchmark output:

[root@VM_0_26_centos ~]# ab -c 100 -n 10000 http://127.0.0.1:8080/hello
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /hello
Document Length:        11 bytes

Concurrency Level:      100
Time taken for tests:   8.184 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      1440000 bytes
HTML transferred:       110000 bytes
Requests per second:    1221.84 [#/sec] (mean)
Time per request:       81.844 [ms] (mean)
Time per request:       0.818 [ms] (mean, across all concurrent requests)
Transfer rate:          171.82 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   0.5      1       2
Processing:     1   81  58.5     66     603
Waiting:        1   75  49.8     64     585
Total:          1   81  58.4     67     604

Percentage of the requests served within a certain time (ms)
  50%     67
  66%     83
  75%     93
  80%     99
  90%    129
  95%    200
  98%    305
  99%    327
 100%    604 (longest request)

2.4 Python load test

To set up the Python test environment, see the post Linux上部署Flask Web应用 (deploying a Flask web application on Linux).
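The service under test is essentially a Flask "Hello World". The sketch below is an assumption of what it looks like rather than the exact code from the referenced post, but it is consistent with the ab output below: it is served by Flask's built-in Werkzeug development server on port 5000 and returns an 11-byte body.

# app.py -- a minimal sketch of the kind of Flask app benchmarked here (an
# assumption; the exact code lives in the referenced deployment post).
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # "Hello World" is 11 bytes, matching "Document Length: 11 bytes" below.
    return "Hello World"

if __name__ == "__main__":
    # Flask's built-in Werkzeug development server, matching the
    # "Server Software: Werkzeug/0.16.0" line in the ab output.
    app.run(host="127.0.0.1", port=5000)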

Benchmark output:

[root@VM_0_26_centos ~]# ab -c 100 -n 10000 http://127.0.0.1:5000/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        Werkzeug/0.16.0
Server Hostname:        127.0.0.1
Server Port:            5000

Document Path:          /
Document Length:        11 bytes

Concurrency Level:      100
Time taken for tests:   9.212 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      1650000 bytes
HTML transferred:       110000 bytes
Requests per second:    1085.55 [#/sec] (mean)
Time per request:       92.119 [ms] (mean)
Time per request:       0.921 [ms] (mean, across all concurrent requests)
Transfer rate:          174.92 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0       2
Processing:     2   92   6.4     91     112
Waiting:        2   92   6.4     91     112
Total:          4   92   6.3     91     112

Percentage of the requests served within a certain time (ms)
  50%     91
  66%     92
  75%     93
  80%     94
  90%     97
  95%     98
  98%    104
  99%    107
 100%    112 (longest request)

3. Results analysis

The benchmark results are summarized in the table below (throughput and mean time per request are the values reported by ab; the 99th-percentile and longest-request times come from the latency breakdowns above):

Metric                        | OpenResty | NodeJS  | Java    | Python
Time taken for tests (s)      | 0.568     | 1.835   | 8.184   | 9.212
Throughput (req/s)            | 17617.85  | 5449.86 | 1221.84 | 1085.55
99th percentile (ms)          | 11        | 48      | 327     | 107
Longest request (ms)          | 11        | 53      | 604     | 112
Mean time per request (ms)    | 5.68      | 18.35   | 81.84   | 92.12

As the statistics show, OpenResty's execution efficiency in this test is clearly higher than that of the other stacks.
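To put the gap in concrete terms, here is a quick calculation (not part of the original post) of the throughput ratios, using the "Requests per second" values reported by ab above:

# Throughput ratios relative to OpenResty, from the ab results in sections 2.1-2.4.
rps = {"OpenResty": 17617.85, "NodeJS": 5449.86, "Java": 1221.84, "Python": 1085.55}
for name in ("NodeJS", "Java", "Python"):
    print(f"OpenResty vs {name:<7}: {rps['OpenResty'] / rps[name]:.1f}x the throughput")

That works out to roughly 3x NodeJS, 14x Spring Boot, and 16x the Flask development server on this particular hello-world workload.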

Notes on the metrics:

  1. Throughput (Requests per second)
    Definition: a quantitative measure of the server's concurrent processing capacity, in reqs/s; it is the number of requests handled per unit of time at a given concurrency level. The largest number of requests the server can handle per unit of time at that concurrency level is called the maximum throughput.
    Formula: total requests divided by the time taken to complete them, i.e.
    Requests per second = Complete requests / Time taken for tests

  2. Number of concurrent connections
    Definition: the number of requests the server has accepted at a given moment; put simply, each one is a session.

  3. Number of concurrent users (Concurrency Level)
    Definition: take care to distinguish this from the number of concurrent connections; a single user may open several sessions, that is, several connections, at the same time.

  4. Mean time per request, user side (Time per request)
    Formula: time taken to complete all requests divided by (total requests / concurrency level), i.e.
    Time per request = Time taken for tests / (Complete requests / Concurrency Level)

  5. Mean time per request, server side (Time per request: across all concurrent requests)
    Formula: time taken to complete all requests divided by the total number of requests, i.e.
    Time per request (across all concurrent requests) = Time taken for tests / Complete requests
    As you can see, this is the reciprocal of the throughput. It also equals the user-side mean time per request divided by the concurrency level:
    Time per request / Concurrency Level
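As a quick sanity check of these formulas, the snippet below (not from the original post) plugs in the OpenResty numbers from section 2.1. The small gap between the computed ~17605.6 req/s and ab's reported 17617.85 req/s is just rounding: ab prints the test duration to three decimals but uses the unrounded value internally.

# Re-derive the ab metrics for the OpenResty run in section 2.1.
complete_requests = 10000      # Complete requests
time_taken_s      = 0.568      # Time taken for tests
concurrency_level = 100        # Concurrency Level

# Requests per second = Complete requests / Time taken for tests
rps = complete_requests / time_taken_s                                                 # ~17605.6

# Time per request (mean) = Time taken for tests / (Complete requests / Concurrency Level)
time_per_request_ms = time_taken_s / (complete_requests / concurrency_level) * 1000    # ~5.68 ms

# Time per request (mean, across all concurrent requests)
#   = Time taken for tests / Complete requests = 1 / rps
time_per_request_all_ms = time_taken_s / complete_requests * 1000                      # ~0.057 ms

print(rps, time_per_request_ms, time_per_request_all_ms)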

 
