Deploying RunnerGo as a Cluster with Docker Swarm

RunnerGo is an open-source project written in Go that provides performance testing, automated testing, and related features.

The official documentation currently says little about clustered deployment. If your workload is small, deploying directly with docker-compose is enough; but for high-traffic load-testing scenarios we need to place the load generators (engines) on separate servers to increase the load we can produce.

Docker Swarm was recommended to me by a community member. Docker Compose only supports a single machine, and Kubernetes is expensive to run without dedicated ops support, so this post walks through deploying RunnerGo with Docker Swarm. Compared with the official setup our changes are fairly substantial, and no one-click deployment can be provided. (Make sure your network can reach ghcr.io.)

Docker Swarm divides servers into two node roles: manager (master) and worker. First, initialize the main server to turn it into a manager node. The init output includes the command for joining worker nodes, which you then run on each server that will host a load generator.

$ docker swarm init --advertise-addr $(ip addr show eth0 | grep "inet\b" | awk '{print $2}' | cut -d/ -f1)
Swarm initialized: current node (t74sasz2wd0vcxvc6cb8pixpj) is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join --token SWMTKN-1-2pv6brhh2qzkfukac2c9k7d9ztkafcgw6ot6xmj0ivlvouxwf5-11mkqperv31v8m8wx5gicr8fl 10.0.16.15:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
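The `$(...)` in the init command extracts the first IPv4 address bound to eth0. You can dry-run the same pipeline against sample `ip addr` output before touching Swarm (the eth0 interface name and the sample layout below are assumptions; cloud images often use ens/enp names):

```shell
# Sample "ip addr show eth0" output (assumed layout; check your own machine).
sample='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 10.0.16.15/24 brd 10.0.16.255 scope global eth0
    inet6 fe80::1/64 scope link'

# Same pipeline as in "docker swarm init --advertise-addr $(...)":
# keep the IPv4 line ("inet\b" does not match "inet6"), take the
# second field, then drop the /prefix length.
addr=$(echo "$sample" | grep "inet\b" | awk '{print $2}' | cut -d/ -f1)
echo "$addr"   # → 10.0.16.15
```

After the workers have joined, running `docker node ls` on the manager should list every node with status Ready.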

Next, clone the official repository, which contains the files needed for initialization.

$ git clone https://github.com/Runner-Go-Team/RunnerGo.git
$ cd RunnerGo/runnergo

Now adjust the docker-compose.yaml file: every service needs a placement constraint specifying where it should be deployed.

mysql-db:
  image: registry.cn-beijing.aliyuncs.com/runnergo/mysql:5.7.40.v1@sha256:4d67b6aeab51bbae540a9e52f1d679aab8469d9e5bbd414da0b49f834fdc78b1
  env_file:
    - ./config.env
  volumes:
    - ./mysql/mysql.sql:/docker-entrypoint-initdb.d/mysql.sql:ro
  restart: always
  networks:
    - apipost_net
  ports:
    - "3306:3306"
+ deploy:
+   placement:
+     constraints:
+       - node.role == manager # schedules onto the master (manager) node
#       - node.role != manager # schedules onto worker nodes
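If you have several workers and only some of them should run engines, Swarm also supports constraints on node labels instead of roles. A sketch (the `engine=true` label name here is just an example, not part of the official setup):

```yaml
# First, on the manager: docker node update --label-add engine=true <node-name>
deploy:
  placement:
    constraints:
      - node.labels.engine == true
```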

The final adjusted file is provided below. The changes are:

  • Changed the load-generator (engine) image, and configured the service to report the IP of a specified network interface inside the container.
  • Added an overlay network so services can communicate across servers.
  • Removed the persistence configuration for the databases; if you need to keep your data, set up volume mounts yourself.
version: "3.7"
services:
  mysql-db:
    image: registry.cn-beijing.aliyuncs.com/runnergo/mysql:5.7.40.v1
    env_file:
      - ./config.env
    volumes:
      - ./mysql/mysql.sql:/docker-entrypoint-initdb.d/mysql.sql:ro
    restart: always
    networks:
      - apipost_net
    ports:
      - "3306:3306"
    deploy:
      placement:
        constraints:
          - node.role == manager
  redis-db:
    image: registry.cn-beijing.aliyuncs.com/runnergo/redis:6.2.7
    command: redis-server --requirepass mypassword
    restart: always
    networks:
      - apipost_net
    ports:
      - "6379:6379"
    deploy:
      placement:
        constraints:
          - node.role == manager
  mongo-db:
    image: registry.cn-beijing.aliyuncs.com/runnergo/mongo:4.4
    env_file:
      - ./config.env
    volumes:
      - ./mongo/init-mongo.sh:/docker-entrypoint-initdb.d/init-mongo.sh
    restart: always
    networks:
      - apipost_net
    ports:
      - "27017:27017"
    deploy:
      placement:
        constraints:
          - node.role == manager
  manage:
    image: registry.cn-beijing.aliyuncs.com/runnergo/manage:releases-v1.1.2
    restart: always
    env_file:
      - ./config.env
    networks:
      - apipost_net
    ports:
      - "58889:30000"
    depends_on:
      - mysql-db
    deploy:
      placement:
        constraints:
          - node.role == manager
  manage-ws:
    image: registry.cn-beijing.aliyuncs.com/runnergo/manage-ws:releases-v1.1.2
    restart: always
    env_file:
      - ./config.env
    networks:
      - apipost_net
    ports:
      - "58887:30000"
    depends_on:
      - mysql-db
    deploy:
      placement:
        constraints:
          - node.role == manager
  web-ui:
    image: registry.cn-beijing.aliyuncs.com/runnergo/web-ui:releases-v1.1.4
    restart: always
    ports:
      - "9999:81"
      - "58888:82"
    networks:
      - apipost_net
    deploy:
      placement:
        constraints:
          - node.role == manager
  file-server:
    image: registry.cn-beijing.aliyuncs.com/runnergo/file-server:releases-v1.0.1
    restart: always
    env_file:
      - ./config.env
    networks:
      - apipost_net
    deploy:
      placement:
        constraints:
          - node.role == manager
  zookeeper:
    image: registry.cn-beijing.aliyuncs.com/runnergo/zookeeper:latest
    restart: always
    networks:
      - apipost_net
    ports:
      - "2181:2181"
      - "51268:51268"
      - "51270:51270"
    deploy:
      placement:
        constraints:
          - node.role == manager
  kafka:
    image: registry.cn-beijing.aliyuncs.com/runnergo/kafka:2.13-3.2.1
    depends_on:
      - zookeeper
    env_file:
      - ./config.env
    networks:
      - apipost_net
    ports:
      - "9092:9092"
    deploy:
      placement:
        constraints:
          - node.role == manager
  collector:
    image: registry.cn-beijing.aliyuncs.com/runnergo/collector:releases-v1.0.3
    restart: always
    env_file:
      - ./config.env
    networks:
      - apipost_net
    links:
      - kafka
    depends_on:
      - kafka
      - engine
    deploy:
      placement:
        constraints:
          - node.role == manager
  engine:
    image: ghcr.io/hongfs/runnergo-engine-open-20230417:optimization-get-ip
    restart: always
    ports:
      - "30000:30000"
    env_file:
      - ./config.env
    environment:
      - RG_ENGINE_IP_FOR_ETH=eth1
    networks:
      - apipost_net
    deploy:
      mode: global
      placement:
        constraints:
          - node.role != manager

networks:
  apipost_net:
    driver: overlay
    ipam:
      config:
        - subnet: 10.200.0.0/16

Manually change kafka:9092 in config.env to ${internal IP of the master node}:9092.
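The substitution can be done with sed; the demo below works on a scratch copy so you can see the effect (the `KAFKA_HOST` variable name and the 10.0.16.15 address are stand-ins; run the same sed against your real config.env with your manager's internal IP):

```shell
# Demo on a scratch copy; run the same sed against the real config.env.
cd "$(mktemp -d)"
printf 'KAFKA_HOST=kafka:9092\n' > config.env   # stand-in for the real file

MASTER_IP=10.0.16.15   # replace with your manager node's internal IP
sed -i "s/kafka:9092/${MASTER_IP}:9092/g" config.env
cat config.env   # → KAFKA_HOST=10.0.16.15:9092
```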

Finally, deploy the stack with docker stack deploy --compose-file docker-compose.yaml runnergo. Once it is running, docker stack services runnergo shows the replica status of each service.
