
Hot-reloading nginx configuration in Kubernetes

Nginx is one of the most widely used web servers. It natively supports hot reloads: after editing a configuration file, `nginx -s reload` reloads the configuration without stopping the service. For a Dockerized nginx, however, having to exec into the container and run that command after every change is painful. This article walks through hot-reload approaches for nginx running in a Kubernetes cluster.

First, let's create an ordinary nginx deployment. The manifests are as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default.conf: |-
    server {
      server_name localhost;
      listen 80 default_server;

      location = /healthz {
        add_header Content-Type text/plain;
        return 200 'ok';
      }

      location / {
          root   /usr/share/nginx/html;
          index  index.html index.htm;
      }

      error_page   500 502 503 504  /50x.html;

      location = /50x.html {
          root   /usr/share/nginx/html;
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-config

Then create the resources (e.g. with `kubectl apply -f`) and check the Pod:

# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP               NODE         NOMINATED NODE   READINESS GATES
my-app-9bdd6cbbc-x9gnt   1/1     Running   0          112s   192.168.58.197   k8s-node02   <none>           <none>

Then we access the Pod, as follows:

# curl -I 192.168.58.197
HTTP/1.1 200 OK
Server: nginx/1.17.10
Date: Tue, 26 May 2020 06:18:18 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 14 Apr 2020 14:19:26 GMT
Connection: keep-alive
ETag: "5e95c66e-264"
Accept-Ranges: bytes

Now let's update the ConfigMap, i.e. change the configuration file, as follows (the listen port changes from 80 to 8080):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default.conf: |-
    server {
      server_name localhost;
      listen 8080 default_server;

      location = /healthz {
        add_header Content-Type text/plain;
        return 200 'ok';
      }

      location / {
          root   /usr/share/nginx/html;
          index  index.html index.htm;
      }

      error_page   500 502 503 504  /50x.html;

      location = /50x.html {
          root   /usr/share/nginx/html;
      }
    }

Wait a few seconds...

We can then see that the configuration file inside the nginx Pod has been updated:

# kubectl exec -it my-app-9bdd6cbbc-x9gnt -- /bin/bash
root@my-app-9bdd6cbbc-x9gnt:/# cat /etc/nginx/conf.d/default.conf
server {
  server_name localhost;
  listen 8080 default_server;

  location = /healthz {
    add_header Content-Type text/plain;
    return 200 'ok';
  }

  location / {
      root   /usr/share/nginx/html;
      index  index.html index.htm;
  }

  error_page   500 502 503 504  /50x.html;

  location = /50x.html {
      root   /usr/share/nginx/html;
  }
}
root@my-app-9bdd6cbbc-x9gnt:/#

At this point, port 8080 is unreachable while port 80 still works:

[root@k8s-master nginx]# curl -I 192.168.58.197
HTTP/1.1 200 OK
Server: nginx/1.17.10
Date: Tue, 26 May 2020 06:21:05 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 14 Apr 2020 14:19:26 GMT
Connection: keep-alive
ETag: "5e95c66e-264"
Accept-Ranges: bytes

[root@k8s-master nginx]# curl -I 192.168.58.197:8080
curl: (7) Failed connect to 192.168.58.197:8080; Connection refused

So the configuration file we need has been updated, but it is not being used: nginx inside the Pod never reloaded it. If we redeployed the Pod at this point, the new configuration would of course take effect.

But that is not the effect we want. We want the service to reload whenever the configuration file changes, without any manual intervention.

There are currently three approaches:

  • The application itself watches its configuration files and reloads automatically.
  • Add a sidecar container to the Pod that watches the configuration files.
  • Use the third-party Reloader operator: adding the annotation reloader.stakater.com/auto: "true" to a Deployment makes it watch for ConfigMap changes and restart the Pods.
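For reference, the third option's annotation lives in the Deployment's top-level metadata; a minimal fragment (the Deployment name is taken from the example in this article):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  # ... rest of the Deployment unchanged ...
```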

Having the application watch its own configuration is out of scope here; below we experiment with the second and third approaches.

1. The sidecar approach

1.1 How it works

  • Deploy an nginx Pod containing two containers: an nginx container providing nginx itself, and an nginx-reloader container that watches the target ConfigMap in real time. When the ConfigMap is updated, the reloader sends a HUP signal to the nginx master process, hot-reloading the configuration.
  • The configuration file is mounted into the Pod as a ConfigMap, shared by both containers.
  • This relies on the Kubernetes shareProcessNamespace feature (requires version 1.12 or later): the two containers must share the Pod's process namespace.
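One detail worth calling out: the kubelet updates a ConfigMap volume by writing the new payload into a fresh hidden directory and atomically swapping a `..data` symlink onto it, which is why the watcher code in this section looks for a create event on `..data`. A minimal simulation of that swap (the directory names are made up; only the symlink mechanics mirror the kubelet's behavior):

```python
import os
import tempfile

# Simulate a ConfigMap volume directory: data lives in timestamped
# directories, and a `..data` symlink is swapped atomically via rename().
vol = tempfile.mkdtemp()

def publish(version_dir, content):
    """Write a new payload dir and atomically retarget ..data onto it."""
    d = os.path.join(vol, version_dir)
    os.mkdir(d)
    with open(os.path.join(d, "default.conf"), "w") as f:
        f.write(content)
    tmp = os.path.join(vol, "..data_tmp")
    os.symlink(version_dir, tmp)                  # relative link to new payload
    os.rename(tmp, os.path.join(vol, "..data"))   # atomic swap -> one fs event

publish("..2020_05_26_1", "listen 80;")
# the top-level file is a stable symlink that resolves through ..data
os.symlink(os.path.join("..data", "default.conf"),
           os.path.join(vol, "default.conf"))

publish("..2020_05_26_2", "listen 8080;")
with open(os.path.join(vol, "default.conf")) as f:
    print(f.read())  # prints "listen 8080;" - the new payload, same path
```

Because the swap is a single rename, a watcher only needs to react to the creation of `..data` to know the whole ConfigMap was updated consistently.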

1.2 Implementation

1.2.1 Building the images

(1) The main container can simply use the official nginx image.

(2) Building the sidecar container

The Dockerfile is as follows:

FROM golang:1.12.0 as build

RUN go get github.com/fsnotify/fsnotify
RUN go get github.com/shirou/gopsutil/process
RUN mkdir -p /go/src/app
ADD main.go /go/src/app/
WORKDIR /go/src/app
RUN CGO_ENABLED=0 GOOS=linux go build -a -o nginx-reloader .

# main image
FROM nginx:1.14.2-alpine

COPY --from=build /go/src/app/nginx-reloader /

CMD ["/nginx-reloader"]

The main.go source is as follows:

package main

import (
    "log"
    "os"
    "path/filepath"
    "syscall"

    "github.com/fsnotify/fsnotify"
    proc "github.com/shirou/gopsutil/process"
)

const (
    nginxProcessName     = "nginx"
    defaultNginxConfPath = "/etc/nginx"
    watchPathEnvVarName  = "WATCH_NGINX_CONF_PATH"
)

var stderrLogger = log.New(os.Stderr, "error: ", log.Lshortfile)
var stdoutLogger = log.New(os.Stdout, "", log.Lshortfile)

// getMasterNginxPid scans the shared process namespace for nginx processes
// and returns the master (the one whose parent is outside the namespace).
func getMasterNginxPid() (int, error) {
    processes, processesErr := proc.Processes()
    if processesErr != nil {
        return 0, processesErr
    }

    nginxProcesses := map[int32]int32{}

    for _, process := range processes {
        processName, processNameErr := process.Name()
        if processNameErr != nil {
            return 0, processNameErr
        }

        if processName == nginxProcessName {
            ppid, ppidErr := process.Ppid()
            if ppidErr != nil {
                return 0, ppidErr
            }

            nginxProcesses[process.Pid] = ppid
        }
    }

    var masterNginxPid int32
    for pid, ppid := range nginxProcesses {
        if ppid == 0 {
            masterNginxPid = pid
            break
        }
    }

    stdoutLogger.Println("found master nginx pid:", masterNginxPid)

    return int(masterNginxPid), nil
}

// signalNginxReload sends SIGHUP to the nginx master, triggering a reload.
func signalNginxReload(pid int) error {
    stdoutLogger.Printf("signaling master nginx process (pid: %d) -> SIGHUP\n", pid)
    nginxProcess, nginxProcessErr := os.FindProcess(pid)
    if nginxProcessErr != nil {
        return nginxProcessErr
    }

    return nginxProcess.Signal(syscall.SIGHUP)
}

func main() {
    watcher, watcherErr := fsnotify.NewWatcher()
    if watcherErr != nil {
        stderrLogger.Fatal(watcherErr)
    }
    defer watcher.Close()

    done := make(chan bool)
    go func() {
        for {
            select {
            case event, ok := <-watcher.Events:
                if !ok {
                    return
                }

                // kubelet updates a ConfigMap volume by atomically swapping
                // the ..data symlink, which shows up as a Create event.
                if event.Op&fsnotify.Create == fsnotify.Create {
                    if filepath.Base(event.Name) == "..data" {
                        stdoutLogger.Println("config map updated")

                        nginxPid, nginxPidErr := getMasterNginxPid()
                        if nginxPidErr != nil {
                            stderrLogger.Printf("getting master nginx pid failed: %s", nginxPidErr.Error())
                            continue
                        }

                        if err := signalNginxReload(nginxPid); err != nil {
                            stderrLogger.Printf("signaling master nginx process failed: %s", err)
                        }
                    }
                }
            case err, ok := <-watcher.Errors:
                if !ok {
                    return
                }
                stderrLogger.Printf("received watcher.Error: %s", err)
            }
        }
    }()

    pathToWatch, ok := os.LookupEnv(watchPathEnvVarName)
    if !ok {
        pathToWatch = defaultNginxConfPath
    }

    stdoutLogger.Printf("adding path: `%s` to watch\n", pathToWatch)

    if err := watcher.Add(pathToWatch); err != nil {
        stderrLogger.Fatal(err)
    }
    <-done
}

1.2.2 Deploying nginx

(1) The nginx configuration is deployed as a ConfigMap:

nginx-config.yaml

# nginx-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default.conf: |-
    server {
      server_name localhost;
      listen 80 default_server;

      location = /healthz {
        add_header Content-Type text/plain;
        return 200 'ok';
      }

      location / {
          root   /usr/share/nginx/html;
          index  index.html index.htm;
      }

      error_page   500 502 503 504  /50x.html;

      location = /50x.html {
          root   /usr/share/nginx/html;
      }
    }

(2) The nginx Deployment manifest (the shared process namespace feature must be enabled: shareProcessNamespace: true):

nginx-deploy.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      shareProcessNamespace: true
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/conf.d
              readOnly: true
        - name: nginx-reloader
          image: registry.cn-hangzhou.aliyuncs.com/rookieops/nginx-reloader:v1
          imagePullPolicy: IfNotPresent
          env:
            - name: WATCH_NGINX_CONF_PATH
              value: /etc/nginx/conf.d
          volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/conf.d
              readOnly: true
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-config

After the ConfigMap is edited, the reloader detects the change and sends a HUP signal to the nginx master process, hot-reloading the configuration.
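The signal itself is ordinary POSIX machinery: the nginx master process treats SIGHUP as "reload configuration", which is also what `nginx -s reload` sends. A toy sketch that installs a handler in our own process to stand in for nginx (Linux only; the handler and the `reloads` counter are illustrative, not part of the sidecar):

```python
import os
import signal
import time

reloads = []

def on_hup(signum, frame):
    # Real nginx would re-read its config and spawn fresh worker processes.
    reloads.append(signum)

signal.signal(signal.SIGHUP, on_hup)

# This os.kill is exactly what the sidecar does with the master nginx PID.
os.kill(os.getpid(), signal.SIGHUP)
time.sleep(0.1)  # give the interpreter a moment to run the handler

print(len(reloads))  # prints 1
```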

2. The third-party Reloader operator

Project: https://github.com/stakater/Reloader

The manifests are below; I have only changed the image address:

---
# Source: reloader/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  labels:
    app: reloader-reloader
    chart: "reloader-v0.0.58"
    release: "reloader"
    heritage: "Tiller"
  name: reloader-reloader-role
  namespace: default
rules:
  - apiGroups:
      - ""
    resources:
      - secrets
      - configmaps
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - "apps"
    resources:
      - deployments
      - daemonsets
      - statefulsets
    verbs:
      - list
      - get
      - update
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - deployments
      - daemonsets
    verbs:
      - list
      - get
      - update
      - patch
---
# Source: reloader/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  labels:
    app: reloader-reloader
    chart: "reloader-v0.0.58"
    release: "reloader"
    heritage: "Tiller"
  name: reloader-reloader-role-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: reloader-reloader-role
subjects:
  - kind: ServiceAccount
    name: reloader-reloader
    namespace: default
---
# Source: reloader/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: reloader-reloader
    chart: "reloader-v0.0.58"
    release: "reloader"
    heritage: "Tiller"
    group: com.stakater.platform
    provider: stakater
    version: v0.0.58
  name: reloader-reloader
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: reloader-reloader
      release: "reloader"
  template:
    metadata:
      labels:
        app: reloader-reloader
        chart: "reloader-v0.0.58"
        release: "reloader"
        heritage: "Tiller"
        group: com.stakater.platform
        provider: stakater
        version: v0.0.58
    spec:
      containers:
      - env:
        image: "registry.cn-hangzhou.aliyuncs.com/rookieops/stakater-reloader:v0.0.58"
        imagePullPolicy: IfNotPresent
        name: reloader-reloader
        args:
      serviceAccountName: reloader-reloader
---
# Source: reloader/templates/role.yaml
---
# Source: reloader/templates/rolebinding.yaml
---
# Source: reloader/templates/service.yaml
---
# Source: reloader/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: reloader-reloader
    chart: "reloader-v0.0.58"
    release: "reloader"
    heritage: "Tiller"
  name: reloader-reloader

Then deploy the resources; the result looks like this:

kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
my-app-9bdd6cbbc-x9gnt             1/1     Running   0          38m
reloader-reloader-ff767bb8-cpzgz   1/1     Running   0          56s

Then add an annotation to the Deployment, as follows:

kubectl patch deployments.apps my-app -p '{"metadata": {"annotations": {"reloader.stakater.com/auto": "true"}}}'

Now, after we edit the ConfigMap manifest and re-apply it, we can see the Pod being terminated and recreated:

kubectl get pod -o wide
NAME                               READY   STATUS        RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
my-app-7c4fc77f5f-w4mbn            1/1     Running       0          3s      192.168.58.202   k8s-node02   <none>           <none>
my-app-df6fbdb67-bnftb             1/1     Terminating   0          35s     192.168.58.201   k8s-node02   <none>           <none>
reloader-reloader-ff767bb8-cpzgz   1/1     Running       0          3m47s   192.168.85.195   k8s-node01   <none>           <none>

And curl against the new Pod on port 8080 now works:

# curl 192.168.58.202:8080 -I
HTTP/1.1 200 OK
Server: nginx/1.17.10
Date: Tue, 26 May 2020 06:58:38 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 14 Apr 2020 14:19:26 GMT
Connection: keep-alive
ETag: "5e95c66e-264"
Accept-Ranges: bytes
3. Appendix

Finally, here is a Python script that implements the same idea and can run as a sidecar:

#!/usr/bin/env python
# -*- encoding: utf8 -*-
"""
Goal: when the nginx configuration changes, reload it automatically
(the equivalent of nginx -s reload).
Implementation:
    1. Use pyinotify to watch the nginx configuration files in real time.
    2. When a configuration file changes, run the reload command, which
       sends nginx a HUP to reload.
"""
import os
import pyinotify
import logging
from threading import Timer

# Parameters
LOG_PATH = "/root/python/log"
CONF_PATHS = [
  "/etc/nginx",
]
DELAY = 5
SUDO = False
RELOAD_COMMAND = "nginx -s reload"
if SUDO:
  RELOAD_COMMAND = "sudo " + RELOAD_COMMAND

# Logging
logger = logging.getLogger(__name__)
logger.setLevel(level=logging.INFO)
log_handler = logging.FileHandler(LOG_PATH)
log_handler.setLevel(logging.INFO)
log_formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
log_handler.setFormatter(log_formatter)
logger.addHandler(log_handler)

# Reloader: debounced with a Timer, so a burst of changes within DELAY
# seconds triggers a single reload.
def reload_nginx():
  os.system(RELOAD_COMMAND)
  logger.info("nginx is reloaded")

t = Timer(DELAY, reload_nginx)

def trigger_reload_nginx(pathname, action):
  logger.info("nginx monitor is triggered because %s is %s" % (pathname, action))
  global t
  if t.is_alive():
    t.cancel()
  t = Timer(DELAY, reload_nginx)
  t.start()

events = pyinotify.IN_MODIFY | pyinotify.IN_CREATE | pyinotify.IN_DELETE

watcher = pyinotify.WatchManager()
watcher.add_watch(CONF_PATHS, events, rec=True, auto_add=True)

class EventHandler(pyinotify.ProcessEvent):
  def process_default(self, event):
    if event.name.endswith(".conf"):
      action = "changed"
      if event.mask == pyinotify.IN_CREATE:
        action = "created"
      if event.mask == pyinotify.IN_MODIFY:
        action = "modified"
      if event.mask == pyinotify.IN_DELETE:
        action = "deleted"
      trigger_reload_nginx(event.pathname, action)

handler = EventHandler()
notifier = pyinotify.Notifier(watcher, handler)

# Start
logger.info("Start Monitoring")
notifier.loop()

Source: coolops -> https://www.coolops.cn/posts/kubernetes-nginx-hot-update/