How to reload ConfigMap on Google Kubernetes Engine

Today at work, while using Google Kubernetes Engine (GKE), I ran into a frustrating situation: I updated a config map for a GKE cluster, but the pods weren't picking up the updated config map. Even when I triggered a new deploy, the newly deployed pods were still using the old values from the config map.

I eventually found the solution in a post on dev.to: link

Essentially, what you need to do:

First, you need to open the Cloud Shell. It’s an in-browser terminal that lets you use kubectl on your k8s cluster.
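If you'd rather work from a local terminal instead of Cloud Shell, you can point kubectl at the cluster with gcloud. This is just a sketch: `my-cluster` and `us-central1-a` are placeholders for your own cluster name and zone.

```shell
# Fetch credentials for the cluster and merge them into your kubeconfig,
# so subsequent kubectl commands target this GKE cluster.
# "my-cluster" and "us-central1-a" are placeholders — substitute your own.
gcloud container clusters get-credentials my-cluster --zone us-central1-a

# Verify kubectl is now talking to the right cluster.
kubectl config current-context
```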

Then, get the service name:

$ kubectl get services

NAME         TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)          AGE
web          NodePort       10.180.1.0      <none>         8000:30997/TCP   640d
workers      NodePort       10.180.3.44     <none>         8000:32161/TCP   640d

Then, copy the name of the service you want to apply the config map changes to, and scale the deployment with the same name down to 0 replicas.

Please note: the service will be unavailable while it's scaled down, so probably don't do this in prod.

$ kubectl scale deployment web --replicas=0

After you scale it down, look at how many pods are running, and wait for it to be done terminating:

$ kubectl get pods

NAME                         READY   STATUS      RESTARTS   AGE
web-7b977d6dd9-5xmkn         1/1     Terminating 0          23h
workers-568db87799-fjhx8     1/1     Running     43         4d1h
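Instead of re-running `kubectl get pods` by hand, you can block until the old pods are actually gone. A sketch: it assumes the pods carry an `app=web` label, which may differ in your cluster.

```shell
# Wait until every pod matching the label selector has been deleted,
# or give up after two minutes. The app=web label is an assumption —
# check your deployment's real labels with: kubectl get pods --show-labels
kubectl wait --for=delete pod -l app=web --timeout=120s
```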

When the pods for the service you just scaled down are done terminating, you just need to scale it back up again.

$ kubectl scale deployment web --replicas=1

That’s it! When it comes back online, it will have the latest config map.

Even if you don't do this, the pods will pick up the config map changes eventually; it just takes an indeterminate amount of time (at least in my experience). So this is not the only way to apply config map changes to a service, but it is what I do when I need the changes right now during development.
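For completeness: on reasonably recent clusters (kubectl 1.15+), a rolling restart achieves the same refresh without the downtime of scaling to zero, which makes it the safer option outside of development:

```shell
# Replace the deployment's pods one at a time; new pods mount the
# updated config map while the old ones keep serving until replaced.
kubectl rollout restart deployment web

# Optionally block until the rollout has finished.
kubectl rollout status deployment web
```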
