Upgrading to Tator 0.2.19

· 5 min read
Hugh Enxing

This blog post will go over the procedure for upgrading a Tator deployment to the latest release, 0.2.19. This release updates the versions of some dependencies, which requires more user action than the standard upgrade process. It assumes that the Tator deployment was set up using the install script and is running on microk8s.

Uninstall Tator

First, uninstall Tator by running make clean from the tator directory:

tator$ make clean
kubectl delete apiservice v1beta1.metrics.k8s.io
kubectl delete all --namespace kubernetes-dashboard --all
helm uninstall tator
pod "kubernetes-dashboard-5b5bd8dd4c-cwbqb" deleted
pod "dashboard-metrics-scraper-665b55f67d-wct8m" deleted
service "kubernetes-dashboard" deleted
service "dashboard-metrics-scraper" deleted
deployment.apps "kubernetes-dashboard" deleted
deployment.apps "dashboard-metrics-scraper" deleted
replicaset.apps "kubernetes-dashboard-5b5bd8dd4c" deleted
release "tator" uninstalled
tator$
note

This will not result in any data loss.
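If you want to confirm the release was removed before continuing, helm should no longer list it (an optional check, not part of the original procedure):

helm list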

Uninstall microk8s

The supported version of Kubernetes has changed from 1.19 to 1.22, so first uninstall microk8s:

sudo snap remove microk8s

And then install it from the 1.22/stable channel:

sudo snap install microk8s --classic --channel=1.22/stable
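You can confirm that the snap was installed from the expected channel (optional):

snap list microk8s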

Troubleshooting

If the snap install fails, it is sometimes possible to break it into two steps by downloading the snap first and then installing it locally:

snap download microk8s --channel=1.22/stable
snap ack ./microk8s_3203.assert \
&& sudo snap install ./microk8s_3203.snap --classic
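The revision number (3203 above) depends on the snap release current at the time of download. A more general form of the same two steps, assuming only one microk8s snap has been downloaded into the current directory, is:

snap download microk8s --channel=1.22/stable
snap ack ./microk8s_*.assert \
&& sudo snap install ./microk8s_*.snap --classic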

Find your host IP address

If you do not already know your host IP address, you can find it with the ip command:

tator$ ip addr show eno1
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UP group default qlen 1000
link/ether c1:16:e5:96:a7:da brd ff:ff:ff:ff:ff:ff
altname enp0s31f6
inet 192.168.1.213/24 brd 192.168.1.255 scope global dynamic eno1
valid_lft 73699sec preferred_lft 73699sec
inet6 fe80::fe61:1e69:7aff:5ead/64 scope link
valid_lft forever preferred_lft forever
tator$

Here, our IP address is 192.168.1.213; save this for later. Your interface may differ from the example, eno1, so check which one is in use on your system.
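If you are not sure which interface is in use, listing all interfaces in brief form, or checking which one carries the default route, can help (optional, not part of the original instructions):

ip -brief addr show
ip route show default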

Set up kubectl

We also need to upgrade kubectl to match the version of microk8s. Start by checking that microk8s is ready:

tator$ sudo microk8s status --wait-ready
microk8s is running
...
tator$

And then download and configure the new version of kubectl:

curl -sLO "https://dl.k8s.io/release/v1.22.9/bin/linux/amd64/kubectl" \
&& chmod +x kubectl \
&& sudo mv ./kubectl /usr/local/bin/kubectl \
&& mkdir -p $HOME/.kube \
&& sudo chmod 777 $HOME/.kube \
&& sudo microk8s config > $HOME/.kube/config
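To confirm that the new kubectl is the one on your PATH and reports the expected version, you can run (optional):

kubectl version --client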

Set up microk8s services

Set up the necessary microk8s services for Tator:

yes $HOST_IP-$HOST_IP | sudo microk8s enable dns metallb storage
kubectl label nodes --overwrite --all cpuWorker=yes webServer=yes dbServer=yes
note

Replace $HOST_IP with the IP address from earlier.
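For example, with the IP address found earlier, the first command becomes:

yes 192.168.1.213-192.168.1.213 | sudo microk8s enable dns metallb storage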

Set up Argo

The final setup step is to configure Argo for Tator to use:

kubectl create namespace argo --dry-run=client -o yaml | kubectl apply -f - \
&& kubectl apply -n argo -f "https://github.com/argoproj/argo-workflows/releases/download/v3.3.1/install.yaml" \
&& kubectl apply -n argo -f argo/workflow-controller-configmap.yaml \
&& kubectl apply -n argo -f argo/argo-server.yaml
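The Argo pods take a moment to start; if you want to confirm they come up before continuing, check the argo namespace (optional):

kubectl get pods -n argo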

Edit values.yaml

Open the configuration file at helm/tator/values.yaml and add the following:

okta:
  enabled: false

Then update the redis section to the following, since the Redis helm chart is upgraded in this release:

redis:
  # Enable this to install the redis helm chart.
  enabled: true
  architecture: standalone
  master:
    persistence:
      enabled: false
  slave:
    persistence:
      enabled: false
  nodeSelector:
    dbServer: "yes"
  usePassword: false
  auth:
    enabled: false
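YAML is whitespace sensitive, so it can be worth validating helm/tator/values.yaml before installing. One quick way to do so, assuming Python 3 with PyYAML is available on the host (this check is not part of the official procedure):

python3 -c "import yaml; yaml.safe_load(open('helm/tator/values.yaml'))"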

Install Tator

Finally, check out the latest version of Tator and install it:

git fetch
git checkout 0.2.19
git submodule update
cd ui && npm install && cd ..
make main/version.py
make cluster-deps
make cluster-install
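Once the install finishes, you can watch the pods come up before using the deployment (optional check):

kubectl get pods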

Automated upgrade

The following script collects the manual steps presented above and can be run to the same effect; it does NOT include the edits to values.yaml, which still need to be made by hand:

#!/bin/bash

# This may not return without error, run before setting -e
make clean

# Exit on error.
set -e

# Define environment variables.
KUBECTL_URL="https://dl.k8s.io/release/v1.22.9/bin/linux/amd64/kubectl"
ARGO_MANIFEST_URL="https://github.com/argoproj/argo-workflows/releases/download/v3.3.1/install.yaml"

# Install snap
sudo snap remove microk8s
sudo snap install microk8s --channel=1.22/stable --classic

# Get IP address if it is not set explicitly.
# Credit to https://serverfault.com/a/1019371
if [[ -z "${HOST_INTERFACE}" ]]; then
HOST_INTERFACE=$(ip -details -json address show | jq --join-output '
.[] |
if .linkinfo.info_kind // .link_type == "loopback" then
empty
else
.ifname ,
( ."addr_info"[] |
if .family == "inet" then
" " + .local
else
empty
end
),
"\n"
end
')
fi
if [[ -z "${HOST_IP}" ]]; then
HOST_IP=$(ip addr show $(echo $HOST_INTERFACE | awk '{print $1}') | awk '$1 == "inet" {gsub(/\/.*$/, "", $2); print $2}')
fi
echo "Using host interface $HOST_INTERFACE."
echo "Using host IP address $HOST_IP."


# Get docker registry if it is not set explicitly.
if [[ -z "${DOCKER_REGISTRY}" ]]; then
DOCKER_REGISTRY=cvisionai
fi
echo "Using docker registry $DOCKER_REGISTRY."

# Wait for microk8s to be ready.
echo "Waiting for microk8s to be ready..."
sudo microk8s status --wait-ready
echo "Ready!"

# Set up kubectl.
echo "Setting up kubectl."
curl -sLO $KUBECTL_URL \
&& chmod +x kubectl \
&& sudo mv ./kubectl /usr/local/bin/kubectl \
&& mkdir -p $HOME/.kube \
&& sudo chmod 777 $HOME/.kube \
&& sudo microk8s config > $HOME/.kube/config
kubectl describe nodes

# Enable microk8s services.
echo "Setting up microk8s services."
yes $HOST_IP-$HOST_IP | sudo microk8s enable dns metallb storage
kubectl label nodes --overwrite --all cpuWorker=yes webServer=yes dbServer=yes

# Set up argo.
echo "Setting up argo."
kubectl create namespace argo --dry-run=client -o yaml | kubectl apply -f - \
&& kubectl apply -n argo -f $ARGO_MANIFEST_URL \
&& kubectl apply -n argo -f argo/workflow-controller-configmap.yaml \
&& kubectl apply -n argo -f argo/argo-server.yaml

# Install tator.
echo "Installing tator."
git fetch
git checkout 0.2.19
git submodule update
cd ui && npm install && cd ..
make main/version.py
make cluster-deps
make cluster-install

# Print success.
echo "Upgrade completed successfully!"