A step-by-step implementation path to a cross-regional stretched IrisCluster with mirroring, using the InterSystems Kubernetes Operator (IKO), Google Cloud Platform, and Tailscale.
I am giving this distraction the code name "Compliment Sandwich" for a reason yet to be realized, but I'd rather the community go right for the jugular and shoot holes in a solution that implements WireGuard-based connectivity for our workloads in general. I would like to refine it as a fall project leading up to KubeCon in Atlanta, and if I miss the mark, I'll get it done before Amsterdam.
The journey to WireGuard started with Cilium for us, where ClusterMesh and node-to-node encryption made it apparent that encryption and compliance checkboxes for our workloads were only a couple of configuration flags away. Going down the WireGuard rabbit hole also aided the overall divorce from cloud vendors for $ite-to-$ite VPNs and development compute, and let me watch the Super Bowl in Puerto Rico unabated, as they say.
In a paragraph, explain Tailscale
Tailscale leverages WireGuard as its secure, encrypted tunnel layer, but wraps it in a powerful overlay that handles the hard parts for you. Under the hood, each device runs the WireGuard protocol (specifically a fork of the wireguard-go implementation) to establish encrypted point-to-point tunnels. Tailscale’s coordination (control) plane—not part of the tunnel—handles identity-based authentication, key discovery, and automated configuration, allowing devices to connect directly in a peer-to-peer mesh even when behind NATs or firewalls.
My Version
Tailscale is a clown suit for wireguard-go, like GitHub is for git, that flattens a network of participating devices.
Let's Go
gcloud config set project ikoplus
Create Compute
In a GCP project "ikoplus", enable Compute Engine.
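If you prefer the CLI to clicking around the console, the same thing can be done with gcloud:
gcloud services enable compute.googleapis.com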
🇺🇸
# us-east1-b is just an example US zone; pick your own
gcloud compute instances create cks-master --zone=us-east1-b \
--machine-type=e2-medium \
--image=ubuntu-2404-noble-amd64-v20250530 \
--image-project=ubuntu-os-cloud \
--boot-disk-size=200GB
🇬🇧
# europe-west2 is London; the zone and instance name here are examples, pick your own
gcloud compute instances create cks-worker --zone=europe-west2-c \
--machine-type=e2-medium \
--image=ubuntu-2404-noble-amd64-v20250530 \
--image-project=ubuntu-os-cloud \
--boot-disk-size=200GB
The two instances should now be visible in the Cloud Console.
Add the nodes to your Tailnet
Now for `region` in [ 🇺🇸 , 🇬🇧 ]; do
curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/noble.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/noble.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list
sudo apt-get update
sudo apt-get install tailscale
sudo tailscale up
tailscale ip -4
You should get a prompt (a login URL) to authenticate each node into your Tailnet.
Once both are authenticated, the Tailnet should show the two nodes in the admin console.
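Before building the cluster, it's worth a quick sanity check that the nodes actually see each other over the Tailnet. A sketch, run from either node (the peer name is whatever yours is called):
tailscale status # both machines should be listed with their 100.x.y.z addresses
tailscale ping cks-master # confirms a direct WireGuard path to the peer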
Install Kubernetes
Using Canonical Kubernetes (the k8s snap), the cluster setup is quite literally a "snap".
🇺🇸
sudo snap install k8s --classic
sudo k8s bootstrap --address 100.127.21.93 # use your tailnet address here for kube api
sudo k8s get-join-token --worker
👣 🔫
Foot gun: you may or may not hit this before the next step, but... I was tailing the logs after hitting some TLS issues during the join and found a time drift of over 5 seconds between the two boxes, despite using chrony and Google time synchronization. To mitigate this, I made the NTP setup on both nodes a bit more aggressive by adding these two directives to /etc/chrony/chrony.conf:
maxslewrate 2000
makestep 0.25 0.5
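After editing, restart chrony on both nodes and check the offset; chronyc tracking reports how far the system clock sits from NTP time:
sudo systemctl restart chrony
chronyc tracking # the "System time" line shows the current offset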
🇬🇧
sudo snap install k8s --classic
sudo k8s join-cluster eyJ0b2tl-tokenfromcommandaboveonmaster
Joining the cluster. This may take a few seconds, please wait.
We should now be able to bask in the glory of a Kubernetes cluster with two nodes, over the pond.
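A quick look from the master should show both nodes Ready (node names here assume the instance names from the compute step):
sudo k8s kubectl get nodes -o wide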
Tailscale: Create an OAuth Client and Install the Operator
This is a bonus for now, but the operator allows us to expose to the Tailnet any Service in the cluster that we decorate with a tag for the Tailscale operator.
OAuth Client
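In the Tailscale admin console, create an OAuth client for the operator (per Tailscale's Kubernetes operator docs) and export its credentials for the Helm install below; the values here are placeholders:
export OAUTH_CLIENT_ID="<your-oauth-client-id>"
export OAUTH_CLIENT_SECRET="<your-oauth-client-secret>"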
Install the Operator
helm repo add tailscale https://pkgs.tailscale.com/helmcharts
helm repo update
helm upgrade \
--install \
tailscale-operator \
tailscale/tailscale-operator \
--namespace=tailscale \
--create-namespace \
--set-string oauth.clientId=${OAUTH_CLIENT_ID} \
--set-string oauth.clientSecret=${OAUTH_CLIENT_SECRET} \
--set-string apiServerProxyConfig.mode="true" \
--wait
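A quick check that the operator actually came up:
kubectl get pods -n tailscale # the operator pod should be Running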
Our Tailnet now includes the Kubernetes operator alongside the two nodes; as you can see, my laptop is also a member.
Install the InterSystems Kubernetes Operator
sween @ fhirwatch-pop-os ~/Desktop/ISC/IKO/iris_operator_amd-3.10.1.100-unix/iris_operator_amd-3.10.1.100/chart/iris-operator
└─ $ ▶ helm install iko . --kubeconfig kube.config
NAME: iko
LAST DEPLOYED: Tue Sep 9 14:41:51 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To verify that InterSystems Kubernetes Operator has started, run:
kubectl --namespace=default get deployments -l "release=iko, app=iris-operator-amd"
Establish Zones
A simple concept, but a powerful one, is the establishment of zones. At the end of the day, for IKO to do its magic, this simply means you label each node with a zone, and IKO places the mirror members across those zones to reach the desired state across the pond.
For this, we created the zones "us" and "uk" and labelled each node accordingly (a sketch of the labelling commands follows the fragment below).
apiVersion: v1
kind: Node
🇺🇸
topology.kubernetes.io/zone: us
🇬🇧
topology.kubernetes.io/zone: uk
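A sketch of the labelling, assuming the node names from the compute step (swap in your own):
kubectl label node cks-master topology.kubernetes.io/zone=us
kubectl label node cks-worker topology.kubernetes.io/zone=uk
kubectl get nodes -L topology.kubernetes.io/zone # confirm each node carries its zone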
The Stretched IrisCluster
The abbreviated IrisCluster resource definition/topology is in the teaser below, but let's call out the cool spots.
Create an IRIS for Health mirror across zones, and save me a flight:
mirrorMap: primary,backup
mirrored: true
preferredZones:
- us
- uk
Please secure the Web Gateways, ECP, the superserver, and the mirror agents with TLS:
tls:
  common:
    secret:
      secretName: cert-secret
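One way to produce that cert-secret, assuming you already have PEM files for the certificate, key, and CA on hand:
kubectl create secret generic cert-secret -n ikoplus --from-file=tls.crt --from-file=tls.key --from-file=ca.pem
The full manifest follows, secrets and all.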
apiVersion: intersystems.com/v1beta1
kind: IrisCluster
metadata:
  name: ikoplus-crossregion-1
  namespace: ikoplus
spec:
  imagePullSecrets:
    - name: containers-pull-secret
  licenseKeySecret:
    name: license-key-secret
  tls:
    common:
      secret:
        secretName: cert-secret
  topology:
    arbiter:
      image: containers.intersystems.com/intersystems/arbiter:2025.1
    data:
      compatibilityVersion: 2025.1.0
      image: containers.intersystems.com/intersystems/irishealth:2025.1
      mirrorMap: primary,backup
      mirrored: true
      preferredZones:
        - us
        - uk
      podTemplate:
        spec:
          securityContext:
            fsGroup: 51773
            runAsGroup: 51773
            runAsNonRoot: true
            runAsUser: 51773
    webgateway:
      replicas: 1
      alternativeServers: LoadBalancing
      applicationPaths:
        - /*
      ephemeral: true
      image: containers.intersystems.com/intersystems/webgateway-lockeddown:2025.1
      loginSecret:
        name: webgateway-secret
      type: apache-lockeddown
  serviceTemplate:
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local
---
apiVersion: v1
data:
  iris.key: W0NvbmZp...
kind: Secret
metadata:
  name: license-key-secret
  namespace: ikoplus
---
apiVersion: v1
data:
  password: U9lTb
  username: Ub3rEa
kind: Secret
metadata:
  name: webgateway-secret
  namespace: ikoplus
---
apiVersion: v1
data:
  .dockerconfigjson: eyJhd...
kind: Secret
metadata:
  name: containers-pull-secret
  namespace: ikoplus
type: kubernetes.io/dockerconfigjson
---
apiVersion: v1
data:
  ca.pem: LS0tLS1CRUd...
  tls.crt: LS0tLS1CRUd...
  tls.key: LS0tLS1CRUd...
kind: Secret
metadata:
  creationTimestamp: null
  name: cert-secret
  namespace: ikoplus
---
apiVersion: v1
kind: Namespace
metadata:
  name: ikoplus
Deploy it!
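A sketch of the rollout; the manifest filename is whatever you saved the YAML above as:
kubectl create namespace ikoplus # the Namespace doc sits at the bottom of the manifest, so create it up front
kubectl apply -f ikoplus-crossregion.yaml
kubectl get pods -n ikoplus -o wide -w # watch the mirror members land on opposite sides of the pond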
Wooo!
Checking our Work
Now let's prove it, using curl to report our geolocation: "fly" to the US by shelling into the data pod on the US node and running `curl ipinfo.io`...
🇺🇸
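A sketch of the check; the pod name is illustrative, so list yours first:
kubectl get pods -n ikoplus -o wide # note which data pod landed on the US node
kubectl exec -it -n ikoplus <us-data-pod> -- curl -s ipinfo.io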
Then fly back to the UK by shelling into the data pod on the UK node and running `curl ipinfo.io`:
🇬🇧
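And the same against the other mirror member:
kubectl exec -it -n ikoplus <uk-data-pod> -- curl -s ipinfo.io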
🎤 ⬇
So that's it for now; stay tuned for more elaboration on the subject.
💪 Special thanks to Mark Hayden from ISC for helping me wrap my head around "zones" in an understandable fashion in the IKO topology!