Building an Istio Service Mesh Across Anthos GKE and GKE On-Prem

July 15, 2021

It’s tough to browse much of today’s microservices landscape without stumbling upon or thinking about a service mesh. Istio brings service mesh capabilities, service discovery, and visibility to microservices architectures, including Kubernetes. During a recent event I built a demo showcasing an Istio-based service mesh stretched across two different environments, leveraging nothing but the Istio Ingress Gateway services in GKE (Google Kubernetes Engine) and GKE On-Prem (Google’s new on-premises offering).

To showcase the setup and test connectivity across the clusters, I used Google’s microservices-demo repository. The application is an artificial online store made up of multiple individual microservices, intended to showcase deploying and monitoring applications in Kubernetes.

Istio in a Shared Control Plane Across GKE and GKE On-Prem Clusters

The demo environment is based on the Shared control plane deployment published by the Istio team. The TL;DR of this deployment model is that it builds a multicluster topology connected via gateways only: Istio then routes traffic to the appropriate endpoint using location-aware service routing.
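Under the hood, this location-awareness is driven by a meshNetworks map that tells Istio which gateway fronts each network, so cross-network traffic is sent to a reachable gateway address rather than an unroutable pod IP. A rough sketch of the shape of that configuration (the network names and registry values match the install flags used later in this post; the gateway addresses are placeholders):

{% highlight yaml %}
# Sketch of the meshNetworks structure configured via helm --set flags below.
meshNetworks:
  network1:                         # the GKE On-Prem side
    endpoints:
    - fromRegistry: Kubernetes      # endpoints come from the local registry
    gateways:
    - address: <CLUSTER1_IP>        # gateway reachable from the other network
      port: 443
  network2:                         # the GKE side
    endpoints:
    - fromRegistry: n2-k8s-config   # endpoints come from the remote kubeconfig secret
    gateways:
    - address: 0.0.0.0              # patched to the real gateway IP after install
      port: 443
{% endhighlight %}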

<img src="https://res.cloudinary.com/arctiq/image/upload/q_auto/posts/diagram.svg" alt="istio-diagram" />

Shared Istio control plane topology spanning multiple Kubernetes clusters using gateways

In this deployment, the cluster on the left runs on GKE On-Prem, while the cluster on the right runs on GKE. The GKE On-Prem cluster gets the full Istio stack, whereas the remote GKE cluster only gets an ingress gateway, Citadel, and the sidecar injector. From an Istio perspective, the GKE cluster is managed and controlled entirely from the full deployment on-prem.
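Once both installs below are done, a quick way to sanity-check this split is to compare the deployments in each cluster (exact component names vary slightly between Istio versions, so treat this as a rough guide):

{% highlight shell %}
# On GKE On-Prem: expect the full control plane (pilot, policy, telemetry,
# citadel, galley, sidecar injector, ingress gateway, plus the addons).
kubectl get deployments -n istio-system

# On GKE: expect only the remote footprint
# (citadel, sidecar injector, ingress gateway).
kubectl get deployments -n istio-system
{% endhighlight %}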

Let’s jump right into the deployment!

Prerequisites

  • At least 2 Kubernetes clusters (GKE and GKE On-Prem in our case)
  • Admin access to the clusters to deploy Istio
  • kubectl installed locally for management
  • IP address for the Load Balancer service for Istio (only needed for GKE On-Prem clusters)
  • IP address on the primary cluster accessible to the secondary cluster
  • IP address on the secondary cluster accessible to the primary cluster
  • (Optional) A front-end certificate for the Istio Ingress gateway on the primary cluster (you can use cert-manager if desired; a sketch follows this list)
  • External sub-domain with DNS entries completed accordingly (this will vary based on your deployment)
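If you go the cert-manager route for the optional front-end certificate, here is a minimal sketch of a Certificate that produces the istio-ingressgateway-certs secret consumed later in this post. This assumes cert-manager is already installed and that a ClusterIssuer named letsencrypt-prod exists, both of which are outside the scope of this post; adjust the API version to match your cert-manager release:

{% highlight yaml %}
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ingress-wildcard
  namespace: istio-system
spec:
  secretName: istio-ingressgateway-certs   # secret the ingress gateway mounts
  dnsNames:
  - "*.example.com"                        # your wildcard sub-domain
  issuerRef:
    name: letsencrypt-prod                 # hypothetical ClusterIssuer
    kind: ClusterIssuer
{% endhighlight %}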

In our demo environment we have a sub-domain with a wildcard DNS entry that resolves to a routable IP address, which hosts the Istio Ingress Gateway in the primary cluster. This can be customized accordingly.
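Before installing anything, it is worth confirming the wildcard actually resolves; for example, assuming a sub-domain of example.com:

{% highlight shell %}
# Any hostname under the wildcard should return the routable CLUSTER1_IP
dig +short hipster-store.example.com
dig +short anything-else.example.com
{% endhighlight %}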

Install and Prep Istio in GKE On-Prem (Local Cluster)

Perform the Istio deployment against the GKE On-Prem cluster (Cluster 1 in the diagram) as follows:

{% highlight shell %}
export GKEOP_LB_IP=<LoadBalancer IP for the Istio ingress gateway on GKE On-Prem>
export CLUSTER1_IP=<IP address on the primary cluster accessible to the secondary cluster>
export WILDCARD_SUBDOMAIN=<Set this to a wildcard *.subdomain that resolves to the CLUSTER1_IP above>

git clone https://github.com/istio/istio.git
cd istio

kubectl create namespace istio-system

helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -

kubectl create secret generic cacerts -n istio-system \
  --from-file=samples/certs/ca-cert.pem \
  --from-file=samples/certs/ca-key.pem \
  --from-file=samples/certs/root-cert.pem \
  --from-file=samples/certs/cert-chain.pem

kubectl create secret generic kiali -n istio-system \
  --from-literal=username=admin \
  --from-literal=passphrase=admin

helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
  --set grafana.enabled=true \
  --set grafana.security.enabled=false \
  --set kiali.enabled=true \
  --set kiali.createDemoSecret=false \
  --set kiali.contextPath="/" \
  --set tracing.enabled=true \
  --set gateways.istio-ingressgateway.loadBalancerIP=${GKEOP_LB_IP} \
  --set sidecarInjectorWebhook.rewriteAppHTTPProbe=true \
  --set global.mtls.enabled=true \
  --set security.selfSigned=false \
  --set global.controlPlaneSecurityEnabled=true \
  --set global.proxy.accessLogFile="/dev/stdout" \
  --set global.meshExpansion.enabled=true \
  --set 'global.meshNetworks.network1.endpoints[0].fromRegistry'=Kubernetes \
  --set 'global.meshNetworks.network1.gateways[0].address'=${CLUSTER1_IP} \
  --set 'global.meshNetworks.network1.gateways[0].port'=443 \
  --set gateways.istio-ingressgateway.env.ISTIO_META_NETWORK="network1" \
  --set global.network="network1" \
  --set 'global.meshNetworks.network2.endpoints[0].fromRegistry'=n2-k8s-config \
  --set 'global.meshNetworks.network2.gateways[0].address'=0.0.0.0 \
  --set 'global.meshNetworks.network2.gateways[0].port'=443 \
  | kubectl apply -f -

# You should have this cert ready and available for this step. Skip if using
# cert-manager or another tool to create and manage the certificate.

kubectl create secret tls \
  --namespace istio-system \
  istio-ingressgateway-certs \
  --cert ./ingress-wildcard.crt \
  --key ./ingress-wildcard.key

# Create the Gateways required to access the application from outside the cluster

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: frontend-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*.${WILDCARD_SUBDOMAIN}"
    tls:
      httpsRedirect: true
  - hosts:
    - "*.${WILDCARD_SUBDOMAIN}"
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
EOF

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cluster-aware-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH
    hosts:
    - "*.local"
EOF
{% endhighlight %}
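Before moving on, a quick sanity check (not part of the original steps) that the control plane is healthy and the ingress gateway picked up the requested load balancer IP:

{% highlight shell %}
# All istio-system pods should be Running and Ready
kubectl get pods -n istio-system

# EXTERNAL-IP should match ${GKEOP_LB_IP}
kubectl get service istio-ingressgateway -n istio-system
{% endhighlight %}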

Install and Prep Istio in GKE (Remote Cluster)

Perform the Istio deployment against the GKE cluster (Cluster 2 in the diagram) as follows:

{% highlight shell %}
kubectl create namespace istio-system

helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -

kubectl create secret generic cacerts -n istio-system \
  --from-file=samples/certs/ca-cert.pem \
  --from-file=samples/certs/ca-key.pem \
  --from-file=samples/certs/root-cert.pem \
  --from-file=samples/certs/cert-chain.pem

helm template --name istio-remote --namespace=istio-system \
  --values install/kubernetes/helm/istio/values-istio-remote.yaml \
  --set global.mtls.enabled=true \
  --set sidecarInjectorWebhook.rewriteAppHTTPProbe=true \
  --set gateways.enabled=true \
  --set security.selfSigned=false \
  --set global.controlPlaneSecurityEnabled=true \
  --set global.createRemoteSvcEndpoints=true \
  --set global.remotePilotCreateSvcEndpoint=true \
  --set global.remotePilotAddress=${CLUSTER1_IP} \
  --set global.remotePolicyAddress=${CLUSTER1_IP} \
  --set global.remoteTelemetryAddress=${CLUSTER1_IP} \
  --set gateways.istio-ingressgateway.env.ISTIO_META_NETWORK="network2" \
  --set global.network="network2" \
  install/kubernetes/helm/istio \
  | kubectl apply -f -

# You should have this cert ready and available for this step. Skip if using
# cert-manager or another tool to create and manage the certificate.

kubectl create secret tls \
  --namespace istio-system \
  istio-ingressgateway-certs \
  --cert ./ingress-wildcard.crt \
  --key ./ingress-wildcard.key

CLUSTER_NAME=$(kubectl config view --minify=true -o jsonpath='{.clusters[].name}')
SERVER=$(kubectl config view --minify=true -o jsonpath='{.clusters[].cluster.server}')
SECRET_NAME=$(kubectl get sa istio-multi -n istio-system -o jsonpath='{.secrets[].name}')
CA_DATA=$(kubectl get secret ${SECRET_NAME} -n istio-system -o jsonpath="{.data['ca\.crt']}")
TOKEN=$(kubectl get secret ${SECRET_NAME} -n istio-system -o jsonpath="{.data['token']}" | base64 --decode)

cat <<EOF > n2-k8s-config
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: ${CA_DATA}
    server: ${SERVER}
  name: ${CLUSTER_NAME}
contexts:
- context:
    cluster: ${CLUSTER_NAME}
    user: ${CLUSTER_NAME}
  name: ${CLUSTER_NAME}
current-context: ${CLUSTER_NAME}
users:
- name: ${CLUSTER_NAME}
  user:
    token: ${TOKEN}
EOF

export CLUSTER2_IP=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
{% endhighlight %}
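As a quick check, the generated kubeconfig should be able to read the GKE cluster (the istio-multi service account has read-only access, which should be enough to list pods), and CLUSTER2_IP should be populated:

{% highlight shell %}
# The token-based kubeconfig works against the remote cluster
kubectl --kubeconfig n2-k8s-config get pods -n istio-system

# Prints the GKE ingress gateway IP needed in the next section
echo ${CLUSTER2_IP}
{% endhighlight %}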

Joining the Two Clusters

At this point Istio is installed in both clusters, but in Cluster 2 (our GKE cluster) the Istio ingress gateway pods will not become ready. This is because the primary cluster has not yet been configured to accept requests from the secondary cluster. In this part we configure the primary cluster to accept those connections and join the two configurations into a single mesh.

Perform these steps against the GKE On-Prem cluster (Cluster 1 in the diagram):

{% highlight shell %}
kubectl create secret generic n2-k8s-secret --from-file n2-k8s-config -n istio-system
kubectl label secret n2-k8s-secret istio/multiCluster=true -n istio-system

echo The ingress gateway of cluster2: address=$CLUSTER2_IP

# Edit the Istio ConfigMap and change the network2 gateway address
# from 0.0.0.0 to the IP of Cluster 2 as output above
kubectl edit cm -n istio-system istio
{% endhighlight %}
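After the edit, the network2 entry inside the configmap’s mesh network configuration should look roughly like this (a sketch of the relevant section only; the rest of the configmap stays untouched):

{% highlight yaml %}
meshNetworks:
  network2:
    endpoints:
    - fromRegistry: n2-k8s-config
    gateways:
    - address: <the CLUSTER2_IP value echoed above>
      port: 443
{% endhighlight %}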

At this point the gateway pods in the secondary cluster should become ready, as they are now able to join the mesh.

Deploying the Hipster-Store Demo Application

The video at the end of this post demonstrates the full deployment: the complete demo application runs on the GKE On-Prem cluster, while only the frontend web service runs in the remote cluster. This showcases the frontend connecting back to the primary cluster for every service it depends on.

Deploy the demo app in the primary cluster

{% highlight shell %}
kubectl create namespace hipster-store
kubectl label namespace hipster-store istio-injection=enabled
kubectl apply -f https://raw.githubusercontent.com/ArctiqTeam/microservices-demo/multimesh/release/istio-manifests.yaml
kubectl apply -f https://raw.githubusercontent.com/ArctiqTeam/microservices-demo/multimesh/release/kubernetes-manifests.yaml

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend-ingress
  namespace: hipster-store
spec:
  hosts:
  - "hipster-store.${WILDCARD_SUBDOMAIN}"
  gateways:
  - istio-system/frontend-gateway
  http:
  - route:
    - destination:
        host: frontend
        port:
          number: 80
        subset: local
      weight: 50
    - destination:
        host: frontend
        port:
          number: 80
        subset: remote
      weight: 50
EOF

# This VirtualService routes 50% of traffic to the local frontend deployment
# and 50% to the remote one.

{% endhighlight %}
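The local and remote subsets referenced by the VirtualService are defined by a DestinationRule shipped in the repo’s istio-manifests. A minimal sketch of its shape, assuming the two frontend deployments are distinguished by a version-style label (the label values here are illustrative):

{% highlight yaml %}
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: frontend
  namespace: hipster-store
spec:
  host: frontend
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL     # mTLS so traffic can transit the cluster-aware gateways
  subsets:
  - name: local
    labels:
      version: local         # hypothetical label on the on-prem deployment
  - name: remote
    labels:
      version: remote        # hypothetical label on the GKE deployment
{% endhighlight %}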

Deploy the frontend to the secondary cluster

{% highlight shell %}
kubectl create namespace hipster-store
kubectl label namespace hipster-store istio-injection=enabled
kubectl apply -f https://raw.githubusercontent.com/ArctiqTeam/microservices-demo/multimesh/release/istio-manifests-remote.yaml
kubectl apply -f https://raw.githubusercontent.com/ArctiqTeam/microservices-demo/multimesh/release/kubernetes-manifests-remote.yaml
{% endhighlight %}
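With both sides deployed, a simple end-to-end check from outside the mesh: hitting the storefront through the primary cluster’s ingress gateway should succeed, and refreshing a few times should alternate between the local and remote frontends per the 50/50 split:

{% highlight shell %}
# Returns the storefront HTML via the frontend-gateway created earlier
curl -sk https://hipster-store.${WILDCARD_SUBDOMAIN}/ | head -n 5
{% endhighlight %}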

Demo Video

Here is a demo video showing the environment we built, along with the traffic routing between GKE and GKE On-Prem through the service mesh:

{% include video name="_jPkVcyejI8" %}

Reach out to learn more about Istio, GKE, Anthos, and more.
