Quick Tip: Exposing K8s Services between GCP projects through VPC peering

One of the simplest and most secure ways to get two internal services in two different GCP projects communicating with each other is through VPC peering.

However, if the service you need to talk to is running on an older GKE cluster that was not created in VPC-native mode, its Service address range may not be exposed over the peering. Everything looks green and there are no errors or conflicts, but the Service address range is simply missing from the list of exchanged routes. (VPC peering only exchanges subnet routes, and in a routes-based cluster the Service range doesn't belong to any subnet, so there is nothing to export.)

Unfortunately, both the k8s Service address range and the VPC-native mode toggle are immutable once the cluster is created. A simple and effective way to work around this is to create an internal load balancer by adding the networking.gke.io/load-balancer-type: "Internal" annotation to your Service, since the LB's IP is allocated from a regular subnet that peering does export. (You can check the AWS equivalent annotation here.)

If you’re using Helm charts, you can then expose your service like this:

service:
  enabled: true
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
  type: LoadBalancer
  externalTrafficPolicy: Local
  port: 80
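
As a usage sketch, assuming these values live in a values.yaml file and your release and chart are named example and example-chart (both hypothetical), you would roll this out with something like:

helm upgrade --install example ./example-chart -f values.yaml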

Or with a vanilla k8s manifest:

apiVersion: v1
kind: Service
metadata:
  name: example-internal-service
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  selector:
    app: example
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
  externalTrafficPolicy: Local
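
Assuming the manifest above is saved as example-internal-service.yaml (the file name is just for illustration), apply it as usual:

kubectl apply -f example-internal-service.yaml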

After creating the LB, you can verify that its internal IP is now included in the address ranges exposed over the VPC peering.
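
If you prefer checking from the CLI rather than the console, a sketch along these lines should list the routes your consumer project receives over the peering (PEERING_NAME, NETWORK_NAME, and REGION are placeholders for your own setup):

gcloud compute networks peerings list-routes PEERING_NAME \
    --network=NETWORK_NAME \
    --region=REGION \
    --direction=INCOMING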

You can also check connectivity from the other project. First fetch the LB's internal IP:

kubectl describe services example-internal-service | grep "LoadBalancer Ingress:"

LoadBalancer Ingress:     10.3.22.96

And then send an HTTP request to the /healthz endpoint natively exposed by the LB:

~$ curl 10.3.22.96:80/healthz
OK

If you’re still having connectivity issues, double-check your firewall rules: traffic arriving over the peering must be explicitly allowed into the LB’s network, since GCP denies ingress by default.
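
As a rough sketch, an ingress allow rule like the one below would let HTTP traffic from the peered VPC reach the LB (the rule name allow-peered-http, NETWORK_NAME, and the 10.0.0.0/8 source range are assumptions; substitute your peer’s real CIDR):

gcloud compute firewall-rules create allow-peered-http \
    --network=NETWORK_NAME \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80 \
    --source-ranges=10.0.0.0/8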

Cya ~