Modern applications built using microservices patterns are distributed and dynamic by nature. Deploying these applications to Kubernetes clusters tightly couples the application and cluster together. Increasingly, customers are asking for the ability to deploy applications across clusters to allow for easier upgrades and migrations and to break down isolation boundaries. However, bridging the gap between clusters requires platform and operations teams to implement complex routing and maintain a centralized service registry. This can be complicated, time-consuming, and error-prone. To work around these issues, some customers run third-party solutions that orchestrate service discovery across clusters. However, these tools must be deployed and managed as an additional application dependency, which increases infrastructure, development, and maintenance costs.

Amazon is announcing an open source project, the AWS Cloud Map MCS Controller for K8s, distributed under the Apache License, Version 2.0. It provides a Kubernetes-native service discovery capability that works across Kubernetes (K8s) clusters. Multicluster service discovery allows applications to understand what services exist in an environment so that they can discover and communicate with each other, even when they are deployed across several Kubernetes clusters. This simplifies building and operating services that run across multiple Kubernetes clusters.

An open source solution

This functionality is enabled via the AWS Cloud Map MCS Controller for K8s, an open source implementation of the Kubernetes proposed API, KEP-1645: Multicluster Services API. We recognize the Kubernetes community’s strong preference for open source solutions; therefore, open sourcing the implementation reflects the shared effort to move the emerging multicluster specification from alpha to beta stage. Furthermore, the development community will be able to take our implementation and adjust it to work with other service registries for on-premises use cases.

AWS Cloud Map

AWS Cloud Map is a fully managed service discovery service for cloud resources and application components. It helps your applications connect to the correct endpoints, making your deployments safer and reducing the complexity of your infrastructure code. The AWS Cloud Map MCS Controller for K8s is the AWS implementation of the Multicluster Services API, based on AWS Cloud Map.

New CRDs: ServiceExport & ServiceImport

The Kubernetes community proposed a new API extension called the Multicluster Services API which extends the Kubernetes service object beyond a single cluster. It introduces two new Custom Resource Definitions (CRDs), ServiceExport and ServiceImport.

  • ServiceExport
    • A ServiceExport is associated with a Kubernetes service object. Creating a ServiceExport for a service marks it for export beyond its cluster.
  • ServiceImport
    • The controller watches for ServiceExports to be created. A ServiceExport in one cluster results in a corresponding ServiceImport in the importing clusters. You can find the EndpointSlices by describing the ServiceImport.
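For example, exporting a service requires only a small manifest. The following sketch assumes a service named my-service in a namespace named example (the names are illustrative); the apiVersion reflects the alpha stage of the Multicluster Services API:

```yaml
# Hypothetical example: export "my-service" in namespace "example".
# The controller creates a matching ServiceImport in importing clusters.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  namespace: example
  name: my-service
```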

In this post, we will walk you through how you can use AWS Cloud Map MCS Controller for K8s to create a multicluster DNS service name that can be accessed within and across clusters seamlessly. This capability can lead to reduced impact from failures, automatic failover functionality, easier Kubernetes upgrades, and security and isolation between teams within an organization.

How it works

Both clusters run the AWS Cloud Map MCS Controller for K8s. The controller is responsible for monitoring the Kubernetes api-server and synchronizing the state of the cluster with AWS Cloud Map. All clusters register their services and endpoints with AWS Cloud Map as specified by the ServiceExport. In the following diagram, services C and D from the Kubernetes 1 cluster are exported to AWS Cloud Map. The services are then imported into the Kubernetes 2 cluster. In this case, you can think of AWS Cloud Map as a repository where services are stored and from which they are imported into other clusters.

This image shows a diagram depicting two Kubernetes clusters and a central AWS Cloud Map. Kubernetes 1 has Service C and Service D, along with a Cloud Map MCS Controller. Kubernetes 2 has Service A and B, and imports Services C and D from the AWS Cloud Map. Kubernetes 2 also has a Cloud Map MCS Controller. The AWS Cloud Map has Service A, B, C, and D. Services A and B are exported from the Kubernetes 2 cluster, while Services C and D are exported from the Kubernetes 1 cluster.

Getting started with AWS Cloud Map Multicluster Service Controller for K8s


To get started with the AWS Cloud Map MCS Controller for K8s, first ensure that you have two Amazon Elastic Kubernetes Service (Amazon EKS) clusters, each running Kubernetes 1.21+, and that they can communicate with each other. In this example, we will use Amazon Virtual Private Cloud (Amazon VPC) peering, but this will also work if your Amazon EKS clusters reside in the same VPC. To test connectivity between clusters, you can use the VPC Reachability Analyzer, a configuration analysis tool that lets you perform connectivity testing between a source resource and a destination resource in your VPCs.

Each Amazon EKS node needs permissions to communicate with AWS Cloud Map. Assign AWS Cloud Map permissions to the Amazon EKS nodes by creating a new IAM role or by adding permissions to the existing role assumed by the nodes. For testing purposes, use the AWS managed policy AWSCloudMapFullAccess. Ideally, you should use IAM Roles for Service Accounts to associate AWS Identity and Access Management (IAM) permissions with the pods directly rather than with the nodes.
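For quick testing, attaching the managed policy to the node role can be done with a command along these lines (the role name is a placeholder that you must replace with your own):

```
aws iam attach-role-policy \
  --role-name <your-eks-node-role> \
  --policy-arn arn:aws:iam::aws:policy/AWSCloudMapFullAccess
```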

Once you’re able to access the cluster and run kubectl commands, install the controllers on both clusters by running the following:

kubectl apply -k ""

CoreDNS is a DNS server that runs within your Amazon EKS clusters. In the current release, you must update CoreDNS so that exported services can be resolved under the multicluster name <servicename>.<namespace>.svc.clusterset.local in addition to the cluster-local name <servicename>.<namespace>.svc.cluster.local. Three updates need to be made to CoreDNS and are detailed below.

  1. Update the CoreDNS ConfigMap by running the following command and adding multicluster clusterset.local to the Corefile:

kubectl edit configmap coredns -n kube-system

Shows output from kubectl in JSON format, highlighting that a multicluster entry for clusterset.local has been added.
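After the edit, the Corefile in the ConfigMap should contain a multicluster entry similar to the following sketch. Only the multicluster line is new; the surrounding plugin list reflects a typical Amazon EKS Corefile and may differ slightly in your cluster:

```
.:53 {
    errors
    health
    multicluster clusterset.local
    kubernetes cluster.local in-addr.arpa ip6.arpa {
      pods insecure
      fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
```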
  2. Update the ClusterRole permissions for CoreDNS by running the following command and adding the permissions included below:

kubectl edit clusterrole system:coredns

- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
  - get
  - create
  - update
- apiGroups:
  - multicluster.x-k8s.io
  resources:
  - serviceimports
  verbs:
  - create
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - multicluster.x-k8s.io
  resources:
  - serviceexports
  verbs:
  - get
  - list
  - patch
  - update
  - watch
  3. Lastly, update the CoreDNS image to one that includes the multicluster plugin by running the following:
kubectl set image --namespace kube-system deployment.apps/coredns

CoreDNS, by default, supports DNS resolution within a single cluster. Adding the multicluster plugin enables DNS resolution across clusters. You also have the option to build your own image, but doing so is out of scope for this blog.

The update will change the DNS name of the service in the following way:

The cluster-local DNS service discovery hostname remains <servicename>.<namespace>.svc.cluster.local, and the multicluster DNS service discovery hostname is now <servicename>.<namespace>.svc.clusterset.local.

Create the namespace

After the prerequisites are complete, create the namespace in each cluster where your applications will reside.

On the first cluster, create the Kubernetes Namespace.

kubectl create namespace example

On the second cluster, create the Kubernetes Namespace.

kubectl create namespace example

The following three commands will install the sample Kubernetes Deployment, Service, and ServiceExport objects. Run these in the first cluster only.

kubectl apply -f

kubectl apply -f

kubectl apply -f
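The three manifests referenced above (URLs elided) create a Deployment, a Service, and a ServiceExport along the following lines. This is an illustrative sketch: the names my-service and example match the service accessed later in this post, but the image, replica count, and port are assumptions:

```yaml
# Hypothetical sample application; image, replicas, and port are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
  namespace: example
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: app
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: example
spec:
  selector:
    app: my-service
  ports:
  - port: 80
---
# Exporting the service makes it discoverable from the second cluster.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: my-service
  namespace: example
```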

Next, to test reaching EndpointSlices, create a busybox container in the second cluster, exec into it, and access the new service name.

kubectl run busybox --image=busybox -n example --restart=Never -- /bin/sh -c "sleep 5d"

kubectl exec busybox -n example -it -- /bin/sh

wget -O- my-service.example.svc.clusterset.local

Clean up

To clean up resources, delete the namespace, which will remove all associated services and pods deployed within it. Then delete the AWS Cloud Map MCS Controller for K8s components. Other resources to delete, if necessary, include the Amazon EKS clusters, the AWS Cloud Map namespace, and any networking constructs used to create the environment.

kubectl delete namespace example

kubectl delete -k ""


In this post, we explored the most common issues customers face when attempting to adopt a multicluster architecture and how the AWS Cloud Map MCS Controller for K8s enables multicluster service discovery. The AWS Cloud Map MCS Controller for K8s enables a multicluster service architecture within a single Region or extended across Regions and accounts. We demonstrated how this can be configured using Amazon EKS and AWS Cloud Map. The project is distributed under the Apache License, Version 2.0, and you can find more information about its use, the latest version of the controller, and how to contribute by visiting the project's GitHub page.