Containers
Accelerating application development with the Amazon EKS MCP server
This blog post was jointly authored by Niall Thomson, Principal Solutions Architect – Containers; Carlos Santana, Solutions Architect – Containers; and George John, Senior Product Manager – Amazon EKS.
Introduction
Today, we’re excited to announce the launch of the open source Model Context Protocol (MCP) server for Amazon Elastic Kubernetes Service (Amazon EKS). This new capability enables artificial intelligence (AI) code assistants such as Amazon Q Developer CLI, Cline, and Cursor to seamlessly interact with your EKS clusters in a standardized way. The MCP server provides AI assistants with contextual data and enables them to manage EKS and Kubernetes resources. As a result, developers can now receive tailored guidance throughout the entire development lifecycle, streamlining and accelerating their application development process.
Large Language Models (LLMs) have revolutionized the way developers write code, and their capabilities are being further enhanced through innovative solutions like the Model Context Protocol (MCP) server. While LLMs excel at providing general coding assistance based on their training data, the MCP server extends their capabilities by enabling real-time access to external tools and data sources, particularly valuable in complex environments like Kubernetes. As an open standard, MCP creates a standardized interface that empowers LLMs to tap into current, contextual information, making them even more powerful and precise in supporting specific application development use cases. This synergy between LLMs and MCP represents a significant advancement in AI-assisted development.
The EKS MCP server provides AI code assistants with resource management tools and up-to-date, contextual information about your Amazon EKS clusters. This allows code assistants to provide more accurate, tailored guidance throughout the application lifecycle, from initial setup through production optimization and troubleshooting. Integrating the EKS MCP server into your development workflow can provide you with significant enhancements across various phases of application development. During the getting started phase, it offers guided cluster creation with all necessary prerequisites automatically created and best practices applied. In the development phase, it reduces the EKS and Kubernetes learning curve by providing high-level workflows for application deployment and cluster management, as well as generating EKS-aware code and manifests. For debugging and troubleshooting, the EKS MCP server accelerates issue resolution by offering troubleshooting aids and access to a knowledge base. These capabilities are now accessible through natural language interactions within an AI code assistant, transforming how developers interact with EKS and making complex Kubernetes operations more intuitive and efficient.
Features
The EKS MCP server provides several MCP tools, each of which can be invoked by AI assistants to interact with external systems such as APIs or knowledge bases.
The tools provided by the EKS MCP server can be broken down into three categories:
1) Kubernetes resource management: Interact with and manage Kubernetes resources in an EKS cluster without relying on Kubernetes commands. These tools include seamless authentication for EKS clusters, allowing efficient operations across multiple clusters without needing to manage a kubeconfig file.
- list_k8s_resources – List Kubernetes resources of a specific kind
- list_api_versions – List all available Kubernetes API versions
- manage_k8s_resource – Create, update, or delete an individual Kubernetes resource
- apply_yaml – Apply YAML objects
- get_k8s_events – Get events related to a specific Kubernetes resource
- get_pod_logs – Get logs for a specific pod
2) EKS cluster management: Conveniently create and manage EKS clusters powered by EKS Auto Mode through AWS CloudFormation.
- manage_eks_stacks – Generate, deploy, and delete CloudFormation stacks for EKS clusters
3) Troubleshooting: Streamline issue resolution with comprehensive telemetry data, such as logs and metrics. These tools enhance LLM capabilities by combining real-time cluster insights with curated troubleshooting playbooks for common failure scenarios, enabling faster and more accurate problem diagnosis and resolution.
- search_eks_troubleshoot_guide – Search the Amazon EKS knowledge base for troubleshooting information
- get_cloudwatch_logs – Retrieve logs from Amazon CloudWatch for a pod or an EKS cluster control plane
- get_cloudwatch_metrics – Retrieve metrics from CloudWatch for a container, pod, node, or cluster
Additional tools are included; check out the documentation for more details.
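Under the hood, an AI assistant invokes these tools over the MCP protocol with a JSON-RPC tools/call request. The following sketch shows the general shape of such a call; the argument names are illustrative assumptions, and the authoritative input schema for each tool is published by the server itself.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_k8s_resources",
    "arguments": {
      "cluster_name": "sample-eks-cluster",
      "kind": "Pod",
      "namespace": "default"
    }
  }
}
```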
Walkthrough
To demonstrate the capabilities of the EKS MCP server, the following sections walk through example scenarios.
Deploying a workload
In this section we demonstrate how the EKS MCP server can accelerate getting a workload running on Amazon EKS. You create a new application and package it as a container, ready to be deployed to Amazon EKS. Because this involves writing code, you can use Cline, an autonomous coding agent for VS Code.
Follow the EKS MCP Server documentation here to install the prerequisites, including IAM permissions. To configure Cline to use the EKS MCP server, follow the Cline documentation here. Your cline_mcp_settings.json file should resemble the following example.
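The following is a minimal sketch of such a configuration. It assumes the server is published as awslabs.eks-mcp-server and launched with uvx; take the exact package name, launch arguments, and any additional flags from the EKS MCP Server documentation, and substitute your own AWS profile and Region.

```json
{
  "mcpServers": {
    "awslabs.eks-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.eks-mcp-server@latest"],
      "env": {
        "AWS_PROFILE": "your-aws-profile",
        "AWS_REGION": "us-west-2"
      }
    }
  }
}
```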
If the installation is successful, then you should see the EKS MCP server in the list of MCP servers installed in Cline, as shown in the following figures.

Figure 1: Configuring the EKS MCP server in Cline

Figure 2: MCP successfully installed in Cline
You need an application to deploy to Amazon EKS, and for that you rely on Cline and the LLM it has been configured to use. You don't need to rely on the EKS MCP server yet. Enter the following prompt into a new Cline task:
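The exact wording is up to you; a prompt along the following lines works well (this is an illustrative reconstruction based on the breakdown below, not the original prompt):

```text
Create a simple Node.js application using the Express framework with a couple of sample
REST endpoints. Add a Dockerfile, build a container image for both amd64 and arm64,
test it locally, and push it to a new Amazon ECR repository.
```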
We can break down this prompt:
- You’re asking the assistant to build a Node.js application that uses the popular Express framework, with some starter endpoints you can access.
- You need a Dockerfile, so you ask the assistant to create one.
- Next, you ask the assistant to build the container image, making sure that it's built for multiple CPU architectures. The assistant also quickly tests the image locally to make sure that its basic functions are correct.
- Finally, you ask the assistant to push the container image to Amazon Elastic Container Registry (Amazon ECR) so that it can be deployed to Amazon EKS (the manual equivalent is sketched in the commands after this list).
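If you want to perform the last two steps yourself, the commands the assistant runs are roughly equivalent to the following sketch (the account ID, Region, and repository name are placeholders):

```bash
# Authenticate the Docker CLI to Amazon ECR (placeholder account ID and Region)
aws ecr get-login-password --region us-west-2 | \
  docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-west-2.amazonaws.com

# Create the repository if it does not already exist
aws ecr create-repository --repository-name sample-express-app --region us-west-2

# Build a multi-architecture image and push it in a single step
docker buildx build --platform linux/amd64,linux/arm64 \
  -t 111122223333.dkr.ecr.us-west-2.amazonaws.com/sample-express-app:latest --push .
```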
The application repository produced would look something like the following:

Figure 3: Generated application file structure
The container image has been built and pushed to Amazon ECR, as shown in the following figure:

Figure 4: Application bootstrap task completion in Cline
Now you ask the assistant to deploy the application to Amazon EKS:
Under the hood, the code assistant uses the EKS MCP server's manage_eks_stacks tool to automate the entire cluster provisioning process, as shown in the following figure. It needs zero input from the user and automates creation of all cluster prerequisites, such as the VPC, subnets, and AWS Identity and Access Management (IAM) roles. The EKS MCP server tool not only streamlines infrastructure setup but also applies Amazon EKS recommendations to the cluster automatically, such as enabling EKS Auto Mode for streamlined cluster management.

Figure 5: Cline invoking EKS MCP stack management tool
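If you want to inspect what the tool created once the stack completes, you can do so from your own terminal. The stack and cluster names below are placeholders:

```bash
# Check the status of the CloudFormation stack created by the manage_eks_stacks tool
aws cloudformation describe-stacks --stack-name sample-eks-cluster-stack \
  --query 'Stacks[0].StackStatus'

# Confirm the cluster is active and update your local kubeconfig for manual inspection
aws eks describe-cluster --name sample-eks-cluster --query 'cluster.status'
aws eks update-kubeconfig --name sample-eks-cluster --region us-west-2
```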
The cluster creation takes several minutes, after which the assistant generates and deploys Kubernetes manifests using the EKS MCP server's apply_yaml tool, as shown in the following figure:

Figure 6: Cline invoking EKS MCP tool to apply YAML manifest
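The generated manifests are typically a standard Deployment and Service; the following sketch is illustrative (the names, image URI, replica count, and ports are assumptions, not the exact output):

```yaml
# Illustrative Deployment and Service for the sample Express application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-express-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-express-app
  template:
    metadata:
      labels:
        app: sample-express-app
    spec:
      containers:
        - name: app
          image: 111122223333.dkr.ecr.us-west-2.amazonaws.com/sample-express-app:latest
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: sample-express-app
spec:
  type: LoadBalancer
  selector:
    app: sample-express-app
  ports:
    - port: 80
      targetPort: 3000
```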
When the manifests are deployed, the assistant can use EKS MCP server tools such as list_k8s_resources and manage_k8s_resource to check the status of the Pods, as shown in the following figure.

Figure 7: Cline invoking EKS MCP tool to list Kubernetes resources
Finally, the assistant retrieves the application URL to confirm that it’s deployed and running, as shown in the following figure.

Figure 8: Cline successfully deployed the application to Amazon EKS
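If you want to verify the deployment outside of the assistant, the equivalent kubectl commands are roughly as follows (the labels and Service name match the illustrative manifests above):

```bash
# Check that the application pods are running
kubectl get pods -l app=sample-express-app

# Retrieve the load balancer hostname exposed by the Service and call the application
kubectl get service sample-express-app \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
curl http://$(kubectl get service sample-express-app \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/
```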
Although we used Docker in this walkthrough, we have also developed the Finch MCP Server to support our users' diverse container management needs. Finch offers a secure, standardized approach to container operations, integrating seamlessly with AWS services while maintaining robust security controls. It reflects our commitment to providing flexible, enterprise-grade solutions that meet varying user requirements.
Troubleshooting
Another area where the EKS MCP server can provide valuable context to AI assistants is identifying and remediating issues. To demonstrate the portability of MCP servers, switch to using the Amazon Q Developer CLI, which supports MCP servers for tools and prompts. After installing the Q Developer CLI, the EKS MCP server can be added by configuring the mcp.json file.
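The entry mirrors the Cline configuration shown earlier. The Amazon Q Developer CLI typically reads this file from .amazonq/mcp.json in your workspace or from a global location under your home directory (check the Q Developer documentation for the exact path); the package name and launch command below are the same assumptions as before:

```json
{
  "mcpServers": {
    "awslabs.eks-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.eks-mcp-server@latest"],
      "env": {
        "AWS_PROFILE": "your-aws-profile",
        "AWS_REGION": "us-west-2"
      }
    }
  }
}
```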
Troubleshooting pods
In this scenario, there are two pods that are failing to start.
Ask the AI assistant to troubleshoot and try to directly fix the issue:
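For example, a prompt along these lines works (illustrative wording, not the original prompt): "Two pods in my cluster are failing to start. Investigate the cause and fix the issue directly."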

Figure 9: Amazon Q CLI invoking the EKS MCP tool to get Kubernetes events
The assistant can remediate the issues directly through the EKS MCP server's manage_k8s_resource tool, updating the Deployment resource, as shown in the following figure:

Figure 10: Amazon Q CLI invoking the EKS MCP tool to update a Kubernetes resource
Finally, you get a summary of the multiple issues that were identified and fixed, as shown in the following figure:

Figure 11: Amazon Q CLI summarizing the troubleshooting issues and remediations
Troubleshooting infrastructure
When users troubleshoot Amazon EKS environments, they must consider not only the Kubernetes resources but also the AWS resources that are used to create the clusters, as well as the related resources such as VPC networking and IAM.
In this example we start with a situation similar to the previous scenario, but in this case the pods are in a Pending state, indicating that they can't be scheduled onto an EKS worker node.
We can ask the AI assistant to help figure out the issue:
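A prompt such as the following works (again, illustrative wording): "The pods for my application are stuck in a Pending state. Work out why they can't be scheduled and fix the problem."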
The assistant likely takes similar action to the previous scenario to begin diagnosing the issue, checking the status of the Deployment and Pods, as well as retrieving Kubernetes events. However, in this case it can also use the EKS MCP server's search_eks_troubleshoot_guide knowledge base tool to gain specialized troubleshooting knowledge related to Amazon EKS, as shown in the following figure:

Figure 12: Amazon Q CLI invoking the EKS MCP tool to search the Amazon EKS knowledge base
The Amazon EKS troubleshooting tool responds with targeted advice related to the assistant's query, along with reference documentation that can be used for further research. This documentation provides the assistant with the context it needs to identify the issue and a solution. In this case, it correctly identified an issue with the IAM role used to provide permissions to the EKS cluster, as shown in the following figure:

Figure 13: Amazon Q CLI summarizing the issue that was identified and remediation steps
Conclusion
The open source MCP server for Amazon EKS provides users with an exciting new way to interact with their Kubernetes environments.
This MCP server allows you to do the following:
- Deploy and manage Kubernetes resources with AI-assisted guidance
- Troubleshoot EKS cluster issues using conversational AI
As organizations continue to adopt containerized architectures, tools that streamline management and reduce cognitive load become increasingly valuable. The EKS MCP server demonstrates our commitment to making Kubernetes more accessible while maintaining the power and flexibility that Amazon EKS users expect.
At AWS, our roadmap is deeply influenced by customer feedback. We encourage you to share your experiences with the EKS MCP server: whether it's suggesting new features, reporting challenges, or highlighting workflows where AI assistance could be more impactful. Your insights into daily development patterns, pain points, and areas where you need enhanced automation or guidance are invaluable in shaping the future capabilities of this tool. You can provide feedback in the AWSLabs MCP Servers GitHub repository by creating a new issue.
Get started today by visiting the EKS MCP Server documentation and join us in shaping the future of AI-assisted Kubernetes management.