
Kubernetes Deployment Within an EC2 Instance

To deploy an application on Kubernetes running inside an EC2 instance, we have to follow these steps:

  1. Set up the EC2 instance with Kubernetes.
  2. Create a Kubernetes Deployment YAML file.
  3. Apply the deployment using kubectl.

Below is a guide and code to accomplish this.

Step 1: Set Up EC2 Instance with Kubernetes

  1. Launch an EC2 Instance:

    • Choose an Amazon Linux 2 AMI or Ubuntu AMI.
    • Select an instance type with at least 2 vCPUs and 2 GiB of RAM (for example, t3.small or larger); kubeadm's preflight checks reject smaller instances such as t2.micro.
    • Configure security groups to allow SSH, HTTP, HTTPS, and any required Kubernetes ports.
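The security-group rules above can also be created from the AWS CLI. This is a sketch: the group ID and the admin CIDR below are placeholders for your own values.

```shell
# Placeholders: replace the group ID and 203.0.113.0/24 with your own values.
SG_ID=sg-0123456789abcdef0

# SSH from your admin network only
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 22 --cidr 203.0.113.0/24

# HTTP and HTTPS from anywhere
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 443 --cidr 0.0.0.0/0

# Kubernetes API server, needed if you run kubectl from outside the node
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 6443 --cidr 203.0.113.0/24
```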
  2. Install Docker: SSH into your instance and install Docker.

    sudo yum update -y
    sudo amazon-linux-extras install docker -y
    sudo service docker start
    sudo usermod -aG docker ec2-user

    For Ubuntu:

    sudo apt-get update
    sudo apt-get install -y docker.io
    sudo systemctl start docker
    sudo usermod -aG docker ubuntu
  3. Install Kubernetes (kubectl, kubeadm, kubelet):

    sudo apt-get update && sudo apt-get install -y apt-transport-https curl
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
    sudo apt-get update
    sudo apt-get install -y kubelet kubeadm kubectl
    sudo apt-mark hold kubelet kubeadm kubectl
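Note that the apt.kubernetes.io repository used above has since been deprecated and frozen; current Kubernetes documentation points at the community-owned pkgs.k8s.io repository instead. A sketch of the newer setup follows; the v1.30 in the URLs is only an example minor version, so substitute the release you actually want.

```shell
# The legacy apt.kubernetes.io repo is frozen; newer installs use pkgs.k8s.io.
# v1.30 below is an example version - pick the minor release you need.
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | \
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```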
  4. Initialize Kubernetes (Master Node):

    • This is usually done on the master node, but for simplicity, we'll assume a single-node setup.
    sudo kubeadm init --pod-network-cidr=192.168.0.0/16
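On a single-node setup one extra step is usually required: kubeadm taints the control-plane node so that ordinary workloads are not scheduled on it, and with no worker nodes your pods would stay Pending. Removing the taint allows the node to run application pods; on older clusters the taint may be named node-role.kubernetes.io/master instead.

```shell
# Allow application pods to be scheduled on the (only) control-plane node.
# Older clusters use the taint name node-role.kubernetes.io/master- instead.
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
```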
  5. Set up kubectl for your user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  6. Install a Pod Network (Weave, Flannel, etc.):

    • For example, with Flannel:
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master
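Before moving on, it is worth confirming that the cluster is healthy; the node should report Ready once the pod network add-on is running.

```shell
# The node should report STATUS=Ready once the pod network is up
kubectl get nodes

# CoreDNS and the network add-on pods should all reach Running
kubectl get pods -n kube-system
```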

Step 2: Create a Kubernetes Deployment YAML File

Below is a sample YAML file for deploying a simple Nginx application.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer

Step 3: Deploy the Application

  1. Apply the Deployment: Save the above YAML content to a file named nginx-deployment.yaml.

    kubectl apply -f nginx-deployment.yaml
  2. Verify the Deployment:

    kubectl get deployments
    kubectl get pods
    kubectl get services
  3. Access the Application:

    • If you have set the Service type to LoadBalancer, Kubernetes asks the cloud provider to provision a public endpoint. Note that on a self-managed kubeadm cluster this only works if the AWS cloud provider integration (cloud-controller-manager) is configured; otherwise the external IP will stay <pending>, and a NodePort Service is the simpler option. Use kubectl get services to find the external IP (or node port) and access your application via a browser or curl.
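Assuming the Service did get an external endpoint, a quick smoke test might look like this; the jsonpath expression reads the provisioned hostname straight from the Service status.

```shell
# Read the load balancer hostname from the Service status
EXTERNAL_IP=$(kubectl get service nginx-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

# The default nginx page contains "Welcome to nginx!"
curl -s "http://$EXTERNAL_IP/" | grep -i "welcome to nginx"
```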

Additional Considerations:

  • Scaling: You can scale the number of replicas easily with:

    kubectl scale deployment nginx-deployment --replicas=5
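Beyond manual scaling, a HorizontalPodAutoscaler can adjust the replica count automatically. The manifest below is a sketch using the autoscaling/v2 API; it assumes the metrics-server add-on is installed in the cluster, and the name nginx-hpa is just an example.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

The same effect can be had imperatively with kubectl autoscale deployment nginx-deployment --min=2 --max=10 --cpu-percent=70.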
  • Monitoring: Consider setting up monitoring for your Kubernetes cluster using tools like Prometheus and Grafana.

This process will give you a basic setup to deploy an application on Kubernetes running on an EC2 instance. For production, you should explore multi-node clusters, proper security configurations, and advanced networking setups.
