
Deploying your salary prediction ML model inside a Docker container hosted on an EC2 instance


A step-by-step guide to deploying your salary prediction ML model inside a Docker container hosted on an EC2 instance:

Step 1: Prepare the ML Model

  1. Train your model: Make sure your salary prediction model is trained and saved as a serialized file (e.g., model.pkl).
  2. Create a Flask API: If you haven't already, create a Flask API to serve the model predictions.
    from flask import Flask, request, jsonify
    import pickle

    app = Flask(__name__)

    # Load the model
    model = pickle.load(open('model.pkl', 'rb'))

    @app.route('/predict', methods=['POST'])
    def predict():
        data = request.get_json()
        prediction = model.predict([data['features']])
        return jsonify({'prediction': prediction.tolist()})

    if __name__ == '__main__':
        app.run(host='0.0.0.0', port=5000)
  3. Test the API locally: Run the Flask application locally to ensure it works as expected.
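The save/load cycle that produces model.pkl can be sketched as follows. This uses a trivial stand-in class rather than a real trained scikit-learn estimator, just to illustrate the pickle round trip the Flask app relies on:

```python
import pickle

class DummySalaryModel:
    """Stand-in for a trained estimator; predicts a flat salary per input row."""
    def predict(self, rows):
        return [50000.0 for _ in rows]

# Save the "trained" model to disk, under the filename the Flask app expects
model = DummySalaryModel()
with open('model.pkl', 'wb') as f:
    pickle.dump(model, f)

# Later (e.g., inside the container), load it back and predict
with open('model.pkl', 'rb') as f:
    loaded = pickle.load(f)

predictions = loaded.predict([[5, 3]])  # one prediction per input row
```

In practice you would pickle the fitted scikit-learn estimator itself; anything with a `predict` method works with the Flask route above.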

Step 2: Create a Dockerfile

  1. Create a Dockerfile in the same directory as your Flask app. Here's an example:
    # Use an official Python runtime as a parent image
    FROM python:3.8-slim

    # Set the working directory in the container
    WORKDIR /app

    # Copy the current directory contents into the container at /app
    COPY . /app

    # Install any needed packages specified in requirements.txt
    RUN pip install --no-cache-dir -r requirements.txt

    # Make port 5000 available to the world outside this container
    EXPOSE 5000

    # Run app.py when the container launches
    CMD ["python", "app.py"]
  2. Create a requirements.txt file listing the dependencies:
    Flask
    scikit-learn

Step 3: Build and Test the Docker Image Locally

  1. Build the Docker image:
    docker build -t salary-prediction-app .
  2. Run the Docker container locally:
    docker run -p 5000:5000 salary-prediction-app
  3. Test the API: Use Postman or curl to test your API endpoint (http://localhost:5000/predict).
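If you prefer to script the check instead of using Postman, a small stdlib-only client can exercise the endpoint. This is a sketch assuming the URL and JSON payload shape from the Flask app above:

```python
import json
import urllib.request

def build_predict_request(features, url='http://localhost:5000/predict'):
    """Build a POST request carrying the feature vector as JSON."""
    body = json.dumps({'features': features}).encode('utf-8')
    return urllib.request.Request(
        url, data=body, headers={'Content-Type': 'application/json'}
    )

req = build_predict_request([5, 3])
# With the container running, send it and print the prediction:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```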

Step 4: Set Up an EC2 Instance

  1. Launch an EC2 instance: Go to the AWS Management Console, launch an EC2 instance, and choose an appropriate AMI (e.g., Amazon Linux 2).
  2. Connect to the EC2 instance:
    ssh -i /path/to/your-key.pem ec2-user@your-ec2-public-dns
  3. Install Docker on the EC2 instance:
    sudo yum update -y
    sudo amazon-linux-extras install docker
    sudo service docker start
    sudo usermod -a -G docker ec2-user
    # Log out and back in so the docker group membership takes effect

Step 5: Deploy the Docker Container on EC2

  1. Copy your Docker image to the EC2 instance:

    • You can use docker save and docker load commands to transfer the Docker image, or you can push the image to a Docker registry (e.g., Docker Hub) and pull it on the EC2 instance.
    docker save salary-prediction-app | gzip > salary-prediction-app.tar.gz
    scp -i /path/to/your-key.pem salary-prediction-app.tar.gz ec2-user@your-ec2-public-dns:/home/ec2-user/
    ssh -i /path/to/your-key.pem ec2-user@your-ec2-public-dns
    gunzip -c salary-prediction-app.tar.gz | docker load
  2. Run the Docker container on EC2:

    docker run -d -p 80:5000 salary-prediction-app
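Since the container starts in detached mode, it can take a moment before the service answers. A readiness check like the following polls the URL until it responds (the URL, attempt count, and delay are placeholders to adjust for your instance):

```python
import time
import urllib.request
from urllib.error import HTTPError, URLError

def wait_for_service(url, attempts=5, delay=2.0):
    """Return True once the URL answers, False if all attempts fail."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=3):
                return True
        except HTTPError:
            # Any HTTP response (even 404/405) means the server is up
            return True
        except (URLError, OSError):
            time.sleep(delay)
    return False

# e.g. wait_for_service('http://your-ec2-public-dns/')
```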

Step 6: Access Your Application

  • Access your app: The app should now be running on your EC2 instance. You can access it using the public DNS of your EC2 instance:
    http://your-ec2-public-dns/

Step 7: Secure Your Application

  • Security groups: Make sure your EC2 instance security group allows inbound traffic on port 80 (HTTP).
  • Optional: Set up a domain name and SSL for better security and accessibility.

