
Deploying Your Salary Prediction ML Model Inside a Docker Container Hosted on an EC2 Instance

A step-by-step guide to deploying your salary prediction ML model inside a Docker container hosted on an EC2 instance:

Step 1: Prepare the ML Model

  1. Train your model: Make sure your salary prediction model is trained and saved as a serialized file (e.g., model.pkl). A minimal training sketch follows this list.
  2. Create a Flask API: If you haven't already, create a Flask API to serve the model predictions.
    from flask import Flask, request, jsonify
    import pickle

    app = Flask(__name__)

    # Load the model
    model = pickle.load(open('model.pkl', 'rb'))

    @app.route('/predict', methods=['POST'])
    def predict():
        data = request.get_json()
        prediction = model.predict([data['features']])
        return jsonify({'prediction': prediction.tolist()})

    if __name__ == '__main__':
        app.run(host='0.0.0.0', port=5000)
  3. Test the API locally: Run the Flask application locally to ensure it works as expected.
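For the training step, a minimal sketch might look like the following; the file name salary_data.csv and its columns (YearsExperience, Salary) are assumptions for illustration, so adapt them to your own dataset:

    # train_model.py - minimal training sketch (assumed CSV layout)
    import pickle

    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # Assumed columns: YearsExperience (feature), Salary (target)
    data = pd.read_csv('salary_data.csv')
    X = data[['YearsExperience']]
    y = data['Salary']

    model = LinearRegression()
    model.fit(X, y)

    # Serialize the trained model so the Flask app can load it as model.pkl
    with open('model.pkl', 'wb') as f:
        pickle.dump(model, f)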

Step 2: Create a Dockerfile

  1. Create a Dockerfile in the same directory as your Flask app. Here's an example:
    # Use an official Python runtime as a parent image
    FROM python:3.8-slim

    # Set the working directory in the container
    WORKDIR /app

    # Copy the current directory contents into the container at /app
    COPY . /app

    # Install any needed packages specified in requirements.txt
    RUN pip install --no-cache-dir -r requirements.txt

    # Make port 5000 available to the world outside this container
    EXPOSE 5000

    # Run app.py when the container launches
    CMD ["python", "app.py"]
  2. Create a requirements.txt file to list the dependencies:
    Flask
    scikit-learn

Step 3: Build and Test the Docker Image Locally

  1. Build the Docker image:
    docker build -t salary-prediction-app .
  2. Run the Docker container locally:
    docker run -p 5000:5000 salary-prediction-app
  3. Test the API: Use Postman or curl to test your API endpoint (http://localhost:5000/predict).
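A quick curl test might look like this; the feature vector is a placeholder, so send however many features your model expects:

    curl -X POST http://localhost:5000/predict \
         -H "Content-Type: application/json" \
         -d '{"features": [5]}'

If everything is wired up, the response should be a JSON object of the form {"prediction": [...]}.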

Step 4: Set Up an EC2 Instance

  1. Launch an EC2 instance: Go to the AWS Management Console, launch an EC2 instance, and choose an appropriate AMI (e.g., Amazon Linux 2).
  2. Connect to the EC2 instance:
    ssh -i /path/to/your-key.pem ec2-user@your-ec2-public-dns
  3. Install Docker on the EC2 instance:
    sudo yum update -y
    sudo amazon-linux-extras install docker
    sudo service docker start
    sudo usermod -a -G docker ec2-user
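The usermod change only applies to new login sessions, so reconnect before running docker without sudo. A quick sanity check might look like this:

    # Reconnect so the docker group membership takes effect
    exit
    ssh -i /path/to/your-key.pem ec2-user@your-ec2-public-dns

    # Verify the Docker client and daemon are working
    docker --version
    docker info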

Step 5: Deploy the Docker Container on EC2

  1. Copy your Docker image to the EC2 instance:

    • You can use the docker save and docker load commands to transfer the Docker image, or you can push the image to a Docker registry (e.g., Docker Hub) and pull it on the EC2 instance (a sketch of the registry approach follows this list).
    docker save salary-prediction-app | gzip > salary-prediction-app.tar.gz
    scp -i /path/to/your-key.pem salary-prediction-app.tar.gz ec2-user@your-ec2-public-dns:/home/ec2-user/
    ssh -i /path/to/your-key.pem ec2-user@your-ec2-public-dns
    gunzip -c salary-prediction-app.tar.gz | docker load
  2. Run the Docker container on EC2:

    docker run -d -p 80:5000 salary-prediction-app
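As an alternative to docker save/scp, here is a minimal sketch of the registry route mentioned above; your-dockerhub-username is a placeholder for your own Docker Hub account:

    # On your local machine: tag and push the image
    docker login
    docker tag salary-prediction-app your-dockerhub-username/salary-prediction-app:latest
    docker push your-dockerhub-username/salary-prediction-app:latest

    # On the EC2 instance: pull and run it
    docker pull your-dockerhub-username/salary-prediction-app:latest
    docker run -d -p 80:5000 your-dockerhub-username/salary-prediction-app:latest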

Step 6: Access Your Application

  • Access your app: The app should now be running on your EC2 instance. Because the container maps host port 80 to the Flask app's port 5000, you can reach the prediction endpoint through the public DNS of your EC2 instance:
    http://your-ec2-public-dns/predict
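Once the container is up, a quick smoke test might look like this (same placeholder feature vector as before):

    # On the EC2 instance: confirm the container is running
    docker ps

    # From your local machine: hit the deployed endpoint
    curl -X POST http://your-ec2-public-dns/predict \
         -H "Content-Type: application/json" \
         -d '{"features": [5]}'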

Step 7: Secure Your Application

  • Security groups: Make sure your EC2 instance security group allows inbound traffic on port 80 (HTTP).
  • Optional: Set up a domain name and an SSL certificate for better security and accessibility.
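If you prefer the AWS CLI to the console, adding the inbound rule might look like the following; sg-0123456789abcdef0 is a placeholder for your security group ID:

    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp \
        --port 80 \
        --cidr 0.0.0.0/0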
