
Multiple Linear Regression for a Heart Disease Risk Prediction System

Step 1: Import Required Libraries

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
import matplotlib.pyplot as plt
import seaborn as sns

Step 2: Load and Prepare the Dataset

For this example, I'll create a synthetic dataset. In a real scenario, you would load your dataset from a file.

# Creating a synthetic dataset
np.random.seed(42)
data_size = 200

age = np.random.randint(30, 70, data_size)
cholesterol = np.random.randint(150, 300, data_size)
blood_pressure = np.random.randint(80, 180, data_size)
smoking = np.random.randint(0, 2, data_size)   # 0 for non-smoker, 1 for smoker
diabetes = np.random.randint(0, 2, data_size)  # 0 for no diabetes, 1 for diabetes

# Risk score (synthetic target variable)
risk_score = (
    0.3 * age
    + 0.2 * cholesterol
    + 0.3 * blood_pressure
    + 10 * smoking
    + 8 * diabetes
    + np.random.normal(0, 10, data_size)
)

# Creating a DataFrame
df = pd.DataFrame({
    'Age': age,
    'Cholesterol': cholesterol,
    'Blood Pressure': blood_pressure,
    'Smoking': smoking,
    'Diabetes': diabetes,
    'Risk Score': risk_score
})

# Display the first few rows of the dataset
print(df.head())

Step 3: Exploratory Data Analysis (EDA)

# Pairplot to visualize relationships between features and target
sns.pairplot(df)
plt.show()

# Correlation matrix to check relationships between features
corr_matrix = df.corr()
sns.heatmap(corr_matrix, annot=True, cmap="coolwarm")
plt.show()

Step 4: Split the Dataset into Training and Testing Sets


# Features and target variable
X = df[['Age', 'Cholesterol', 'Blood Pressure', 'Smoking', 'Diabetes']]
y = df['Risk Score']

# Splitting the dataset into training and testing sets (80/20 split)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

Step 5: Train the Multiple Linear Regression Model

# Creating and training the model
model = LinearRegression()
model.fit(X_train, y_train)

# Model coefficients
print("Coefficients:", model.coef_)
print("Intercept:", model.intercept_)

Step 6: Make Predictions and Evaluate the Model

# Making predictions on the test set
y_pred = model.predict(X_test)

# Evaluating the model
mse = mean_squared_error(y_test, y_pred)
r2 = r2_score(y_test, y_pred)
print(f"Mean Squared Error: {mse}")
print(f"R-squared: {r2}")

Step 7: Visualize the Results
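The visualization code is missing from the post; a minimal, self-contained sketch of what this step typically shows (the stand-in `y_test`/`y_pred` arrays below are only so the snippet runs on its own; in the tutorial they come from Steps 4-6):

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in values so this sketch runs standalone; replace with the
# y_test and y_pred produced in Steps 4-6.
rng = np.random.default_rng(42)
y_test = rng.normal(100, 15, 40)
y_pred = y_test + rng.normal(0, 8, 40)

# Actual vs. predicted risk scores: points near the diagonal indicate a good fit
plt.scatter(y_test, y_pred, alpha=0.7)
lims = [min(y_test.min(), y_pred.min()), max(y_test.max(), y_pred.max())]
plt.plot(lims, lims, "r--")
plt.xlabel("Actual Risk Score")
plt.ylabel("Predicted Risk Score")
plt.title("Actual vs. Predicted Risk Scores")
plt.show()

# Residual plot: residuals should scatter randomly around zero
residuals = y_test - y_pred
plt.scatter(y_pred, residuals, alpha=0.7)
plt.axhline(0, color="r", linestyle="--")
plt.xlabel("Predicted Risk Score")
plt.ylabel("Residual")
plt.title("Residual Plot")
plt.show()
```

A clear pattern in the residual plot (e.g. a curve or funnel shape) suggests the linear model is missing structure in the data.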


Explanation:
  1. Data Generation: A synthetic dataset is created with features like Age, Cholesterol, Blood Pressure, Smoking, and Diabetes to predict a synthetic Risk Score.

  2. EDA: Exploratory Data Analysis helps understand the relationships between the features and the target variable.

  3. Model Training: The multiple linear regression model is trained on the dataset. The model’s coefficients indicate the weight of each feature in predicting the risk score.

  4. Evaluation: The model's performance is evaluated using Mean Squared Error (MSE) and R-squared values.

  5. Visualization: Visualizing actual vs. predicted values and residuals helps in assessing the model's fit.
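To make item 3 concrete: a fitted `LinearRegression` predicts by adding the intercept to the coefficient-weighted feature values. A tiny worked example (the coefficient values and the patient below are illustrative, not taken from the fitted model):

```python
import numpy as np

# Illustrative coefficients in the order: Age, Cholesterol, Blood Pressure, Smoking, Diabetes
coef = np.array([0.3, 0.2, 0.3, 10.0, 8.0])
intercept = 0.0

# One hypothetical patient: age 55, cholesterol 240, BP 130, smoker, no diabetes
patient = np.array([55, 240, 130, 1, 0])

# Prediction = intercept + dot product of coefficients and features
risk = intercept + coef @ patient
print(risk)  # 16.5 + 48 + 39 + 10 + 0 = 113.5
```

This is exactly what `model.predict` computes for each row, which is why larger coefficients (on comparably scaled features) signal a stronger influence on the predicted risk score.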

Real Dataset Consideration:

Replace the synthetic data generation part with your actual dataset, ensuring that your data is clean and well-preprocessed. You might need to handle missing values, normalize/standardize features, and encode categorical variables depending on your dataset's characteristics.
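A rough sketch of that preprocessing advice on a real dataset (the `heart.csv` filename, the `Sex` column, and the sample values are hypothetical placeholders, not from this post):

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical raw data standing in for a real load such as:
# df = pd.read_csv("heart.csv")
df = pd.DataFrame({
    "Age": [52, 61, None, 45],
    "Cholesterol": [212, 289, 250, None],
    "Sex": ["M", "F", "M", "F"],  # categorical column to encode
})

# 1. Handle missing values (here: fill numeric gaps with the column median)
for col in ["Age", "Cholesterol"]:
    df[col] = df[col].fillna(df[col].median())

# 2. Encode categorical variables as 0/1 indicator columns
df = pd.get_dummies(df, columns=["Sex"], drop_first=True)

# 3. Standardize numeric features to zero mean and unit variance
scaler = StandardScaler()
df[["Age", "Cholesterol"]] = scaler.fit_transform(df[["Age", "Cholesterol"]])

print(df.head())
```

Standardizing is optional for plain linear regression's predictions, but it puts the coefficients on a comparable scale, which makes them easier to interpret as feature weights.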

This code provides a foundation for building a heart disease risk prediction system using multiple linear regression. Let me know if you need further assistance with your specific dataset or model improvements!
