AI/ML · Data Analysis · Programming Languages
5 April 2026 · 5 min read

Preparing Data for Machine Learning in Python

Data preprocessing is a crucial initial phase in any data analysis or machine learning project. This process involves cleaning, transforming, and organizing raw data to ensure it is accurate, consistent, and ready for modeling. Here's why it's essential:

  • Clean and structured data enables models to identify meaningful patterns instead of noise.
  • Properly processed data avoids misleading inputs, resulting in more reliable predictions.
  • Organized data simplifies the creation of useful model inputs, improving performance.
  • Well-structured data facilitates better Exploratory Data Analysis (EDA), making it easier to interpret patterns and trends.

Step-by-Step Implementation

Let's explore the steps involved in data preprocessing:

Step 1: Import Libraries and Load Dataset

To start, set up the environment with essential libraries like pandas, numpy, scikit-learn, matplotlib, and seaborn for data manipulation, numerical operations, visualization, and scaling. Load the dataset to begin preprocessing.

import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv('path/to/diabetes.csv')
df.head()

Step 2: Inspect Data Structure and Check Missing Values

Understanding the dataset's size and data types, and identifying any missing values, is critical before further processing.

  • df.info(): Provides a concise summary with non-null entries and data type per column.
  • df.isnull().sum(): Returns the number of missing values for each column.

df.info()
print(df.isnull().sum())
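If the check does reveal gaps (and in this dataset, zeros in columns such as Insulin often stand in for missing measurements), a common follow-up is to convert those placeholders to NaN and impute them. A minimal sketch with a toy frame whose column names mirror the dataset but whose values are made up:

```python
import numpy as np
import pandas as pd

# Toy frame: a 0 in 'Insulin' stands in for a missing measurement
toy = pd.DataFrame({'Glucose': [148, 85, 183], 'Insulin': [0, 94, 168]})

# Mark placeholder zeros as missing, then impute with the column median
toy['Insulin'] = toy['Insulin'].replace(0, np.nan)
toy['Insulin'] = toy['Insulin'].fillna(toy['Insulin'].median())

print(toy['Insulin'].tolist())  # median of [94, 168] is 131.0
```

Median imputation is just one option; scikit-learn's SimpleImputer offers the same behavior in a form that composes with pipelines.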

Step 3: Statistical Summary and Visualizing Outliers

Calculate numeric summaries like mean, median, and min/max to identify unusual points (outliers), which can skew models if not addressed.

  • df.describe(): Computes count, mean, standard deviation, min/max, and quartiles for numerical columns.
  • Boxplots using matplotlib to visualize spread and detect outliers.

df.describe()

fig, axs = plt.subplots(len(df.columns), 1, figsize=(7, 18), dpi=95)
for i, col in enumerate(df.columns):
    axs[i].boxplot(df[col], vert=False)
    axs[i].set_ylabel(col)
plt.tight_layout()
plt.show()

Step 4: Remove Outliers Using the Interquartile Range (IQR) Method

Improve model robustness by removing extreme values outside a reasonable range.

  • Calculate IQR: Q3 (75th percentile) – Q1 (25th percentile).
  • Outliers are values below Q1 − 1.5 × IQR or above Q3 + 1.5 × IQR.

q1, q3 = np.percentile(df['Insulin'], [25, 75])
iqr = q3 - q1
lower = q1 - 1.5 * iqr
upper = q3 + 1.5 * iqr
clean_df = df[(df['Insulin'] >= lower) & (df['Insulin'] <= upper)]
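The same filter can be wrapped in a small helper and reused column by column. A sketch, where the `iqr_filter` name and the toy values are illustrative rather than from the original:

```python
import numpy as np
import pandas as pd

def iqr_filter(frame, column, k=1.5):
    """Drop rows whose value in `column` falls outside Q1 - k*IQR .. Q3 + k*IQR."""
    q1, q3 = np.percentile(frame[column], [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return frame[(frame[column] >= lower) & (frame[column] <= upper)]

# Toy data: one obvious outlier (500) in 'Insulin'
toy = pd.DataFrame({'Insulin': [80, 85, 90, 95, 100, 500]})
filtered = iqr_filter(toy, 'Insulin')
print(len(toy), '->', len(filtered))  # 6 -> 5
```

Note that each call drops whole rows, so filtering several columns in sequence can shrink the dataset quickly; it is worth checking the remaining row count after each pass.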

Step 5: Correlation Analysis

Examine relationships between features and the target variable (Outcome) to gauge feature importance.

  • df.corr(): Computes pairwise correlation coefficients.
  • Use a heatmap via seaborn for clear visualization.
  • Sort correlations to highlight features most correlated with the target.

corr = df.corr()
plt.figure(dpi=130)
sns.heatmap(corr, annot=True, fmt='.2f', cmap='coolwarm')
plt.show()

print(corr['Outcome'].sort_values(ascending=False))

Step 6: Visualize Target Variable Distribution

Assess whether target classes (Diabetes vs. Not Diabetes) are balanced, as this affects model training and evaluation.

  • Use plt.pie() to display the proportion of each class in the target variable 'Outcome'.

counts = df['Outcome'].value_counts()  # Outcome: 0 = Not Diabetes, 1 = Diabetes
plt.pie(counts, labels=counts.index.map({0: 'Not Diabetes', 1: 'Diabetes'}),
        autopct='%.f%%', shadow=True)
plt.title('Outcome Proportionality')
plt.show()

Step 7: Separate Features and Target Variable

Prepare independent variables (features) and dependent variable (target) for modeling.

  • Use df.drop(columns=[...]) to exclude the target column from features.
  • Select the target column directly with df['Outcome'].

X = df.drop(columns=['Outcome'])
y = df['Outcome']

Step 8: Feature Scaling (Normalization and Standardization)

Scale features to a common range or distribution, crucial for algorithms sensitive to feature magnitudes.

1. Normalization (Min-Max Scaling)

Rescales features between 0 and 1, particularly beneficial for algorithms like k-NN and neural networks.

scaler = MinMaxScaler()
X_normalized = scaler.fit_transform(X)
print(X_normalized[:5])
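Under the hood, min-max scaling applies (x − min) / (max − min) to each column. A quick check on a toy array confirms that the scaler matches the formula:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_toy = np.array([[1.0], [3.0], [5.0]])

scaled = MinMaxScaler().fit_transform(X_toy)

# Same result by hand: (x - min) / (max - min), computed per column
manual = (X_toy - X_toy.min(axis=0)) / (X_toy.max(axis=0) - X_toy.min(axis=0))

print(scaled.ravel())  # [0.  0.5 1. ]
```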

2. Standardization

Transforms features to have a mean of 0 and a standard deviation of 1, useful when features are approximately normally distributed.

scaler = StandardScaler()
X_standardized = scaler.fit_transform(X)
print(X_standardized[:5])
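One caveat worth noting for either scaler: when a train/test split is involved, fit the scaler on the training portion only and reuse it on the test portion, so test-set statistics never leak into the transformation. A sketch with synthetic data (the split and values here are illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_toy = rng.normal(loc=100.0, scale=15.0, size=(50, 2))

X_train, X_test = train_test_split(X_toy, test_size=0.2, random_state=42)

scaler = StandardScaler()
X_train_std = scaler.fit_transform(X_train)  # learn mean/std from train only
X_test_std = scaler.transform(X_test)        # reuse the train statistics

print(X_train_std.mean(axis=0))  # approximately [0, 0]
```

The test set's mean and standard deviation after transformation will not be exactly 0 and 1, and that is expected: the point is that both splits are measured against the same training-set statistics.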

Advantages

Data preprocessing offers several benefits:

  • Improves Data Quality: Cleans and organizes raw data for better analysis.
  • Enhances Model Accuracy: Removes noise and irrelevant data, leading to more precise predictions.
  • Reduces Overfitting: Handles outliers and redundant features, improving model generalization.
  • Speeds Up Training: Efficiently scaled data reduces computation time.
  • Ensures Algorithm Compatibility: Converts data into formats suitable for machine learning models.