# Feature Scaling – Normalization Vs Standardization Explained in Simple Terms – Machine Learning Basics

Feature scaling is a preprocessing technique used in machine learning to standardize or normalize the range of independent variables (features) in a dataset. The primary goal of feature scaling is to ensure that no particular feature dominates the others due to differences in the units or scales. By transforming the features to a common scale, it helps improve the performance, stability, and convergence speed of machine learning algorithms.

Some machine learning algorithms, especially those that rely on the calculation of distances or similarity measures between data points (e.g., k-Nearest Neighbors, Support Vector Machines, Neural Networks), are sensitive to the scale of input features. If features have different scales, an algorithm may give more importance to features with larger scales, leading to suboptimal performance.
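To make this concrete, here is a small sketch (the feature names are hypothetical) showing how a Euclidean distance between two samples is dominated by the feature with the larger scale:

```python
import numpy as np

# Two samples with features on very different scales:
# column 0 is age in years, column 1 is income in dollars (hypothetical)
a = np.array([25.0, 50_000.0])
b = np.array([45.0, 52_000.0])

# The Euclidean distance is almost entirely the income difference;
# the 20-year age gap contributes next to nothing
dist = np.linalg.norm(a - b)
print(round(dist, 2))  # 2000.1 -- income dominates despite the large age gap
```

After scaling both features to a comparable range, the age difference would contribute meaningfully to the distance again.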

Normalization and Standardization are two common techniques used in data preprocessing to scale and transform numerical features in a dataset. They help in handling different feature scales and improving the performance of machine learning algorithms. Here’s a brief explanation of each technique, followed by a Python example:

## Normalization (Min-Max Scaling)

Normalization rescales the features to a specific range, usually [0, 1], while preserving the relative relationships between values. It's also known as Min-Max Scaling and is calculated using the following formula:

normalized_value = (value - min) / (max - min)
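Applying the formula by hand to a small list of values makes the behavior clear: the minimum maps to 0, the maximum maps to 1, and everything else lands proportionally in between:

```python
values = [10, 20, 60, 100]
mn, mx = min(values), max(values)

# Apply normalized_value = (value - min) / (max - min) to each element
normalized = [(v - mn) / (mx - mn) for v in values]
print(normalized)  # [0.0, 0.111..., 0.555..., 1.0]
```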

By rescaling the features to a common range, the Min-Max Scaler helps improve the performance of machine learning algorithms that are sensitive to the scale of input features, such as k-Nearest Neighbors, Neural Networks, and Gradient Descent-based algorithms.

Python example:

Here’s a Python code example using matplotlib and sklearn to plot data before and after normalization. In this example, we generate random data points and then normalize them using Min-Max scaling.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler

# Generate random data
np.random.seed(42)
data = np.random.randint(0, 100, (50, 2))

# Normalize data
scaler = MinMaxScaler()
normalized_data = scaler.fit_transform(data)

# Plot before normalization
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.scatter(data[:, 0], data[:, 1], color='blue', label='Original Data')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.legend()
plt.title('Before Normalization')

# Plot after normalization
plt.subplot(1, 2, 2)
plt.scatter(normalized_data[:, 0], normalized_data[:, 1], color='green', label='Normalized Data')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.legend()
plt.title('After Normalization')

plt.show()
```
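A useful property of `MinMaxScaler` is that the fitted object remembers the minimum and maximum it saw during fitting, so the scaling can be inverted exactly:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

data = np.array([[10.0], [40.0], [100.0]])
scaler = MinMaxScaler()
scaled = scaler.fit_transform(data)

# The fitted scaler stores the per-feature min and max from fit time
print(scaler.data_min_, scaler.data_max_)  # [10.] [100.]

# ...which lets inverse_transform recover the original values exactly
restored = scaler.inverse_transform(scaled)
print(np.allclose(restored, data))  # True
```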

## Standardization (Z-Score Normalization)

Standardization, also known as Z-score normalization, is a feature scaling technique used in machine learning to transform numerical features to have zero mean and unit variance. This transformation helps improve the performance of machine learning algorithms, especially those that are sensitive to the scale of input features. It is calculated using the following formula:

standardized_value = (value - mean) / standard_deviation
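A quick sanity check of the formula with NumPy: after subtracting the mean and dividing by the standard deviation, the result has mean 0 and standard deviation 1. (Note that NumPy's default `std` and sklearn's `StandardScaler` both use the population standard deviation.)

```python
import numpy as np

values = np.array([10.0, 20.0, 30.0, 40.0])
mean = values.mean()  # 25.0
std = values.std()    # population std: sqrt(125) ~= 11.18

# standardized_value = (value - mean) / standard_deviation
standardized = (values - mean) / std
print(standardized.mean())  # ~0.0
print(standardized.std())   # 1.0
```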

Python example:

Here’s an example using the `matplotlib` library to visualize the dataset before and after standardization. This example uses a synthetic dataset with two numerical features.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler

# Create a synthetic dataset
np.random.seed(42)
feature1 = np.random.normal(20, 5, 100)
feature2 = np.random.normal(100, 20, 100)
data = np.column_stack((feature1, feature2))

# Standardize the data
scaler = StandardScaler()
standardized_data = scaler.fit_transform(data)

# Create a plot
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))

# Plot the original data
ax1.scatter(data[:, 0], data[:, 1], label='Original Data')
ax1.set_title('Before Standardization')
ax1.set_xlabel('Feature 1')
ax1.set_ylabel('Feature 2')
ax1.legend()

# Plot the standardized data
ax2.scatter(standardized_data[:, 0], standardized_data[:, 1], label='Standardized Data', c='r')
ax2.set_title('After Standardization')
ax2.set_xlabel('Feature 1')
ax2.set_ylabel('Feature 2')
ax2.legend()

# Show the plot
plt.show()
```
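One practical detail worth adding: in a real train/test workflow, the scaler should be fit on the training data only, and the same fitted parameters reused on the test data. A minimal sketch with a synthetic split:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X_train = rng.normal(20, 5, (80, 1))
X_test = rng.normal(20, 5, (20, 1))

# Fit on the training set only; never call fit on the test set
scaler = StandardScaler().fit(X_train)
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)

# The training data ends up with (numerically) zero mean and unit variance;
# the test data is only approximately standardized, which is expected
print(abs(X_train_s.mean()) < 1e-9)       # True
print(abs(X_train_s.std() - 1.0) < 1e-9)  # True
```

Fitting on the full dataset before splitting would leak information from the test set into the preprocessing step.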

## When to Use Normalization and Standardization During Preprocessing in Machine Learning?

Choosing when to use Normalization or Standardization during preprocessing in Machine Learning depends on the characteristics of the dataset and the requirements of the algorithm being used. Here are some guidelines to help you make the right decision:

1. Normalization (Min-Max Scaling):
   - Use when the data does not follow a Gaussian distribution, or when the minimum and maximum values are known or bounded.
   - Useful when the algorithm is sensitive to the scale of input features, such as k-Nearest Neighbors, Neural Networks, and Gradient Descent-based algorithms.
   - Recommended when the algorithm relies on similarity or distance measures between data points, as normalization scales the features to a fixed range.
   - May not be suitable if there are outliers in the data, as a single extreme value compresses the remaining values into a narrow sub-range.
2. Standardization (Z-score normalization):
   - Use when the data follows a Gaussian (normal) distribution or when the distribution is unknown.
   - More robust to outliers, as it does not force the data into a bounded range.
   - Preferred for algorithms that assume input features have zero mean and unit variance, such as Support Vector Machines (SVM), Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA).
   - Can be used with most machine learning algorithms, as it preserves the shape of the original distribution while rescaling it to a standard scale.

In practice, you can experiment with both techniques and choose the one that yields better performance for your specific problem. It's also possible to use different scaling methods for different features if needed. Remember that not all machine learning algorithms require feature scaling; decision tree-based algorithms (e.g., Decision Trees, Random Forests) and Naive Bayes are largely insensitive to it.
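The point above about mixing scaling methods per feature can be sketched with scikit-learn's `ColumnTransformer` (the column roles below are hypothetical):

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Hypothetical dataset: column 0 is bounded (e.g., a percentage),
# column 1 is roughly Gaussian
rng = np.random.default_rng(0)
X = np.column_stack((rng.uniform(0, 100, 200), rng.normal(50, 10, 200)))

# Min-Max scale the bounded column, standardize the Gaussian one
ct = ColumnTransformer([
    ("minmax", MinMaxScaler(), [0]),
    ("zscore", StandardScaler(), [1]),
])
X_scaled = ct.fit_transform(X)

print(X_scaled[:, 0].min(), X_scaled[:, 0].max())  # 0.0 1.0
print(abs(X_scaled[:, 1].mean()) < 1e-9)           # True
```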
