
Mini-Batch Gradient Descent: Optimizing Machine Learning Models


Mini-Batch Gradient Descent (MBGD) is a widely used optimization technique for training machine learning models. By combining the strengths of Stochastic Gradient Descent (SGD) and Batch Gradient Descent, MBGD offers a balanced approach to model optimization. In this blog post, we’ll explore how MBGD works, its advantages, and how to implement it in Python.

Understanding the Need for Mini-Batch Gradient Descent

First and foremost, let’s consider why MBGD is necessary. While SGD is efficient for large datasets because it updates the parameters one sample at a time, its updates are noisy and the loss often fluctuates from step to step. On the other hand, Batch Gradient Descent computes the gradient over the entire dataset for every update, which can be computationally expensive. Consequently, MBGD emerges as a compromise that addresses both limitations.

The Mechanics of Mini-Batch Gradient Descent

MBGD operates by dividing the dataset into small subsets or mini-batches. Subsequently, it computes the gradient of the cost function for each subset and updates the model parameters accordingly. As a result, MBGD strikes a balance between computational efficiency and stability.
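
To make the idea concrete, here is a minimal sketch of a single mini-batch update, assuming a mean-squared-error cost for a linear model (the data, variable names, and values are purely illustrative, not from a particular library):

import numpy as np

# Toy data: 10 samples, 2 features (values are illustrative only)
X = np.random.rand(10, 2)
y = np.random.rand(10, 1)

theta = np.zeros((2, 1))   # current model parameters
learning_rate = 0.1
batch_size = 4

# Take one mini-batch and perform a single parameter update
xb, yb = X[:batch_size], y[:batch_size]
gradients = 2 / batch_size * xb.T.dot(xb.dot(theta) - yb)  # MSE gradient on this batch
theta = theta - learning_rate * gradients

The full training procedure simply repeats this update for every mini-batch, over many epochs, as the implementation below shows.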

Implementing Mini-Batch Gradient Descent in Python

Now, let’s dive into the practical implementation of MBGD using Python. We’ll use NumPy for numerical computations. Here’s a Python function that performs Mini-Batch Gradient Descent:

import numpy as np

def mini_batch_gradient_descent(X, y, learning_rate=0.01, batch_size=16, epochs=100):
    m, n = X.shape
    theta = np.random.randn(n, 1)  # random initialization of the parameters

    for epoch in range(epochs):
        # Shuffle the data at the start of every epoch
        shuffled_indices = np.random.permutation(m)
        X_shuffled = X[shuffled_indices]
        y_shuffled = y[shuffled_indices]

        for i in range(0, m, batch_size):
            xi = X_shuffled[i:i + batch_size]
            yi = y_shuffled[i:i + batch_size]

            # Gradient of the mean squared error over this mini-batch;
            # len(xi) handles a final batch that is smaller than batch_size
            gradients = 2 / len(xi) * xi.T.dot(xi.dot(theta) - yi)
            theta = theta - learning_rate * gradients

    return theta


This function initializes random weights, shuffles the data at the start of each epoch, and then iterates through it in mini-batches. For each batch, it calculates the gradients and updates the weights. Repeating this process for several epochs yields the optimized model parameters.
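
If you want to watch the optimization converge, one option is to record the training loss after every epoch. The variant below is a sketch of that idea; the function name and loss_history variable are introduced here for illustration and are not part of the original snippet:

import numpy as np

def mini_batch_gd_with_history(X, y, learning_rate=0.01, batch_size=16, epochs=100):
    m, n = X.shape
    theta = np.random.randn(n, 1)
    loss_history = []

    for epoch in range(epochs):
        shuffled_indices = np.random.permutation(m)
        X_shuffled, y_shuffled = X[shuffled_indices], y[shuffled_indices]

        for i in range(0, m, batch_size):
            xi = X_shuffled[i:i + batch_size]
            yi = y_shuffled[i:i + batch_size]
            gradients = 2 / len(xi) * xi.T.dot(xi.dot(theta) - yi)
            theta = theta - learning_rate * gradients

        # Record the mean squared error over the full dataset after this epoch
        loss_history.append(float(np.mean((X.dot(theta) - y) ** 2)))

    return theta, loss_history

Plotting loss_history against the epoch number gives a quick visual check that the learning rate and batch size are reasonable.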

Applying MBGD to a Real-World Problem

To demonstrate the effectiveness of MBGD, let’s apply it to a simple linear regression problem:

from sklearn.metrics import mean_absolute_error

# Generate sample data
X = np.random.rand(100, 3)
y = (5 * X[:, 0] - 3 * X[:, 1] + 2 * X[:, 2]).reshape(-1, 1) + np.random.randn(100, 1)  # reshape to a column vector so y matches the shape of X.dot(theta)

# Apply MBGD
theta = mini_batch_gradient_descent(X, y)

# Make predictions and calculate MAE
predictions = X.dot(theta)
mae = mean_absolute_error(y, predictions)
print(f"Mean Absolute Error: {mae}")


This code snippet generates sample data, applies MBGD to optimize the model parameters, and then calculates the Mean Absolute Error (MAE) to evaluate the model’s performance.
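
As an extra sanity check (not part of the original snippet), you can compare the MBGD estimate against the closed-form least-squares solution for the same data; for this small linear problem the two should be close. This continues directly from the variables defined above:

# Closed-form ordinary least squares solution for comparison
theta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

print("MBGD estimate:", theta.ravel())
print("Least-squares estimate:", theta_ols.ravel())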

Advantages of Mini-Batch Gradient Descent

MBGD offers several benefits over other gradient descent methods. Firstly, it provides a good balance between the efficiency of SGD and the stability of Batch Gradient Descent. Secondly, it allows for more frequent updates to the model parameters, potentially leading to faster convergence. Lastly, MBGD can be easily parallelized, making it suitable for distributed computing environments.

Conclusion and Further Learning

In conclusion, Mini-Batch Gradient Descent is a powerful optimization technique that can significantly improve the training of machine learning models. By understanding and implementing MBGD, data scientists and machine learning engineers can develop more efficient and effective models.

For further learning, consider exploring advanced variants of MBGD, such as Adam or RMSprop. Additionally, you might want to experiment with different batch sizes and learning rates to see how they affect model performance, as sketched below. For a comprehensive guide on optimization algorithms, check out Sebastian Ruder’s overview of gradient descent optimization algorithms.
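
A simple way to run such an experiment is to loop over a few candidate batch sizes and learning rates and compare the resulting error. The grid values below are arbitrary illustrative choices, and the snippet reuses the function and data defined earlier:

from sklearn.metrics import mean_absolute_error

# Illustrative hyperparameter sweep; the grid values are arbitrary choices
for batch_size in (8, 16, 32):
    for lr in (0.001, 0.01, 0.1):
        theta = mini_batch_gradient_descent(X, y, learning_rate=lr,
                                            batch_size=batch_size, epochs=100)
        mae = mean_absolute_error(y, X.dot(theta))
        print(f"batch_size={batch_size}, learning_rate={lr}: MAE={mae:.4f}")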

Remember, mastering optimization techniques like MBGD is crucial for developing high-performance machine learning models. So, keep practicing and experimenting with these concepts to enhance your skills in the field of machine learning and artificial intelligence.

