
Math Behind Neural Networks: Unveiling the Universal Approximation

Introduction to Neural Networks with TensorFlow

The math behind neural networks forms the foundation of modern artificial intelligence. Neural networks, inspired by biological systems, utilize complex mathematical principles to model and approximate a wide range of functions. At the heart of this computational marvel lies the Universal Approximation Theorem, a powerful concept that underpins the capabilities of these networks. In this blog post, we’ll delve into the fascinating world of neural network mathematics, exploring activation functions, the Universal Approximation Theorem, and how tools like TensorFlow simplify these intricate calculations.

The Building Blocks: Neural Network Mathematics

To begin with, let’s examine the fundamental mathematical representation of a neural network. At its core, a simple feed-forward neural network with one hidden layer can be expressed as:

f(x) = σ(W2 · σ(W1 · x + b1) + b2)

In this equation, x represents the input vector, W1 and W2 are weight matrices, b1 and b2 are bias vectors, and σ denotes the activation function. This compact formula captures the essence of neural network computation: an affine transformation (weights and biases) followed by a non-linear activation, applied layer by layer.
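
To make the formula concrete, here is a minimal NumPy sketch of that forward pass; the layer sizes (3 inputs, 4 hidden units, 2 outputs) and the sigmoid activation are arbitrary choices made purely for illustration:

import numpy as np

def sigmoid(z):
    # Element-wise logistic activation
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Illustrative shapes: 3 inputs -> 4 hidden units -> 2 outputs
x  = rng.normal(size=3)        # input vector
W1 = rng.normal(size=(4, 3))   # first weight matrix
b1 = rng.normal(size=4)        # first bias vector
W2 = rng.normal(size=(2, 4))   # second weight matrix
b2 = rng.normal(size=2)        # second bias vector

# f(x) = sigma(W2 . sigma(W1 . x + b1) + b2)
hidden = sigmoid(W1 @ x + b1)
output = sigmoid(W2 @ hidden + b2)
print(output)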

The Power of Activation Functions

Moving on to activation functions, we find that they play a crucial role in introducing non-linearity to neural networks. As a result, these functions enable networks to model complex relationships in data. Let’s explore some common activation functions:

  1. Sigmoid function: σ(x) = 1 / (1 + e^(-x))
  2. Hyperbolic Tangent (tanh): tanh(x) = (e^x − e^(-x)) / (e^x + e^(-x))
  3. Rectified Linear Unit (ReLU): ReLU(x) = max(0, x)

Furthermore, each of these functions has unique properties that make them suitable for different types of neural network architectures and problem domains.
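
As a minimal sketch (written in plain NumPy rather than any particular framework), the three functions above can be defined and evaluated like this:

import numpy as np

def sigmoid(z):
    # Squashes inputs into (0, 1); often used for probabilities
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Squashes inputs into (-1, 1); a zero-centered cousin of the sigmoid
    return np.tanh(z)

def relu(z):
    # Passes positive values unchanged and clips negatives to zero
    return np.maximum(0.0, z)

z = np.linspace(-5, 5, 11)
print(sigmoid(z))
print(tanh(z))
print(relu(z))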

The Universal Approximation Theorem: A Game-Changer

Now, let’s turn our attention to the Universal Approximation Theorem (UAT), a cornerstone of neural network theory. In essence, this theorem states that a feed-forward network with a single hidden layer containing a finite number of neurons can approximate any continuous function on a compact domain to any desired degree of accuracy, provided the activation function is suitable (for example, non-constant, bounded, and continuous, as sigmoid and tanh are). Importantly, the theorem guarantees that such a network exists; it does not tell us how many neurons are needed or how to find their weights.

To illustrate this concept, consider the following Python code that demonstrates a simple approximation:

import numpy as np
import matplotlib.pyplot as plt

def target_function(x):
    return x * np.sin(x)

x = np.linspace(0, 10, 100)
y = target_function(x)

# One hidden layer of 10 tanh neurons with fixed random weights and biases
n_neurons = 10
np.random.seed(42)

weights = np.random.rand(n_neurons)
biases = np.random.rand(n_neurons)

# Hidden-layer activations for every input point (shape: 100 x 10)
neurons = np.tanh(weights * x.reshape(-1, 1) + biases)

# Fit the output-layer coefficients by least squares
coefficients = np.linalg.lstsq(neurons, y, rcond=None)[0]
y_approx = neurons @ coefficients

plt.plot(x, y, label=r"Target Function: $f(x) = x\sin(x)$")
plt.plot(x, y_approx, label="Neural Network Approximation")
plt.legend()
plt.show()

This code shows how even a very simple one-hidden-layer network can approximate a non-trivial function. Note that only the output coefficients are trained here, by least squares over fixed random tanh features, yet the fit is already close, which illustrates the Universal Approximation Theorem in practice.
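
As a quick sanity check (a minimal sketch that assumes the arrays y and y_approx from the snippet above are still in scope), you can quantify the fit with a mean squared error:

# Mean squared error between the target values and the approximation
mse = np.mean((y - y_approx) ** 2)
print(f"Mean squared error: {mse:.6f}")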

TensorFlow: Simplifying Neural Network Complexity

Finally, let’s explore how TensorFlow, a popular machine learning library, abstracts away much of the mathematical complexity involved in implementing neural networks. For instance, TensorFlow provides high-level APIs that allow developers to focus on model architecture rather than intricate calculations.

Here’s a simple example of how TensorFlow simplifies neural network implementation:

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers, models

# Two hidden layers of 10 tanh units each, plus one linear output unit
model = models.Sequential([
    layers.Dense(10, activation='tanh', input_shape=(1,)),
    layers.Dense(10, activation='tanh'),
    layers.Dense(1)
])

model.compile(optimizer='adam', loss='mse')

# Training data: the same target function f(x) = x * sin(x)
x_train = np.linspace(0, 10, 100).reshape(-1, 1)  # column vector of inputs
y_train = x_train * np.sin(x_train)
model.fit(x_train, y_train, epochs=500, verbose=0)

y_pred = model.predict(x_train)

plt.plot(x_train, y_train, label='True function')
plt.plot(x_train, y_pred, label='Neural network approximation')
plt.legend()
plt.show()
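
As an optional follow-up, a minimal sketch reusing the model, x_train, and y_train defined above reports the final training loss, which is the mean squared error we compiled with:

# Returns the compiled loss (MSE) evaluated on the training grid
final_mse = model.evaluate(x_train, y_train, verbose=0)
print(f"Final training MSE: {final_mse:.6f}")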

In conclusion, the math behind neural networks, while complex, provides a solid foundation for the incredible capabilities of modern AI systems. By understanding concepts like activation functions and the Universal Approximation Theorem, we gain insight into how these networks can model intricate relationships in data. Moreover, tools like TensorFlow make it possible for developers to harness this power without getting bogged down in mathematical intricacies.

For more information on neural network mathematics, you can visit TensorFlow’s official documentation.
