
k-Nearest Neighbors: Implementing the Algorithm in Python


k-Nearest Neighbors (k-NN) is a fundamental machine learning algorithm that every data scientist should master. This powerful classification technique relies on the principle of similarity, where data points are classified based on their proximity to other known points. In this blog post, we’ll explore the k-NN algorithm, dive into its Python implementation, and discuss key concepts such as distance metrics and the selection of ‘k’. By the end, you’ll have a solid understanding of how to leverage k-NN for your own machine learning projects.

Understanding the k-Nearest Neighbors Algorithm

First and foremost, let’s unpack the core concept of k-NN. This algorithm operates on a simple yet effective premise: a data point is likely to belong to the same class as its nearest neighbors. Here’s how it works:

  1. For a given data point, the algorithm calculates its distance from all other points in the dataset.
  2. It then identifies the ‘k’ closest points, where ‘k’ is a user-defined parameter.
  3. Finally, it assigns the most common class among these ‘k’ neighbors to the new data point.

Consequently, the choice of ‘k’ plays a crucial role in the algorithm’s performance. A small ‘k’ value might lead to overfitting, while a large ‘k’ could result in underfitting. Therefore, selecting an optimal ‘k’ is essential for achieving accurate classifications.
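To make the impact of ‘k’ concrete, here is a minimal, self-contained sketch (the one-dimensional data and labels are invented purely for illustration): with k = 1 the prediction follows a single noisy neighbor, while k = 3 smooths the decision out.

from collections import Counter

# Hypothetical 1-D data: the lone 'B' point at x=3 sits inside a cluster of 'A' points,
# so it behaves like noise that a very small 'k' will latch onto
points = [(1, 'A'), (2, 'A'), (3, 'B'), (6, 'B'), (7, 'B')]
query = 3.1

# Sort training points by their distance to the query
by_distance = sorted(points, key=lambda p: abs(p[0] - query))

for k in (1, 3):
    votes = Counter(label for _, label in by_distance[:k])
    print(k, votes.most_common(1)[0][0])  # k=1 -> 'B' (overfits the noisy point), k=3 -> 'A'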

Implementing Distance Metrics in Python

Now, let’s delve into the Python implementation of k-NN. One of the key components is the distance metric used to measure the similarity between data points. The most commonly used metric is Euclidean distance. Here’s how we can implement it in Python:

import math

# The 'euclidean_distance' function computes the Euclidean distance between two points
def euclidean_distance(point1, point2):
    squares = [(p - q) ** 2 for p, q in zip(point1, point2)] # Calculate squared distance for each dimension
    return math.sqrt(sum(squares)) # Return the square root of the sum of squares

# Test it
point1 = (1, 2) # The coordinates of the first point
point2 = (4, 6) # The coordinates of the second point
print(euclidean_distance(point1, point2)) # 5.0

This function calculates the Euclidean distance between two points in any number of dimensions. It’s worth noting that other distance metrics, such as Manhattan distance or Minkowski distance, can also be used depending on the specific requirements of your dataset.
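For reference, here is one possible way those alternative metrics could be written; the function names are illustrative, and either function can be passed to the classifier built below in place of euclidean_distance.

# Manhattan (L1) distance: the sum of absolute differences per dimension
def manhattan_distance(point1, point2):
    return sum(abs(p - q) for p, q in zip(point1, point2))

# Minkowski distance of order p: p=1 reduces to Manhattan, p=2 to Euclidean
def minkowski_distance(point1, point2, p=2):
    return sum(abs(a - b) ** p for a, b in zip(point1, point2)) ** (1 / p)

print(manhattan_distance((1, 2), (4, 6)))       # 7
print(minkowski_distance((1, 2), (4, 6), p=2))  # 5.0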

Building the k-NN Classifier

With our distance metric in place, we can now implement the complete k-NN classifier. Here’s a Python implementation:

from collections import Counter

def k_nearest_neighbors(data, query, k, distance_fn):
    neighbor_distances_and_indices = []
    
    # Compute distance from each training data point
    for idx, example in enumerate(data):
        distance = distance_fn(example[0], query)  # example is (features, label); example[0] is the feature tuple
        neighbor_distances_and_indices.append((distance, idx))
    
    # Sort array by distance
    sorted_neighbor_distances_and_indices = sorted(neighbor_distances_and_indices)
    
    # Select k closest data points
    k_nearest_distances_and_indices = sorted_neighbor_distances_and_indices[:k]
    
    # Obtain class labels for those k data points
    k_nearest_labels = [data[i][1] for distance, i in k_nearest_distances_and_indices]

    # Majority vote
    most_common = Counter(k_nearest_labels).most_common(1)
    return most_common[0][0] # Return the label of the class that receives the majority vote

This implementation takes a dataset, a query point, the value of ‘k’, and a distance function as inputs. It then returns the predicted class for the query point based on its k-nearest neighbors.

# Define the dataset (training set)
# Each element of the dataset is a tuple (features, label)
data = [
    ((2, 3), 0),
    ((5, 4), 0),
    ((9, 6), 1),
    ((4, 7), 0),
    ((8, 1), 1),
    ((7, 2), 1)
]
query = (5, 3)  # test point

# Perform the classification
predicted_label = k_nearest_neighbors(data, query, k=3, distance_fn=euclidean_distance)
print(predicted_label)  # Expected class label is 0
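As an optional sanity check (assuming scikit-learn is available; it is not required by the from-scratch code above), the same toy dataset can be run through scikit-learn’s KNeighborsClassifier, which should agree with our result.

from sklearn.neighbors import KNeighborsClassifier

X = [features for features, label in data]  # [(2, 3), (5, 4), ...]
y = [label for features, label in data]     # [0, 0, 1, 0, 1, 1]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X, y)
print(model.predict([query])[0])  # Should also print 0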

Optimizing k-NN Performance

To enhance the performance of your k-NN classifier, consider the following tips:

  1. Feature scaling: Normalize your features to ensure all dimensions contribute equally to the distance calculation.
  2. Cross-validation: Use techniques like k-fold cross-validation to find the optimal value of ‘k’.
  3. Dimensionality reduction: Apply methods like PCA to reduce the number of features and mitigate the curse of dimensionality.
  4. Weighted voting: Assign weights to neighbors based on their distance, giving closer neighbors more influence in the classification decision (see the sketch after this list).
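As a rough sketch of the last idea, here is one possible distance-weighted variant of the classifier above; the 1 / (distance + eps) weighting is just one common choice, and the small eps guards against division by zero when a neighbor coincides with the query.

def weighted_k_nearest_neighbors(data, query, k, distance_fn, eps=1e-9):
    # Same neighbor search as before, keeping (distance, label) pairs
    distances = sorted(
        (distance_fn(features, query), label) for features, label in data
    )
    # Each of the k nearest neighbors contributes 1 / (distance + eps) to its class
    class_weights = {}
    for distance, label in distances[:k]:
        class_weights[label] = class_weights.get(label, 0.0) + 1.0 / (distance + eps)
    # Return the class with the largest accumulated weight
    return max(class_weights, key=class_weights.get)

print(weighted_k_nearest_neighbors(data, query, k=3, distance_fn=euclidean_distance))  # 0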

Conclusion

In conclusion, the k-Nearest Neighbors algorithm is a versatile and intuitive machine learning technique that can be easily implemented in Python. By understanding its core concepts and following the implementation steps outlined in this post, you’re now equipped to apply k-NN to your own classification problems. Remember to experiment with different distance metrics, optimize your ‘k’ value, and consider performance enhancements to get the most out of this powerful algorithm.

For more information on machine learning algorithms and their implementations, check out this comprehensive guide on supervised learning techniques.

