Overview

This page discusses the building blocks of a learning algorithm.

As stated in the popular The Hundred-Page Machine Learning Book, each learning algorithm usually consists of three parts:

  1. a loss function

  2. an optimization criterion based on the loss function (a cost function, for example)

  3. an optimization routine that leverages training data to find a solution to the optimization criterion.

These are the building blocks of any learning algorithm. Some algorithms were designed to explicitly optimize a specific criterion (linear regression, logistic regression, and SVM, for example), while others, including decision tree learning and kNN, optimize a criterion only implicitly. Decision tree learning and kNN are among the oldest machine learning algorithms and were invented experimentally, based on intuition, without a specific global optimization criterion in mind; as often happens in the history of science, the optimization criteria were developed later to explain why those algorithms work.

An overview of these building blocks is given below:

Optimization Criterion

Cost functions, also known as loss functions or objective functions, quantify the error or discrepancy between the values a model predicts and the values actually observed in the dataset. The choice of a cost function depends on the specific type of problem you are trying to solve. Here are some common examples of cost functions used in data science (a few of them are sketched in code after this list):

  1. Mean Squared Error (MSE):

    • Use Case: Regression problems.

    • Formula: MSE = (1/n) Σ (y_i - ŷ_i)^2

    • Description: Measures the average squared difference between the predicted values (ŷ_i) and the actual values (y_i). Penalizes larger errors more.

  2. Mean Absolute Error (MAE):

    • Use Case: Regression problems.

    • Formula: MAE = (1/n) Σ |y_i - ŷ_i|

    • Description: Measures the average absolute difference between predicted and actual values. Less sensitive to outliers compared to MSE.

  3. Binary Cross-Entropy (Log Loss):

    • Use Case: Binary classification problems.

    • Formula: BCE = -Σ (y_i * log(ŷ_i) + (1 - y_i) * log(1 - ŷ_i))

    • Description: Evaluates the difference between predicted probabilities (ŷ_i) and actual binary labels (y_i). Commonly used with logistic regression and neural networks.

  4. Categorical Cross-Entropy (Multiclass Log Loss):

    • Use Case: Multiclass classification problems.

    • Formula: Cross-Entropy = -Σ (y_i * log(ŷ_i))

    • Description: Generalization of binary cross-entropy for multiple classes. Measures the dissimilarity between predicted class probabilities and actual class labels.

  5. Hinge Loss (SVM Loss):

    • Use Case: Support Vector Machines (SVM) for binary classification.

    • Formula: Hinge Loss = Σ max(0, 1 - y_i * ŷ_i)

    • Description: Encourages the correct classification of data points while allowing a margin of error for some misclassified points.

  6. Huber Loss:

    • Use Case: Regression problems, robust to outliers.

    • Formula: Huber Loss = Σ { 0.5 * (y_i - ŷ_i)^2 if |y_i - ŷ_i| ≤ δ; δ * (|y_i - ŷ_i| - 0.5 * δ) otherwise }

    • Description: Combines the properties of MSE and MAE by using a quadratic loss for small errors and linear loss for large errors. More robust to outliers.

  7. Kullback-Leibler Divergence (KL Divergence):

    • Use Case: Used in probabilistic models and when comparing probability distributions.

    • Formula: KL Divergence = Σ (p(x) * log(p(x) / q(x)))

    • Description: Measures the difference between two probability distributions, p(x) and q(x). Used in tasks like variational autoencoders and generative adversarial networks (GANs).

  8. Custom Loss Functions:

    • Use Case: Tailored to specific problems.

    • Description: In some cases, custom loss functions are designed to address the unique requirements of a particular problem. For example, in recommendation systems, a loss function might be created to optimize recommendations based on user behavior.

The choice of a cost function depends on the nature of your data, the type of problem you're trying to solve, and your modeling goals. Selecting an appropriate cost function is a crucial step in designing and training machine learning models.
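
To make a few of the formulas above concrete, here is a minimal NumPy sketch of MSE, MAE, Huber loss, and binary cross-entropy. The function names, the δ value, and the toy arrays are illustrative choices rather than any library's API, and the losses are averaged over the samples rather than summed.

```python
import numpy as np

def mse(y, y_hat):
    # Mean Squared Error: average squared residual, penalizes large errors more.
    return np.mean((y - y_hat) ** 2)

def mae(y, y_hat):
    # Mean Absolute Error: average absolute residual, less sensitive to outliers.
    return np.mean(np.abs(y - y_hat))

def huber(y, y_hat, delta=1.0):
    # Quadratic for residuals up to delta, linear beyond it.
    r = np.abs(y - y_hat)
    return np.mean(np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta)))

def binary_cross_entropy(y, p_hat, eps=1e-12):
    # y holds 0/1 labels, p_hat holds predicted probabilities; clip to avoid log(0).
    p_hat = np.clip(p_hat, eps, 1 - eps)
    return -np.mean(y * np.log(p_hat) + (1 - y) * np.log(1 - p_hat))

# Regression example
y_true, y_pred = np.array([3.0, -0.5, 2.0, 7.0]), np.array([2.5, 0.0, 2.0, 8.0])
print(mse(y_true, y_pred), mae(y_true, y_pred), huber(y_true, y_pred))

# Binary classification example
labels, probs = np.array([1, 0, 1, 1]), np.array([0.9, 0.2, 0.7, 0.4])
print(binary_cross_entropy(labels, probs))
```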

In the context of optimization problems, such as those encountered in machine learning and mathematical optimization, the terms "convex" and "non-convex" refer to the shape of the cost or objective function. These terms describe how the function's curvature changes as you move through the parameter space. Let's break down what each of these terms means:

  1. Convex Cost Function:

    • Definition: A cost function is convex if, for any two points on its graph, the straight line (chord) connecting them lies on or above the graph at every point in between; this definition is checked numerically in the sketch below.

    • Characteristics:

      • Has a single global minimum (and no other local minima).

      • Gradient descent and similar optimization algorithms can efficiently find the global minimum.

      • Converges reliably to the optimal solution.

  2. Non-Convex Cost Function:

    • Definition: A cost function is non-convex if the straight line connecting two points on the graph may lie below the function at some intermediate points.

    • Characteristics:

      • Can have multiple local minima, making it challenging to find the global minimum.

      • Gradient-based optimization methods may get stuck in local minima.

      • Requires careful initialization and possibly more sophisticated optimization techniques, such as random restarts or simulated annealing, to avoid suboptimal solutions.

In machine learning, when training models, you often encounter non-convex cost functions because the relationships between model parameters and the objective function can be complex. Deep learning, for example, involves optimizing highly non-convex functions due to the complexity of neural network architectures.

The convexity or non-convexity of the cost function has significant implications for optimization. Convex problems are generally easier to solve because they have a single global minimum, while non-convex problems can be more challenging and may require more sophisticated optimization techniques to find good solutions.
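
The chord definition given above can be checked numerically. The sketch below samples random pairs of points and tests whether the chord ever dips below the function; it is only a heuristic check (it can miss violations), and both example functions are toy stand-ins chosen for illustration: a parabola for a convex loss and a wavy function for a non-convex one.

```python
import numpy as np

def chord_above(f, lo=-3.0, hi=3.0, trials=10_000, seed=0):
    # Convexity test from the definition: f(t*a + (1-t)*b) <= t*f(a) + (1-t)*f(b)
    # for points a, b and any t in [0, 1]. Returning True only means no
    # counterexample was found among the sampled chords.
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        a, b = rng.uniform(lo, hi, size=2)
        t = rng.uniform()
        if f(t * a + (1 - t) * b) > t * f(a) + (1 - t) * f(b) + 1e-9:
            return False  # found a chord that dips below the function
    return True

convex_loss = lambda w: (w - 1.0) ** 2                    # parabola: single global minimum
non_convex_loss = lambda w: np.sin(3 * w) + 0.1 * w ** 2  # wavy: multiple local minima

print(chord_above(convex_loss))      # True
print(chord_above(non_convex_loss))  # False (a violating chord is found)
```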

Optimizers

Optimizers are algorithms used in data science and machine learning to adjust the parameters of a model during training to minimize the error or loss function. Each optimizer has its own way of updating these parameters, and they come with their own advantages and disadvantages.

Here are some common optimizers explained in simple terms with their pros and cons:

  1. Gradient Descent:

    • How it works: Gradient Descent computes the gradient (slope) of the loss function and takes small steps in the direction that reduces the loss.

    • Pros: Simple, widely used, and works well in many cases.

    • Cons: Can be slow to converge to the optimal solution, especially in high-dimensional spaces.

  2. Stochastic Gradient Descent (SGD):

    • How it works: Similar to Gradient Descent but updates the parameters using one randomly chosen training example (or a very small subset) at a time rather than the full dataset.

    • Pros: Faster convergence, works well with large datasets, and helps escape local minima.

    • Cons: Noisy updates can lead to oscillations, and it may require tuning of the learning rate.

  3. Mini-Batch Gradient Descent:

    • How it works: A compromise between Gradient Descent and SGD, where updates are made using small batches of data.

    • Pros: Faster than Gradient Descent, less noisy than SGD, and suitable for a wide range of problems.

    • Cons: Still requires tuning of the learning rate, and convergence depends on the batch size.

    In Stochastic Gradient Descent the parameters are updated after every single observation (each weight update is called an iteration), whereas in Mini-Batch Gradient Descent we take a small subset of the data and update the parameters after each subset. A minimal sketch of these gradient descent variants appears after this list.

  4. Adam (Adaptive Moment Estimation):

    • How it works: Combines ideas from RMSprop and Momentum methods by adapting the learning rates for each parameter based on past gradients.

    • Pros: Fast convergence, works well with noisy data, and requires less hyperparameter tuning.

    • Cons: Might not perform as well on all problem types and could converge to suboptimal solutions in some cases.

  5. RMSprop (Root Mean Square Propagation):

    • How it works: Adjusts the learning rate for each parameter based on the magnitude of recent gradients.

    • Pros: Effective in handling non-stationary or noisy environments, requires less tuning compared to traditional SGD.

    • Cons: Not suitable for all problem types and may converge to local minima.

  6. Adagrad (Adaptive Gradient Algorithm):

    • How it works: Adapts the learning rate for each parameter based on the historical gradient information.

    • Pros: Automatically adapts learning rates, which can be beneficial for sparse data.

    • Cons: Learning rates can shrink toward zero over time because the accumulated squared gradients only grow, which can stall convergence before the optimal solution is reached.

  7. Nadam:

    • How it works: Combines Nesterov Accelerated Gradient (NAG) with Adam, applying Nesterov's look-ahead momentum to Adam's adaptive per-parameter updates.

    • Pros: Fast convergence, good for complex models, and less sensitive to the choice of hyperparameters.

    • Cons: Computationally more expensive than some other optimizers.

The choice of optimizer depends on the specific problem you are solving, the dataset size, and the architecture of your neural network. It's common to experiment with different optimizers and hyperparameters to find the one that works best for your particular task.
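
As a concrete illustration of the first three optimizers, here is a minimal NumPy sketch of mini-batch gradient descent for linear regression with an MSE loss; setting batch_size to 1 recovers SGD, and setting it to the full dataset recovers batch Gradient Descent. The learning rate, epoch count, and toy data are arbitrary illustrative choices.

```python
import numpy as np

def minibatch_gradient_descent(X, y, lr=0.01, epochs=100, batch_size=32, seed=0):
    # Linear regression (w, b) trained with mini-batch gradient descent on MSE.
    # batch_size = len(X) -> batch gradient descent; batch_size = 1 -> SGD.
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        idx = rng.permutation(n)                        # shuffle each epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            Xb, yb = X[batch], y[batch]
            err = Xb @ w + b - yb                       # residuals on this mini-batch
            w -= lr * (2 / len(batch)) * Xb.T @ err     # gradient of MSE w.r.t. w
            b -= lr * (2 / len(batch)) * err.sum()      # gradient of MSE w.r.t. b
    return w, b

# Toy data: y = 3x + 1 plus noise.
rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(200, 1))
y = 3 * X[:, 0] + 1 + 0.1 * rng.normal(size=200)
print(minibatch_gradient_descent(X, y, lr=0.1, epochs=200))  # w ≈ [3.], b ≈ 1.0
```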

Parametric and non-parametric models

Parametric and non-parametric models are two fundamental approaches to modeling in statistics and machine learning. They differ in how they represent the underlying data distribution and make assumptions about the functional form of that distribution.

Parametric Models:

  1. Definition: Parametric models make specific assumptions about the functional form or shape of the data distribution. These assumptions are typically described by a fixed number of parameters.

  2. Characteristics:

    • The number of parameters in a parametric model is fixed, regardless of the amount of data available.

    • Examples of parametric models include linear regression, logistic regression, Gaussian Naive Bayes, and parametric probability distributions like the normal distribution.

    • Parametric models are computationally efficient and require relatively little data to estimate their parameters.

    • They are often interpretable because the model structure is well-defined.

    • However, they may not perform well when the assumptions about the data distribution are incorrect.

Non-Parametric Models:

  1. Definition: Non-parametric models make fewer assumptions about the functional form of the data distribution. Instead, they aim to let the data determine the model's complexity.

  2. Characteristics:

    • The number of parameters in a non-parametric model can grow with the amount of data. As more data becomes available, non-parametric models can become more complex and flexible.

    • Examples of non-parametric models include k-nearest neighbors (KNN), decision trees, random forests, kernel support vector machines (SVMs), and kernel density estimation.

    • Non-parametric models are more flexible and can capture complex relationships in the data without strong assumptions about distribution shape.

    • They may require larger datasets to perform well, and their computational complexity can be higher.

    • Non-parametric models are often less interpretable because their structure is more data-driven.

Key Differences:

  1. Assumptions: Parametric models make specific assumptions about the data distribution, while non-parametric models make fewer assumptions and let the data dictate the model's complexity.

  2. Complexity: Parametric models have fixed complexity, while non-parametric models can adapt their complexity to the data.

  3. Interpretability: Parametric models are often more interpretable due to their well-defined structure, whereas non-parametric models can be more challenging to interpret because they are data-driven.

  4. Data Requirements: Parametric models can work well with relatively small datasets, while non-parametric models often require larger datasets to capture complex patterns.
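
The contrast can also be seen in code. In the minimal sketch below (a toy illustration, not a production implementation), linear regression compresses the training data into a fixed number of parameters, while k-nearest neighbors keeps the entire training set and consults it at prediction time.

```python
import numpy as np

# Parametric: linear regression via least squares.
# Whatever the dataset size, the fitted model is just the vector `theta`.
def fit_linear_regression(X, y):
    Xb = np.c_[np.ones(len(X)), X]                 # add intercept column
    theta = np.linalg.lstsq(Xb, y, rcond=None)[0]  # (d + 1) parameters
    return theta

def predict_linear(theta, X):
    return np.c_[np.ones(len(X)), X] @ theta

# Non-parametric: k-nearest neighbors regression.
# The "model" is the training data itself; complexity grows with the dataset.
def predict_knn(X_train, y_train, X_new, k=3):
    preds = []
    for x in X_new:
        dist = np.linalg.norm(X_train - x, axis=1)  # distance to every training point
        nearest = np.argsort(dist)[:k]
        preds.append(y_train[nearest].mean())
    return np.array(preds)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)

theta = fit_linear_regression(X, y)                     # 2 numbers summarize the data
print(predict_linear(theta, np.array([[0.5]])))
print(predict_knn(X, y, np.array([[0.5]]), k=5))        # needs all 100 training points
```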
