Stochastic Gradient Boosting Machines

Core Concepts

 

 

Daina Bouquin

Center for Astrophysics | Harvard & Smithsonian

 

I'm a librarian

 

MS Data Science

MS Library and Information Science

CAS Data Science

I don't like the "black box" metaphor in machine learning.

 

It's discouraging and misleading.

Literacy means being able to critically assess what you're presented with, learning to problem solve and actively contributing to a dialogue.

You can't do that if you believe something is unknowable.

"...a deep learning system is not a black box; even the development of such a system need not be a black box. The real challenge, however, is that both of these things are complex, and not necessarily well understood."

 

Dallas Card

The “black box” metaphor in machine learning

Stochastic Gradient Boosting Machines: not a black box

 

Can a set of weak learners create a single strong learner?

Yes. Boosting algorithms iteratively learn weak classifiers with respect to a distribution and add them to a final strong classifier

 

Boosting:  

ML ensemble method/metaheuristic

Helps with bias-variance tradeoff (reduces both)

 

A metaheuristic is a higher-level procedure or heuristic designed to find, generate, or select a heuristic (a partial search algorithm) that may provide a sufficiently good solution to an optimization problem, especially with incomplete or imperfect information or limited computational capacity.

Bias = error from erroneous assumptions in the learning algorithm

 

 

 

Variance = sensitivity to small fluctuations in the training set 

You don't want to model noise

high bias means you could miss relevant relations between features ∴ underfitting

high variance means you end up modeling the noise in the training set ∴ overfitting
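A quick way to see this tradeoff is to fit models of different flexibility to the same noisy data. The sketch below is not from the talk; the dataset, polynomial degrees, and scikit-learn helpers are illustrative assumptions.

```python
# Illustrative sketch (assumed example, not from the talk): comparing an
# underfit (high-bias) and an overfit (high-variance) model on noisy data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 1, 100)).reshape(-1, 1)
y = np.sin(2 * np.pi * X.ravel()) + rng.normal(scale=0.3, size=100)  # signal + noise
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for degree in (1, 4, 15):  # too rigid, reasonable, too flexible
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```

Typically the degree-1 fit misses the sine pattern entirely (underfitting), while the degree-15 fit chases the noise, so its test error climbs even as its training error falls (overfitting).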

 

Boosting algorithms:

Weak predictors are weighted in relation to their accuracy

Weighting decorrelates the predictors by focusing on regions missed by past predictors

New predictors learn from previous predictor mistakes

∴ take fewer iterations to converge

https://quantdare.com/what-is-the-difference-between-bagging-and-boosting/

Boosting means observations have an unequal probability of appearing in subsequent models

 

Observations with the highest error appear most often
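This re-weighting idea can be made concrete with a hand-rolled AdaBoost-style loop: each round, misclassified observations gain weight, so the next weak learner concentrates on the regions its predecessors missed. This is a sketch of the general mechanism only; the decision stumps, synthetic dataset, and 50 rounds are assumptions for illustration.

```python
# Minimal AdaBoost-style sketch: observations the previous stump got wrong
# receive more weight, so the next stump focuses on those regions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
y = np.where(y == 0, -1, 1)                      # labels in {-1, +1}
w = np.full(len(y), 1 / len(y))                  # start with equal weights
stumps, alphas = [], []

for m in range(50):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = np.sum(w * (pred != y)) / np.sum(w)          # weighted error of this weak learner
    alpha = 0.5 * np.log((1 - err) / (err + 1e-10))    # its say in the final vote
    w *= np.exp(-alpha * y * pred)                     # up-weight mistakes, down-weight hits
    w /= w.sum()
    stumps.append(stump)
    alphas.append(alpha)

# Final strong classifier: weighted vote of the weak learners
F = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
print("training accuracy:", np.mean(np.sign(F) == y))
```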

Ensembling

Bagging: reduces variance, handles overfitting, independent classifiers (e.g. Random Forest)

Boosting: reduces bias & variance, can overfit, sequential classifiers (e.g. Gradient Boosting)

Ensembling helps address the main causes of differences between actual and predicted values: variance and bias

(noise is essentially irreducible)
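As a concrete point of comparison (a sketch using scikit-learn estimators on an assumed synthetic dataset): a random forest averages independently grown trees, while a gradient boosting machine grows its trees sequentially.

```python
# Sketch: bagging (independent trees, averaged) vs. boosting (sequential trees).
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=1000, n_features=20, noise=10.0, random_state=0)

bagger = RandomForestRegressor(n_estimators=200, random_state=0)       # bagging
booster = GradientBoostingRegressor(n_estimators=200, random_state=0)  # boosting

for name, model in [("random forest", bagger), ("gradient boosting", booster)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```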

Boosting with Gradient Descent

gradient descent assuming a convex cost function

Local minimum must be a global minimum

 

 

 

Most common cost function is mean squared error

Too much random noise can be an issue with convex optimization.

Non-convex optimization options for boosting exist though e.g. BrownBoost

If you're worried about local minima, check out warm restarts (SGDR)

*The point of GD is to minimize the cost function*

(find the lowest error value/the deepest valley in that function)

https://hackernoon.com/gradient-descent-aynk-7cbe95a778da

The (negative) slope points toward the nearest valley

Choice of cost function will affect calculation of the gradient of each weight.

 

The cost function monitors the error on each training example

 

The derivative of the cost function with respect to the weight (the slope!) tells us which way to shift the weight to minimize the error for that training example 

 

This gives us direction

https://hackernoon.com/gradient-descent-aynk-7cbe95a778da
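Written out for the most common case, mean squared error, the slide's point looks like this (a sketch; the notation f_θ for the model and α for the learning rate is assumed here):

```latex
% Sketch: MSE cost, its gradient with respect to a weight, and the update rule.
J(\theta) = \frac{1}{2N}\sum_{i=1}^{N}\bigl(y_i - f_\theta(x_i)\bigr)^2
\qquad
\frac{\partial J}{\partial \theta_j}
  = -\frac{1}{N}\sum_{i=1}^{N}\bigl(y_i - f_\theta(x_i)\bigr)\,
    \frac{\partial f_\theta(x_i)}{\partial \theta_j}
\qquad
\theta_j \leftarrow \theta_j - \alpha\,\frac{\partial J}{\partial \theta_j}
```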

GD optimizers use a technique called “annealing” to adjust the learning rate α (how small or large a step to take) as training progresses

 

The cost should decrease at each iteration as θ (the weights) is updated against the gradient

if alpha is too large we overshoot the min

if alpha is too small we take too many iterations to find the min
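A toy sketch of that tradeoff on the 1-D convex cost J(θ) = θ², with assumed values of α:

```python
# Toy sketch of the alpha tradeoff on J(theta) = theta**2 (values are assumptions).
def gradient_descent(alpha, theta=5.0, steps=20):
    for _ in range(steps):
        grad = 2 * theta          # dJ/dtheta for J = theta**2
        theta = theta - alpha * grad
    return theta                  # the minimum is at theta = 0

print(gradient_descent(alpha=1.1))    # too large: overshoots and diverges
print(gradient_descent(alpha=0.001))  # too small: barely moves in 20 steps
print(gradient_descent(alpha=0.1))    # reasonable: ends up close to 0
```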

Example:

The black line represents a non-linear cost function.

If our parameters are initialized to the blue dot, we need a way to move around parameter space to the lowest point.

https://medium.com/38th-street-studios/exploring-stochastic-gradient-descent-with-restarts-sgdr-fa206c38a74e

Then just do it stochastically

 

With every GD iteration shuffle the training set and pick a random training example

 

Since you’re only using one training example, the path to the minimum will be all zig-zag crazy

 

(Imagine trying to find the fastest way down a hill

only you can't see all of the curves in the hill)

You may want to consider mini-batching rather than a purely stochastic approach with very large datasets
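A minimal sketch of that loop for a linear model with an MSE cost: shuffling each epoch and taking a batch size of 1 gives the stochastic version, while a larger batch size gives mini-batch updates (the dataset and step size here are assumptions).

```python
# Sketch of stochastic / mini-batch gradient descent for a linear model with MSE.
import numpy as np

def sgd_linear(X, y, alpha=0.01, epochs=20, batch_size=1):
    rng = np.random.RandomState(0)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        idx = rng.permutation(len(y))              # shuffle the training set
        for start in range(0, len(y), batch_size):
            batch = idx[start:start + batch_size]  # one example (or a mini-batch)
            Xb, yb = X[batch], y[batch]
            grad = -2 * Xb.T @ (yb - Xb @ w) / len(batch)  # MSE gradient on the batch
            w -= alpha * grad
    return w

X = np.random.RandomState(1).randn(200, 3)
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * np.random.RandomState(2).randn(200)
print(sgd_linear(X, y, batch_size=1))   # stochastic: noisy, zig-zag path
print(sgd_linear(X, y, batch_size=32))  # mini-batch: smoother updates
```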

Gradient boosting machine - Linear Regression Example

GBM can be configured with different base learners (e.g. tree, stump, linear model)

 

basic assumption: sum of residuals = 0

leverage the pattern in the residuals to strengthen a weak prediction model until the residuals become randomly distributed

if you keep going you risk overfitting

Algorithmically we are minimizing our cost function such that the test cost reaches its minimum

We adjust our predictions using the fit on the residuals, scaled by an appropriately chosen value of alpha

We are doing supervised learning here
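Putting the pieces above together, here is a hand-rolled sketch of that residual-fitting loop with regression stumps as the base learner; the synthetic data, 100 rounds, and α = 0.1 are assumptions for illustration, not the configuration used in the talk.

```python
# Sketch of the residual-fitting loop: each stump is fit to what the model
# currently gets wrong, and the prediction is nudged by alpha * its output.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(300, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.2, size=300)

alpha = 0.1                        # learning rate
F = np.full_like(y, y.mean())      # initial prediction: the mean of y
learners = []

for m in range(100):
    residuals = y - F                                  # what the current model misses
    stump = DecisionTreeRegressor(max_depth=1).fit(X, residuals)
    F += alpha * stump.predict(X)                      # nudge predictions toward y
    learners.append(stump)

print("remaining mean squared residual:", np.mean((y - F) ** 2))
```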

 

you can check for overfitting using

k-fold cross-validation

a resampling procedure used to evaluate machine learning models on a limited data sample
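For example (a sketch with an assumed scikit-learn GBM and k = 5):

```python
# Sketch: evaluating a GBM with 5-fold cross-validation.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold, cross_val_score

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.1, random_state=0)

scores = cross_val_score(model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
print("fold scores:", scores)
print("mean R^2 across folds:", scores.mean())  # low or unstable fold scores hint at overfitting
```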

Pseudocode for a generic gradient boosting method

http://statweb.stanford.edu/~jhf/ftp/trebst.pdf

*MATH*
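For reference, the generic algorithm from the Friedman paper linked above can be restated as follows (a sketch of the standard statement; L is the loss, h_m the weak learner fit at step m, and ρ_m its step size):

```latex
\begin{aligned}
&F_0(x) = \arg\min_{\rho} \sum_{i=1}^{N} L\bigl(y_i, \rho\bigr)\\
&\text{For } m = 1, \dots, M:\\
&\quad \tilde{y}_i = -\left[\frac{\partial L\bigl(y_i, F(x_i)\bigr)}{\partial F(x_i)}\right]_{F = F_{m-1}},
  \quad i = 1, \dots, N \quad \text{(pseudo-residuals)}\\
&\quad \text{fit the weak learner } h_m(x) \text{ to } \{(x_i, \tilde{y}_i)\}\\
&\quad \rho_m = \arg\min_{\rho} \sum_{i=1}^{N} L\bigl(y_i, F_{m-1}(x_i) + \rho\, h_m(x_i)\bigr)\\
&\quad F_m(x) = F_{m-1}(x) + \rho_m\, h_m(x)
\end{aligned}
```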

StackOverflow fixed my problems

https://bit.ly/2FwXUAF

(there are a lot of people who can help you if you're lost)

Please

make your code citable

https://guides.github.com/activities/citable-code/

 

 

Stochastic Gradient Boosting Machines: Core Concepts

By Daina Bouquin


Invited talk at the CfA's Classification in the Golden Era of X-ray Catalogs Workshop May 4, 2018
