Scene 1 | Goal Supervised
Scene 2 | Goal Supervised
So notice that if we predict below our target, our next prediction needs to be higher.
Notice that the gradient is negative in this case.
Also notice that if we predict too much, our next prediction needs to be lower. The gradient is positive in this case.
So if we take our prediction and subtract the gradient from it, our next value will always move closer to the ideal solution.
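Here is a minimal sketch of that idea in Python, assuming a squared-error cost on a single prediction; the numbers are just illustrative:

def cost(prediction, target):
    return (prediction - target) ** 2

def gradient(prediction, target):
    # derivative of (prediction - target)^2 with respect to the prediction
    return 2 * (prediction - target)

target = 10.0
print(gradient(7.0, target))   # -6.0: too low, gradient is negative, so subtracting it pushes the prediction up
print(gradient(13.0, target))  #  6.0: too high, gradient is positive, so subtracting it pulls the prediction down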
Note that if the gradient step we subtract is too large, we will overshoot and never settle on a good solution.
In mathematical terms, the algorithm will never converge.
That's where the learning rate comes in: a small coefficient we multiply by the gradient so we step toward the solution gradually, ensuring the algorithm converges and learns.
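A minimal sketch of that update rule, with an assumed learning rate of 0.1 (not a value from the lesson):

target = 10.0
prediction = 0.0
learning_rate = 0.1

for step in range(50):
    grad = 2 * (prediction - target)                 # gradient of (prediction - target)^2
    prediction = prediction - learning_rate * grad   # take a small step against the gradient

print(round(prediction, 4))  # gradually converges toward 10.0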
Now that's gradient descent, but what about our weights and biases?
How does a full neural network architecture learn?
Sign up for more.
Scene 3 | Goal Supervised
Our input data does not change; it's the truth. We can't change the radius squared in this case. That's our x value.
Our slope, which you will see as w, is what our neural network can change in order to make better predictions and learn.
Finally, the b value is our y-intercept, which the neural net can change to shift the line to a better starting point.
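Putting those three pieces together, a minimal sketch of the model is just a line, prediction = w * x + b; the numbers below are illustrative:

def predict(x, w, b):
    # x: fixed input data, w: slope the network learns, b: y-intercept the network learns
    return w * x + b

print(predict(2.0, w=0.5, b=1.0))  # 2.0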
So let's say we have a house of 5 square meters costing $2M and one of 9 square meters costing $2.5M. What if we had 10 square meters? What's the price? Our neural network will create a general function that makes the optimal predictions for us given input and output data.
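As a worked sketch of that example, we can fit a straight line exactly through the two known houses and use it to price a 10 square meter house. A real network would learn w and b gradually instead of solving for them directly; this is only to show what a good w and b look like:

x1, y1 = 5, 2.0   # square meters, price in $M
x2, y2 = 9, 2.5

w = (y2 - y1) / (x2 - x1)  # slope: 0.125 ($M per square meter)
b = y1 - w * x1            # y-intercept: 1.375

print(w * 10 + b)          # 2.625 -> roughly $2.625M for 10 square meters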
So by changing our "w" value, we can see that the slope, or rise over run, changes to make better predictions.
But how good are those predictions? Our cost function tells us the error of our predictions through the equation:
Prediction - Actual Price.
We then square it so the error is always positive, no matter which side we miss on.
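A minimal sketch of that cost in Python:

def cost(prediction, actual):
    # squaring keeps the error positive and penalizes big misses more
    return (prediction - actual) ** 2

print(cost(2.625, 2.5))  # 0.015625
print(cost(1.0, 2.5))    # 2.25 -- a worse prediction gives a much larger cost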
So let's first manually change the weight to get the best outcome: we adjust the weight a little, then adjust the bias, and finally we get the right outcome!
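A sketch of that manual search, using the two houses from before; the candidate w and b values are my own guesses, not from the lesson:

data = [(5, 2.0), (9, 2.5)]  # (square meters, price in $M)

def total_cost(w, b):
    return sum((w * x + b - y) ** 2 for x, y in data)

for w, b in [(0.5, 0.0), (0.2, 1.0), (0.125, 1.375)]:
    print(w, b, round(total_cost(w, b), 4))
# The last pair gives a total cost of 0.0 -- the "right outcome": zero error on both houses.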
But wait, that was manual. How do you think a machine can do this automatically and learn from its mistakes using deep learning? How does your Tesla car learn? How do you automate reporting, and how do financial predictions actually work? Find out now.
Chapter 1 | Goal Supervised
A function is a block of code which only runs when it is called.
You can pass data, known as parameters, into a function.
A function can return data as a result.
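A minimal example of those three points:

def add(a, b):       # a and b are parameters
    return a + b     # the function returns data as a result

result = add(2, 3)   # the code only runs here, when the function is called
print(result)        # 5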
Chapter 1 | Intro
To create a function, use the def keyword: def Name():
Create a function named NeuralNet, give it a parameter, and call it.
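One possible answer to that exercise; the parameter name x is a placeholder I chose, since the original does not name it:

def NeuralNet(x):
    print("input received:", x)

NeuralNet(5)  # calling the function runs its body and prints: input received: 5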
Chapter 1 | Python, class intro
Chapter 5 | Forward propagation Intro
Chapter | Sigmoid Function
Chapter | Cost
Chapter | Error Calculation
https://www.surveymonkey.com/r/QY2W2C9
There are only 5 questions in the survey.