![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5738974/1200px-NumPy_logo.svg.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5738984/python-logo-master-v3-TM.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5738985/pandas_logo.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5738987/250px-Jupyter_logo.svg.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5738988/1200px-Scikit_learn_logo_small.svg.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5738991/keras-logo-2018-large-1200.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5738992/tensorflow-logo.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5738994/8cbf4a7ecbe7c6b573f98eeccb0f7584.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739001/8dce97d5544327cb2ffd93ed1a636d1386ae6c38.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739014/ml_map.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739036/Cover.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739038/LSTM3-chain.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739045/A-general-model-of-a-deep-neural-network-It-consists-of-an-input-layer-some-here-two.png)
Candidate Bot
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739087/candidate-bot.jpg)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739094/sonderbot.png)
Scrape
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739122/jazz_candidates.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739162/candidate_info.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739163/candidate-folderes.png)
Preprocess
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739162/candidate_info.png)
Structured data
Extract city, province, source, and application context
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739213/one-hot.png)
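The categorical fields above can be turned into 0/1 indicator columns with one-hot encoding. A minimal sketch with pandas; the field names and values here are made up for illustration:

```python
import pandas as pd

# Hypothetical structured fields extracted from each application
candidates = pd.DataFrame({
    "city": ["Toronto", "Vancouver", "Toronto"],
    "source": ["referral", "job_board", "referral"],
})

# One-hot encode each categorical column into 0/1 indicator columns
encoded = pd.get_dummies(candidates, columns=["city", "source"])
print(encoded.columns.tolist())
# ['city_Toronto', 'city_Vancouver', 'source_job_board', 'source_referral']
```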
Text data
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739162/candidate_info.png)
TF-IDF Vectorization
[0.14, 0, 0, 0.34, 0, 0, 0.33, 0, 0.21, ...]
PCA Dimensionality Reduction
[0.91, 0.03, 0.44, 0.82]
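The TF-IDF → PCA pipeline above can be sketched with scikit-learn. The documents here are toy examples, and the TF-IDF matrix is densified before PCA (on large sparse matrices, `TruncatedSVD` is the usual alternative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA

docs = [
    "python developer with machine learning experience",
    "frontend developer javascript react",
    "data scientist python pandas numpy",
]

# TF-IDF turns each document into a sparse weighted term vector
tfidf = TfidfVectorizer().fit_transform(docs)

# PCA compresses those high-dimensional vectors into a few dense features
reduced = PCA(n_components=2).fit_transform(tfidf.toarray())
print(reduced.shape)  # (3, 2)
```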
[0, 0, 0, 1, 1, 0, 1, 0.91, 0.03, 0.44, 0.82, 0.33, 0.10, 0.22, 0.81, 0.72, 0.43, 0.80 , 0.01]
Train / Predict
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739267/candidate_ratings.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739269/candidate_bot.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739272/model.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739274/candidate-folderes.png)
Train with existing
candidates
Predict with new candidates
Classification
Reinforcement Learning
Past data
resume
cover letter
Hired / Not Hired
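The classification setup above (train on past candidates' feature vectors with hired / not-hired labels, then predict on new ones) can be sketched with scikit-learn. The feature vectors and labels here are invented toy data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy feature vectors (one-hot + reduced text features) for past candidates
X_train = np.array([
    [0, 1, 0.91, 0.03],
    [1, 0, 0.10, 0.80],
    [0, 1, 0.44, 0.22],
    [1, 0, 0.05, 0.72],
])
y_train = np.array([1, 0, 1, 0])  # 1 = hired, 0 = not hired

# Train with existing candidates
model = LogisticRegression().fit(X_train, y_train)

# Predict with a new, unseen candidate
new_candidate = np.array([[0, 1, 0.85, 0.10]])
print(model.predict(new_candidate))
```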
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739339/eegs-c.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739272/model.png)
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739345/eegs-bw.png)
Future data
resume
cover letter
Hired / Not Hired
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739358/robot.png)
1. Observe state
- Who works here
- Who is applying
2. Predict
- What long-term effect each action would have on the company
3. Act
- Hire / fire
4. Remember
- Record the initial state, resulting state, and immediate effect on the company
5. Train
- Pick samples from memory and train the AI to better predict the long-term effect of the action taken
| Initial State | Action | Result State | Immediate Reward |
|---|---|---|---|
| S1 | A1 | S2 | +$1000 |
| S2 | A2 | S3 | -$520 |
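The memory table above can be kept as a simple replay buffer that training samples from at random. A minimal sketch; the bounded `deque` and batch size are implementation choices, not from the original deck:

```python
import random
from collections import deque

# Bounded replay memory: oldest experiences fall off when it fills up
memory = deque(maxlen=10000)

# Each entry mirrors one row of the table above
memory.append({"state1": "S1", "action": "A1", "state2": "S2", "reward": 1000})
memory.append({"state1": "S2", "action": "A2", "state2": "S3", "reward": -520})

# Training draws a random batch so consecutive steps aren't correlated
batch = random.sample(list(memory), k=2)
print(len(batch))  # 2
```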
![](https://s3.amazonaws.com/media-p.slid.es/uploads/664662/images/5739358/robot.png)
S1
{
A1: "+$1,111,111",
A2: "-$222,222",
A3: "$3"
}
Long term reward
longTermReward_A1 = immediateReward_1 + 0.9(immediateReward_2 + 0.9(immediateReward_3 + ...))

longTermReward_A1 = immediateReward_1 + 0.9(longTermReward_A2)
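The recursion above can be computed by folding the immediate rewards from the last step back to the first. A small sketch with the deck's 0.9 discount factor; the reward values are made up:

```python
def long_term_reward(immediate_rewards, discount=0.9):
    """Fold immediate rewards from the last step back to the first:
    r1 + discount * (r2 + discount * (r3 + ...))"""
    total = 0.0
    for reward in reversed(immediate_rewards):
        total = reward + discount * total
    return total

# 1000 + 0.9 * (-520 + 0.9 * 300)
print(long_term_reward([1000, -520, 300]))  # 775.0
```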
// memory = {state1: S1, action: A1, state2: S2, reward: 1000}
input = [S1]
output = bot.predict(S1)  // {A1: 1111111, A2: -222222, A3: 3}

// Q-learning target: immediate reward + 0.9 × best predicted value of the next state
output.A1 = 1000 + 0.9 * Math.max(...Object.values(bot.predict(S2)))

bot.train(input, output)
deck
By Rob McDiarmid