A Brief Introduction
Google releases an open source version to the public
Designed with massive neural nets in mind
I start geeking out about machine learning after reading a blog post on recurrent neural networks.
I have no idea how any of this works, but it's exciting!
I still don't even know which questions to ask
Data Science guild is forming
I know a few things about TensorFlow
I'll share what I know and hand-wave the rest
Models are constructed as computational graphs
Nodes on the graph are "Ops"
Edges represent Tensors (i.e. typed multi-dimensional arrays)
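The ideas above can be sketched in a few lines. This is a minimal illustration, written against the TF 1.x-style API (reached through `tf.compat.v1` on modern installs); the variable names are just for the example:

```python
import tensorflow.compat.v1 as tf

# TF 1.x builds graphs lazily; this restores that behavior on TF 2.
tf.disable_eager_execution()

a = tf.constant(2.0)   # a constant Op; its output edge is a scalar Tensor
b = tf.constant(3.0)
c = tf.add(a, b)       # an "add" Op; the Tensors a and b are its input edges
```

Note that at this point nothing has been computed: `c` is a node in the graph, not the value 5. Running the graph is a Session's job.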
Manages the environment for executing ops in a graph
Specifies which device executes part or all of a graph
Each session has one graph
Each graph may be run in multiple sessions
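A sketch of the session/graph relationship, again in the TF 1.x-style API; the device string and names here are illustrative:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

g = tf.Graph()
with g.as_default():
    with tf.device('/cpu:0'):     # pin this part of the graph to a device
        x = tf.constant([1.0, 2.0])
        y = tf.reduce_sum(x)

# One graph, run in two independent sessions.
with tf.Session(graph=g) as sess1:
    r1 = sess1.run(y)
with tf.Session(graph=g) as sess2:
    r2 = sess2.run(y)
```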
Done in some "front-end" language
Python
C++ (Yeah, right)
Model inputs typically "fed" in via placeholder ops
Model outputs specified by "fetching" certain ops
Trained values are stored as Variable ops, which must be initialized before execution
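Feeding, fetching, and Variable initialization fit together roughly like this (a TF 1.x-style sketch; shapes and names are made up for the example):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

x = tf.placeholder(tf.float32, shape=[None, 2])  # input, "fed" at run time
w = tf.Variable([[1.0], [1.0]])                  # trained value; must be initialized
y = tf.matmul(x, w)                              # output, "fetched" by name

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # initialize Variables first
    out = sess.run(y, feed_dict={x: [[2.0, 3.0]]})  # fetch y, feeding x
```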
MNIST: a dataset of images of handwritten digits. Classifying each image as its digit is a common benchmark for machine learning models.
1. Aggregate evidence in support of each class ("digit")
2. Convert the evidence into a likelihood for each given class
3. Train a set of weights to optimize the output probabilities.
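The three steps map onto the softmax-regression model from the introductory tutorial roughly as follows (TF 1.x-style sketch; 784 = flattened 28x28 pixels, 10 = digit classes):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

x = tf.placeholder(tf.float32, [None, 784])   # flattened digit images
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

evidence = tf.matmul(x, W) + b                # 1. evidence for each class
y = tf.nn.softmax(evidence)                   # 2. evidence -> likelihoods

y_ = tf.placeholder(tf.float32, [None, 10])   # true labels (one-hot)
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)  # 3. train W, b
```

Each `sess.run(train_step, feed_dict=...)` on a batch of images nudges `W` and `b` toward better output probabilities.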
Checked into the guild repo under tensorflow-mnist-examples
The TensorFlow project has a ton of additional docs and tutorials. This is based on the introductory tutorial.