By Cheuk Ting Ho
Twitter: @cheukting_ho
https://github.com/Cheukting
import keras
from keras import layers

# max_num_word: vocabulary size; maxlen: length of each input sequence
model = keras.models.Sequential()
model.add(layers.Embedding(max_num_word, 256, input_length=maxlen))  # word embeddings
model.add(layers.LSTM(256))  # sequence encoder
model.add(layers.Dense(max_num_word, activation='softmax'))  # next-word probabilities
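A minimal sketch of compiling and training such a model, assuming x (integer-encoded sequences of length maxlen) and y (one-hot next-word targets) have already been prepared; the batch size and epoch count are illustrative, not from the talk:

model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit(x, y, batch_size=128, epochs=10)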
"Global Vectors for Word Representation"
"count-based" model
based on factorizing a matrix of word co-occurence statistics
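To illustrate the count-based idea, here is a minimal sketch: it builds a toy co-occurrence matrix and factorizes it with truncated SVD. This is not GloVe's actual weighted least-squares objective, just the simplest factorization that turns counts into vectors; the corpus and window size are made up:

import numpy as np

# toy corpus and vocabulary (made-up example)
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# count co-occurrences within a +/-1 word window
X = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i in range(len(sent)):
        for j in range(max(0, i - 1), min(len(sent), i + 2)):
            if i != j:
                X[idx[sent[i]], idx[sent[j]]] += 1

# factorize: truncated SVD gives low-dimensional word vectors
U, S, Vt = np.linalg.svd(X)
k = 2
word_vectors = U[:, :k] * S[:k]  # one 2-d vector per word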
"words to vectors" ?
"predictive" model
feed-forward neural network and optimized
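A short sketch of training such a predictive model, using gensim's Word2Vec as one common implementation (gensim is an assumption here, not named in the slides; parameter names follow gensim 4.x, and the corpus is a made-up toy example):

from gensim.models import Word2Vec

# toy tokenized corpus (made-up); in practice use a large corpus
sentences = [["the", "cat", "sat"], ["the", "dog", "sat"]]

# skip-gram (sg=1): learn embeddings by predicting context words
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)
vector = model.wv["cat"]  # the learned 50-d embedding for "cat"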