Stephen M
4/21/21
Demonstrate the feasibility of decoding human thought by translating brain activity into an image
EEG Encoder
fMRI Encoder
Image Reconstructor
Overall, a pretty simple idea. I don't think the architecture is too convoluted
Now that we have a "latent space" that is created from the image features, we can look at the other components
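To make the shared-latent-space idea concrete, here is a minimal sketch. The shapes, the PCA stand-in for the image encoder, and the synthetic EEG data are all my own assumptions for illustration (a real system would likely use pretrained CNN features as the image representation):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, pixels, latent_dim = 300, 256, 16  # toy sizes (assumed)

images = rng.standard_normal((n, pixels))

# Stand-in image encoder: PCA features of the images define the
# shared "latent space".
pca = PCA(n_components=latent_dim).fit(images)
latents = pca.transform(images)

# Synthetic brain signals, then an EEG encoder trained to map
# them into the SAME latent space.
eeg = latents @ rng.standard_normal((latent_dim, 64)) \
      + 0.1 * rng.standard_normal((n, 64))
eeg_encoder = Ridge().fit(eeg, latents)

# The reconstructor maps latent points back to pixels; for the
# PCA stand-in this is just the inverse transform.
recon = pca.inverse_transform(eeg_encoder.predict(eeg))
print(recon.shape)  # (300, 256)
```

The point of the sketch: once image features fix the latent space, the brain-signal encoders and the reconstructor can each be trained against it independently.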
Unlike the EEG signal which can be directly used to train the learning model [...]
Both encoders use the same architecture
We first employ the Pearson correlation coefficient to select the k most related features of the EEG (or fMRI) signal for each dimension of the m-dimensional image feature representation vector. Then we construct m parallel Bayesian regression sub-models; each one predicts the value of its corresponding dimension of the image feature vector from the k most related features of the EEG or fMRI signal.
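A minimal numpy/scikit-learn sketch of that scheme. The toy sizes and the synthetic data (image features tied to the first m EEG features) are my own assumptions, not from the paper:

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)
n_samples, n_eeg_feats, m, k = 200, 50, 8, 5  # toy sizes (assumed)

eeg = rng.standard_normal((n_samples, n_eeg_feats))
# Synthetic target: image feature dim j depends on EEG feature j.
img = eeg[:, :m] + 0.1 * rng.standard_normal((n_samples, m))

# |Pearson correlation| between every EEG feature and every image dim.
corr = np.array([[abs(np.corrcoef(eeg[:, i], img[:, j])[0, 1])
                  for i in range(n_eeg_feats)] for j in range(m)])

# For each of the m image dims: keep the k most correlated EEG
# features and fit one Bayesian regression sub-model on them.
models, selected = [], []
for j in range(m):
    top_k = np.argsort(corr[j])[-k:]
    selected.append(top_k)
    models.append(BayesianRidge().fit(eeg[:, top_k], img[:, j]))

# Predict the full m-dimensional image feature vector for one trial.
x = eeg[:1]
pred = np.array([models[j].predict(x[:, selected[j]])[0] for j in range(m)])
print(pred.shape)  # (8,)
```

Each sub-model only ever sees its own k selected inputs, which is what makes the m regressions independent and parallelizable.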
I think that there is a better way:
HCP (along with Haxby) has publicly available datasets of people watching movies