

Most time series contain noise: random (or apparently random) fluctuations in the quantity of interest. Removing this noise, or at least reducing its influence, is therefore of particular importance. In other words, we want to smooth the signal.
The simplest smoothing algorithm is a moving average. In this example, we replace each point with the average of the points in a ten-point window around it:
import numpy as np

smoothed = np.convolve(data, np.ones(10)/10, mode='same')
smoothed = np.convolve(data, np.ones(50)/50, mode='same')
smoothed = np.convolve(data, np.ones(100)/100, mode='same')
smoothed = np.convolve(data, np.ones(200)/200, mode='same')
However, this technique has a serious drawback: when a signal contains sudden jumps or occasional large spikes, the moving average is abruptly distorted. One way to avoid this problem is to use a weighted moving average instead, which places less weight on the points at the edges of the smoothing window. With a weighted average, a new point entering the smoothing window is added to the average only gradually, and is removed just as gradually as it leaves.
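One way to build such a weighted average is with a triangular (Bartlett) window, whose weights taper to zero at the edges. This is an illustrative sketch, not the workshop's actual code; the function name and window length are my own choices:

```python
import numpy as np

def weighted_moving_average(data, window=21):
    """Smooth with triangular (Bartlett) weights: heaviest at the
    center of the window, tapering to zero at the edges, so points
    enter and leave the average gradually."""
    weights = np.bartlett(window)
    weights = weights / weights.sum()   # normalize so weights sum to 1
    return np.convolve(data, weights, mode='same')

# Example: smooth a noisy sine wave
np.random.seed(0)
t = np.linspace(0, 2 * np.pi, 500)
noisy = np.sin(t) + 0.3 * np.random.randn(500)
smoothed = weighted_moving_average(noisy)
```

Because the edge weights are near zero, a large spike passing through the window ramps in and out smoothly instead of jolting the average all at once.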
The Python pandas package has great tools and tutorials for smoothing time-series data.
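For example, pandas' rolling() and ewm() methods give a plain moving average and an exponentially weighted one in a line each. A minimal sketch on synthetic data:

```python
import numpy as np
import pandas as pd

# Minimal sketch: the same smoothing via pandas, on a synthetic noisy sine.
np.random.seed(0)
s = pd.Series(np.sin(np.linspace(0, 2 * np.pi, 500))
              + 0.3 * np.random.randn(500))

rolling_mean = s.rolling(window=21, center=True).mean()  # moving average
exp_weighted = s.ewm(span=21).mean()                     # exponentially weighted
```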




Source: scikit-learn documentation




A single EEG reading. Each power spectrum is quantized into 100 logarithmically spaced bins, which keeps the feature vectors short enough to train SVMs quickly.
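The quantization step can be sketched as follows. This is my own illustrative reconstruction, not the repo's code; the function name, sampling rate, and bin edges are all assumptions:

```python
import numpy as np

def log_binned_spectrum(signal, fs=256.0, n_bins=100):
    """Compute a power spectrum, then average the power into n_bins
    logarithmically spaced frequency bins (1 Hz up to Nyquist) to get
    a short feature vector."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    edges = np.logspace(0, np.log10(fs / 2), n_bins + 1)
    idx = np.digitize(freqs, edges)
    return np.array([power[idx == i].mean() if np.any(idx == i) else 0.0
                     for i in range(1, n_bins + 1)])

# Example: feature vector for one second of synthetic "EEG"
features = log_binned_spectrum(np.random.randn(1024))
```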
pip install -U numpy scipy scikit-learn
git clone https://github.com/csmpls/sdha-svm.git

from sklearn import svm
# assemble_vectors, score_predictions, and the readings come from the repo above
# training set
train_X,train_y = assemble_vectors(3,['color','pass'],[0], readings)
# fit SVM
clf = svm.LinearSVC().fit(train_X,train_y)
# testing set
test_X,test_y = assemble_vectors(3,['color','pass'], [1,2,3,4,5,6,7,8,9], readings)
predictions = clf.predict(test_X)
print score_predictions(predictions,test_y)
tutorial1.py
never train on your test set ~
def train_and_test(subject_number, gesture_pair):
# training set
train_X,train_y = assemble_vectors(subject_number,gesture_pair,[0], readings)
# fit SVM
clf = svm.LinearSVC().fit(train_X,train_y)
# testing set
test_X,test_y = assemble_vectors(subject_number,gesture_pair, [2,3,4,5,6,7,8,9], readings)
predictions = clf.predict(test_X)
return score_predictions(predictions,test_y)
tutorial1.py
print train_and_test(1, ['color', 'pass'])
print train_and_test(3, ['color', 'pass'])
print train_and_test(4, ['song', 'sport'])
base vs color
color vs song
song vs sport
sport vs pass
pass vs song
pass vs sport
pass vs eye
eye vs base
. . .
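The full list of gesture pairs above can be generated with itertools.combinations; the gesture names below are taken from the calls in this tutorial:

```python
from itertools import combinations

# Every unordered pair of mental gestures (6 gestures -> 15 pairs).
gestures = ['base', 'color', 'song', 'sport', 'pass', 'eye']
pairs = [list(p) for p in combinations(gestures, 2)]
```

Each pair can then be fed to the train_and_test(subject_number, gesture_pair) function defined above.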

from sklearn import svm, cross_validation

def cv_calibration_data(subject_number, gesture_pair):
# training set
train_X,train_y = assemble_vectors(subject_number,gesture_pair,[0,1,3,4,5,6], readings)
# fit SVM
clf = svm.LinearSVC().fit(train_X,train_y)
# cross-validate
cv_score = cross_validation.cross_val_score(clf,train_X,train_y, cv=7).mean()
print(subject_number, gesture_pair, cv_score)
cv_calibration_data(0,['pass','color'])
cv_calibration_data(12,['song','sport'])
tutorial2.py
You have 20 minutes to answer this question as best you can:
Can we tell people apart based on their brainwaves?
Can we tell any two people apart? Any three people? Are some mental gestures better than others for telling people apart? How much data do we need to distinguish people from one another? Etc.
ffff@berkeley.edu