Week 17 Report
b02901085 徐瑞陽
b02901054 方為
We focus first on slots 1 and 3 of subtask 1.
Given a review text about a target entity (laptop, restaurant, etc.),
identify the following information: the aspect category of each sentence (slot 1) and its sentiment polarity (slot 3). For example, for "The fish was fresh but the waiter was rude", slot 1 yields FOOD#QUALITY and SERVICE#GENERAL, and slot 3 labels them positive and negative respectively.
Model | Restaurant (12-class F-Measure) | Laptop (81-class F-Measure)
---|---|---
Linear SVM (BOW) | 62.58 | 52.16
Linear SVM (GloVe vec.) | 61.14 | 46.37
Linear SVM (BOW + GloVe) | 64.17 | 51.05
RBF SVM (BOW) | 60.71 | 50.42
RBF SVM (GloVe vec.) | 61.03 | 42.16
RBF SVM (BOW + GloVe) | 64.32 | 40.77
2015's best | 62.68 | 50.86
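For reference, here is a minimal sketch of how the BOW + GloVe feature combination can be fed to a linear SVM, assuming scikit-learn and a pre-trained GloVe file; `load_sentences()` and the file paths are hypothetical placeholders, not our actual code.

```python
# Minimal sketch of the BOW + GloVe setup for the slot-1 SVM.
# Assumes scikit-learn and a pre-trained GloVe text file; the paths and
# the load_sentences() helper are hypothetical placeholders.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def load_glove(path):
    """Read GloVe vectors into a {word: np.array} dict."""
    vecs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vecs[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vecs

def avg_glove(sentence, vecs, dim=300):
    """Average the GloVe vectors of the words in a sentence."""
    words = [vecs[w] for w in sentence.lower().split() if w in vecs]
    return np.mean(words, axis=0) if words else np.zeros(dim, dtype=np.float32)

# sentences: raw review sentences; labels: aspect-category ids
sentences, labels = load_sentences("train.xml")   # hypothetical loader
glove = load_glove("glove.6B.300d.txt")

bow = CountVectorizer(binary=True)
X_bow = bow.fit_transform(sentences).toarray()
X_glove = np.vstack([avg_glove(s, glove) for s in sentences])
X = np.hstack([X_bow, X_glove])                   # BOW + GloVe concatenation

clf = LinearSVC()
clf.fit(X, labels)
```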
Model | Restaurant (3-class accuracy) | Laptop (3-class accuracy)
---|---|---
TreeLSTM | 71.23% | 74.93%
Sentinue (SemEval 2015 best) | 78.69% | 79.34%
Seems like we're on the right track...
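As background for the TreeLSTM row above, one common variant is the Child-Sum Tree-LSTM of Tai et al. (2015); below is a minimal numpy sketch of how a single node composes its children's states. The parameter layout and shapes are illustrative assumptions, not our training code.

```python
# Numpy sketch of one Child-Sum Tree-LSTM node (Tai et al., 2015).
# W, U, b hold the (learned) parameters for the gates i, f, o, u;
# here they are just assumed to be given.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def child_sum_treelstm_node(x, child_h, child_c, W, U, b):
    """Compose one tree node from its input vector and children states.

    x        -- input word vector at this node, shape (d_in,)
    child_h  -- list of children hidden states, each shape (d,)
    child_c  -- list of children cell states, each shape (d,)
    W, U, b  -- dicts of gate parameters keyed by "i", "f", "o", "u"
    """
    h_tilde = np.sum(child_h, axis=0) if child_h else np.zeros_like(b["i"])
    i = sigmoid(W["i"] @ x + U["i"] @ h_tilde + b["i"])
    o = sigmoid(W["o"] @ x + U["o"] @ h_tilde + b["o"])
    u = np.tanh(W["u"] @ x + U["u"] @ h_tilde + b["u"])
    # one forget gate per child, conditioned on that child's hidden state
    f = [sigmoid(W["f"] @ x + U["f"] @ hk + b["f"]) for hk in child_h]
    c = i * u + sum(fk * ck for fk, ck in zip(f, child_c))
    h = o * np.tanh(c)
    return h, c
```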
- Removed conflicting labels for different aspects in a sentence (a sketch of this filtering follows below).
- Accuracy is tested on a dev set split from the training data, with conflicting labels removed, so it cannot completely reflect the real accuracy.
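A minimal sketch of this filtering step, assuming the training data is represented as (sentence, [(aspect, polarity), ...]) pairs; the representation itself is an assumption for illustration.

```python
# Sketch of the preprocessing step: drop sentences whose aspects carry
# conflicting polarity labels. The (sentence, [(aspect, polarity)])
# structure is an assumed representation of the training data.
def remove_conflicting(data):
    clean = []
    for sentence, pairs in data:
        polarities = {polarity for _, polarity in pairs}
        if len(polarities) <= 1:   # keep only label-consistent sentences
            clean.append((sentence, pairs))
    return clean
```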
Model | Restaurant (3-class accuracy) | Laptop (3-class accuracy)
---|---|---
Our model | 83% | 84.5%
Sentinue (SemEval 2015 best) | 78.69% | 79.34%
Note: accuracy obtained through cross-validation
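For concreteness, such a cross-validated accuracy can be computed with scikit-learn as sketched below; `X` and `y` stand in for the slot-3 features and polarity labels, and `LinearSVC` is only a stand-in classifier, not necessarily our model.

```python
# Hedged sketch: 5-fold cross-validated accuracy with scikit-learn.
# X and y are placeholders for the slot-3 features and polarity labels;
# LinearSVC is a stand-in for whatever classifier is actually used.
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

scores = cross_val_score(LinearSVC(), X, y, cv=5, scoring="accuracy")
print("mean CV accuracy:", scores.mean())
```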
Results
Exciting results! Note that the 2016 training set is about 50% larger than the 2015 set,
and about 1-3% of the 2015 training instances do not appear in the 2016 dataset.
Pipeline (subtask 1)

For each sentence:
1. Run the Subtask1-slot1 SVM to see if the sentence contains an aspect.
2. If yes, predict that aspect's polarity with the Subtask1-slot3 model.

Finally, combine all aspect and polarity pairs for the review.
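A minimal sketch of this combined pipeline; `predict_aspects`, `predict_polarity`, and `featurize` are hypothetical wrappers around the trained slot-1 SVM, the slot-3 model, and the shared feature extractor.

```python
# Sketch of the combined subtask-1 pipeline. predict_aspects and
# predict_polarity are hypothetical wrappers around the trained slot-1
# SVM and slot-3 model; featurize is the shared feature extractor.
def analyze_review(sentences, predict_aspects, predict_polarity, featurize):
    pairs = []
    for sentence in sentences:
        x = featurize(sentence)
        for aspect in predict_aspects(x):            # slot 1: aspects present?
            polarity = predict_polarity(x, aspect)   # slot 3: polarity
            pairs.append((aspect, polarity))
    return pairs  # all aspect and polarity pairs, combined per review
```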