Wrapping up: Recap, Career Guidance, etc.
3rd April 2024
Do our models understand our tasks?
Hupkes, Dieuwke, et al. "A taxonomy and review of generalization research in NLP." Nature Machine Intelligence 5.10 (2023): 1161-1174.
How much do models really generalize?
What makes NLP systems work?
What's going on inside NNs?
[Diagram: input sentence → Black Box Model → Output]
Can we build interpretable, but performant models?
e.g. establishing bounds with Model Class Reliance (MCR):
The article proposes MCR as upper and lower bounds on how important a set of variables can be to any well-performing model in a class.
In this way, MCR provides a more comprehensive and robust measure of importance than traditional measures computed for a single model.
More reads: https://rssdss.design.blog/2020/03/31/all-models-are-wrong-but-some-are-completely-wrong/
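The idea above can be sketched in code: compute a permutation-based "model reliance" score (how much the loss grows when one variable is shuffled) for every near-optimal model in a class, then report the min and max as the MCR bounds. This is a toy sketch, assuming squared-error loss; the names `model_reliance`, `mcr_bounds`, and the Rashomon tolerance `eps` are illustrative, not taken from the article.

```python
import random

def mse(model, X, y):
    """Mean squared error of a callable model over a dataset."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def model_reliance(model, X, y, var_idx, n_perm=30, seed=0):
    """Ratio of loss with variable var_idx permuted to the original loss.

    A ratio near 1 means the model barely relies on that variable;
    larger ratios mean heavier reliance.
    """
    rng = random.Random(seed)
    base = mse(model, X, y)
    perm_losses = []
    for _ in range(n_perm):
        order = list(range(len(X)))
        rng.shuffle(order)
        # Copy the data and overwrite only the permuted column.
        Xp = [list(x) for x in X]
        for i, j in enumerate(order):
            Xp[i][var_idx] = X[j][var_idx]
        perm_losses.append(mse(model, Xp, y))
    return (sum(perm_losses) / n_perm) / base

def mcr_bounds(models, X, y, var_idx, eps=1.1):
    """MCR-: min reliance, MCR+: max reliance over the set of
    'well-performing' models (loss within eps * best loss)."""
    best = min(mse(m, X, y) for m in models)
    good = [m for m in models if mse(m, X, y) <= eps * best]
    rels = [model_reliance(m, X, y, var_idx) for m in good]
    return min(rels), max(rels)
```

With two perfectly correlated inputs, equally accurate models can rely on variable 0 a lot or not at all, so the MCR interval is wide — exactly the ambiguity a single-model importance score hides.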
Can understanding help find the next transformer?
There are significant gaps between high- and low-resource languages
Can we remove language resource gaps?
Benchmarks and how we evaluate drive the progress of the field
How do we evaluate things like interpretability?
How do we make NLP systems work in the real world, on real problems?
1. Bio/Clinical NLP
2. Legal Domain
3. Scientific Communication
4. Education