Cost
Function space
Parameter space
GP map
Redundancy
Bias
Robustness
Evolvability
Neutral networks
etc.
Again, many analogies in learning theory (a toy sketch of the GP-map quantities follows below)
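A minimal sketch of what these quantities look like on a toy GP map. The map itself (collapsing runs of repeated bits) is purely an assumption for illustration: redundancy = neutral-set size, bias = how unevenly genotypes spread over phenotypes, robustness = fraction of point mutations that leave the phenotype unchanged.

```python
from collections import Counter
from itertools import product

# Toy GP map, assumed purely for illustration: genotype = bit string,
# phenotype = the genotype with runs of repeated bits collapsed (00110 -> 010).
def gp_map(genotype):
    phenotype = genotype[0]
    for bit in genotype[1:]:
        if bit != phenotype[-1]:
            phenotype += bit
    return phenotype

L = 10
genotypes = ["".join(bits) for bits in product("01", repeat=L)]

# Redundancy: many genotypes share one phenotype (its neutral set).
neutral_sets = Counter(gp_map(g) for g in genotypes)

# Bias: neutral-set sizes are highly non-uniform.
sizes = sorted(neutral_sets.values(), reverse=True)
print(len(neutral_sets), "phenotypes from", len(genotypes), "genotypes")
print("largest / smallest neutral set:", sizes[0], "/", sizes[-1])

# Robustness: fraction of single-bit mutations that leave the phenotype unchanged.
def robustness(genotype):
    target = gp_map(genotype)
    flips = (genotype[:i] + ("1" if genotype[i] == "0" else "0") + genotype[i + 1:]
             for i in range(len(genotype)))
    return sum(gp_map(m) == target for m in flips) / len(genotype)

print("robustness of 0000011111:", robustness("0000011111"))
```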
Intuition: simpler outputs are much more likely to appear
Turing machine --> Finite-state transducers
Kolmogorov complexity --> Lempel-Ziv complexity
Everything is now computable and can be analyzed theoretically (a minimal sketch follows below).
Keywords: data compression, universal source coding
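A minimal sketch of the computable substitution, assuming an LZ78-style phrase count as the Lempel-Ziv complexity measure (the exact variant here is my choice for illustration): a repetitive output parses into far fewer phrases than a random-looking output of the same length.

```python
import random

def lz_phrases(s):
    """LZ78-style phrase count: a crude but computable stand-in for
    Kolmogorov complexity (which is uncomputable)."""
    phrases, current = set(), ""
    for symbol in s:
        current += symbol
        if current not in phrases:       # shortest not-yet-seen phrase
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

random.seed(0)
n = 1000
periodic = "01" * (n // 2)                                 # highly compressible
noisy = "".join(random.choice("01") for _ in range(n))     # incompressible-looking

# The periodic string needs far fewer phrases than the random-looking one.
print("periodic:", lz_phrases(periodic), "phrases")
print("random:  ", lz_phrases(noisy), "phrases")
```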
Can we do a similar analysis for the systems we will be studying in the future?
Optimal learning: given data, learn the function/map that generated it
A good framework for studying the effect of GP map biases on successful learning/evolution
Bias acts like betting on some solutions more than on others (a toy sketch follows below)
Compressing the input could help successful learning
I'm exploring ways of connecting it to the VC dimension of neural nets
Average/typical case instead of worst case
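A toy sketch of the betting intuition, reusing the illustrative run-collapsing map from the earlier sketch (again an assumption, not any particular real GP map): random search over genotypes finds a phenotype the map is biased towards after a handful of trials, while a disfavoured phenotype takes orders of magnitude longer. Whether the bet pays off depends on whether the bias points towards the solutions that matter.

```python
import random
from collections import Counter
from itertools import product

# Same toy GP map as in the earlier sketch (an illustrative assumption).
def gp_map(genotype):
    phenotype = genotype[0]
    for bit in genotype[1:]:
        if bit != phenotype[-1]:
            phenotype += bit
    return phenotype

L = 12
genotypes = ["".join(bits) for bits in product("01", repeat=L)]
neutral_sets = Counter(gp_map(g) for g in genotypes)

def trials_until_found(target, rng):
    """Random genotype search: samples drawn until the target phenotype appears."""
    trials = 0
    while True:
        trials += 1
        if gp_map(rng.choice(genotypes)) == target:
            return trials

rng = random.Random(0)
favoured = "010101"            # large neutral set: the map 'bets' heavily on it
disfavoured = "010101010101"   # neutral set of size 1: the map 'bets' against it
for target in (favoured, disfavoured):
    runs = [trials_until_found(target, rng) for _ in range(20)]
    print(target, "| neutral set:", neutral_sets[target],
          "| mean trials:", round(sum(runs) / len(runs), 1))
```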