by yuan meng
the Weber fraction W measures the acuity of the approximate number system (ANS)
two packs of extremely dangerous dogs 👉 which pack has more?
probability of getting it right 👉 the area under the curve of the perceived difference that falls on the correct side of zero
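in formulas, that area works out to a standard ANS accuracy model (this is the common Halberda-style parameterization; the lecture's exact version may differ):

$P(\text{correct}) = \Phi\!\left(\frac{|n_1 - n_2|}{W\sqrt{n_1^2 + n_2^2}}\right)$

where $\Phi$ is the standard normal CDF 👉 smaller W = sharper discrimination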
infer W from response data
data: each trial's two numerosities (n1, n2) plus whether the answer was correct (a)
be sure to take the log 👉 multiplying many tiny likelihoods underflows to zero!
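a quick sketch of why (the numbers here are made up):

import numpy as np

probs = np.full(2000, 0.5)    # 2,000 per-trial likelihoods of 0.5
print(np.prod(probs))         # 0.0 -- the raw product underflows
print(np.sum(np.log(probs)))  # -1386.29... -- the log-sum is fine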
import numpy as np

def metropolis_hastings(n1, n2, a, n_iters):
    # generate a random initial W
    W = ...  # sample a value from some distribution
    # log posterior of the initial W
    log_p = log_posterior(n1, n2, a, W)
    # arrays to collect samples and log posteriors
    Ws, log_ps = np.zeros(n_iters), np.zeros(n_iters)
    # collect samples from the posterior
    for i in range(n_iters):
        # propose a new W
        W_new = ...  # add noise from Normal(0, 0.1)
        # calculate the new log posterior
        log_p_new = log_posterior(n1, n2, a, W_new)
        # log of the ratio of new and old posteriors (a difference of log posteriors)
        log_ratio = ...  # what should this be?
        # decide whether to accept W_new
        if ...:  # write your own condition
            W, log_p = W_new, log_p_new
        # record this iteration's sample
        Ws[i], log_ps[i] = W, log_p
    # return samples
    return {"W": Ws, "log_posteriors": log_ps}
the most challenging bit (ofc, the critical parts are left as comments for you to fill in)
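for reference, here's one way to fill in the blanks. the prior and likelihood below are my assumptions for this sketch (a Uniform(0, 2) prior on W and the standard ANS accuracy model), not necessarily the homework's exact spec:

import numpy as np
from scipy.stats import norm

def log_posterior(n1, n2, a, W):
    # assumed model: P(correct) = Phi(|n1 - n2| / (W * sqrt(n1^2 + n2^2))),
    # with a Uniform(0, 2) prior on W -- both assumptions, not the homework's spec
    if W <= 0 or W >= 2:
        return -np.inf  # zero prior density outside the support
    p = norm.cdf(np.abs(n1 - n2) / (W * np.sqrt(n1**2 + n2**2)))
    p = np.clip(p, 1e-12, 1 - 1e-12)  # guard against log(0)
    # Bernoulli log likelihood summed over trials (a[i] = 1 if correct)
    return np.sum(a * np.log(p) + (1 - a) * np.log(1 - p))

def metropolis_hastings(n1, n2, a, n_iters):
    rng = np.random.default_rng(0)
    W = rng.uniform(0, 2)  # initialize by sampling from the (assumed) prior
    log_p = log_posterior(n1, n2, a, W)
    Ws, log_ps = np.zeros(n_iters), np.zeros(n_iters)
    for i in range(n_iters):
        W_new = W + rng.normal(0, 0.1)  # random-walk proposal
        log_p_new = log_posterior(n1, n2, a, W_new)
        # the proposal is symmetric, so the log acceptance ratio is
        # just the difference of log posteriors
        log_ratio = log_p_new - log_p
        # accept with probability min(1, exp(log_ratio))
        if np.log(rng.uniform()) < log_ratio:
            W, log_p = W_new, log_p_new
        Ws[i], log_ps[i] = W, log_p
    return {"W": Ws, "log_posteriors": log_ps}

# demo on simulated data (true W = 0.25; all numbers hypothetical)
rng = np.random.default_rng(1)
n1, n2 = rng.integers(5, 21, 300), rng.integers(5, 21, 300)
p = norm.cdf(np.abs(n1 - n2) / (0.25 * np.sqrt(n1**2 + n2**2)))
a = (rng.uniform(size=300) < p).astype(int)
samples = metropolis_hastings(n1, n2, a, 5000)
print(samples["W"][1000:].mean())  # posterior mean after burn-in, near 0.25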
homework 9, q5
homework 9, q7
bayes as a model of cognition: a normative model that dictates what an ideal learner should do given data and prior
bayes as a data analysis tool: a descriptive model that captures what a real learner did do given data and prior
the same prior and data lead to the same inference 👉 if prior is optimal, then inference is optimal
e.g., after seeing 100 heads in a row, what's the probability that the tosser is a psychic?
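a toy calculation (the prior and likelihoods are made up: say a psychic always tosses heads, and the prior probability of being a psychic is one in a million):

$P(\text{psychic} \mid 100 \text{ heads}) = \frac{10^{-6} \cdot 1}{10^{-6} \cdot 1 + (1 - 10^{-6}) \cdot 0.5^{100}} \approx 1 - 7.9 \times 10^{-25}$

even a heavily skeptical prior gets overwhelmed by 100 heads under these toy assumptions 👉 same prior + same data = same conclusion for every ideal learner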
different people may discount observations differently 👉 can learn each person's "discount rate" from data
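one common way to formalize this (my assumption here, not necessarily the lecture's exact model) is to temper the likelihood with a per-person exponent $d$:

$P(h \mid \text{data}) \propto P(\text{data} \mid h)^{d} \, P(h), \quad 0 \le d \le 1$

$d = 1$ recovers the ideal learner; $d < 1$ discounts the data (conservatism) 👉 fitting $d$ to each person's responses yields their "discount rate"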