What we didn’t (much) cover
- Hypothesis testing
- p-values: the p-value is the probability, assuming the null hypothesis is true, of obtaining a test statistic at least as extreme as the one actually observed
- Confidence intervals, standard error estimates
- Gibbs sampling / MCMC
- Conditional Random Fields (CRFs)
- supervised learning for HMM-style models
- Metric learning
- Multitask learning
- simultaneously predict multiple {$y$}s
- Domain adaptation
- adapt a model from one distribution {$p(x,y)$} to another
- Reinforcement learning
- choose a sequence of actions to maximize the expected reward
- Markov Decision Processes (MDP, POMDP)
- Graphs and networks
- Markov random fields
- graph Laplacians
- Structured learning
- predict a structured output (e.g. a parse tree) instead of a vector
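The p-value definition above is concrete enough to sketch in code. A minimal simulation (the coin-flip data and all numbers are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 60 heads in 100 flips. Null hypothesis: the coin is fair.
n_flips, observed_heads = 100, 60

# Draw the test statistic (head count) many times under the null.
null_stats = rng.binomial(n_flips, 0.5, size=100_000)

# p-value: fraction of null draws at least as extreme as what was observed.
p_value = np.mean(null_stats >= observed_heads)
```

The one-sided p-value here comes out to roughly 0.03, so at the conventional 0.05 level one would reject the null hypothesis of a fair coin.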
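Gibbs sampling also fits in a few lines: repeatedly resample each variable from its conditional distribution given the rest. A sketch for a standard bivariate normal target (the correlation {$\rho = 0.8$} is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8                      # target correlation (arbitrary illustrative value)
sigma = np.sqrt(1 - rho ** 2)  # conditional std dev for a standard bivariate normal

x, y = 0.0, 0.0
samples = np.empty((20_000, 2))
for t in range(len(samples)):
    # Alternately resample each coordinate from its exact conditional:
    # x | y ~ N(rho * y, 1 - rho^2), and symmetrically for y | x.
    x = rng.normal(rho * y, sigma)
    y = rng.normal(rho * x, sigma)
    samples[t] = (x, y)

# After discarding burn-in, the empirical correlation approaches rho.
est_rho = np.corrcoef(samples[1000:].T)[0, 1]
```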
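Markov decision processes are typically solved by dynamic programming. A minimal value-iteration sketch on a made-up two-state, two-action MDP (all transition probabilities and rewards are invented for illustration):

```python
import numpy as np

# P[a][s, s'] = transition probability under action a; R[s, a] = expected reward.
P = [np.array([[0.9, 0.1], [0.4, 0.6]]),   # action 0
     np.array([[0.2, 0.8], [0.1, 0.9]])]   # action 1
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor

V = np.zeros(2)
for _ in range(500):
    # Bellman optimality backup: Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V(s')
    Q = np.stack([R[:, a] + gamma * P[a] @ V for a in (0, 1)], axis=1)
    V = Q.max(axis=1)

policy = Q.argmax(axis=1)  # greedy policy with respect to the converged values
```

Value iteration is a contraction with factor {$\gamma$}, so the values converge geometrically to the optimal {$V^*$}.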
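The graph Laplacian mentioned above is simply {$L = D - A$}, where {$A$} is the adjacency matrix and {$D$} the diagonal degree matrix. A sketch on a small path graph (the graph itself is an arbitrary example):

```python
import numpy as np

# Adjacency matrix of a small undirected graph (a path on 4 nodes).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # unnormalized graph Laplacian

# L is symmetric positive semidefinite; the multiplicity of eigenvalue 0
# equals the number of connected components of the graph (here, 1).
eigvals = np.linalg.eigvalsh(L)
```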
Back to Lectures