Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /cgihome/cis520/html/dynamic/2017/wiki/pmwiki.php on line 691

Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /cgihome/cis520/html/dynamic/2017/wiki/pmwiki.php on line 694

Warning: Use of undefined constant MathJaxInlineCallback - assumed 'MathJaxInlineCallback' (this will throw an Error in a future version of PHP) in /cgihome/cis520/html/dynamic/2017/wiki/cookbook/MathJax.php on line 84

Warning: Use of undefined constant MathJaxEquationCallback - assumed 'MathJaxEquationCallback' (this will throw an Error in a future version of PHP) in /cgihome/cis520/html/dynamic/2017/wiki/cookbook/MathJax.php on line 88

Warning: Use of undefined constant MathJaxLatexeqrefCallback - assumed 'MathJaxLatexeqrefCallback' (this will throw an Error in a future version of PHP) in /cgihome/cis520/html/dynamic/2017/wiki/cookbook/MathJax.php on line 94
CIS520 Machine Learning | Lectures / Real ML

Real ML
  • Overfitting is your worst enemy
    • Train, Test (Quiz), Validate
    • Out-of-sample in the real world is subtle (see the group-split sketch after this outline)
      • new people, products, words, time periods, countries, …
  • Loss functions
    • L2 vs. L1 vs. L0 vs. cost
    • Classification problems often have asymmetric costs
    • precision/recall, sensitivity/specificity, ROC (see the cost-based threshold sketch after this outline)
  • Feature generation is critical
    • Think about the problem!!
    • How might you transform the features?
      • Do you want a scale-invariant method or not? (see the scaling sketch after this outline)
    • What else could you measure?
    • Is semi-supervised learning possible?
    • Are there surrogate labels you might use?
      • ‘distant supervision’
  • Feature Blocks
    • Different feature sets need different regularization
    • One solution: block-stagewise regression (sketched after this outline)
  • Ensemble methods
    • Combining multiple models is almost always more accurate than any single model
    • Averaging methods (or experts); the three weightings below are sketched in code after this outline
      • equal weighting
        • {$ \hat{y} = \frac{1}{K} \sum_{k=1}^{K} \hat{y}_k $}
      • inverse variance-based weighting
        • {$ \hat{y} = \frac{\sum_k \hat{y}_k / \sigma_k^2}{\sum_k 1/\sigma_k^2} $}
      • regression-based weighting
        • {$ \hat{y} = \sum_k w_k \hat{y}_k$}
    • Boosting
    • Random Forests
  • Missing data
    • Data missing at random (MAR) and data missing not at random require different handling
      • Imputation works well for MAR data, but most real data are not MAR, so it is usually best to also add an indicator variable for whether each feature is missing (see the pandas sketch after this outline)
  • Explanation/Insight is often important
    • Look at the data!
      • posts, images scoring highest in some feature or outcome
      • error analysis
    • variable importance
      • How “important” is each feature for the prediction?
    • visualization: word clouds, PCA, MDS
      • MDS: given an {$n \times n$} matrix of distances between points, find a new (usually 2-D) representation of the points that preserves that distance matrix as closely as possible (see the sketch after this outline)
  • Correlation is not causality
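
Code sketches referenced in the outline above. All are Python; the data, column names, and numbers are invented for illustration, and each is a minimal sketch rather than a definitive recipe.

Out-of-sample splits for "new people": a random row-level split leaks information when the same person appears in both train and test. A sketch using scikit-learn's GroupKFold, which keeps each person entirely on one side of the split:

    import numpy as np
    from sklearn.model_selection import GroupKFold

    # Toy data: 12 examples from 4 users (hypothetical ids).
    X = np.arange(24).reshape(12, 2)
    y = np.array([0, 1] * 6)
    user_id = np.repeat([0, 1, 2, 3], 3)

    # Each user lands entirely in train or entirely in test, so test
    # performance reflects generalization to *new* people.
    for train_idx, test_idx in GroupKFold(n_splits=4).split(X, y, groups=user_id):
        assert set(user_id[train_idx]).isdisjoint(user_id[test_idx])

The same idea applies to products, words, time periods, and countries: split on the unit you need to generalize across (for time, train on the past and test on the future).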
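Asymmetric costs: when errors have unequal costs, the decision threshold should come from the costs rather than defaulting to 0.5. A sketch with scikit-learn metrics (the 5:1 cost ratio is invented):

    import numpy as np
    from sklearn.metrics import precision_score, recall_score, roc_auc_score

    y_true  = np.array([0, 0, 0, 1, 1, 0, 1, 1])
    y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.7, 0.2, 0.9])  # predicted P(y=1)

    print("ROC AUC:", roc_auc_score(y_true, y_score))

    # Suppose a false negative costs 5x a false positive; pick the
    # threshold that minimizes total cost on held-out data.
    cost_fp, cost_fn = 1.0, 5.0
    thresholds = np.linspace(0, 1, 101)
    total_cost = [cost_fp * np.sum((y_score >= t) & (y_true == 0)) +
                  cost_fn * np.sum((y_score <  t) & (y_true == 1)) for t in thresholds]
    t_star = thresholds[int(np.argmin(total_cost))]
    y_pred = (y_score >= t_star).astype(int)
    print("precision:", precision_score(y_true, y_pred),
          "recall:", recall_score(y_true, y_pred))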
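Scale invariance: tree-based methods are invariant to monotone rescaling of features, but distance- and penalty-based methods (k-NN, regularized regression) are not, so standardize first. A sketch:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    # Two features on wildly different scales (income-like vs. age-like).
    X = np.column_stack([rng.normal(50_000, 20_000, 200),
                         rng.normal(40, 10, 200)])
    y = 1e-4 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 200)

    # Ridge penalizes all coefficients equally, so without scaling the
    # large-scale feature is effectively under-regularized.
    model = make_pipeline(StandardScaler(), Ridge(alpha=1.0)).fit(X, y)
    print(model.named_steps["ridge"].coef_)  # now on a comparable scale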
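Block-stagewise regression: one reading of the idea (a sketch under my assumptions, not necessarily the exact lecture algorithm) is to fit one feature block, then fit the next block to the residuals, giving each block its own regularization strength:

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(1)
    X_demo = rng.normal(size=(200, 5))    # small, reliable block
    X_text = rng.normal(size=(200, 50))   # large, noisy block
    y = X_demo @ rng.normal(size=5) + 0.1 * (X_text @ rng.normal(size=50)) \
        + rng.normal(0, 0.5, size=200)

    # Stage 1: lightly regularized fit on the reliable block.
    m1 = Ridge(alpha=0.1).fit(X_demo, y)
    resid = y - m1.predict(X_demo)

    # Stage 2: heavily regularized fit of the noisy block to the residuals,
    # so it cannot wash out the signal already captured in stage 1.
    m2 = Ridge(alpha=10.0).fit(X_text, resid)
    y_hat = m1.predict(X_demo) + m2.predict(X_text)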
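Ensemble averaging: the three weightings above in numpy (the predictions, variances, and held-out labels used to fit regression weights are all toy numbers):

    import numpy as np

    # Predictions from K = 3 models on the same 4 examples.
    preds = np.array([[1.0, 2.0, 3.0, 4.0],
                      [1.2, 1.8, 3.1, 4.2],
                      [0.9, 2.2, 2.8, 3.9]])

    y_equal = preds.mean(axis=0)               # equal weighting

    var = np.array([0.5, 1.0, 2.0])            # each model's error variance
    w_iv = (1 / var) / (1 / var).sum()         # inverse-variance weights
    y_iv = w_iv @ preds

    # Regression-based weighting: fit w_k on held-out data.
    y_val = np.array([1.1, 2.0, 3.0, 4.0])
    w_reg, *_ = np.linalg.lstsq(preds.T, y_val, rcond=None)
    y_reg = w_reg @ preds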
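Missing-data indicators: the add-an-indicator-then-impute recipe in pandas (column name invented):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"income": [50_000, np.nan, 62_000, np.nan, 48_000]})

    # The indicator lets the model learn from *whether* the value is
    # missing, which carries signal when data are not missing at random ...
    df["income_missing"] = df["income"].isna().astype(int)
    # ... then impute so downstream models see a complete column.
    df["income"] = df["income"].fillna(df["income"].median())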
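MDS: a sketch with scikit-learn on a precomputed distance matrix (the distances are invented):

    import numpy as np
    from sklearn.manifold import MDS

    # Symmetric n x n distance matrix for n = 4 points.
    D = np.array([[0.0, 1.0, 2.0, 3.0],
                  [1.0, 0.0, 1.5, 2.5],
                  [2.0, 1.5, 0.0, 1.2],
                  [3.0, 2.5, 1.2, 0.0]])

    # Find 2-D coordinates whose pairwise distances approximate D.
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(D)
    print(coords)  # one (x, y) row per original point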

Back to Lectures
