Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /cgihome/cis520/html/dynamic/2017/wiki/pmwiki.php on line 691

Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /cgihome/cis520/html/dynamic/2017/wiki/pmwiki.php on line 694

Warning: Use of undefined constant MathJaxInlineCallback - assumed 'MathJaxInlineCallback' (this will throw an Error in a future version of PHP) in /cgihome/cis520/html/dynamic/2017/wiki/cookbook/MathJax.php on line 84

Warning: Use of undefined constant MathJaxEquationCallback - assumed 'MathJaxEquationCallback' (this will throw an Error in a future version of PHP) in /cgihome/cis520/html/dynamic/2017/wiki/cookbook/MathJax.php on line 88

Warning: Use of undefined constant MathJaxLatexeqrefCallback - assumed 'MathJaxLatexeqrefCallback' (this will throw an Error in a future version of PHP) in /cgihome/cis520/html/dynamic/2017/wiki/cookbook/MathJax.php on line 94
Locally Weighted Regression

Our final method combines the advantages of parametric and non-parametric methods. The idea is to fit a regression model locally, weighting training examples by the kernel {$K$} so that points near the query dominate the fit; a code sketch follows the algorithm below.

Locally Weighted Regression Algorithm

  1. Given training data {$D=\{(\mathbf{x}_i,y_i)\}_{i=1}^{n}$}, a kernel function {$K(\cdot,\cdot)$}, and a query input {$\mathbf{x}$}
  2. Fit the weighted regression {$\hat{\mathbf{w}}(\mathbf{x}) = \arg\min_{\mathbf{w}} \sum_{i=1}^{n} K(\mathbf{x}, \mathbf{x}_i) (\mathbf{w}^\top \mathbf{x}_i - y_{i})^2$}
  3. Return the regression prediction {$\hat{\mathbf{w}}(\mathbf{x})^\top \mathbf{x}$}.
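
For concreteness, here is a minimal NumPy sketch of this algorithm. The Gaussian kernel and its bandwidth tau are assumptions on our part (the algorithm leaves {$K(\cdot,\cdot)$} abstract), and the tiny ridge term is only a numerical safeguard:

    import numpy as np

    def locally_weighted_regression(X, y, x_query, tau=0.1):
        """Predict at x_query by solving the weighted least-squares problem above.

        X: (n, d) training inputs, y: (n,) targets, x_query: (d,) query point.
        Assumes a Gaussian kernel K(x, x_i) = exp(-||x - x_i||^2 / (2 tau^2)).
        """
        weights = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2.0 * tau ** 2))
        W = np.diag(weights)
        # Weighted normal equations: w = (X^T W X)^{-1} X^T W y.
        # The small ridge term keeps the solve well-posed when nearly all
        # weights vanish far from the data (a safeguard, not part of the method).
        d = X.shape[1]
        w_hat = np.linalg.solve(X.T @ W @ X + 1e-8 * np.eye(d), X.T @ W @ y)
        return w_hat @ x_query

Because the kernel weights depend on the query, the solve is repeated for every query point; that per-query refitting is the non-parametric half of the trade-off, while each local model remains a simple parametric fit.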

Note that we can do the same for classification by fitting a locally weighted logistic regression; a sketch follows the algorithm:

Locally Weighted Logistic Regression Algorithm

  1. Given training data {$D=\{(\mathbf{x}_i,y_i)\}_{i=1}^{n}$}, a kernel function {$K(\cdot,\cdot)$}, and a query input {$\mathbf{x}$}
  2. Fit the weighted logistic regression {$\hat{\mathbf{w}}(\mathbf{x}) = \arg\min_{\mathbf{w}} \sum_{i=1}^{n} K(\mathbf{x}, \mathbf{x}_i) \log(1+\exp\{-y_i\mathbf{w}^\top \mathbf{x}_i\})$}
  3. Return the logistic regression prediction {$\mathrm{sign}(\hat{\mathbf{w}}(\mathbf{x})^\top \mathbf{x})$}.
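
Unlike the least-squares case, there is no closed-form solution here, so a sketch has to optimize numerically. The version below runs plain gradient descent on the weighted logistic loss with labels {$y_i \in \{-1,+1\}$}; the Gaussian kernel, learning rate, and iteration count are illustrative assumptions:

    import numpy as np

    def locally_weighted_logistic(X, y, x_query, tau=0.1, lr=0.1, n_iters=500):
        """Classify x_query by minimizing the kernel-weighted logistic loss.

        X: (n, d) inputs, y: (n,) labels in {-1, +1}, x_query: (d,) query point.
        """
        k = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2.0 * tau ** 2))
        w = np.zeros(X.shape[1])
        for _ in range(n_iters):
            margins = y * (X @ w)
            # Gradient of sum_i k_i log(1 + exp(-y_i w^T x_i)):
            # -sum_i k_i y_i sigmoid(-margin_i) x_i, with sigmoid(-m) = 1/(1+e^m).
            grad = -(k * y / (1.0 + np.exp(margins))) @ X
            w -= lr * grad
        return np.sign(w @ x_query)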

The difference between regular linear regression and locally weighted linear regression can be visualized as follows:


Linear regression uses the same parameters for every query, so every training error affects the learned linear predictor. Locally weighted regression learns a linear predictor that only needs to be good locally, since errors far from the query carry little weight compared to nearby ones.

Here is the result of using a well-chosen kernel width on our regression examples (1/32, 1/32, and 1/16 of the x-axis width, respectively).
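
In the sketch above, a kernel width quoted as a fraction of the x-axis width translates directly into the bandwidth tau. A hypothetical reproduction on synthetic data (the curve and noise level below are our own, not the lecture's examples):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1.0, size=200)
    y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=200)
    X = np.c_[x, np.ones_like(x)]           # append a bias feature

    x_axis_width = x.max() - x.min()
    for frac in (1 / 32, 1 / 16):           # the fractions quoted above
        tau = frac * x_axis_width
        x_q = np.array([0.5, 1.0])          # query at x = 0.5 (bias = 1)
        print(frac, locally_weighted_regression(X, y, x_q, tau=tau))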


Back to Lectures
