
Statistical Inference in Wireless Sensor Networks


The goal of this project is to develop a generic approach to statistical inference in wireless sensor networks (WSNs). The common challenge across different inference problems is that observations are distributed through the WSN; the statistical inference task at hand therefore necessitates percolation of observations through the network. At the moment, parameter estimation, hypothesis testing, Kalman filtering and field estimation, among others, are investigated with different approaches. A common characteristic of all of them, however, is that the entity to be inferred can be found as the maximizing argument of a function defined by the observations. Exploiting this commonality is the starting point of this project.

As explained in the previous paragraph, problems in statistical inference often involve finding the arguments that maximize a given function. To illustrate the concept and introduce the technical approach of this project, consider the problem of estimating a deterministic parameter $\mathbf{s}$. This task is to be accomplished by a sensor network with $K$ sensors $\{S_k\}_{k=1}^{K}$, with each sensor $S_{k}$ collecting random observations $\mathbf{x}_{k}$. To keep the presentation simple, let the observations be conditionally independent and assume that the probability density functions (pdfs) $p_k\left(\mathbf{x}_k;\mathbf{s}\right)$ of $\mathbf{x}_k$, parameterized by $\mathbf{s}$, are known. Communication between sensors is restricted to neighbors in a graph determined by, e.g., physical proximity. The set of $S_{k}$'s neighbors is denoted $n_{k}$ and their number $N_{k}$.
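
As a concrete instance of this setup, the sketch below builds a small synthetic network: sensor positions, neighbor sets $n_k$ determined by a communication range, and local observations. The Gaussian observation model, the communication range and all numerical values are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 20                               # number of sensors
s_true = np.array([1.0, -0.5])       # unknown parameter s (2-D for illustration)

# Sensor positions in the unit square; two sensors are neighbors when they
# lie within communication range r of each other (an assumed proximity rule).
positions = rng.uniform(0.0, 1.0, size=(K, 2))
r = 0.35
neighbors = [
    [j for j in range(K)
     if j != k and np.linalg.norm(positions[k] - positions[j]) < r]
    for k in range(K)
]                                    # neighbors[k] plays the role of n_k

# Each sensor S_k collects M conditionally independent noisy observations of s;
# a Gaussian model is assumed here only for concreteness.
M = 5
observations = [s_true + rng.normal(0.0, 0.5, size=(M, 2)) for _ in range(K)]
```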

To obtain an estimate $\hat{\mathbf{s}}$ of $\mathbf{s}$, the workhorse solution is the maximum likelihood estimator (MLE). The MLE $\hat{\mathbf{s}}_{ML}$ is defined as the value of $\mathbf{s}$ that maximizes the likelihood of the observations $\{\mathbf{x}_k\}_{k=1}^{K}$ and, by the conditional independence of the observations, is given by

$\hat{\mathbf{s}}_{{ML}} = \arg \max_{\mathbf{s}} \sum_{k=1}^{K} \log p_k\left( \mathbf{x}_k ; \mathbf{s} \right).$
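
For the Gaussian model sketched above the MLE has a closed form, the sample mean, which makes it a convenient sanity check for the numerical route that more general models require. The following sketch maximizes the aggregate log-likelihood with scipy.optimize.minimize; the model and all constants are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
K, M, sigma = 20, 5, 0.5
s_true = np.array([1.0, -0.5])
x = [s_true + rng.normal(0.0, sigma, size=(M, 2)) for _ in range(K)]

def neg_log_likelihood(s):
    # -sum_k log p_k(x_k; s), up to additive constants, for the assumed
    # i.i.d. Gaussian observation model.
    return sum(np.sum((xk - s) ** 2) for xk in x) / (2.0 * sigma ** 2)

s_hat = minimize(neg_log_likelihood, x0=np.zeros(2)).x

# Sanity check: for this model the MLE is the mean of all observations.
assert np.allclose(s_hat, np.mean(np.vstack(x), axis=0), atol=1e-4)
print("MLE:", s_hat)
```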

In some cases the above equation can be solved in closed form, but more often than not $\hat{\mathbf{s}}_{ML}$ is found numerically. Setting aside for the moment the fact that the $\mathbf{x}_k$ are physically distributed, the difficulty of such numerical maximization depends on the form of $p_k\left( \mathbf{x}_k ; \mathbf{s} \right)$ and spans the full range from very well-posed to very ill-posed problems. A particularly well-behaved class of estimation problems arises when the logarithms of the pdfs, $\log p_k\left( \mathbf{x}_k ; \mathbf{s} \right)$, are concave in $\mathbf{s}$. For log-concave models the MLE is the maximizing argument of a concave function. Consequently, simple gradient ascent algorithms, or Newton's method if fast convergence is desired, are guaranteed to find the optimal argument $\hat{\mathbf{s}}_{ML}$. The class of log-concave models is broad. Particular examples occur when the random observations $\mathbf{x}_k$ have mean value $\mathbf{s}$ and their distribution is Gaussian, Laplacian or another member of the exponential family. A log-concave model also appears when the observations $\mathbf{x}_k$ are uniformly distributed in a convex region.
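
As a minimal illustration of Newton's method on a log-concave model, consider scalar Poisson observations with rate $s$, so the log-likelihood $\sum_k (x_k \log s - s)$ is concave in $s$. This is an assumed example, chosen because the closed-form MLE (the sample mean) makes convergence easy to verify.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.poisson(lam=3.0, size=100).astype(float)   # pooled observations

s = 1.0                                            # initial guess (0 < s)
for _ in range(20):
    grad = np.sum(x / s - 1.0)                     # d/ds of sum_k log p(x_k; s)
    hess = -np.sum(x) / s ** 2                     # second derivative (< 0)
    s -= grad / hess                               # Newton ascent step

# The iteration converges to the closed-form MLE, the sample mean.
assert np.isclose(s, x.mean())
print("Newton MLE:", s, "sample mean:", x.mean())
```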

In a sensor network, matters are further complicated by the distributed nature of data collection. While $S_{k}$ could estimate $\mathbf{s}$ relying on $\mathbf{x}_{k}$ only, accuracy would benefit from the information about $\mathbf{s}$ collected by the other sensors. A further complication comes from the fact that knowledge of the signal-observation model is itself likely to be distributed through the network: in general, sensor $S_{k}$ has access to the local signal model, i.e., the pdf $p_k\left( \mathbf{x}_k ; \mathbf{s} \right)$, but not to the signal models of other sensors. Communication of both observations and model thus intertwines with the estimation problem.
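
One standard building block for percolating local information under this neighbor-only communication constraint is average consensus, sketched below: each sensor repeatedly mixes its value with those of its neighbors in $n_k$ and, on a connected graph, all sensors converge to the network-wide average. This is an assumed illustration of the constraint, not necessarily the algorithm developed in this project.

```python
import numpy as np

def consensus_average(local_values, neighbors, steps=200, eps=0.05):
    """Each sensor repeatedly averages with its neighbors (neighbor-only links)."""
    v = np.array(local_values, dtype=float)
    for _ in range(steps):
        v_next = v.copy()
        for k, nbrs in enumerate(neighbors):
            # Move toward the neighborhood average; eps must be small enough
            # for stability, e.g. eps < 1 / max_k N_k.
            v_next[k] += eps * sum(v[j] - v[k] for j in nbrs)
        v = v_next
    return v   # all entries approach the global average on a connected graph

# Example: 4 sensors on a ring, each holding a local estimate of a scalar s.
neighbors = [[1, 3], [0, 2], [1, 3], [0, 2]]
print(consensus_average([0.8, 1.2, 1.1, 0.9], neighbors))   # -> approx. 1.0 each
```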

Without entering into detailed formulations, it is worth noting that other problems in statistical inference can be written in a form similar to the MLE problem above. Within a Bayesian framework, minimum mean squared error (MMSE) estimation involves minimization of the MSE integral. This is too cumbersome for most models, justifying the use of maximum a posteriori (MAP) probability estimates. MAP estimators are formulated by adding the logarithm of the signal's prior pdf to the maximand used to find the MLE. Another alternative to MMSE estimation is to restrict the estimators to linear functions of the observations. Although not usually formulated as such, linear MMSE estimation is the result of minimizing a quadratic function representing the expected value of the squared norm of the estimation error. Moving further from the MLE, hypothesis testing and signal detection in general also involve maximization of suitable objectives. Aside from the point estimation problems mentioned so far, interval and pdf estimation can also be formulated as optimization problems.
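
To make the MAP construction concrete, the sketch below adds a log-prior term to the log-likelihood maximand. A scalar Gaussian likelihood and Gaussian prior are assumed so that the numerical result can be checked against the known closed form; all values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
sigma, tau = 0.5, 1.0                       # observation and prior std. deviations
s_prior = 0.0                               # prior mean of s
x = 1.0 + rng.normal(0.0, sigma, size=50)   # pooled scalar observations

def neg_log_posterior(s_vec):
    s = s_vec[0]
    nll = np.sum((x - s) ** 2) / (2 * sigma ** 2)   # -log-likelihood (+ const)
    nlp = (s - s_prior) ** 2 / (2 * tau ** 2)       # -log-prior (+ const)
    return nll + nlp

s_map = minimize(neg_log_posterior, x0=np.array([0.0])).x[0]

# Sanity check: with Gaussian likelihood and prior, the MAP estimate is the
# precision-weighted combination of the prior mean and the sample mean.
w = len(x) / sigma ** 2
s_closed = (w * x.mean() + s_prior / tau ** 2) / (w + 1 / tau ** 2)
assert np.isclose(s_map, s_closed, atol=1e-4)
print("MAP estimate:", s_map)
```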

In spite of their different goals, be it estimation of $\mathbf{s}$, hypothesis testing or pdf estimation, many statistical inference problems are similar in that: i) they can be solved by finding the maximum of a given objective function; ii) observations are collected by physically separate sensors; and iii) knowledge of the signal-observation model is distributed through the network. The goal of this project is to develop a general framework to solve statistical inference tasks with properties i)-iii).
