Autocorrelation


Autocorrelation is a mathematical tool used frequently, for example in digital signal processing, for analysing series of values, such as time-domain signals.


The one-dimensional autocorrelation is defined as the expected value of a correlation. Formally, the autocorrelation R at lag j for a signal x(n) is R(j) = E{[x(n)-m]*[x(n-j)-m]}, where the expected value operator E{} is taken over n, and m is the average value (expected value) of x(n). Quite frequently, autocorrelations are calculated for zero-centered signals, that is, for signals with zero mean. The definition then simplifies to R(j) = E[x(n)*x(n-j)]. With the mean subtracted as above, this quantity is also known as the autocovariance.
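The definition above can be sketched numerically. The following is a minimal estimate of R(j) from a finite sample: the function name, the use of NumPy, and the choice to average only over the overlapping samples are assumptions made here, not part of the definition.

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Estimate R(j) = E{[x(n) - m] * [x(n - j) - m]} for j = 0..max_lag.

    A sketch only: the expectation over n is replaced by an average
    over the overlapping portion of the finite sample.
    """
    x = np.asarray(x, dtype=float)
    m = x.mean()          # the mean value m of the signal
    c = x - m             # zero-center the signal
    R = []
    for j in range(max_lag + 1):
        # average the product of the signal with a copy shifted by j samples
        R.append((c[j:] * c[:len(x) - j]).mean())
    return np.array(R)
```

Note that at lag zero this estimate reduces to the sample variance of the signal.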


Multi-dimensional autocorrelation is defined similarly; for example, in three dimensions R(j,k,l) = E{[x(n,m,p)-mu]*[x(n-j,m-k,p-l)-mu]}, where mu is the mean of the signal. In the following, we describe properties of one-dimensional autocorrelations only, since most properties transfer easily from the one-dimensional case to the multi-dimensional cases.
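The multi-dimensional definition follows the same pattern as the one-dimensional one. As an illustration, a two-dimensional sketch for non-negative shifts, under the same overlap-averaging convention as before (the helper name is an assumption):

```python
import numpy as np

def autocorrelation_2d(x, j, k):
    """Estimate R(j, k) = E{[x(n, p) - mu] * [x(n - j, p - k) - mu]}.

    A sketch for non-negative shifts (j, k); the expectation is replaced
    by an average over the overlapping region of the finite 2-D sample.
    """
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    c = x - mu
    rows, cols = c.shape
    # multiply the array with a copy shifted by (j, k) and average the overlap
    return (c[j:, k:] * c[:rows - j, :cols - k]).mean()
```

As in one dimension, the zero-shift value R(0, 0) is the sample variance.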


A fundamental property of the autocorrelation is symmetry, R(j) = R(-j), which is easy to prove from the definition.
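The symmetry property can be checked numerically. A quick sketch using NumPy's `np.correlate` in "full" mode on a zero-centered random signal (the signal and its length are arbitrary choices here):

```python
import numpy as np

# Numerical check of the symmetry R(j) = R(-j) on a zero-centered signal.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
c = x - x.mean()
R = np.correlate(c, c, mode="full")   # lags -63 .. +63

# R is symmetric about its middle entry (lag 0), so it equals its reversal
assert np.allclose(R, R[::-1])
```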

This needs a lot more work...