Houjun Liu

interpolation

The Nyquist limit is great and all, but I really don’t want to wait a whole period \(T\) to collect all the samples needed to solve for every \(a_{j},b_{j}\) before we can reconstruct our signal.

So, even once we have our sequence of points spaced \(\frac{1}{2B}\) apart, we need an alternative way to reconstruct the signal as we go.

One way to reconstruct via interpolation is to just connect the dots; however, this is bad because it creates sharp corners.
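Connect-the-dots is just linear interpolation between consecutive samples. A minimal sketch (the function name and sample values are illustrative, not from the notes):

```python
def connect_dots(samples, Ts, t):
    # Linear interpolation between consecutive samples ("connect the dots")
    m = int(t // Ts)                  # index of the sample just before t
    frac = (t - m * Ts) / Ts          # fraction of the way toward the next sample
    return samples[m] * (1 - frac) + samples[m + 1] * frac

samples = [0.0, 1.0, 0.0]  # a sharp peak at t = Ts
Ts = 1.0
# The slope jumps from +1 to -1 across t = Ts: a sharp corner, not smooth
print(connect_dots(samples, Ts, 0.5))  # 0.5
print(connect_dots(samples, Ts, 1.5))  # 0.5
```

The reconstruction is continuous but its derivative is not, which is exactly the "sharp corners" problem.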

In General

Suppose you have a sampling period length \(T_{s}\):

\begin{equation} \hat{x}(t) = \sum_{m=0}^{\infty} X\qty(mT_{s}) F\qty( \frac{t-mT_{s}}{T_{s}}) = X(0) F \qty(\frac{t}{T_{s}}) + X(T_{s}) F\qty(\frac{t-T_{s}}{T_{s}}) + \dots \end{equation}

where \(F(t)\) is some interpolation function such that:

\begin{equation} \begin{cases} F(0) = 1 \\ F(k) = 0, k \in \mathbb{Z} \backslash \{0\} \end{cases} \end{equation}

Notice that the above is a convolution between \(X\) and \(F\): each sample \(X(mT_{s})\) contributes a copy of \(F\) centered at \(t = mT_{s}\), and we evaluate the resulting sum at \(t\).

However, because we only ever have finitely many samples, in practice we just slide a window along and sum the nearby terms as we go.

Consider now \(\hat{x}\) at \(kT_{s}\):

\begin{align} \hat{x}(kT_{s}) &= \sum_{m=0}^{\infty} X(mT_{s}) F \qty(\frac{kT_{s}- mT_{s}}{T_{s}}) \\ &= \sum_{m=0}^{\infty} X(mT_{s}) F \qty(k-m) \end{align}

now, recall that \(F\) is \(0\) for all non-zero integers, so each term will only be preserved once, precisely at \(m = k\). Meaning:

\begin{align} \hat{x}(kT_{s}) &= \sum_{m=0}^{\infty} X(mT_{s}) F \qty(k-m) \\ &= X(kT_{s}) \cdot 1 \\ &= X(kT_{s}) \end{align}

This is why we need \(F(k) = 0,\ k \in \mathbb{Z} \backslash \{0\}\): it guarantees the interpolation passes through every sample exactly.
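The general scheme, and the sample-preservation property just derived, can be sketched in Python. The Lanczos-2 kernel below is just one illustrative choice of \(F\) (it is not mentioned in the notes); it satisfies \(F(0)=1\) and \(F(k)=0\) at nonzero integers because sinc vanishes there:

```python
import math

def sinc(t):
    # Normalized sinc: sin(pi t)/(pi t), with the limit sinc(0) = 1
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def lanczos2(t):
    # One valid choice of F: F(0) = 1, and F(k) = 0 for every other
    # integer k, since sinc vanishes at nonzero integers
    return sinc(t) * sinc(t / 2) if abs(t) < 2 else 0.0

def reconstruct(samples, Ts, t, F):
    # x_hat(t) = sum_m X(m Ts) F((t - m Ts) / Ts)
    return sum(x_m * F((t - m * Ts) / Ts) for m, x_m in enumerate(samples))

samples = [0.2, 0.9, 0.4, 0.1]
Ts = 0.5
# At each sample instant k*Ts, the sum collapses (up to floating-point
# roundoff) to the single m = k term, recovering the sample exactly:
for k in range(4):
    print(abs(reconstruct(samples, Ts, k * Ts, lanczos2) - samples[k]))
```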

Zero-Hold Interpolation

Choose \(F\) such that:

\begin{equation} F(t) = \begin{cases} 1, &\text{if}\ |t| < \frac{1}{2} \\ 0, &\text{otherwise} \end{cases} \end{equation}
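This kernel holds each sample's value over a window of width \(T_{s}\) centered on it, producing a staircase. A small sketch (sample values are illustrative):

```python
def zero_hold(t):
    # Zero-hold kernel: 1 when |t| < 1/2, 0 elsewhere
    return 1.0 if abs(t) < 0.5 else 0.0

def reconstruct(samples, Ts, t):
    # x_hat(t) = sum_m X(m Ts) * zero_hold((t - m Ts) / Ts)
    return sum(x_m * zero_hold((t - m * Ts) / Ts) for m, x_m in enumerate(samples))

samples = [1.0, 3.0, 2.0]
Ts = 1.0
# Between samples the output stays at the nearest sample's value:
print(reconstruct(samples, Ts, 0.9))  # 3.0: within Ts/2 of the m = 1 sample
print(reconstruct(samples, Ts, 1.2))  # 3.0: still nearest to m = 1
```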

Infinite-Degree Polynomial Interpolation

\begin{equation} F(t) = (1-t) (1+t) \qty(1- \frac{t}{2}) \qty(1+ \frac{t}{2}) \dots = \text{sinc}(t) = \frac{\sin(\pi t)}{\pi t} \end{equation}

This is the BEST interpolation; this is because it is stretched so that every zero crossing lands at a sample point \(mT_{s}\), meaning we will recover a sum of sinusoids.

This gives a smooth signal; and if sampling was done correctly within the Nyquist limit, sinc interpolation will give you back your original signal.
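A sketch of sinc reconstruction in plain Python (the signal, rates, and evaluation point are illustrative; a finite sum only approximates the infinite one, so we evaluate away from the truncation edges):

```python
import math

def sinc(t):
    # Normalized sinc: sin(pi t)/(pi t), with the limit sinc(0) = 1
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def sinc_reconstruct(samples, Ts, t):
    # x_hat(t) = sum_m X(m Ts) sinc((t - m Ts) / Ts)
    return sum(x_m * sinc((t - m * Ts) / Ts) for m, x_m in enumerate(samples))

# Sample a 1 Hz sinusoid (B = 1 Hz) at fs = 20 Hz, well above the 2B limit
f, fs = 1.0, 20.0
Ts = 1.0 / fs
samples = [math.sin(2 * math.pi * f * m * Ts) for m in range(400)]

# At an off-grid instant, the reconstruction closely matches the original
# continuous signal (the residual comes only from truncating the sum)
t = 10.123
print(abs(sinc_reconstruct(samples, Ts, t) - math.sin(2 * math.pi * f * t)))
```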

Shannon’s Nyquist Theorem

Let \(X\) be a Finite-Bandwidth Signal whose spectrum lies within \([0, B]\) Hz.

if:

\begin{equation} \hat{X}(t) = \sum_{m=0}^{\infty} X(mT_{s}) \text{sinc} \qty( \frac{t-mT_{s}}{T_{s}}) \end{equation}

where:

\begin{equation} \text{sinc}(t) = \frac{\sin \qty(\pi t)}{\pi t} \end{equation}

  • if \(T_{s} < \frac{1}{2B}\), that is, \(f_{s} > 2B\), then \(\hat{X}(t) = X(t)\) (this is a STRICT inequality!)
  • otherwise, if \(T_{s} > \frac{1}{2B}\), then \(\hat{X}(t) \neq X(t)\), yet \(\hat{X}(mT_{s}) = X(mT_{s})\), and \(\hat{X}\) will be bandwidth limited to \([0, \frac{f_{s}}{2}]\).

This second case is called “aliasing”, or the “stroboscopic effect”.
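A quick numerical illustration of aliasing (the particular frequencies are illustrative): sampling at \(f_{s} = 10\) Hz, a 13 Hz tone is above \(\frac{f_{s}}{2}\) and produces the same samples as a 3 Hz tone, so the two are indistinguishable after sampling.

```python
import math

# Sampling at fs = 10 Hz (Ts = 0.1 s) cannot distinguish a 3 Hz tone
# from a 13 Hz tone: sin(2*pi*13*m*Ts) = sin(2*pi*3*m*Ts + 2*pi*m),
# so the 13 Hz tone aliases down to 13 - fs = 3 Hz
fs = 10.0
Ts = 1.0 / fs
low = [math.sin(2 * math.pi * 3.0 * m * Ts) for m in range(50)]
high = [math.sin(2 * math.pi * 13.0 * m * Ts) for m in range(50)]

print(max(abs(a - b) for a, b in zip(low, high)))  # ~0: identical samples
```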


Alternate way of presenting the same info:

\begin{equation} \hat{X}(t) = \sum_{m=0}^{\infty} X(mT_{s}) \text{sinc} \qty( \frac{t-mT_{s}}{T_{s}}) \end{equation}

Let \(X(t)\), as before, be a continuous-time, bandwidth-limited signal with Bandwidth \(B\); let \(\hat{X}(t)\) be the reconstruction of this signal from samples taken \(T_{s} < \frac{1}{2B}\) apart; then \(\hat{X}(t) = X(t)\). Otherwise, if \(T_{s} > \frac{1}{2B}\), then the reconstruction \(\hat{X}(t) \neq X(t)\), but the samples at \(mT_{s}\) will still match (that is, \(X(m T_{s}) = \hat{X}(m T_{s})\)) and \(\hat{X}(t)\) will be a Baseband Signal whose spectrum is limited to \([0, \frac{1}{2T_{s}}] = [0, \frac{F_{s}}{2}]\). This second case is called “aliasing”, or the “stroboscopic effect”.