# Image Representation

## Continuous-domain, discrete-domain, and finite-size images

An *image* is a spatially varying signal $s(x,y)$, where $x$ and $y$ are two *spatial* coordinates. The signal value $s(x,y)$ at each spatial location $(x,y)$ can be either a scalar (e.g. light intensity for grayscale images) or a vector (e.g. a three-dimensional vector for RGB color images, or more generally a $P$-dimensional vector for multispectral images). In the latter case, we can treat each vector component separately as a scalar image (referred to as a channel), sometimes after a certain transformation in the $P$-dimensional vector space.

In digital image processing, images are discretized into samples at discrete spatial locations that are indexed by integer coordinates $[m,n]$. Typically, a *discrete-domain* image $s[m,n]$ is related to a *continuous-domain* image $s(x,y)$ through the *sampling* operation

$$s[m,n] = s(m\Delta_{x},\, n\Delta_{y}),$$

where $\Delta_{x}$ and $\Delta_{y}$ are the sampling intervals in the $x$ and $y$ dimensions, respectively. More generally, a discrete-domain image is obtained through the generalized-sampling operation

$$s[m,n] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} s(x,y)\, \phi_{m,n}(x,y)\, dx\, dy,$$

where $\phi_{m,n}(x,y)$ is the *point-spread function* of the image sensor (e.g. a photometric sensor in a digital camera) at the location indexed by $[m,n]$. Typically, the point-spread functions at different locations are simply shifted versions of a single function, as

$$\phi_{m,n}(x,y) = \phi(x - m\Delta_{x},\, y - n\Delta_{y}),$$

and $\phi(x,y)$ is called the *sampling kernel*.

Furthermore, a discrete image $s[m,n]$ is often of *finite* size; for example, $0\le m\le M-1,\ 0\le n\le N-1$. Then $s[m,n]$ can also be treated as an $M\times N$ matrix. The image sample $s[m,n]$, together with its location $[m,n]$, is often called a *pixel*, or picture element.

## Fourier transforms and sampling theorem

It is often very effective, both conceptually and computationally, to represent images in the *frequency domain* using the *Fourier transform*. For a continuous-domain image $s(x,y)$, its Fourier transform is defined as

$$S(u,v) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} s(x,y)\, e^{-j2\pi(ux+vy)}\, dx\, dy.$$

Here, $u$ and $v$ denote frequency variables, with units reciprocal to those of $x$ and $y$. For example, if the spatial coordinate $x$ is in $\text{mm}$, then the corresponding frequency variable $u$ is in $\text{mm}^{-1}$. Under certain conditions, the image $s(x,y)$ can be exactly recovered from its frequency-domain representation $S(u,v)$ by the *inverse Fourier transform*

$$s(x,y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} S(u,v)\, e^{j2\pi(ux+vy)}\, du\, dv.$$

We denote this pair of signals related by the Fourier transform (FT) as

$$s(x,y) \;\overset{\text{FT}}{\longleftrightarrow}\; S(u,v).$$

For a discrete image $s[m,n]$, the *discrete-space Fourier transform* (DSFT) relation

$$s[m,n] \;\overset{\text{DSFT}}{\longleftrightarrow}\; S_{d}(u,v)$$

is defined as

$$S_{d}(u,v) = \sum_{m=-\infty}^{\infty}\sum_{n=-\infty}^{\infty} s[m,n]\, e^{-j2\pi(um+vn)}.$$

It is easy to see that $S_{d}(u,v)$ is a periodic function:

$$S_{d}(u+k,\, v+l) = S_{d}(u,v) \quad \text{for all integers } k,\, l,$$

and thus we only need to consider the function in one period; e.g. ${S}_{d}(u,v)$ with $\left|u\right|\le 1/2,\phantom{\rule{0.277778em}{0ex}}\left|v\right|\le 1/2$.

**Theorem 1 (Sampling)** Suppose that the discrete-domain image $s[m,n]$ is related to the continuous-domain image $s(x,y)$ through the sampling operation [link]. Then their Fourier transforms are related by

$$S_{d}(u,v) = \frac{1}{\Delta_{x}\Delta_{y}} \sum_{k=-\infty}^{\infty}\sum_{l=-\infty}^{\infty} S\!\left(\frac{u-k}{\Delta_{x}},\, \frac{v-l}{\Delta_{y}}\right).$$

(Sketch) One way to prove this is to express $s[m,n]$ using [link] by substituting $x=m\Delta_{x},\ y=n\Delta_{y}$, and then to “match” the result with the right-hand side of [link].

The summation on the right-hand side of [link] consists of $S(u/\Delta_{x},\, v/\Delta_{y})$ and its copies translated in frequency by $(k,l)$. The copies with $(k,l)\ne (0,0)$ are called *alias* terms. If $s(x,y)$ is *bandlimited* such that

$$S(u,v) = 0 \quad \text{for } |u| \ge \frac{1}{2\Delta_{x}} \text{ or } |v| \ge \frac{1}{2\Delta_{y}},$$

then the alias terms do not overlap with $S(u/\Delta_{x},\, v/\Delta_{y})$, and thus $S(u,v)$ can be exactly recovered from $S_{d}(u,v)$ simply by

$$S(u,v) = \Delta_{x}\Delta_{y}\, S_{d}(u\Delta_{x},\, v\Delta_{y})\, \mathrm{rect}(u\Delta_{x})\, \mathrm{rect}(v\Delta_{y}).$$

Here the rectangular function is defined as

$$\mathrm{rect}(t) = \begin{cases} 1 & \text{if } |t| \le 1/2, \\ 0 & \text{otherwise.} \end{cases}$$

We will show later that [link] in the spatial domain is equivalent to

$$s(x,y) = \sum_{m=-\infty}^{\infty}\sum_{n=-\infty}^{\infty} s[m,n]\, \mathrm{sinc}\!\left(\frac{x-m\Delta_{x}}{\Delta_{x}}\right) \mathrm{sinc}\!\left(\frac{y-n\Delta_{y}}{\Delta_{y}}\right),$$

where the sinc function is defined as

$$\mathrm{sinc}(t) = \frac{\sin(\pi t)}{\pi t}.$$

For the discrete image $s[m,n]$ of finite size $M\times N$ with $0\le m\le M-1,\ 0\le n\le N-1$, we have the *discrete Fourier transform* (DFT) relation

$$s[m,n] \;\overset{\text{DFT}}{\longleftrightarrow}\; S[k,l],$$

which is defined as

$$S[k,l] = \sum_{m=0}^{M-1}\sum_{n=0}^{N-1} s[m,n]\, e^{-j2\pi(km/M + ln/N)}, \qquad
s[m,n] = \frac{1}{MN}\sum_{k=0}^{M-1}\sum_{l=0}^{N-1} S[k,l]\, e^{j2\pi(km/M + ln/N)}.$$

Therefore the DFT maps an $M\times N$ image in the spatial domain into an $M\times N$ image in the frequency domain; both images can have complex values.

Relating [link] to [link], we see that if the $M\times N$ image $s[m,n]$ is zero-padded outside its support $[0,M-1]\times [0,N-1]$, then $S[k,l]$ is a sampled version of $S_{d}(u,v)$:

$$S[k,l] = S_{d}\!\left(\frac{k}{M},\, \frac{l}{N}\right).$$

In summary, we have seen the following three Fourier transforms:

$$s(x,y) \overset{\text{FT}}{\longleftrightarrow} S(u,v), \qquad
s[m,n] \overset{\text{DSFT}}{\longleftrightarrow} S_{d}(u,v), \qquad
s[m,n] \overset{\text{DFT}}{\longleftrightarrow} S[k,l].$$

Among these transforms, only the last one, the DFT, is computationally feasible (i.e. it involves only finite sums). Moreover, the DFT can be implemented efficiently with fast Fourier transform (FFT) algorithms. In moving from the FT to the DSFT and then to the DFT, we first discretize the spatial domain and then the frequency domain. Therefore, it is important to understand [link] and [link] so that we can relate the computational results and images obtained by the DFT to the frequency representation of the original image in the real world.
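The DFT and its FFT implementation can be checked against each other numerically. The following sketch (NumPy; the small test image is an arbitrary illustration) evaluates the DFT sum directly and confirms that it matches the FFT-based computation:

```python
import numpy as np

# A small 4x4 test image (values are arbitrary).
M, N = 4, 4
s = np.arange(M * N, dtype=float).reshape(M, N)

# Direct evaluation of S[k,l] = sum_m sum_n s[m,n] e^{-j2pi(km/M + ln/N)}.
S_direct = np.zeros((M, N), dtype=complex)
for k in range(M):
    for l in range(N):
        for m in range(M):
            for n in range(N):
                S_direct[k, l] += s[m, n] * np.exp(-2j * np.pi * (k * m / M + l * n / N))

# The FFT computes the same transform in O(MN log(MN)) operations.
S_fft = np.fft.fft2(s)

assert np.allclose(S_direct, S_fft)
# The DC coefficient S[0,0] is the sum of all pixel values.
assert np.isclose(S_fft[0, 0].real, s.sum())
```

The direct sum costs $O(M^2N^2)$ operations, which is why the FFT is essential for images of practical size.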

## Vector-space framework

In a more abstract framework, we can view each image as a vector in an appropriate vector space (for continuous-domain, discrete-domain, or finite-support discrete-domain images). The associated Fourier transform is a linear mapping, or linear operator, that maps a vector in the spatial domain into a vector in the frequency domain. We can express the Fourier transform and its inverse using matrix-vector multiplication notation

In the vector-space framework, it is particularly useful to view the inverse Fourier transform as a *basis expansion*. For example, the inverse DFT in [link] can be written as

$$s[m,n] = \frac{1}{MN}\sum_{k=0}^{M-1}\sum_{l=0}^{N-1} S[k,l]\, f_{k,l}[m,n],$$

in which the image $s$ is expanded as a *linear combination of basis images* $f_{k,l}$, where

$$f_{k,l}[m,n] = e^{j2\pi(km/M + ln/N)}.$$

Other transforms such as the wavelet transform simply provide other basis expansions.
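As a quick numerical check of the basis-expansion view, the sketch below (NumPy; the random test image is illustrative) rebuilds an image as a weighted sum of complex-exponential basis images, with the DFT coefficients as weights:

```python
import numpy as np

M, N = 4, 4
rng = np.random.default_rng(0)
s = rng.standard_normal((M, N))
S = np.fft.fft2(s)

# Build each basis image f_{k,l}[m,n] = exp(+j2pi(km/M + ln/N)) and
# accumulate the weighted sum (1/(MN)) * sum_k sum_l S[k,l] * f_{k,l}.
m = np.arange(M).reshape(M, 1)
n = np.arange(N).reshape(1, N)
s_rec = np.zeros((M, N), dtype=complex)
for k in range(M):
    for l in range(N):
        f_kl = np.exp(2j * np.pi * (k * m / M + l * n / N))
        s_rec += S[k, l] * f_kl
s_rec /= M * N

# The basis expansion reproduces the image exactly (up to round-off).
assert np.allclose(s_rec.real, s)
assert np.max(np.abs(s_rec.imag)) < 1e-9
```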

**Problems**

- Complete the proof of the sampling theorem.
- Suppose that $s(x,y)$ is a bandlimited image with Fourier transform $S(f_{x},f_{y})=0$ for $|f_{x}|\ge 1/(2\Delta_{x})$ or $|f_{y}|\ge 1/(2\Delta_{y})$. This image is captured by a CCD array whose sensors are placed on a rectangular grid with spacing $\Delta_{x}\times \Delta_{y}$, and each sensor measures the integral of the light intensity falling on an area of size $\Delta_{x}\times \Delta_{y}$ on this grid. Derive a formula to recover $s(x,y)$ exactly from these discrete CCD measurements.
- Prove that the inverse DFT perfectly recovers the image. That is, suppose that $S[k,l]$ is given in [link] then show that the right-hand side of [link] indeed returns $s[m,n]$.
- A 3 in by 4 in photo is discretized by a 200 dpi (dots-per-inch) scanner, resulting in a 600 by 800 digital image. Assume that aliasing is negligible. Suppose that we want to filter out all spatial frequencies above 50 $\text{inch}^{-1}$ in both the horizontal and vertical dimensions. One way is to zero-pad the image to the size 1024 by 1024 so that we can use a fast Fourier transform (FFT) algorithm to compute the DFT of size 1024 by 1024 of the zero-padded image. Find the indices of the DFT coefficients that need to be zeroed out before taking the inverse DFT to achieve the desired filtering effect.

# Image Filtering

## Convolution operations

The image (linear) *filtering* operation in the continuous domain is defined by the *convolution*

$$r(x,y) = (s*h)(x,y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} s(x',y')\, h(x-x',\, y-y')\, dx'\, dy',$$

and similarly in the discrete domain:

$$r[m,n] = (s*h)[m,n] = \sum_{p=-\infty}^{\infty}\sum_{q=-\infty}^{\infty} s[p,q]\, h[m-p,\, n-q].$$

The two-dimensional signal $h(x,y)$ or $h[m,n]$ is called the *filter*, *mask*, or *point-spread function*.

## Examples

**Example 1 (First-order derivatives)**
The first-order derivatives in the $x$ and $y$ directions of a discrete image $s[m,n]$ can be approximated by finite differences

which are convolutions with the following filters

Here, in the matrix form, the row and column indices correspond to the $x$ (first) and $y$ (second) dimensions, respectively, and the sample in the box corresponds to the origin (i.e. $(m,n)=(0,0)$).
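A minimal numerical sketch of such finite differences, assuming the backward-difference convention $s[m,n]-s[m-1,n]$ (the text's boxed-origin convention may place the difference one sample differently):

```python
import numpy as np

# A small test image (values are arbitrary).
s = np.array([[1., 2., 4.],
              [1., 3., 6.],
              [1., 4., 8.]])

# Backward differences along the first (x) and second (y) axes;
# boundary samples are simply dropped here rather than padded.
s_x = s[1:, :] - s[:-1, :]
s_y = s[:, 1:] - s[:, :-1]

assert s_x.shape == (2, 3) and s_y.shape == (3, 2)
assert np.allclose(s_y[0], [1., 2.])  # differences of the row [1, 2, 4]
```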

**Example 2 (The Laplacian and image sharpening)**
The Laplacian of a two-dimensional signal $s(x,y)$ is defined as

$$\nabla^{2} s = \frac{\partial^{2} s}{\partial x^{2}} + \frac{\partial^{2} s}{\partial y^{2}}.$$

Extending the definition of the first-order derivatives above to the second-order derivatives, we have

$$\frac{\partial^{2} s}{\partial x^{2}} \approx s[m+1,n] - 2s[m,n] + s[m-1,n], \qquad
\frac{\partial^{2} s}{\partial y^{2}} \approx s[m,n+1] - 2s[m,n] + s[m,n-1].$$

It follows that the Laplacian can be computed as

$$(\nabla^{2} s)[m,n] \approx s[m+1,n] + s[m-1,n] + s[m,n+1] + s[m,n-1] - 4\,s[m,n],$$

which is convolution with the following filter:

$$\begin{bmatrix} 0 & 1 & 0 \\ 1 & \boxed{-4} & 1 \\ 0 & 1 & 0 \end{bmatrix}.$$

Sometimes, the Laplacian is extended by adding two more terms for the two diagonal directions, leading to the following filter:

$$\begin{bmatrix} 1 & 1 & 1 \\ 1 & \boxed{-8} & 1 \\ 1 & 1 & 1 \end{bmatrix}.$$

Since the Laplacian is a derivative operator, it highlights intensity discontinuities (or edges) in an image. We can sharpen an image by adding the negative of the Laplacian image to the original:

$$r[m,n] = s[m,n] - (\nabla^{2} s)[m,n].$$

Using the Laplacian filter [link], we can write the resulting sharpening operation as a convolution

with the following filter:

$$\begin{bmatrix} 0 & -1 & 0 \\ -1 & \boxed{5} & -1 \\ 0 & -1 & 0 \end{bmatrix}.$$

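The identity “sharpening equals the original minus the Laplacian” can be verified numerically. In the sketch below (NumPy; the naive `conv2_same` helper and the random test image are illustrative), convolving with the 4-neighbour Laplacian and subtracting gives the same result as convolving with the sharpening filter:

```python
import numpy as np

def conv2_same(s, h):
    """Naive 2D convolution with zero padding; output has the same size as s."""
    M, N = s.shape
    P, Q = h.shape
    pad = np.zeros((M + P - 1, N + Q - 1))
    pad[P // 2:P // 2 + M, Q // 2:Q // 2 + N] = s
    out = np.zeros((M, N))
    for i in range(M):
        for j in range(N):
            # Flip the filter: true convolution, not correlation.
            out[i, j] = np.sum(pad[i:i + P, j:j + Q] * h[::-1, ::-1])
    return out

# 4-neighbour discrete Laplacian and the corresponding sharpening filter.
lap = np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
sharp = np.array([[0., -1., 0.], [-1., 5., -1.], [0., -1., 0.]])

rng = np.random.default_rng(1)
s = rng.random((6, 6))

r1 = s - conv2_same(s, lap)   # original minus Laplacian
r2 = conv2_same(s, sharp)     # one convolution with the sharpening filter
assert np.allclose(r1, r2)
```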
**Example 3 (Gaussian smoothing filter)**
The two-dimensional Gaussian filter, which is often used for image smoothing, is defined as

$$g(x,y) = \frac{1}{2\pi\sigma^{2}}\, e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}.$$

The 2D Gaussian filter is separable: it is the product of 1D Gaussian filters in each dimension,

$$g(x,y) = \left(\frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{x^{2}}{2\sigma^{2}}}\right) \left(\frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{y^{2}}{2\sigma^{2}}}\right).$$

Discrete-domain Gaussian filters used in practice are sampled and truncated versions of the above continuous-domain filters.
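A sketch of such a sampled, truncated Gaussian kernel ($\sigma=1$ and the $5\times 5$ support are illustrative choices, not values fixed by the text), verifying separability as an outer product of 1D kernels:

```python
import numpy as np

sigma = 1.0
t = np.arange(-2, 3)                      # truncated support {-2, ..., 2}
g1 = np.exp(-t**2 / (2 * sigma**2))
g1 /= g1.sum()                            # normalize to unit DC gain

# Separability: the 2D kernel is the outer product of two 1D kernels,
# so a 2D Gaussian smoothing can be done as two cheap 1D convolutions.
g2 = np.outer(g1, g1)

assert np.isclose(g2.sum(), 1.0)
# Check separability pointwise: g2[i,j] = g1[i] * g1[j].
assert np.allclose(g2, g1[:, None] * g1[None, :])
```

Separability reduces the cost of filtering an $M\times N$ image with a $K\times K$ kernel from $O(MNK^2)$ to $O(MNK)$.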

**Example 4 (Sobel edge detector)**
The Sobel edge detector is obtained by smoothing the image in the perpendicular direction before computing the directional derivatives. The associated Sobel filters are given by

Edges are detected as pixels $[m,n]$ where the magnitude of the gradient is above a certain threshold $T$; i.e.

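A minimal sketch of Sobel edge detection on a synthetic step edge (NumPy; the kernel orientation below follows one common convention, which may be transposed relative to the text's row-as-$x$ convention):

```python
import numpy as np

# Sobel kernels: derivative in one direction, smoothing in the other.
hx = np.array([[-1., -2., -1.], [0., 0., 0.], [1., 2., 1.]])
hy = hx.T

def correlate_valid(s, h):
    """2D correlation over the 'valid' region only (no padding)."""
    M, N = s.shape
    out = np.zeros((M - 2, N - 2))
    for i in range(M - 2):
        for j in range(N - 2):
            out[i, j] = np.sum(s[i:i + 3, j:j + 3] * h)
    return out

# A vertical step edge: left half 0, right half 1.
s = np.zeros((5, 6))
s[:, 3:] = 1.0

gx = correlate_valid(s, hx)
gy = correlate_valid(s, hy)
mag = np.hypot(gx, gy)          # gradient magnitude

T = 1.0                         # illustrative threshold
edges = mag > T
# The detector fires along the step between the two halves.
assert edges[:, 1].all()
```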
## Frequency response of a filter

A key result in signal and image processing is that convolution in the space domain becomes multiplication in the frequency domain:

$$r(x,y) = (s*h)(x,y) \;\overset{\text{FT}}{\longleftrightarrow}\; R(u,v) = S(u,v)\, H(u,v).$$

Therefore, the Fourier transform $H(u,v)$ of the filter, called *frequency response*, indicates how certain frequency components of the input image $s(x,y)$ are amplified or attenuated in the resulting filtered image $r(x,y)$.

However, multiplication in the DFT domain corresponds to *circular convolution* in the space domain:

$$R[k,l] = S[k,l]\, H[k,l] \;\overset{\text{DFT}}{\longleftrightarrow}\; r[m,n] = (s\circledast h)[m,n].$$

The circular convolution operation for images of size $M\times N$ is defined as

$$(s\circledast h)[m,n] = \sum_{p=0}^{M-1}\sum_{q=0}^{N-1} s[p,q]\, h\big[\langle m-p\rangle_{M},\, \langle n-q\rangle_{N}\big],$$

where $\langle n\rangle_{N}$ denotes $n$ modulo $N$.
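The equivalence between pointwise multiplication of 2D DFTs and circular convolution can be verified directly (NumPy sketch with random test images):

```python
import numpy as np

M, N = 4, 5
rng = np.random.default_rng(2)
s = rng.standard_normal((M, N))
h = rng.standard_normal((M, N))

# Circular convolution by direct summation with modulo indices.
r = np.zeros((M, N))
for m in range(M):
    for n in range(N):
        for p in range(M):
            for q in range(N):
                r[m, n] += s[p, q] * h[(m - p) % M, (n - q) % N]

# The same result via pointwise multiplication of 2D DFTs.
r_fft = np.fft.ifft2(np.fft.fft2(s) * np.fft.fft2(h)).real

assert np.allclose(r, r_fft)
```

To obtain *linear* convolution from the DFT, both images must first be zero-padded to at least the size of the full linear-convolution result.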

**Problems**

# Image Denoising

Sometimes we obtain images that are contaminated by noise. Image denoising aims to remove or reduce the noise present in the image. Some common image denoising methods are smoothing filters, median filtering, Wiener filtering, and wavelet thresholding (shrinkage).

## Wiener filter

We now present the removal of additive noise using Wiener filtering, first in 1D and then extending to 2D. Let $s\left[n\right]$ denote the original clean signal and $z\left[n\right]$ denote the noise. The obtained noisy signal is then

$$r[n] = s[n] + z[n].$$

The goal is to filter the noisy signal $r\left[n\right]$ with a linear filter $g\left[n\right]$ so that the output $\widehat{s}\left[n\right]=(r*g)\left[n\right]$ is an estimate of the clean signal $s\left[n\right]$. The Wiener filter $g\left[n\right]$ minimizes the expectation of the squared error,

$$E\!\left[\big(s[n] - \widehat{s}[n]\big)^{2}\right] = E\!\left[\Big(s[n] - \sum_{k} g[k]\, r[n-k]\Big)^{2}\right].$$

Differentiating the above with respect to $g\left[k\right]$ and setting the result to zero, we get

$$E\!\left[\Big(s[n] - \sum_{l} g[l]\, r[n-l]\Big)\, r[n-k]\right] = 0 \quad \text{for all } k.$$

Therefore, the Wiener filter $g\left[k\right]$ has to satisfy the following *orthogonality condition*:

$$E\big[\big(s[n] - \widehat{s}[n]\big)\, r[n-k]\big] = 0 \quad \text{for all } k.$$

If we assume that $s\left[n\right]$ and $z\left[n\right]$ are *wide-sense stationary* random processes, then so is $r\left[n\right]$ given by [link]. This means their auto-correlation and cross-correlation functions depend only on the difference of the sample indices. For example, we can write

$$R_{sr}[k] = E\big[s[n]\, r[n-k]\big], \qquad R_{rr}[k] = E\big[r[n]\, r[n-k]\big].$$

Since $\widehat{s}\left[n\right]=(r*g)\left[n\right]$, we have

$$E\big[\widehat{s}[n]\, r[n-k]\big] = \sum_{l} g[l]\, E\big[r[n-l]\, r[n-k]\big] = (g * R_{rr})[k].$$

Therefore, the orthogonality condition [link] for the Wiener filter $g\left[k\right]$ can be rewritten as

$$(g * R_{rr})[k] = R_{sr}[k] \quad \text{for all } k.$$

Taking the DTFT of the above, we obtain a closed-form formula for the Wiener filter:

$$G(u) = \frac{P_{sr}(u)}{P_{rr}(u)},$$

where the function $P$ (called the *power spectral density*) is defined as the DTFT of the corresponding correlation function $R$. For example, if $z\left[n\right]$ is a white Gaussian random process with zero mean and variance ${\sigma}^{2}$, then ${R}_{zz}\left[k\right]={\sigma}^{2}\delta \left[k\right]$ and ${P}_{zz}\left(u\right)={\sigma}^{2}$.

If, furthermore, we assume that the clean signal $s\left[n\right]$ and the noise $z\left[n\right]$ are uncorrelated and that the noise $z\left[n\right]$ has zero mean, then

$$R_{sz}[k] = 0 \quad \text{for all } k.$$

It follows that

$$P_{sr}(u) = P_{ss}(u), \qquad P_{rr}(u) = P_{ss}(u) + P_{zz}(u).$$

Therefore, we can express the Wiener filter for denoising solely in terms of the *power spectral densities* of the clean signal $s\left[n\right]$ and the noise $z\left[n\right]$ as

$$G(u) = \frac{P_{ss}(u)}{P_{ss}(u) + P_{zz}(u)}.$$

From this, it is easy to see that the Wiener filter for image denoising is given by

$$G(u,v) = \frac{P_{ss}(u,v)}{P_{ss}(u,v) + P_{zz}(u,v)}.$$

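A 1D sketch of Wiener denoising (NumPy). Here the signal PSD is taken as the periodogram of the known clean signal, an oracle idealization for illustration only; in practice both PSDs must be estimated:

```python
import numpy as np

rng = np.random.default_rng(3)
Nlen = 1024
t = np.arange(Nlen)

# A smooth (low-frequency) clean signal plus white Gaussian noise.
s = np.sin(2 * np.pi * 3 * t / Nlen) + 0.5 * np.cos(2 * np.pi * 7 * t / Nlen)
sigma = 0.5
r = s + sigma * rng.standard_normal(Nlen)

# Oracle PSDs: periodogram of the clean signal, known noise variance.
Pss = np.abs(np.fft.fft(s))**2 / Nlen
Pzz = sigma**2 * np.ones(Nlen)

G = Pss / (Pss + Pzz)                 # Wiener filter frequency response
s_hat = np.fft.ifft(G * np.fft.fft(r)).real

mse_noisy = np.mean((r - s)**2)
mse_wiener = np.mean((s_hat - s)**2)
assert mse_wiener < mse_noisy         # Wiener filtering reduces the MSE
```

Note how $G \approx 1$ where the signal dominates and $G \approx 0$ where only noise is present.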
## Wavelet denoising

Wavelet-based methods have been established as a state-of-the-art approach for denoising additive white Gaussian noise (AWGN). Note that AWGN is different from *speckle* or *salt and pepper* noise; for those types of noise, a *median filter* normally does a good job.

Because typical images are *piecewise smooth*, in the wavelet domain most of the signal information is captured in a few significant wavelet coefficients. As a result, the wavelet coefficients of the original signal stand out from the noise (after an orthonormal transform, AWGN becomes another AWGN of the same variance). Therefore, a simple *thresholding* in the wavelet domain can effectively remove the noise from the signal. [link] illustrates this basic concept of wavelet thresholding.

Thresholding can be either “soft thresholding” or “hard thresholding”. A hard thresholding estimator is implemented as

$$\widehat{w} = \begin{cases} w & \text{if } |w| > T, \\ 0 & \text{otherwise.} \end{cases}$$

A soft thresholding estimator is implemented as

$$\widehat{w} = \mathrm{sign}(w)\, \max\big(|w| - T,\, 0\big).$$

Remarkably, such a simple wavelet thresholding algorithm achieves optimal performance in a certain sense. The asymptotically optimal threshold (also referred to as the “universal threshold”) is $T=\sigma \sqrt{2{log}_{e}N}$ for signals of length $N$. In practice, a lower threshold such as $T=3\sigma$ improves the MSE significantly.
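The two estimators and the universal threshold in a minimal NumPy sketch (the coefficient values are illustrative):

```python
import numpy as np

def hard_threshold(w, T):
    """Keep coefficients with |w| > T, zero out the rest."""
    return np.where(np.abs(w) > T, w, 0.0)

def soft_threshold(w, T):
    """Shrink every surviving coefficient toward zero by T."""
    return np.sign(w) * np.maximum(np.abs(w) - T, 0.0)

w = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
T = 1.0

assert np.allclose(hard_threshold(w, T), [-3.0, 0.0, 0.0, 1.5, 4.0])
assert np.allclose(soft_threshold(w, T), [-2.0, 0.0, 0.0, 0.5, 3.0])

# The universal threshold for noise level sigma and N coefficients.
sigma, N = 1.0, 1024
T_univ = sigma * np.sqrt(2 * np.log(N))
assert 3.0 < T_univ < 4.0
```

Soft thresholding is continuous in $w$, which avoids the artifacts that the jump in hard thresholding can produce.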

There are several variations of the basic wavelet thresholding scheme that offer performance gains.

- Translation-invariant thresholding. An improved thresholding estimator called “cycle spinning” is calculated by averaging the estimators for translated versions of the signal. The algorithm is equivalent to thresholding the non-subsampled (or “à trous”) wavelet transform.
- Spatially adaptive thresholding.
- Denoising based on statistical modeling of wavelet coefficients.

**Problems**

- Prove [link] and [link].
- Derive the denoising algorithm using Wiener filter when both $s\left[n\right]$ and $z\left[n\right]$ are white Gaussian random processes with zero mean and variances ${\sigma}_{s}^{2}$ and ${\sigma}_{z}^{2}$, respectively.
- Suppose that you want to denoise an image that was contaminated by additive white Gaussian noise. In the wavelet domain (via an orthonormal transform), a wavelet coefficient of the noisy image $y$ is related to the corresponding wavelet coefficient of the clean image $x$ by
$$y=x+n,$$where $n$ is a Gaussian random variable with PDF$$\text{p}\left(n\right)=\text{const}\cdot {e}^{-\frac{{n}^{2}}{2{\sigma}^{2}}}.$$For typical natural images, $x$ can be modeled as a generalized Gaussian random variable with PDF$$\text{p}\left(x\right)=\text{const}\cdot {e}^{-{\left(\frac{\left|x\right|}{\alpha}\right)}^{\beta}}.$$The parameters $\sigma$, $\alpha$, and $\beta$ can be estimated beforehand. Derive a simple closed-form formula for the
*maximum a posteriori* estimate ${\widehat{x}}_{\text{MAP}}$ from $y$:$${\widehat{x}}_{\text{MAP}}=arg\underset{x}{max}\text{p}\left(x \mid y\right),$$in terms of $\sigma$, $\alpha$, and $\beta$.

# Image Deconvolution

Suppose that the obtained image is a noisy, convolved version of the original image:

$$r[m,n] = (h*s)[m,n] + z[m,n].$$

## Inverse filtering and Wiener filtering

Here we assume that the convolution filter $h$ is known and fixed. Without noise, a simple way to recover $s$ from $r$ is by inverse filtering:

$$\widehat{S}(u,v) = \frac{R(u,v)}{H(u,v)}.$$

However, in the presence of noise, inverse filtering the obtained signal in [link] leads to

$$\widehat{S}(u,v) = S(u,v) + \frac{Z(u,v)}{H(u,v)}.$$

Hence the noise may be “blown up” in the second term of the above equation near frequencies where $H(u,v)$ is close to zero.

Following the derivation of the Wiener filter for denoising in the last section, we obtain the Wiener filter for deconvolution as

$$G(u,v) = \frac{H^{*}(u,v)\, P_{ss}(u,v)}{|H(u,v)|^{2}\, P_{ss}(u,v) + P_{zz}(u,v)}.$$

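A 1D sketch of Wiener deconvolution (NumPy). The moving-average blur, the noise level, and the oracle signal PSD are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
Nlen = 256
t = np.arange(Nlen)
s = np.sin(2 * np.pi * 2 * t / Nlen) + 0.3 * np.sin(2 * np.pi * 5 * t / Nlen)

# Circular blur with a short moving-average filter, plus white noise.
h = np.zeros(Nlen); h[:5] = 1.0 / 5.0
H = np.fft.fft(h)
sigma = 0.05
r = np.fft.ifft(H * np.fft.fft(s)).real + sigma * rng.standard_normal(Nlen)

Pss = np.abs(np.fft.fft(s))**2 / Nlen    # oracle signal PSD (illustration)
Pzz = sigma**2 * np.ones(Nlen)

# Wiener deconvolution filter: G = conj(H) Pss / (|H|^2 Pss + Pzz).
G = np.conj(H) * Pss / (np.abs(H)**2 * Pss + Pzz)
s_hat = np.fft.ifft(G * np.fft.fft(r)).real

mse_blurred = np.mean((r - s)**2)
mse_wiener = np.mean((s_hat - s)**2)
assert mse_wiener < mse_blurred
```

Unlike plain inverse filtering, the $P_{zz}$ term in the denominator keeps $G$ bounded where $H(u,v)$ is close to zero.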
## Vector-space approach

In the vector-space framework, the deconvolution problem $(h*s)[m,n]=r[m,n]$ can be written in the familiar matrix-vector form

$$Ax = b,$$

where $x$ is the column vector corresponding to the unknown image $s[m,n]$, $b$ is the column vector corresponding to the given filtered image $r[m,n]$, and $A$ is the matrix corresponding to convolution with the filter $h[m,n]$. For example, 1D filtering of a signal ${\left\{s\left[n\right]\right\}}_{n=0}^{2}$ by a filter ${\left\{h\left[n\right]\right\}}_{n=0}^{1}$ can be expressed as

$$\begin{bmatrix} h[0] & 0 & 0 \\ h[1] & h[0] & 0 \\ 0 & h[1] & h[0] \\ 0 & 0 & h[1] \end{bmatrix}
\begin{bmatrix} s[0] \\ s[1] \\ s[2] \end{bmatrix}
=
\begin{bmatrix} r[0] \\ r[1] \\ r[2] \\ r[3] \end{bmatrix}.$$

The matrix-vector formulation allows us to resort to a rich body of literature on solving *linear inverse problems*.

In the presence of additive noise, our problem becomes

$$Ax + z = b,$$

where $z$ is a random vector. Given the data $b$, a common approach is to search for the *maximum-likelihood* (ML) solution:

$$x_{\text{ML}}^{*} = \arg\max_{x}\, \text{Pr}(b \mid x).$$

If $z$ is white Gaussian noise with zero mean and variance ${\sigma}^{2}$, then given $x$, $b=Ax+z$ is also a Gaussian random vector, with mean $Ax$ and variance ${\sigma}^{2}$. That is,

$$\text{Pr}(b \mid x) = \text{const}\cdot e^{-\frac{\|b-Ax\|_{2}^{2}}{2\sigma^{2}}}.$$

In this case, the ML solution of [link] is

$$x_{\text{ML}}^{*} = \arg\min_{x}\, \|Ax - b\|_{2}^{2},$$

which is also the *least-squares* (LS) solution of [link].

If *prior knowledge* of $x$ is available in the form of $\text{Pr}\left(x\right)$, then we can search for the *maximum a posteriori* (MAP) solution

$$x_{\text{MAP}}^{*} = \arg\max_{x}\, \text{Pr}(x \mid b) = \arg\min_{x}\, \big\{-\log \text{Pr}(b \mid x) - \log \text{Pr}(x)\big\}.$$

The second term in [link] is often called *penalty* or *regularization* term.

If we again assume that the noise $z$ is white Gaussian, then the MAP solution becomes

$$x_{\text{MAP}}^{*} = \arg\min_{x}\, \big\{\|Ax - b\|_{2}^{2} - 2\sigma^{2}\log \text{Pr}(x)\big\},$$

which is also called a *regularized or penalized* LS solution of [link].

Consider the ML or LS solution in [link]. From linear algebra, we know that this solution can be obtained by the pseudo-inverse: ${x}_{\text{ML/LS}}^{*}={A}^{\dagger}b={\left({A}^{T}A\right)}^{-1}{A}^{T}b$. However, in image processing applications, the size of $x$ is typically on the order of millions of samples, and thus storing a matrix $A$ of size ${\sim}10^{6}\times 10^{6}$ and computing its pseudo-inverse are impractical. On the other hand, although $A$ is a big matrix, it has a compact description (convolution with a filter $h[m,n]$) and fast algorithms for computing multiplications by $A$ and ${A}^{T}$ (convolution with short FIR filters, or using the FFT). Under these circumstances, *iterative methods* offer a practical solution.

## Iterative methods for linear inversion

Generally, an iterative method produces a sequence of iterates, each of which should successively improve the solution and, hopefully, converge quickly to the true solution. To measure the improvement at each iteration, an *objective function* is used.

The ML/LS solution of [link] is the minimizer of the following objective function:

$$f(x) = \|Ax - b\|_{2}^{2}.$$

The gradient of this objective function is

$$\nabla f(x) = 2A^{T}(Ax - b).$$

A direct solver would set $\nabla f\left(x\right)=0$, which leads to the normal equation ${A}^{T}Ax={A}^{T}b$ and the solution using the pseudo-inverse.

The search for the minimizer of $f\left(x\right)$ is a multidimensional search problem. Therefore, at each iteration step we limit the search to a one-dimensional search. Specifically, assume that ${x}_{n}$ is the current estimate of the solution. For the next step, we fix a *search direction* ${p}_{n}$ and limit the search to the line along that direction:

$$x_{n+1} = x_{n} + \alpha_{n} p_{n},$$

where ${\alpha}_{n}\in \mathbb{R}$ is chosen as the optimal *step size*

$$\alpha_{n} = \arg\min_{\alpha}\, f(x_{n} + \alpha p_{n}).$$

This optimal step size ${\alpha}_{n}$ is the solution of the equation

$$\frac{d}{d\alpha} f(x_{n} + \alpha p_{n}) = 0,$$

which can easily be derived to be

$$\alpha_{n} = \frac{p_{n}^{T}A^{T}(b - Ax_{n})}{\|Ap_{n}\|_{2}^{2}}.$$

Therefore, with an initial guess ${x}_{0}$ and a sequence of search directions ${p}_{0},{p}_{1},...$, the iterative search process is completely determined.

A common choice for the search direction is the negative of the gradient of $f$ at each step:

$$p_{n} = -\nabla f(x_{n}) = 2A^{T}(b - Ax_{n}).$$

This choice of directions leads to the *steepest descent* procedure. Instead of using the optimal step size ${\alpha}_{n}$ at each iteration as given in [link], which incurs some computational cost, a simpler approach is to use a fixed step size $\lambda$ for all iterations. This leads to the following iteration, known as the *Landweber iteration*:

$$x_{n+1} = x_{n} + \lambda A^{T}(b - Ax_{n}).$$

We now examine the convergence of the Landweber iteration. Rewriting [link] as

$$x_{n+1} = C x_{n} + \lambda A^{T}b, \qquad C = I - \lambda A^{T}A,$$

we then have

$$x_{n} = C^{n}x_{0} + \lambda \sum_{i=0}^{n-1} C^{i}A^{T}b.$$

Therefore, the sequence ${x}_{n}$ converges if all the eigenvalues of $C=I-\lambda {A}^{T}A$ have magnitude less than 1. This condition is equivalent to

$$0 < \lambda < \frac{2}{\lambda_{\max}(A^{T}A)}.$$

Under this condition we have

$$\lim_{n\to\infty} C^{n} = 0, \qquad \lambda \sum_{i=0}^{\infty} C^{i}A^{T} = \lambda (I-C)^{-1}A^{T} = (A^{T}A)^{-1}A^{T},$$

and thus

$$\lim_{n\to\infty} x_{n} = (A^{T}A)^{-1}A^{T}b,$$

which is, as expected, the same as the solution using the pseudo-inverse.
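A small NumPy sketch of the Landweber iteration on a random overdetermined system (the matrix size, step-size choice, and iteration count are illustrative), confirming convergence to the pseudo-inverse solution:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((8, 4))
x_true = rng.standard_normal(4)
b = A @ x_true

# Landweber iteration x_{n+1} = x_n + lambda * A^T (b - A x_n);
# it converges when 0 < lambda < 2 / lambda_max(A^T A).
lam = 1.0 / np.linalg.norm(A, 2)**2   # safely inside the convergence range
x = np.zeros(4)
for _ in range(5000):
    x = x + lam * A.T @ (b - A @ x)

# The limit is the least-squares / pseudo-inverse solution.
x_ls = np.linalg.pinv(A) @ b
assert np.allclose(x, x_ls, atol=1e-6)
```

Note that the iteration uses only multiplications by $A$ and $A^{T}$, which is exactly what makes it practical when $A$ is a huge convolution operator.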

**Problems**

- Derive the Wiener filter for deconvolution given in [link].
- *Adjoint of convolution.* Iterative methods often require fast algorithms for computing multiplication with ${A}^{T}$, the adjoint of $A$.
  - Suppose that $\tilde{C}$ is the *circular* convolution matrix of size $N\times N$ for circular convolution with a length-$N$ real filter $h=[{h}_{0},{h}_{1},...,{h}_{N-1}]$. Develop a MATLAB function for a fast implementation of ${\tilde{C}}^{T}y$ using the `fft` and `ifft` commands.
  - Suppose that $C$ is the *linear* convolution matrix of size $(L+N-1)\times L$ for the linear convolution of a length-$N$ real FIR filter $h=[{h}_{0},{h}_{1},...,{h}_{N-1}]$ with a length-$L$ input signal, where $L$ is much greater than $N$. Develop a MATLAB function for a fast implementation of ${C}^{T}y$ using the `conv` command, where `w = conv(u,v)` convolves vectors `u` and `v`.

# Image Interpolation

Interpolation can be considered the inverse operation of sampling: it aims to reconstruct the signal value at any location, given samples on a sampling grid. Image interpolation has numerous applications, including general geometric transformations (image zooming, resizing, rotation, and warping) and digital-to-analog conversion.

We assume that the continuous-domain image $s(x,y)$ at resolution $({\Delta}_{x},{\Delta}_{y})$ has the following representation:

$$s(x,y) = \sum_{m\in\mathbb{Z}}\sum_{n\in\mathbb{Z}} c[m,n]\, \psi(x - m\Delta_{x},\, y - n\Delta_{y}).$$

Taking the Fourier transform of both sides of [link] we obtain the following equivalent representation in the frequency domain

For a fixed $\psi$, the space of all functions $s(x,y)$ that can be represented in the form [link] for some discrete-domain signal $\left\{c\right[m,n\left]\right\}$ is a subspace, called a *shift-invariant* subspace.

Three basic examples of shift-invariant subspaces are:

- For
$$\psi (x,y)=\left\{\begin{array}{cc}1\hfill & \text{if}\phantom{\rule{4.pt}{0ex}}0\le x<{\Delta}_{x},0\le y<{\Delta}_{y}\hfill \\ 0\hfill & \text{otherwise,}\hfill \end{array}\right.$$we have a subspace of piecewise-constant functions whose pieces are the rectangular regions $(x,y)\in [m{\Delta}_{x},(m+1){\Delta}_{x})\times [n{\Delta}_{y},(n+1){\Delta}_{y})$. This corresponds to a zero-order-hold (ZOH) digital-to-analog (D/A) converter.
- For
$$\psi (x,y)=\text{sinc}(x/{\Delta}_{x})\phantom{\rule{0.277778em}{0ex}}\text{sinc}(y/{\Delta}_{y})$$we have a subspace of functions bandlimited to the frequencies $\left|u\right|\le 1/\left(2{\Delta}_{x}\right),\phantom{\rule{0.277778em}{0ex}}\left|v\right|\le 1/\left(2{\Delta}_{y}\right)$. This corresponds to an ideal D/A converter. In this case, applying [link]-[link] we obtain [link].
- Between these two above extremes, if $\psi (x,y)$ is a B-spline function then we have a subspace of spline functions.
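A minimal sketch of the ZOH case: evaluating the piecewise-constant expansion on a finer grid simply repeats each coefficient ($\Delta=1$ and the 4x evaluation factor below are illustrative choices):

```python
import numpy as np

# Coefficients c[m] of a 1D piecewise-constant expansion (illustrative).
c = np.array([1.0, 3.0, 2.0, 5.0])
Delta = 1.0
up = 4                                   # fine-grid samples per interval

# Evaluate s(x) = sum_m c[m] * psi(x - m*Delta), psi = indicator of [0, Delta).
x = np.arange(len(c) * up) * (Delta / up)
m = np.floor(x / Delta).astype(int)      # which piece each x falls in
s = c[m]

assert np.allclose(s[:4], 1.0) and np.allclose(s[4:8], 3.0)
assert np.allclose(s, np.repeat(c, up))  # ZOH = sample-and-repeat
```

The 2D case is the same construction applied separably in $x$ and $y$.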

We consider the generalized sampling scheme where the sampled image is given by

$$w_{1}[m,n] = (s * \overline{\phi}_{1})(m\Delta_{x},\, n\Delta_{y}).$$

Here we use the notation $\overline{\phi}(x,y)=\phi (-x,-y)$, so that correlation can be written as a convolution.

If $s(x,y)$ has the form of [link], then substituting it into [link] we have

$$w_{1}[m,n] = (c * b_{1})[m,n],$$

where

$$b_{1}[m,n] = (\psi * \overline{\phi}_{1})(m\Delta_{x},\, n\Delta_{y}).$$

Note that of these last two convolutions, the first is in the discrete domain while the second is in the continuous domain.

Given $\psi$ and ${\phi}_{1}$, we can compute ${b}_{1}$. Then, solving the deconvolution problem [link], we can recover the coefficients $c$ from the sampled data. Finally, using [link] we can compute the image $s(x,y)$ at *any location* $(x,y)$.

Now suppose that we are given a sampled image at a lower resolution:

$$w_{2}[m,n] = (s * \overline{\phi}_{2})(2m\Delta_{x},\, 2n\Delta_{y}).$$

Then, similarly, we have

$$w_{2}[m,n] = (c * b_{2})[2m,\, 2n],$$

where

$$b_{2}[m,n] = (\psi * \overline{\phi}_{2})(m\Delta_{x},\, n\Delta_{y}).$$

In words, the sampled image ${w}_{2}$ is obtained from $c$ by filtering followed by downsampling.

**Problems**

- Given [link], prove [link].
- Prove [link]-[link].
- Suppose that the input image $s(x,y)$ can be represented by the linear expansion
$$s(x,y)=\sum _{m\in \mathbb{Z}}\sum _{n\in \mathbb{Z}}c[m,n]\phantom{\rule{0.277778em}{0ex}}\psi (x-m\Delta ,y-n\Delta ),$$where $\psi$ is a chosen compactly supported basis function (e.g. a B-spline). Our imaging system *over-samples* the convolution of $s(x,y)$ with a known and compactly supported filter $\phi (x,y)$ and returns the discrete image$$r[m,n]=(s*\phi )(m\Delta /2,n\Delta /2),\phantom{\rule{1.em}{0ex}}\text{for}\phantom{\rule{4.pt}{0ex}}m,n\in \mathbb{Z}.$$
  - Derive a fast algorithm to implement the “forward” linear operator $A$ that maps the discrete image $\left\{c\right[m,n\left]\right\}$ into the discrete image $\left\{r\right[m,n\left]\right\}$.
  - Derive a fast algorithm to implement ${A}^{T}$, the adjoint operator of $A$.

# Image Reconstruction

## Image reconstruction from projections

In several important applications, such as computed tomography (CT), we can collect projection data of an object and would like to reconstruct an internal view of the object. The projection data of an image $s(x,y)$ are defined as

$$p_{\theta}(t) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} s(x,y)\, \delta(x\cos\theta + y\sin\theta - t)\, dx\, dy.$$

**Theorem 2 (Projection-slice)**
Let ${P}_{\theta}\left(f\right)$ be the one-dimensional Fourier transform of ${p}_{\theta}\left(t\right)$. Then

$$P_{\theta}(f) = S(f\cos\theta,\, f\sin\theta).$$

From the definitions of the Fourier transform and the projection, we have

$$P_{\theta}(f) = \int_{-\infty}^{\infty} p_{\theta}(t)\, e^{-j2\pi f t}\, dt
= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} s(x,y)\, e^{-j2\pi f(x\cos\theta + y\sin\theta)}\, dx\, dy.$$

Applying the following change of variables (in fact, a rotation)

$$\begin{bmatrix} t \\ \tau \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$$

to the last integral, we obtain

$$P_{\theta}(f) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} s(t\cos\theta - \tau\sin\theta,\; t\sin\theta + \tau\cos\theta)\, e^{-j2\pi f t}\, d\tau\, dt = S(f\cos\theta,\, f\sin\theta).$$

Given $S(f\cos\theta,\, f\sin\theta)$, we take the inverse Fourier transform in polar coordinates to recover $s(x,y)$:

$$s(x,y) = \int_{0}^{\pi}\int_{-\infty}^{\infty} P_{\theta}(f)\, |f|\, e^{j2\pi f(x\cos\theta + y\sin\theta)}\, df\, d\theta.$$

Let

$$q_{\theta}(t) = \int_{-\infty}^{\infty} P_{\theta}(f)\, |f|\, e^{j2\pi f t}\, df;$$

then we have

$$s(x,y) = \int_{0}^{\pi} q_{\theta}(x\cos\theta + y\sin\theta)\, d\theta.$$

From [link] we see that ${q}_{\theta}\left(t\right)$ is obtained by filtering ${p}_{\theta}\left(t\right)$ with the ramp filter $|f|$. The operation [link] then back-projects the filtered projections to the spatial domain. Thus this is called the “filtered-backprojection” algorithm for image reconstruction.

## General linear inversion

In a more general setting, we have access to linear measurements of an unknown image; after discretization, image reconstruction amounts to solving the linear inverse problem

$$Ax = b,$$

where $A$ is a matrix representing the linear image-data relationship, $x$ is a column vector representing the unknown image, and $b$ is a column vector representing the collected data. We can use tools for solving linear inverse problems as described before.

**Problems**

- Suppose that you would like to reconstruct an unknown image from its given projection data. You planned to use the filtered-backprojection method, but forgot the *ramp* filtering step in [link] and instead back-projected the projection data directly. Even worse, the acquired projection data were deleted! Studying this problem in the *continuous domain*, show how you can recover the true image from the wrongly reconstructed image. What would be the major problem in this recovery, and how can you overcome it?
- *Projection and back projection.* Consider the two-by-two image
$$\begin{array}{c}\hfill \mathbf{S}=\left[\begin{array}{cc}{s}_{11}& {s}_{12}\\ {s}_{21}& {s}_{22}\end{array}\right]\end{array}$$
Given the “row projections” ${p}_{j}^{R}={\sum}_{i}{s}_{ij}$ and the “column projections” ${p}_{i}^{C}={\sum}_{j}{s}_{ij}$, can the elements of the image be recovered? Explain. Find the image $\widehat{\mathbf{S}}$ that is obtained by back-projecting the row and column projections, and compare $\widehat{\mathbf{S}}$ with $\mathbf{S}$.
- *Regularization using SVD.* Consider the following regularized inverse problem
$${x}^{*}=arg\underset{x}{min}\{{\parallel Ax-b\parallel}_{2}^{2}+\lambda {\parallel x-m\parallel}_{2}^{2}\}$$
for solving $Ax=b$ while enforcing the solution $x$ to be close to a known vector $m$. Use the SVD to diagonalize the above minimization problem and find the solution ${x}^{*}$ explicitly.
