Abstract—This paper proposes a way to enhance the
compression ratio of images by deleting some parts of an image before
transmission. The remaining data, together with essential details for recovering
the removed regions, are encoded to produce the output data. At the decoder, an
inpainting method is applied to retrieve the removed regions, and the Shearlet
Transform is used to smooth the recovered image. The Shearlet Transform
provides a very precise geometrical characterization of the general
discontinuities occurring in images: it can identify both the location of the
singularities of a function and the orientation of its discontinuity curves.

Keywords—Image inpainting, image compression, compression ratio, peak
signal-to-noise ratio, structural similarity.

I.  Introduction

Image inpainting is a technique for filling in missing
regions of an image. It is the art of modifying an image in a way that is not
easily detectable by an ordinary observer. The main use of inpainting is the
restoration of damaged parts of a picture [1]. Without the transmitter sending
the entire picture, the receiver can recover the missing parts and reconstruct
the whole image. Such compression is acceptable for natural images, as they
contain a large amount of redundancy. By not sending a significant portion of
the image, one that can later be restored from the remaining part, the number
of bits needed to transmit the image can be reduced significantly.

Each image is composed of discrete points called pixels.
The value associated with each pixel is the result of sampling the light or
color intensity in the original image domain. Natural images consist of
separate areas representing object surfaces or scenery. Since the light
intensity and color in such areas are approximately constant, the pixel values
are highly correlated: every pixel in such an area is likely to have the same
or a very similar value to its adjacent pixels. Representing the image by
storing all pixel values therefore results in a large amount of redundancy.

For the inpainting method to be successful, it is important
to choose and erase blocks that can be easily restored. There are two types of
regions that can be reconstructed relatively easily: structure and texture. It
is important to properly classify each removed block into one of these two
categories, because in image inpainting the only information available for
reconstruction is the average value of the erased block.

The most significant information within an image is
located in the boundary regions, or edges. A boundary region not only
specifies the overall shape of an object but also shows how pixel values change
from neighboring regions to the inner regions, so it is possible to retrieve
the inner areas using pixels located on the boundaries. Boundaries and edges
are therefore all the information required for displaying an image. The
variation of pixel values orthogonal to an edge is significant; hence, areas in
the neighborhood of edges may be considered essential image information. Moving
along the edge direction, no significant changes in pixel values are observed,
and moving further toward the inner points of the boundaries yields
considerable correlation among pixel values. Edges also carry other necessary
information, including shape. Redundancies related to the correlation along the
edge direction may likewise be exploited by extracting shape information from
pixel values in the boundary regions.

Pixel values at the endpoints of an edge will be used for
recovering the entire edge and boundary region. In order to recover boundary
regions and pixels located perpendicular to the edge direction, samples of
source points should be provided. These samples should come from different
areas at each side of the edge.

                                                                                                                                                          
II.  Image Inpainting

Image inpainting is a method for recovering regions of an image whose pixels
have been distorted or removed in some way. Inpainting methods are commonly
based on partial differential equations (PDEs). The method proposed in this
paper is based on eliminating the information of correlated regions and
filling in the missing areas using sample pixels: some regions are
intentionally removed at the encoder and recovered using an inpainting or
interpolation technique at the decoder. In PDE techniques, the pixel values
around the region to be inpainted are taken as the boundary condition of a
boundary value problem, and a suitable interpolation equation is then solved
over that area. Image inpainting has a variety of applications, such as text
and object removal [2], [3], denoising, super-resolution, digital zooming [4],
filling-in [5], and compression.

 

Figure 1   A general case of the image inpainting problem.

Image inpainting aims to fill in missing regions or to
modify damaged regions in a visually plausible, non-detectable way. To clarify
the inpainting problem, let u0 be the intensity function of the image
defined on a domain D. As indicated in Fig. 1, there is a hole Ω ⊂ D with
unknown information. The objective is to find a recovered version of u0,
namely u, such that the intensity function in the area D \ Ω remains equal
to u0 while the remaining regions are filled in meaningfully. In this
way, information on the boundary ∂Ω is diffused into Ω via a 2D
interpolation technique.
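As a concrete illustration, the diffusion of boundary information into Ω can be sketched with Jacobi iterations on Laplace's equation (harmonic inpainting). This is a minimal sketch under simple assumptions, not the model used later in the paper, and the function name is illustrative.

```python
import numpy as np

def harmonic_inpaint(u0, mask, iters=500):
    """Fill the hole (mask == True) by diffusing boundary values inward.

    A minimal sketch of PDE-based inpainting: Jacobi iterations on
    Laplace's equation, with the known pixels in D \\ Omega held fixed.
    """
    u = u0.copy().astype(float)
    u[mask] = u[~mask].mean()  # rough initial guess inside the hole
    for _ in range(iters):
        # average of the four neighbours (Jacobi step for Laplace's equation)
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[mask] = avg[mask]  # update only inside Omega
    return u
```

Each iteration replaces every unknown pixel by the mean of its four neighbours, so boundary values propagate into the hole until the interior is harmonic.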

In its general form, no information is available about the regions to be
inpainted, and the resulting inpainted image is not necessarily similar to the
original one. For the application of compression, however, the inpainted image
must match the original with a sufficient degree of accuracy. Since the
original image is at hand, it is possible to extract all the information
required for compression with acceptable quality. Here, the only information
essential for retrieving an image consists of the source-point pixels and the
edges.

                                                                                                                                                     
III.  Texture Inpainting

Texture inpainting finds, among the referable surrounding blocks, the best
match in a statistical sense. The texture synthesis process is as follows.
First, the texture synthesis function is given the referable neighborhood
block information. Second, a 3×3 or 4×4 template is placed next to the missing
pixel to be filled. Blocks whose statistical properties are similar to those
of the surrounding blocks are classified as texture, so a missing texture
block is filled by exploiting its statistical similarity with other blocks. In
a large picture (512×512), an 8×8 block is not a large portion, and in smooth
areas texture synthesis is not used because of the peak signal-to-noise ratio
(PSNR) penalty, even though texture synthesis is faster than structure
inpainting. In most cases, texture synthesis is used in very coarse or
patterned areas. For these reasons, the whole 8×8 block whose mean and
variance are closest to those of the missing texture block is copied into the
missing block. This yields a good result visually, even though the PSNR
remains almost the same.
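The block-selection step described above can be sketched as follows. The squared-distance combination of mean and variance is an assumption, since the text only states that the block with the closest mean and variance is chosen; the function name and candidate list are illustrative.

```python
import numpy as np

def best_texture_match(image, candidates, missing_mean, missing_var):
    """Among candidate 8x8 block positions (row, col), pick the block whose
    mean and variance are closest to those recorded for the missing block.
    The distance is a simple sum of squared differences (an assumption)."""
    best, best_d = None, np.inf
    for (r, c) in candidates:
        patch = image[r:r + 8, c:c + 8]
        d = (patch.mean() - missing_mean) ** 2 + (patch.var() - missing_var) ** 2
        if d < best_d:
            best, best_d = patch, d
    return best
```

The selected 8×8 patch would then be copied wholesale into the missing block, as the section describes.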

                                                                                                                                                  
IV.  Structure Inpainting

Structure is a region that can be clearly divided into
two or more sectors by clear edges. Each sector is relatively devoid of
fine detail, and a missing block within a sector is easily predicted from the
surrounding blocks. Structure inpainting is the process of gradually
propagating the information contained in the surrounding blocks into the
missing block. This process is very similar to diffusion: when a heat source
(the surrounding blocks) is placed around a closed area (the missing block),
heat (information) gradually flows into the area.

                                                                                                                                                    
V.  Shearlet Transform

It is now widely acknowledged that traditional wavelet methods do not perform
as well on multidimensional data. Indeed, wavelets are very efficient only in
dealing with point-wise singularities. In higher dimensions, other types of
singularities are usually present or even dominant, and wavelets are unable to
handle them efficiently. Images, for example, typically contain sharp
transitions such as edges, and these interact extensively with the elements of
a wavelet basis; as a result, many terms of the wavelet representation are
needed to represent these objects accurately. To overcome this limitation of
traditional wavelets, a new transform, the shearlet, is introduced in this
paper.

A.  Continuous Shearlet Transform

The continuous shearlet transform is a non-isotropic version of the continuous
wavelet transform with superior directional sensitivity. In dimension n = 2,
it is defined as the mapping

                              SHψ f(a, s, t) = ⟨ f, ψa,s,t ⟩

Each analyzing element ψa,s,t, called a shearlet, has frequency support on a
pair of trapezoids at various scales, symmetric with respect to the origin and
oriented along a line of slope s. The support becomes increasingly thin as
a → 0. As a result, the shearlets form a collection of well-localized
waveforms at various scales, orientations, and locations, controlled by a, s,
and t respectively. The frequency supports of some representative shearlets
are illustrated in Fig. 2.
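The roles of a and s can be illustrated numerically. In the standard shearlet parametrization (an assumption beyond the text above), each element is obtained from a mother shearlet via a parabolic scaling matrix A and a shear matrix S; the sketch below only constructs the combined matrix.

```python
import numpy as np

def shearlet_matrix(a, s):
    """Combined shear-and-scale matrix S @ A of the standard shearlet
    parametrization: A applies parabolic (anisotropic) scaling controlled
    by a, and S applies shearing controlled by the slope parameter s."""
    A = np.array([[a, 0.0], [0.0, np.sqrt(a)]])  # parabolic scaling
    S = np.array([[1.0, s], [0.0, 1.0]])         # shear along slope s
    return S @ A
```

As a shrinks, the two diagonal entries scale as a and sqrt(a), so the support becomes increasingly elongated and thin, matching the trapezoids in Fig. 2.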

 

 

 

      

 

Figure 2   Frequency support of shearlets for various values of a and s.

B.  Discrete Shearlet Transform (DST)

By sampling the continuous shearlet transform on appropriate discretizations
of the scaling, shear, and translation parameters a, s, and t, one obtains a
discrete transform associated with a Parseval (tight) frame for L2(R2). The
following procedure describes the construction of the discrete shearlet
transform and is illustrated in Fig. 3.

 

(1) Apply the Laplacian pyramid scheme to decompose fa^(j-1) into a low-pass
image fa^j and a high-pass image fd^j.

(2) Compute P fd^j on a pseudo-polar grid.

(3) Apply band-pass filtering to the matrix P fd^j.

(4) Directly re-assemble the Cartesian sampled values and apply the inverse
two-dimensional Fast Fourier Transform (FFT).
 

                                                                                                                                                  
VI.  Experimental Results

The steps of compression are depicted in Fig. 4. The
original image I is analyzed and the blocks to be removed are determined. The
assistant information R is sent with the masked image to the decoder; R
contains the locations of the removed blocks and the algorithm to be used to
fill in the missing regions. The missing regions are filled with DC values to
minimize the size of the JPEG-encoded image. R is compressed with a lossless
encoder, while the DC-filled image D is encoded by JPEG. On the decoder side,
R' and D' are decoded, and R' is used to fill in the removed blocks of D'. The
bit rate is calculated as ((size of D') + (size of R', the entropy-encoded R))
/ (size of the image). The resulting images are shown in Fig. 5.
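The bit-rate formula above can be written out directly. The variable names are illustrative, and sizes are assumed to be in bits with the image size taken as its pixel count, giving bits per pixel.

```python
def bit_rate(size_d_bits, size_r_bits, width, height):
    """Bit rate as described above: (size of D' + size of R') divided by
    the size of the image (its number of pixels), in bits per pixel."""
    return (size_d_bits + size_r_bits) / (width * height)
```

For example, a 512×512 image with a 200 000-bit JPEG stream and 4 000 bits of entropy-coded side information R' costs about 0.78 bits per pixel.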

For comparison, the PSNR and SSIM values are computed and tabulated in
Table I.

 

 

 

 

Figure 3   Succession of Laplacian pyramid and directional filtering.

Figure 4   An algorithm for image compression using inpainting.

 

TABLE I.   Performance Comparison Using SSIM and PSNR Values

          | Existing Method     | Proposed Method (Shearlet)
Image     | SSIM   | PSNR (dB)  | SSIM   | PSNR (dB)
----------|--------|------------|--------|----------
Image 1   | 0.8965 | 29.24      | 0.9272 | 36.45
Image 2   | 0.8864 | 33.67      | 0.9028 | 40.67
Image 3   | 0.8137 | 31.45      | 0.8798 | 37.45

 

 

  

 

Figure 5   (a) Noisy image, (b) masked image, (c) received image after
inpainting.

 

A conventional image quality index is the PSNR, the ratio between the maximum
possible power of a signal and the power of the corrupting noise that affects
the fidelity of its representation. It is widely used to estimate quality in
lossy image compression algorithms; the signal in this case is the original
data, and the noise is the error introduced by compression. This index is
popular for its simplicity; however, it does not correlate well with natural
human perception [6].
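The PSNR described above can be computed as follows, assuming 8-bit images with a peak value of 255.

```python
import numpy as np

def psnr(original, compressed, peak=255.0):
    """PSNR in dB between an original image and its compressed version:
    10 * log10(peak^2 / MSE)."""
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```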

A better index for image quality measurement is the
structural similarity (SSIM), a method for measuring the similarity between
two images. The SSIM index is a full-reference metric: it measures image
quality using an initial uncompressed image as the reference, and is
calculated as

 

 

SSIM(X, Y) = ((2 μX μY + C1)(2 σXY + C2)) / ((μX^2 + μY^2 + C1)(σX^2 + σY^2 + C2))

where σXY is the covariance of X and Y, μX and μY are the averages of X and Y,
σX^2 and σY^2 are their variances, and C1 and C2 are small constants that
stabilize the division.
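A global version of the SSIM formula above can be computed as follows. Standard practice evaluates SSIM over local windows and averages the results, so this whole-image version is a simplification; the constants follow the common choice for 8-bit images (an assumption).

```python
import numpy as np

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Whole-image SSIM: means, variances, and covariance are computed
    globally rather than over sliding windows (a simplification)."""
    x, y = x.astype(float), y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give an SSIM of exactly 1; dissimilar images score strictly lower.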

                                                                                                                                                                
VII.  Conclusion

In this paper, a new sparse-representation inpainting model in the Discrete
Shearlet Transform domain is presented. In this framework, distinctive
features are extracted from images at the encoder side, and regions with high
correlation are intentionally skipped during encoding. The remaining areas,
along with information associated with the edges, are encoded to form the
compressed output data. The removed information is recovered with the
assistance of the information sent to the decoder side, where a PDE-based
inpainting algorithm reconstructs the removed areas.

 

References

[1]  M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, "Image
inpainting," SIGGRAPH, pp. 1033–1038.

[2]  A. Criminisi, P. Perez, and K. Toyama, "Region filling and object removal
by exemplar-based image inpainting," IEEE Trans. Image Process.,
13(9):1200–1212, 2004. doi:10.1109/TIP.2004.833105.

[3]  K.A. Patwardhan, G. Sapiro, and M. Bertalmio, "Video inpainting of
occluding and occluded objects," IEEE Int. Conf. on Image Processing, 2:69–72,
2005. doi:10.1109/ICIP.2005.1529993.

[4]  T.F. Chan and J. Shen, Image Processing and Analysis. Society for
Industrial and Applied Mathematics (SIAM), Philadelphia, pp. 277–279, 2005.

[5]  S.D. Rane, G. Sapiro, and M. Bertalmio, "Structure and texture filling-in
of missing image blocks in wireless transmission and compression
applications," IEEE Trans. Image Process., 12(3):296–303, 2003.
doi:10.1109/TIP.2002.804264.

[6]  Z. Wang and A.C. Bovik, "Mean squared error: love it or leave it? A new
look at signal fidelity measures," IEEE Signal Process. Mag., 26(1):98–117,
2009.

[7]  G.R. Easley, D. Labate, and F. Colonna, "Shearlet-based total variation
diffusion for denoising," IEEE Transactions on Image Processing, vol. 18,
no. 2, February 2009.

[8]  S.D. Rane, G. Sapiro, and M. Bertalmio, "Structure and texture filling-in
of missing image blocks in wireless transmission and compression
applications," IEEE Transactions on Image Processing, pp. 296–303, March 2003.