Digital Image Processing

Digital image processing is the use of a digital computer to process digital images through an algorithm. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing: it allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and distortion during processing. Since images are defined over two or more dimensions, digital image processing may be modeled in the form of multidimensional systems.




The generation and development of digital image processing have been driven mainly by three factors: first, the development of computers; second, the development of mathematics (especially the creation and refinement of discrete mathematics); and third, the growing demand for a wide range of applications in environmental science, agriculture, the military, industry, and medicine.


How is image processing done?

The basic steps involved in digital image processing are:

  • Image acquisition: This involves capturing an image using a digital camera or scanner, or importing an existing image into a computer.
  • Image enhancement: This involves improving the visual quality of an image, such as increasing contrast, reducing noise, and removing artifacts.
  • Image restoration: This involves removing degradation from an image, such as blurring, noise, and distortion.
  • Image segmentation: This involves dividing an image into regions or segments, each of which corresponds to a specific object or feature in the image.
  • Image representation and description: This involves representing an image in a way that can be analyzed and manipulated by a computer, and describing the features of an image in a compact and meaningful way.
  • Image analysis: This involves using algorithms and mathematical models to extract information from an image, such as recognizing objects, detecting patterns, and quantifying features.
  • Image synthesis and compression: This involves generating new images or compressing existing images to reduce storage and transmission requirements.

Digital image processing is widely used in a variety of applications, including medical imaging, remote sensing, computer vision, and multimedia.

Image processing mainly includes the following steps:
  1. Importing the image via image acquisition tools; 
  2. Analyzing and manipulating the image; 
  3. Output, in which the result can be an altered image or a report based on the image analysis.
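The three steps above can be sketched in a few lines of Python. This is only an illustrative toy, assuming a tiny hard-coded nested list stands in for a grayscale image; in practice acquisition would use a camera, scanner, or a library such as Pillow, and the "manipulation" here is a simple linear contrast stretch.

```python
# Minimal sketch of the pipeline: acquire -> analyze/manipulate -> output.

def acquire():
    # Step 1: "import" a tiny 3x3 grayscale image (values 0-255).
    return [[50, 80, 110],
            [60, 90, 120],
            [70, 100, 130]]

def stretch_contrast(img):
    # Step 2: manipulate - linearly stretch intensities to span 0-255.
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    scale = 255 / (hi - lo) if hi > lo else 0
    return [[round((p - lo) * scale) for p in row] for row in img]

def report(img):
    # Step 3: output - here, a report describing the processed image.
    pixels = [p for row in img for p in row]
    return {"min": min(pixels), "max": max(pixels),
            "mean": sum(pixels) / len(pixels)}

image = acquire()
enhanced = stretch_contrast(image)
print(report(enhanced))   # after stretching, min is 0 and max is 255
```

After the stretch, the darkest pixel maps to 0 and the brightest to 255, which is exactly the kind of enhancement step 2 refers to.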

Types of images
  • BINARY IMAGE – A binary image, as its name suggests, contains only two pixel values, 0 and 1, where 0 refers to black and 1 refers to white. This type is also known as a monochrome image.
  • BLACK AND WHITE IMAGE – An image that consists of only black and white pixels is called a black and white image.
  • 8-bit COLOR FORMAT – This is the most common image format. It has 256 different shades and is commonly known as a grayscale image. In this format, 0 stands for black, 255 stands for white, and 127 stands for gray.
  • 16-bit COLOR FORMAT – This is a color image format with 65,536 different colors, also known as high color format. In this format the distribution of values differs from a grayscale image: the 16 bits are divided among three channels, red, green, and blue, forming the familiar RGB format.
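The relationship between the grayscale and binary types above can be shown by thresholding: each 8-bit value (0-255) is mapped to 0 or 1. The pixel values and the threshold of 128 below are arbitrary choices for the example.

```python
# Convert an 8-bit grayscale image (0-255) into a binary image (0/1).

def to_binary(gray, threshold=128):
    # 0 -> black, 1 -> white, matching the binary-image convention above.
    return [[1 if p >= threshold else 0 for p in row] for row in gray]

gray = [[0, 127, 255],
        [40, 128, 200]]
print(to_binary(gray))   # [[0, 0, 1], [0, 1, 1]]
```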
Image sensor

An image sensor or imager is a sensor that detects and conveys information used to form an image. It does so by converting the variable attenuation of light waves (as they pass through or reflect off objects) into signals, small bursts of current that convey the information. The waves can be light or other electromagnetic radiation. Image sensors are used in electronic imaging devices of both analog and digital types, which include digital cameras, camera modules, camera phones, optical mouse devices, medical imaging equipment, night vision equipment such as thermal imaging devices, radar, sonar, and others. As technology changes, electronic and digital imaging tends to replace chemical and analog imaging.

The two main types of electronic image sensors are the charge-coupled device (CCD) and the active-pixel sensor (CMOS sensor). Both CCD and CMOS sensors are based on metal–oxide–semiconductor (MOS) technology, with CCDs based on MOS capacitors and CMOS sensors based on MOSFET (MOS field-effect transistor) amplifiers. Analog sensors for invisible radiation tend to involve vacuum tubes of various kinds, while digital sensors include flat-panel detectors.


Image compression

Image compression is a type of data compression applied to digital images, to reduce their cost for storage or transmission. Algorithms may take advantage of visual perception and the statistical properties of image data to provide superior results compared with generic data compression methods which are used for other digital data.
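One of the simplest schemes that exploits the statistical redundancy mentioned above is run-length encoding (RLE), which is not named in the text but illustrates the idea: long runs of identical pixels (common in binary images and flat regions) are stored as (value, count) pairs instead of pixel by pixel.

```python
# Run-length encode one row of pixels as [value, count] pairs.

def rle_encode(row):
    runs = []
    for p in row:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return runs

row = [0, 0, 0, 0, 1, 1, 0, 0, 0]
print(rle_encode(row))   # [[0, 4], [1, 2], [0, 3]]
```

Nine pixels become three pairs; the more uniform the image, the greater the saving, which is why such methods beat generic compressors on image-like data.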

An important development in digital image compression technology was the discrete cosine transform (DCT), a lossy compression technique first proposed by Nasir Ahmed in 1972. DCT compression became the basis for JPEG, which was introduced by the Joint Photographic Experts Group in 1992. JPEG compresses images down to much smaller file sizes, and has become the most widely used image file format on the Internet. Its highly efficient DCT compression algorithm was largely responsible for the wide proliferation of digital images and digital photos, with several billion JPEG images produced every day as of 2015.
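The DCT's usefulness for compression comes from energy compaction: for smooth signals, most of the energy ends up in a few low-frequency coefficients, which can then be quantized aggressively. Below is a naive, unscaled 1-D DCT-II sketch; JPEG actually applies a normalized 2-D version to 8x8 blocks, so this only illustrates the principle.

```python
import math

# Naive 1-D DCT-II: X[k] = sum_n x[n] * cos(pi/N * (n + 1/2) * k)

def dct_1d(x):
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k)
                for n in range(N))
            for k in range(N)]

# A constant (flat) signal concentrates all of its energy in the first
# (DC) coefficient; the higher-frequency coefficients come out ~0, which
# is what makes DCT-based coding effective on smooth image regions.
coeffs = dct_1d([5.0] * 8)
print([round(c, 6) for c in coeffs])   # [40.0, 0.0, 0.0, ...]
```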

Medical imaging techniques produce very large amounts of data, especially from CT, MRI and PET modalities. As a result, storage and communications of electronic image data are prohibitive without the use of compression. JPEG 2000 image compression is used by the DICOM standard for storage and transmission of medical images. The cost and feasibility of accessing large image data sets over low or various bandwidths are further addressed by use of another DICOM standard, called JPIP, to enable efficient streaming of the JPEG 2000 compressed image data.

Digital signal processor (DSP)

A digital signal processor (DSP) is a specialized microprocessor chip, with its architecture optimized for the operational needs of digital signal processing. DSPs are fabricated on metal–oxide–semiconductor (MOS) integrated circuit chips. They are widely used in audio signal processing, telecommunications, digital image processing, radar, sonar and speech recognition systems, and in common consumer electronic devices such as mobile phones, disk drives and high-definition television (HDTV) products.
The discrete cosine transform (DCT) image compression algorithm has been widely implemented in DSP chips, with many companies developing DSP chips based on DCT technology. DCTs are widely used for encoding, decoding, video coding, audio coding, multiplexing, control signals, signaling, analog-to-digital conversion, formatting luminance and color differences, and color formats such as YUV444 and YUV411. DCTs are also used for encoding operations such as motion estimation, motion compensation, inter-frame prediction, quantization, perceptual weighting, entropy encoding, variable encoding, and motion vectors, and decoding operations such as the inverse operation between different color formats (YIQ, YUV and RGB) for display purposes. DCTs are also commonly used for high-definition television (HDTV) encoder/decoder chips.


Image Representation

Image as a Matrix
Generally, images are represented as a matrix of rows and columns, with the following notation:

    f(x, y) = [ f(0, 0)      f(0, 1)      ...  f(0, N-1)
                f(1, 0)      f(1, 1)      ...  f(1, N-1)
                ...          ...               ...
                f(M-1, 0)    f(M-1, 1)    ...  f(M-1, N-1) ]

The right side of this equation is, by definition, a digital image. Every element of this matrix is called an image element, picture element, or pixel.
 
Digital Image Representation in MATLAB:

In MATLAB, indexing starts from 1 instead of 0, so MATLAB's f(1, 1) corresponds to f(0, 0) in the zero-based convention. Hence the two representations of an image are identical, except for the shift in origin.
In MATLAB, matrices are stored in variables such as X, x, or input_image. As in other programming languages, a variable name must begin with a letter.
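The same origin shift can be demonstrated in Python, which, like the zero-based matrix convention, indexes from 0. The helper below is a hypothetical convenience for translating MATLAB-style 1-based subscripts.

```python
# Python indexes from 0, MATLAB from 1: MATLAB's f(1, 1) is img[0][0] here.

img = [[10, 20, 30],
       [40, 50, 60]]

def matlab_at(img, r, c):
    # Translate MATLAB-style 1-based (r, c) to Python's 0-based indexing.
    return img[r - 1][c - 1]

print(matlab_at(img, 1, 1))   # 10  (MATLAB f(1,1) == zero-based f(0,0))
print(matlab_at(img, 2, 3))   # 60
```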


Phases of Image Processing:
  1. ACQUISITION: It could be as simple as being given an image that is already in digital form. The main work involves: a) scaling, b) color conversion (RGB to gray or vice versa).
  2. IMAGE ENHANCEMENT: Among the simplest and most appealing areas of image processing, it is used to bring out hidden details in an image; enhancement is subjective.
  3. IMAGE RESTORATION: It also deals with improving the appearance of an image, but it is objective (restoration is based on mathematical or probabilistic models of image degradation).
  4. COLOR IMAGE PROCESSING: It deals with pseudocolor and full-color image processing; color models are applicable to digital image processing.
  5. WAVELETS AND MULTI-RESOLUTION PROCESSING: It is the foundation for representing images at various degrees of resolution.
  6. IMAGE COMPRESSION: It involves developing functions to reduce the amount of data needed to represent an image. It mainly deals with image size or resolution.
  7. MORPHOLOGICAL PROCESSING: It deals with tools for extracting image components that are useful in the representation and description of shape.
  8. SEGMENTATION: It partitions an image into its constituent parts or objects. Autonomous segmentation is one of the most difficult tasks in image processing.
  9. REPRESENTATION & DESCRIPTION: It follows the output of the segmentation stage; choosing a representation is only part of the solution for transforming raw data into a processed form.
  10. OBJECT DETECTION AND RECOGNITION: It is the process of assigning a label to an object based on its descriptors.
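The segmentation phase above can be sketched with the simplest possible method, global thresholding, which partitions a grayscale image into foreground (1) and background (0) regions. Using the image mean as the threshold is an arbitrary simplification for this example; real systems use smarter choices such as Otsu's method.

```python
# Partition a grayscale image into foreground/background by thresholding
# at the mean intensity.

def segment(gray):
    pixels = [p for row in gray for p in row]
    t = sum(pixels) / len(pixels)           # mean-intensity threshold
    return [[1 if p > t else 0 for p in row] for row in gray]

gray = [[10, 12, 200],
        [11, 210, 205]]
print(segment(gray))   # bright object pixels -> 1, dark background -> 0
```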

Advantages of Digital Image Processing
  • Improved image quality: Digital image processing algorithms can improve the visual quality of images, making them clearer, sharper, and more informative.
  • Automated image-based tasks: Digital image processing can automate many image-based tasks, such as object recognition, pattern detection, and measurement.
  • Increased efficiency: Digital image processing algorithms can process images much faster than humans, making it possible to analyze large amounts of data in a short amount of time.
  • Increased accuracy: Digital image processing algorithms can provide more accurate results than humans, especially for tasks that require precise measurements or quantitative analysis.
