Role of Mathematics in Image Processing.

Oshan Sharma
6 min read · Dec 19, 2020

Image processing is all about mathematics: it is the mathematical analysis of an image. A grayscale image is an m x n matrix in which each element is a pixel holding an intensity value. An RGB color image is an m x n x 3 matrix, a hyperspectral image is an m x n x l matrix, and so on. If the data is a video, these matrices also gain a time dimension. Many image processing methods rely on basic mathematical techniques such as histogram equalization, probability and statistics, discrete cosine transforms, Fourier transforms, differential equations, integration, and matrix algebra. For now we will discuss first and second order derivatives and their uses in image processing.
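A quick sketch of this matrix view, assuming OpenCV (cv2) is available and using a hypothetical file name image.png:

```python
import cv2

# Read a hypothetical image file as a color array: an m x n x 3 matrix.
# (OpenCV stores the channels in BGR order, but the shape is the same.)
color = cv2.imread("image.png", cv2.IMREAD_COLOR)
print("Color shape:", color.shape)      # (m, n, 3)

# Read the same file as grayscale: an m x n matrix of intensity values.
gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)
print("Grayscale shape:", gray.shape)   # (m, n)
print("Intensity at pixel (0, 0):", gray[0, 0])
```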

Derivative — First Order

The “First Derivative” of a function represents the rate of change of one variable with respect to another. It mainly tells us in which direction the function is going, that is, whether the function is increasing or decreasing with respect to some given reference. The first derivative can be denoted in several ways: f’(x), d f(x)/dx, y’ and dy/dx.
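As a small illustrative example (not tied to any image), take f(x) = x²:

$$f(x) = x^{2}, \qquad f'(x) = \frac{dy}{dx} = 2x$$

Since f′(x) > 0 for x > 0 and f′(x) < 0 for x < 0, the sign of the first derivative tells us where the function is increasing and where it is decreasing.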

Derivative — Second Order

The “Second Derivative” is the derivative of the derivative of a function, so to find it one must first compute the first derivative. A derivative is often shown with a little tick mark: f’(x). The second derivative is shown with two tick marks: f’’(x).
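For another small illustrative example, take f(x) = x³:

$$f(x) = x^{3}, \qquad f'(x) = 3x^{2}, \qquad f''(x) = 6x$$

Here f″(x) changes sign at x = 0; such sign changes (zero crossings) of the second derivative are exactly what the Laplacian-based edge detection described below looks for.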

Edge detection using First Order Derivative (Sobel Filter)

Most edge detection methods work on the assumption that an edge occurs where there is a discontinuity in the intensity function or a very steep intensity gradient in the image. Under this assumption, if one takes the derivative of the intensity values across the image and finds the points where the derivative is maximum, the edge can be located. The gradient is a vector whose components measure how rapidly the pixel values change with distance in the x and y directions. Thus, the components of the gradient are found using equations (1) & (2).
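Writing the partial derivatives as finite differences, one common form of equations (1) and (2) is:

$$\Delta x = \frac{\partial f(x, y)}{\partial x} \approx \frac{f(x + dx,\ y) - f(x, y)}{dx} \tag{1}$$

$$\Delta y = \frac{\partial f(x, y)}{\partial y} \approx \frac{f(x,\ y + dy) - f(x, y)}{dy} \tag{2}$$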

Here dx & dy measure distance along the x and y directions respectively. In discrete images, dx & dy can be taken as the number of pixels between two points. With dx = dy = 1 (one pixel spacing) at the point with pixel coordinates (i, j), the values of ∆𝑥 and ∆𝑦 can be calculated from equations (3) & (4).
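With that pixel spacing, equations (3) and (4) reduce to simple differences between neighboring pixels:

$$\Delta x = f(i + 1,\ j) - f(i, j) \tag{3}$$

$$\Delta y = f(i,\ j + 1) - f(i, j) \tag{4}$$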

In order to detect the presence of a gradient discontinuity, one can calculate the change in the gradient at (i, j). This can be done by finding the magnitude measure and the gradient direction θ given by equation (5).
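In its usual form, equation (5) combines the two components into a gradient magnitude and direction:

$$|\nabla f| = \sqrt{\Delta x^{2} + \Delta y^{2}}, \qquad \theta = \tan^{-1}\!\left(\frac{\Delta y}{\Delta x}\right) \tag{5}$$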

The Sobel operator is an example of the gradient method. It is a discrete differentiation operator, computing an approximation of the gradient of the image intensity function.

An advantage of using a larger mask size is that the errors due to the effects of noise are reduced by local averaging within the neighborhood of the mask. An advantage of using a mask of odd size is that the operators are centered and can therefore provide an estimate that is based on a center pixel (i,j). One important edge operator of this type is the Sobel edge operator. The Sobel edge operator masks are given as:
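A commonly used pair of 3 x 3 Sobel masks for the horizontal and vertical gradients is shown below (sign and orientation conventions vary between texts):

$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix}, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix}$$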

The operator calculates the gradient of the image intensity at each point, giving the direction of the largest possible increase from dark to light and the rate of change in that direction. The result therefore shows how “abruptly” or “smoothly” the image changes at that point, and therefore how likely it is that this part of the image represents an edge, as well as how the edge is likely to be oriented. In practice, the magnitude (likelihood of an edge) calculation is more reliable and easier to interpret than the direction calculation. Mathematically, the gradient of a two-variable function (here, the image intensity function) at each image point is a 2D vector whose components are the derivatives in the horizontal and vertical directions. At each image point, the gradient vector points in the direction of the largest possible intensity increase, and the length of the gradient vector corresponds to the rate of change in that direction. This implies that the result of the Sobel operator at an image point in a region of constant intensity is a zero vector, and at a point on an edge it is a vector that points across the edge, from darker to brighter values.
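A minimal sketch of this computation, assuming OpenCV and NumPy are available and using a hypothetical input file lens.png:

```python
import cv2
import numpy as np

# Read the (hypothetical) input image as a single-channel grayscale matrix,
# since the gradient is defined on intensity values.
gray = cv2.imread("lens.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# Approximate the horizontal and vertical derivatives with 3x3 Sobel kernels.
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # d/dx
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)  # d/dy

# Gradient magnitude (edge strength) and direction at every pixel, as in
# equation (5).
magnitude = np.sqrt(gx ** 2 + gy ** 2)
theta = np.arctan2(gy, gx)

# Rescale the magnitude to 0-255 for display and save the edge map.
edges = np.uint8(255 * magnitude / magnitude.max())
cv2.imwrite("lens_sobel.png", edges)
```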

An image of a contact lens enhanced using the Sobel filter.

Edge detection using Second Order Derivative (Laplacian Filter)

The Laplacian is a 2-D measure of the second derivative of an image. The Laplacian of an image highlights regions of rapid intensity change and is therefore often used for edge detection (see zero crossing edge detectors). The Laplacian is often applied to an image that has first been smoothed with something approximating a Gaussian smoothing filter, in order to reduce its sensitivity to noise. The operator normally takes a single gray-level image as input and produces a binary image as output. The zero crossing detector looks for places in the Laplacian of an image where the value of the Laplacian passes through zero, i.e. points where the Laplacian changes sign. Such points often occur at edges in images, i.e. points where the intensity of the image changes rapidly, but they also occur at places that are not as easy to associate with edges. It is best to think of the zero crossing detector as some sort of feature detector rather than as a specific edge detector. Zero crossings always lie on closed contours, so the output of a zero crossing detector is usually a binary image with single-pixel-thick lines showing the positions of the zero crossing points. The Laplacian derivative operator for an image is defined as shown in equations (6), (7) and (8).
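In a standard discrete form, equations (6), (7) and (8) can be written as:

$$\nabla^{2} f = \frac{\partial^{2} f}{\partial x^{2}} + \frac{\partial^{2} f}{\partial y^{2}} \tag{6}$$

$$\frac{\partial^{2} f}{\partial x^{2}} = f(x + 1,\ y) + f(x - 1,\ y) - 2f(x, y) \tag{7}$$

$$\frac{\partial^{2} f}{\partial y^{2}} = f(x,\ y + 1) + f(x,\ y - 1) - 2f(x, y) \tag{8}$$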

By substituting equations (7) and (8) into (6), we obtain equation (9).
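Carrying out the substitution gives the familiar five-point form:

$$\nabla^{2} f = f(x + 1,\ y) + f(x - 1,\ y) + f(x,\ y + 1) + f(x,\ y - 1) - 4f(x, y) \tag{9}$$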

Equation (9) can be implemented with the mask shown below:
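One such 3 x 3 Laplacian mask (the negated variant, with +4 at the center, is also common) is:

$$\begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}$$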

Example of the Laplacian filter.
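A minimal sketch of this pipeline, under the same assumptions as the Sobel example (OpenCV, NumPy, and a hypothetical lens.png), which smooths the image, applies the Laplacian, and marks a crude set of zero crossings:

```python
import cv2
import numpy as np

# Read the (hypothetical) input image as a grayscale intensity matrix.
gray = cv2.imread("lens.png", cv2.IMREAD_GRAYSCALE)

# Smooth with a Gaussian first to reduce the Laplacian's sensitivity to noise,
# as described above.
smoothed = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.0)

# Apply the Laplacian (second derivative) operator.
lap = cv2.Laplacian(smoothed, cv2.CV_64F, ksize=3)

# A simple zero-crossing test: mark pixels where the sign of the Laplacian
# differs from the horizontally or vertically adjacent pixel.
sign = np.sign(lap)
zero_cross = (sign[:-1, :-1] != sign[1:, :-1]) | (sign[:-1, :-1] != sign[:-1, 1:])
edges = np.zeros_like(gray)
edges[:-1, :-1][zero_cross] = 255

cv2.imwrite("lens_laplacian_edges.png", edges)
```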
