
Contour height, in the context of image processing and computer vision, refers to the measurement of the vertical extent or size of an object’s outline or contour within an image. It is typically calculated by finding the minimum and maximum vertical positions (i.e., the topmost and bottommost points) of the object’s boundary or contour.

Here’s how contour height is generally determined:

  1. Contour detection: The first step is to detect the contour of an object within an image. Contours are essentially the boundaries that separate an object from its background.
  2. A bounding rectangle: Once the contour is detected, a bounding rectangle (often referred to as the “bounding box”) is drawn around the contour. This rectangle encompasses the entire object.
  3. Measurement: To calculate the contour height, the vertical extent of the bounding rectangle is measured. This is done by finding the difference between the y coordinates (the vertical positions) of the top and bottom sides of the bounding rectangle.
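The measurement step above can be sketched with a toy contour; the boundary points below are hypothetical, not taken from any real image:

```python
import numpy as np

# Hypothetical contour: (x, y) boundary points of an object
contour = np.array([[10, 40], [60, 40], [60, 120], [10, 120]])

# The contour height is the vertical extent of the bounding rectangle,
# i.e., the difference between the maximum and minimum y coordinates
y_min = contour[:, 1].min()
y_max = contour[:, 1].max()
contour_height = y_max - y_min
print(contour_height)  # 80
```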

In summary, contour height provides information about the vertical size of an object within an image. It can be a useful feature for various computer vision tasks, such as object recognition, tracking, and dimension estimation.

Let us see how we can use Python functions to classify the following images based on contour height.

Figure 5.2 – A comparison of two images with regard to contour height: (a) a bicycle with a person; (b) a bicycle without a person

Here, the contour height of the person riding a bicycle (Figure 5.2a) is greater than the contour height of the bicycle without a person (Figure 5.2b).

Let us use the Canny edge detector from the Python library OpenCV (cv2) to find the maximum contour height for a given image, as follows:
import cv2

# Define a function to find the contour height of an object using the Canny edge detector
def canny_contour_height(image):
    """Take an image as input and return the maximum contour
    height, found using the Canny edge detector."""
    # Convert the image to grayscale
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Apply the Canny edge detector with low and high threshold values
    edges = cv2.Canny(gray, 100, 200)
    # Find the contours of the edges
    contours, _ = cv2.findContours(edges, \
        cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Initialize the maximum height as zero
    max_height = 0
    # Loop through each contour
    for cnt in contours:
        # Find the bounding rectangle of the contour
        x, y, w, h = cv2.boundingRect(cnt)
        # Update the maximum height if the current height is larger
        if h > max_height:
            max_height = h
    # Return the maximum height
    return max_height

Here, Python functions are used to find the contour heights of the two images. As the images suggest, the contour height of the person-riding-a-bicycle image is greater than that of the bicycle-only image. We can therefore classify the two images by choosing a threshold value for the contour height: if the height is greater than the threshold, we classify the image as a bicycle with a person; otherwise, we classify it as just a bicycle.
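The threshold rule described above can be sketched as follows; the threshold of 150 pixels is a placeholder, not a value from the book's experiments, and in practice it would be tuned on sample images of each class:

```python
# Hypothetical threshold (in pixels) separating the two classes
HEIGHT_THRESHOLD = 150

def classify_by_height(contour_height):
    """Classify an image from its maximum contour height."""
    if contour_height > HEIGHT_THRESHOLD:
        return "bicycle with a person"
    return "bicycle"

print(classify_by_height(210))  # bicycle with a person
print(classify_by_height(90))   # bicycle
```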

As shown in the preceding LF (we learned about labeling functions in Chapter 2), we can automate such image classification and object detection tasks using Python, labeling the images as either a person riding a bicycle or just a bicycle.

The complete code to find the contour height of the preceding two images is on GitHub.

By using a diverse set of LFs that capture different aspects of the image content, we can increase the likelihood that at least some of the functions will provide a useful way to distinguish between images that depict a bicycle, a bicycle with a person, or neither. The probabilistic label generated by the majority label voter model will then reflect the combined evidence provided by all of the LFs, and it can be used to make a more accurate classification decision.
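A majority label voter can be sketched in plain Python as follows; the label constants and the LF votes are illustrative, and a real majority label voter model (such as Snorkel's MajorityLabelVoter) produces probabilistic labels rather than a single hard vote:

```python
from collections import Counter

ABSTAIN = -1
BICYCLE = 0
BICYCLE_WITH_PERSON = 1

def majority_vote(lf_labels):
    """Return the label chosen by most non-abstaining LFs,
    or ABSTAIN when every LF abstains."""
    votes = [label for label in lf_labels if label != ABSTAIN]
    if not votes:
        return ABSTAIN
    return Counter(votes).most_common(1)[0][0]

# Three hypothetical LF votes for one image: two say "with a person",
# one abstains, one says "bicycle" -- the majority wins
print(majority_vote([BICYCLE_WITH_PERSON, BICYCLE_WITH_PERSON,
                     ABSTAIN, BICYCLE]))  # 1
```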
