CS1699: Homework 2

Due: 10/08/2015, 11:59pm

Instructions: Please provide your written answers (for parts I, II and III) and your code (for parts II and III). Include your image results in your written answers file. Your written answers should be in the form of a PDF or Word document (.doc or .docx). Your code should be written in Matlab. Zip or tar your written answers, image results and .m files and upload the .zip or .tar file on CourseWeb -> CS1699 -> Assignments -> Homework 2. Name the file YourFirstName_YourLastName.zip or YourFirstName_YourLastName.tar.

Part I: Short Answers (15 points)
  1. Suppose we form a texture description using textons built from a filter bank of multiple anisotropic derivative of Gaussian filters at two scales and six orientations (as displayed below). Is the resulting representation sensitive to orientation, or is it invariant to orientation? Explain why.


  2. Consider the figure below. Each small square denotes an edge point extracted from an image. Say we are going to use k-means to cluster these points' positions into k=2 groups. That is, we will run k-means where the feature inputs are the (x,y) coordinates of all the small square points. What is a likely clustering assignment that would result? Briefly explain your answer.


  3. When using the Hough Transform, we often discretize the parameter space to collect votes in an accumulator array. Alternatively, suppose we maintain a continuous vote space. Which grouping algorithm (among k-means, mean-shift, or graph-cuts) would be appropriate to recover the model parameter hypotheses from the continuous vote space? Briefly explain.
Part II: Color Quantization with K-means (30 points)



For this problem you will write code to quantize a color space by applying k-means clustering to the pixels in a given input image, and experiment with two different color spaces: RGB and HSV. You are welcome to use the built-in Matlab function kmeans. Illustrative code sketches of the functions below appear after the list. Include each of the following components in your submission:
  1. [5pts] Given an RGB image, quantize the 3-dimensional RGB space, and map each pixel in the input image to its nearest k-means center. That is, replace the RGB value at each pixel with its nearest cluster's average RGB value. For example, if you set k=2, you might get:

    Since these average RGB values may not be integers, you should round them to the nearest integer (0 through 255). Use the following form:
    function [outputImg, meanColors, clusterIds] = quantizeRGB(origImg, k)
    where origImg and outputImg are RGB images of type uint8, k specifies the number of colors to quantize to, and meanColors is a kx3 array of the k centers (one value for each cluster and each color channel). clusterIds is a numpixelsx1 matrix (with numpixels = numrows * numcolumns) that says which cluster each pixel belongs to. Matlab tip: if the variable im is a 3d matrix containing a color image with numpixels pixels, X = reshape(im, numpixels, 3); will yield a matrix with the RGB features as its rows.
  2. [5 pts] Given an RGB image, convert to HSV, and quantize the 1-dimensional Hue space. Map each pixel in the input image to its nearest quantized Hue value, while keeping its Saturation and Value channels the same as the input. Convert the quantized output back to RGB color space. Use the following form:
    function [outputImg, meanHues, clusterIds] = quantizeHSV(origImg, k)
    where origImg and outputImg are RGB images of type uint8, k specifies the number of clusters, meanHues is a kx1 vector of the hue centers, and clusterIds is defined as above.
  3. [5 pts] Write a function to compute the sum-of-squared-differences (SSD) error between the original RGB pixel values and the quantized values, with the following form:
    function [error] = computeQuantizationError(origImg, quantizedImg)
    where origImg and quantizedImg are both RGB images of type uint8, and error is a scalar giving the total SSD error across the image.
  4. [5 pts] Given an image, compute and display (using the Matlab function histogram) two histograms of its hue values. Let the first histogram use equally-spaced bins (uniformly dividing up the hue values), and let the second histogram use bins defined by the k cluster center memberships (i.e., all pixels belonging to hue cluster i go to the i-th bin, for i=1, ..., k). Reuse (call) functions you've written above whenever possible. Use the following form:
    function [histEqual, histClustered] = getHueHists(im, k)
    where im is the input color image of type uint8, and histEqual and histClustered are the two output histograms.
  5. [5 pts] Write a script colorQuantizeMain.m that calls all the above functions appropriately using the provided image fish.jpg, and displays the results. Include the image results, histograms, and error scores for both the RGB and HSV quantizations. Illustrate the quantization with at least three different values of k. Be sure to convert an HSV image back to RGB before displaying with imshow. Label all plots clearly with titles. Save your image results and include them in your written answer sheet.
  6. [5 pts] Briefly answer the following. How and why do the results differ based on the value of k? How do the two forms of histogram differ? How do results vary depending on the color space?
Useful Matlab functions: kmeans, rgb2hsv, hsv2rgb, imshow, double, uint8, reshape, repmat, title, histogram.
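
For reference, here is one possible shape for quantizeRGB, a minimal sketch that relies on the built-in kmeans; the 'MaxIter' value is an illustrative choice, not a requirement.

    function [outputImg, meanColors, clusterIds] = quantizeRGB(origImg, k)
    % Quantize the RGB space of origImg into k clusters with k-means, then
    % replace each pixel with its cluster's (rounded) mean RGB color.
    [numRows, numCols, ~] = size(origImg);
    numPixels = numRows * numCols;
    pixels = double(reshape(origImg, numPixels, 3));    % one RGB row per pixel

    [clusterIds, meanColors] = kmeans(pixels, k, 'MaxIter', 200);

    quantized = round(meanColors(clusterIds, :));       % nearest-center colors
    outputImg = uint8(reshape(quantized, numRows, numCols, 3));
    end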
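
A similar sketch for quantizeHSV; note that clustering hue as a plain 1-D value ignores its circular wrap-around at 0/1, a simplification this sketch does not address.

    function [outputImg, meanHues, clusterIds] = quantizeHSV(origImg, k)
    % Quantize only the hue channel of origImg in HSV space, keeping S and V.
    hsvImg = rgb2hsv(origImg);                          % H, S, V all in [0, 1]
    [numRows, numCols, ~] = size(hsvImg);
    hues = reshape(hsvImg(:, :, 1), numRows * numCols, 1);

    [clusterIds, meanHues] = kmeans(hues, k, 'MaxIter', 200);

    % Replace each pixel's hue with its cluster's mean hue, then go back to RGB.
    hsvImg(:, :, 1) = reshape(meanHues(clusterIds), numRows, numCols);
    outputImg = uint8(round(hsv2rgb(hsvImg) * 255));
    end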
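
The SSD error is a short computation once both images are cast to double, which avoids uint8 saturation in the subtraction:

    function [error] = computeQuantizationError(origImg, quantizedImg)
    % Total sum-of-squared-differences over all pixels and all three channels.
    diff = double(origImg) - double(quantizedImg);
    error = sum(diff(:) .^ 2);
    end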
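
One possible shape for getHueHists; here the outputs are the handles returned by histogram, and the bin edges and subplot layout are illustrative choices.

    function [histEqual, histClustered] = getHueHists(im, k)
    % Display two hue histograms: k equally spaced bins vs. k cluster-membership bins.
    hsvImg = rgb2hsv(im);
    hues = hsvImg(:, :, 1);

    % Reuse quantizeHSV to obtain each pixel's hue-cluster membership.
    [~, ~, clusterIds] = quantizeHSV(im, k);

    figure;
    subplot(1, 2, 1);
    histEqual = histogram(hues(:), linspace(0, 1, k + 1));
    title(sprintf('Equally spaced hue bins (k = %d)', k));

    subplot(1, 2, 2);
    histClustered = histogram(clusterIds, 0.5:1:(k + 0.5));
    title(sprintf('Hue cluster membership bins (k = %d)', k));
    end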
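
Finally, a sketch of the driver script; the particular k values and the figure layout are illustrative.

    % colorQuantizeMain.m
    origImg = imread('fish.jpg');

    for k = [2 5 10]                                    % illustrative k values
        [rgbQuant, ~, ~] = quantizeRGB(origImg, k);
        [hsvQuant, ~, ~] = quantizeHSV(origImg, k);     % already returns RGB uint8

        figure;
        subplot(1, 3, 1); imshow(origImg);  title('Original');
        subplot(1, 3, 2); imshow(rgbQuant); title(sprintf('RGB quantized, k = %d', k));
        subplot(1, 3, 3); imshow(hsvQuant); title(sprintf('HSV quantized, k = %d', k));

        fprintf('k = %d: RGB SSD = %g, HSV SSD = %g\n', k, ...
            computeQuantizationError(origImg, rgbQuant), ...
            computeQuantizationError(origImg, hsvQuant));

        getHueHists(origImg, k);                        % the two hue histograms
    end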

Part III: Feature Extraction and Description (55 points)

In this problem, you will implement a feature extraction/detection and description pipeline, followed by a simple image retrieval task. While you will not implement it exactly, the SIFT paper by David Lowe is a useful resource, in addition to Section 4.1 of the Szeliski textbook. Illustrative code sketches of the components below appear at the end of this part. What you should include in your submission:
  1. [15 points] function [x, y, scores, Gx, Gy] = extract_keypoints(image) -- Code to perform keypoint detection (feature extraction) using the Harris corner detector, as described in class.
  2. [15 points] function [features, x, y, scores] = compute_features(image, x, y, scores, Gx, Gy) -- Code to perform feature description, similar to the descriptor in Lowe's SIFT paper.

  3. [10 points] Now pick a test set of 10 images and run your feature extraction and description on them. Visualize the keypoints you have detected, for example by drawing circles over them. Use the scores variable so that keypoints with higher scores are drawn with larger circles. Note that when plotting over an image, Matlab places the coordinate origin at the top-left corner (x increases to the right, y increases downward). Save your code in a script called part3_c.m. Save the figures that show your features and include them in your answer sheet.

  4. [15 points] For one of the images in your test set (which we shall call the query image), rank the images in the remainder of the test set based on how similar they are to the query.
Now you have implemented a full basic image retrieval pipeline!
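
For reference, a minimal sketch of the Harris detector; the smoothing window, the constant alpha, and the response threshold are illustrative values you should tune, and the sketch assumes the Image Processing Toolbox (imgradientxy, imfilter, imregionalmax).

    function [x, y, scores, Gx, Gy] = extract_keypoints(image)
    % Harris corners: gradients -> smoothed second-moment matrix -> corner
    % response -> thresholded local maxima.
    if size(image, 3) == 3
        img = double(rgb2gray(image));
    else
        img = double(image);
    end

    [Gx, Gy] = imgradientxy(img);                       % returned for reuse later

    window = fspecial('gaussian', 9, 2);                % illustrative window
    Sxx = imfilter(Gx .^ 2,  window, 'replicate');
    Syy = imfilter(Gy .^ 2,  window, 'replicate');
    Sxy = imfilter(Gx .* Gy, window, 'replicate');

    % Harris response R = det(M) - alpha * trace(M)^2.
    alpha = 0.05;
    R = (Sxx .* Syy - Sxy .^ 2) - alpha * (Sxx + Syy) .^ 2;

    % Keep local maxima above a threshold (simple non-maximum suppression).
    mask = imregionalmax(R) & (R > 0.01 * max(R(:)));
    [y, x] = find(mask);                                % y = row, x = column
    scores = R(mask);
    end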
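
A simplified, SIFT-like descriptor sketch: the 16x16 patch around each keypoint is split into a 4x4 grid of 4x4 cells, and each cell accumulates an 8-bin gradient-orientation histogram weighted by gradient magnitude, giving a 128-dimensional descriptor. The patch size and bin counts are illustrative, and this sketch has no scale or rotation invariance.

    function [features, x, y, scores] = compute_features(image, x, y, scores, Gx, Gy)
    % Simplified SIFT-like descriptors built from the gradients Gx, Gy.
    [h, w, ~] = size(image);
    mag = sqrt(Gx .^ 2 + Gy .^ 2);
    ang = atan2(Gy, Gx);                                % in (-pi, pi]

    % Discard keypoints whose 16x16 patch would fall outside the image.
    half = 8;
    keep = x > half & x <= w - half & y > half & y <= h - half;
    x = x(keep); y = y(keep); scores = scores(keep);

    features = zeros(numel(x), 128);
    for i = 1:numel(x)
        rows = (y(i) - half + 1):(y(i) + half);
        cols = (x(i) - half + 1):(x(i) + half);
        pMag = mag(rows, cols);
        bins = min(floor((ang(rows, cols) + pi) / (pi / 4)) + 1, 8);  % 1..8

        desc = zeros(4, 4, 8);
        for r = 1:16
            for c = 1:16
                cellR = ceil(r / 4); cellC = ceil(c / 4);
                desc(cellR, cellC, bins(r, c)) = ...
                    desc(cellR, cellC, bins(r, c)) + pMag(r, c);
            end
        end

        v = desc(:)';
        features(i, :) = v / max(norm(v), eps);         % unit-length descriptor
    end
    end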
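
A sketch of the visualization script; the image folder, file pattern, number of displayed keypoints, and circle-scaling factor are all assumptions to adapt.

    % part3_c.m -- visualize Harris keypoints, with radius scaled by score.
    files = dir(fullfile('test_images', '*.jpg'));      % assumed test-set location

    for i = 1:numel(files)
        img = imread(fullfile('test_images', files(i).name));
        [x, y, scores, ~, ~] = extract_keypoints(img);

        % Show only the strongest keypoints to keep the figure readable.
        [scores, order] = sort(scores, 'descend');
        n = min(100, numel(scores));
        x = x(order(1:n)); y = y(order(1:n)); scores = scores(1:n);

        figure; imshow(img); hold on;
        % Plot coordinates are (x = column, y = row), origin at the top-left.
        radii = 3 + 20 * scores / max(scores);
        viscircles([x y], radii, 'EdgeColor', 'r');
        title(files(i).name, 'Interpreter', 'none');
        hold off;
    end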
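
One reasonable similarity measure (among several) for the ranking step: for every descriptor in the query image, find the distance to its nearest descriptor in the other image, and average those nearest-neighbor distances, so that a smaller value means more similar. The folder name and query index below are illustrative, and pdist2 is from the Statistics Toolbox.

    % Rank the other test images by similarity to a chosen query image.
    files = dir(fullfile('test_images', '*.jpg'));
    featuresAll = cell(numel(files), 1);
    for i = 1:numel(files)
        img = imread(fullfile('test_images', files(i).name));
        [x, y, scores, Gx, Gy] = extract_keypoints(img);
        featuresAll{i} = compute_features(img, x, y, scores, Gx, Gy);
    end

    queryIdx = 1;                                       % illustrative query choice
    dists = inf(numel(files), 1);
    for j = 1:numel(files)
        if j == queryIdx, continue; end
        D = pdist2(featuresAll{queryIdx}, featuresAll{j});   % pairwise L2 distances
        dists(j) = mean(min(D, [], 2));                 % mean nearest-neighbor distance
    end
    [~, ranking] = sort(dists);                         % most similar images first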

Acknowledgement: Parts I and II are adapted from Kristen Grauman. Part III was inspired in part by an assignment by James Hays.