CS1674: Homework 11 - Practice

Due: 12/8/2016, 11:59pm

This assignment is worth 50 points.

Unlike previous P assignments, here P stands for "Practice", not "Programming". For most parts, you will submit a Word or PDF document with your answers; you may also write on paper and submit a scan. For two of the parts, you will also write simple code.

Part I [5 points]

In this part, you will compute some network activations using a fixed input and weights.

Use the following network diagram (similar to the one we saw in class, but without biases).

[Network diagram: four inputs x1-x4, three hidden units, two output units]

The following are the input pixel values and the weight values:

Inputs:
  x1 = 10, x2 = 1, x3 = 2, x4 = 3

Layer-1 weights w(1)ij (hidden unit i, input j):
  w(1)11 = 0.5   w(1)12 = 0.6   w(1)13 = 0.4   w(1)14 = 0.3
  w(1)21 = 0.02  w(1)22 = 0.25  w(1)23 = 0.4   w(1)24 = 0.3
  w(1)31 = 0.82  w(1)32 = 0.1   w(1)33 = 0.35  w(1)34 = 0.3

Layer-2 weights w(2)ij (output unit i, hidden unit j):
  w(2)11 = 0.7   w(2)12 = 0.45  w(2)13 = 0.5
  w(2)21 = 0.17  w(2)22 = 0.9   w(2)23 = 0.8

What is the value of z2 if a tanh activation is used? (Hint: You can use Matlab as a calculator, along with Matlab's tanh function.)
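
A minimal Matlab sketch of this computation, assuming the w(1) indices are (hidden unit, input), the w(2) indices are (output unit, hidden unit), z2 denotes the second output unit, and tanh is applied at the hidden layer (adjust to match the class diagram's convention):

    x  = [10; 1; 2; 3];            % input pixels x1..x4
    W1 = [0.5  0.6  0.4  0.3;      % w(1)ij: row i = hidden unit, col j = input
          0.02 0.25 0.4  0.3;
          0.82 0.1  0.35 0.3];
    W2 = [0.7  0.45 0.5;           % w(2)ij: row i = output unit, col j = hidden unit
          0.17 0.9  0.8];
    a = tanh(W1 * x);              % hidden-layer activations
    z = W2 * a;                    % output-layer values (apply tanh here too
    z2 = z(2)                      % if the class convention calls for it)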

Part II [5 points]

What is the output size resulting from convolving a 35x35 image with a filter of size 15x15, using:
  1. stride 1 and no padding?
  2. stride 1 and padding 1?
  3. stride 2 and padding 3?
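
All three cases follow from the standard output-size formula floor((N - F + 2P) / S) + 1, where N is the image size, F the filter size, P the padding, and S the stride. A minimal Matlab check (the function handle out is just a convenience, not part of any required submission):

    N = 35; F = 15;                       % sizes from the question above
    out = @(S, P) floor((N - F + 2*P) / S) + 1;
    out(1, 0)                             % case 1: stride 1, no padding
    out(1, 1)                             % case 2: stride 1, padding 1
    out(2, 3)                             % case 3: stride 2, padding 3
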
Part III [10 points]

In this part, you will compute the output of applying a single set of convolution, non-linearity, and pooling operations on a small example. Use no padding and a stride of 2 (in both the horizontal and vertical directions). Below are your image (with size N = 9) and your filter (with size F = 3).

[9x9 image and 3x3 filter]

  1. First, show the output of applying convolution.
  2. Second, show the output of applying a Rectified Linear Unit (ReLU) activation.
  3. Third, show the output of applying max pooling over 2x2 regions.
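
If you want to sanity-check your hand computation, the following Matlab sketch walks through the three steps. The image I and filter f below are hypothetical stand-ins; substitute the actual values from the figure above. Convolution is applied as cross-correlation (no filter flipping), as is common in CNN contexts, and pooling is assumed to use non-overlapping 2x2 regions:

    I = magic(9);                     % stand-in 9x9 image (use the real one)
    f = ones(3);                      % stand-in 3x3 filter (use the real one)
    S = 2;                            % stride from the problem statement
    n = floor((9 - 3) / S) + 1;       % convolution output is n x n = 4x4
    conv_out = zeros(n);
    for r = 1:n
        for c = 1:n
            patch = I((r-1)*S + (1:3), (c-1)*S + (1:3));
            conv_out(r, c) = sum(sum(patch .* f));
        end
    end
    relu_out = max(conv_out, 0);      % ReLU zeroes out the negatives
    pool_out = zeros(n/2);            % 2x2 max pooling -> 2x2 output
    for r = 1:n/2
        for c = 1:n/2
            block = relu_out((r-1)*2 + (1:2), (c-1)*2 + (1:2));
            pool_out(r, c) = max(block(:));
        end
    end
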
Part IV [15 points] -- includes simple coding

In this part, you will compute two types of loss functions: SVM and softmax. You will use three different sets of weights W, each of which will result in a different set of scores s for four image examples. You have to determine which set of weights results in the smallest SVM loss, and which set results in the smallest softmax loss. The weights and inputs are in this file. The first image (x1) is of class 1, the second of class 2, the third of class 3, and the fourth of class 4. We will use a simple function f(x) = W*x.

Write a script hw11p_part3.m to compute the losses, and include it in your submission. Also include two helper functions, SVM_loss(scores, correct_index) and softmax_loss(scores, correct_index), each of which computes the per-example loss Li for an individual example. In your Word/PDF write-up, state which weight matrix results in the smallest (a) SVM loss and (b) softmax loss.

If you're checking your code against the slides from class, note that in our example (slide 49 in slide deck 20), the correct value of -log(0.13) is 2.0402 (using the natural log).
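
As a starting point, here is a hedged sketch of the two helpers (each would go in its own .m file), implementing the multiclass SVM (hinge) loss with margin 1 and the softmax cross-entropy loss, i.e. the per-example Li formulas from class:

    function L = SVM_loss(scores, correct_index)
        % Multiclass SVM loss: sum of margins over the incorrect classes.
        margins = max(0, scores - scores(correct_index) + 1);
        margins(correct_index) = 0;          % the correct class contributes 0
        L = sum(margins);
    end

    function L = softmax_loss(scores, correct_index)
        % Softmax cross-entropy loss: -log of the correct class probability.
        scores = scores - max(scores);       % shift for numerical stability
        p = exp(scores) / sum(exp(scores));  % normalized class probabilities
        L = -log(p(correct_index));          % natural log, as in the slides
    end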

Part V [15 points] -- includes simple coding

In this part, you will compute the numerical gradient for the first weight matrix, W1, from the previous part. Write a simple script hw11p_part4.m that loops over the dimensions of the (flattened) weight vector and numerically computes the partial derivative with respect to each dimension. Then stack the derivatives together and output the resulting vector as the gradient. Use the SVM loss, summed over all examples, as the loss for that weight vector. Use h = 0.0001.

After you compute the gradient, also show the result of a weight update with a learning rate of 0.001.

Technicalities: Flatten W1 into a vector via W1(:). Use reshape to turn any intermediate W1_plus_h (needed when computing a derivative) back into a 4x25 matrix. Change one dimension of the weight vector at a time: keep a copy of the original weight vector before making any changes, and reset the weight vector to that original at the start of each loop iteration.
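
A hedged sketch of the loop described above (forward differences with h = 0.0001). Here total_svm_loss is a hypothetical helper, assumed to sum SVM_loss over all four examples for a given 4x25 weight matrix:

    h = 0.0001;
    w_orig = W1(:);                        % flatten 4x25 -> 100x1; keep pristine
    L0 = total_svm_loss(reshape(w_orig, 4, 25));
    grad = zeros(size(w_orig));
    for d = 1:numel(w_orig)
        w = w_orig;                        % reset to the original each iteration
        w(d) = w(d) + h;                   % perturb one dimension at a time
        W1_plus_h = reshape(w, 4, 25);     % back to matrix form for the loss
        grad(d) = (total_svm_loss(W1_plus_h) - L0) / h;
    end
    W1_updated = reshape(w_orig - 0.001 * grad, 4, 25);   % weight update step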