This assignment is worth 50 points.

In this part, you will compute some network activations using a fixed input and weights.

Use the following network diagram (similar to the one we saw in class, but without biases).

The following are the input pixel values and the weight values:

Inputs:

| x_{1} = 10 | x_{2} = 1 | x_{3} = 2 | x_{4} = 3 |

Layer 1 weights:

| w^{(1)}_{11} = 0.5 | w^{(1)}_{12} = 0.6 | w^{(1)}_{13} = 0.4 | w^{(1)}_{14} = 0.3 |
| w^{(1)}_{21} = 0.02 | w^{(1)}_{22} = 0.25 | w^{(1)}_{23} = 0.4 | w^{(1)}_{24} = 0.3 |
| w^{(1)}_{31} = 0.82 | w^{(1)}_{32} = 0.1 | w^{(1)}_{33} = 0.35 | w^{(1)}_{34} = 0.3 |

Layer 2 weights:

| w^{(2)}_{11} = 0.7 | w^{(2)}_{12} = 0.45 | w^{(2)}_{13} = 0.5 |
| w^{(2)}_{21} = 0.17 | w^{(2)}_{22} = 0.9 | w^{(2)}_{23} = 0.8 |

What is the value of z
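Since the network diagram is not reproduced here, the forward pass can be sketched under the assumption the weights suggest: a fully connected 4-3-2 network with no biases and, for the raw values z, no non-linearity between layers. The NumPy sketch below uses exactly the values from the table above; the layout of W1 and W2 (rows index the receiving unit) is an assumption to be checked against the diagram.

```python
import numpy as np

# Input pixel values from the table above.
x = np.array([10.0, 1.0, 2.0, 3.0])

# W1[i, j] holds w^(1)_{i+1, j+1}: the weight from input j to hidden unit i
# (this row/column convention is an assumption based on the index pattern).
W1 = np.array([
    [0.50, 0.60, 0.40, 0.30],
    [0.02, 0.25, 0.40, 0.30],
    [0.82, 0.10, 0.35, 0.30],
])

# W2[i, j] holds w^(2)_{i+1, j+1}: the weight from hidden unit j to output i.
W2 = np.array([
    [0.70, 0.45, 0.50],
    [0.17, 0.90, 0.80],
])

h = W1 @ x  # hidden-layer values: [7.3, 2.15, 9.9]
z = W2 @ h  # output-layer values: [11.0275, 11.096]
print(h, z)
```

If the diagram includes an activation function on the hidden layer, apply it to `h` before the second matrix product.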

What is the output size resulting from convolving a 35x35 image with a filter of size 15x15, using:

- stride 1 and no padding?
- stride 1 and padding 1?
- stride 2 and padding 3?
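All three cases follow from the standard output-size formula for a convolution, output = (N - F + 2P) / S + 1, with image size N = 35 and filter size F = 15:

```python
def conv_output_size(n, f, stride, pad):
    """Spatial output size of a convolution: (N - F + 2P) / S + 1."""
    return (n - f + 2 * pad) // stride + 1

print(conv_output_size(35, 15, stride=1, pad=0))  # (35 - 15 + 0)/1 + 1 = 21
print(conv_output_size(35, 15, stride=1, pad=1))  # (35 - 15 + 2)/1 + 1 = 23
print(conv_output_size(35, 15, stride=2, pad=3))  # (35 - 15 + 6)/2 + 1 = 14
```

Note that the formula only gives an integer output when S evenly divides N - F + 2P, as it does in all three cases here.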

In this part, you will compute the output from applying a single set of convolution, non-linearity, and pooling operations on a small example. You will use no padding and a stride of 2 (in both the horizontal and vertical directions). Below are your image (with size N = 9) and your filter (with size F = 3).

- First, show the output of applying convolution.
- Second, show the output of applying a Rectified Linear Unit (ReLU) activation.
- Third, show the output of applying max pooling over 2x2 regions.
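The three steps can be sketched as follows. Since the actual 9x9 image and 3x3 filter values are not reproduced here, the sketch uses placeholder data; substitute the values from the assignment. With N = 9, F = 3, stride 2, and no padding, the convolution output is (9 - 3)/2 + 1 = 4, i.e. 4x4, and max pooling over non-overlapping 2x2 regions then gives a 2x2 result.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder data: replace with the image and filter from the assignment.
image = rng.integers(-5, 6, size=(9, 9)).astype(float)
filt = rng.integers(-1, 2, size=(3, 3)).astype(float)

def conv2d(img, f, stride):
    """Valid (no-padding) convolution as used in CNNs (cross-correlation)."""
    n, fs = img.shape[0], f.shape[0]
    out = (n - fs) // stride + 1
    res = np.empty((out, out))
    for i in range(out):
        for j in range(out):
            patch = img[i*stride:i*stride+fs, j*stride:j*stride+fs]
            res[i, j] = np.sum(patch * f)
    return res

conv = conv2d(image, filt, stride=2)  # step 1: 4x4 convolution output
relu = np.maximum(conv, 0)            # step 2: elementwise ReLU
# Step 3: max pooling over non-overlapping 2x2 regions -> 2x2 output.
pooled = relu.reshape(2, 2, 2, 2).max(axis=(1, 3))
```

The `reshape`-then-`max` trick for pooling assumes the pooling regions tile the input exactly, which they do here (4x4 input, 2x2 regions).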

In this part, you will compute two types of loss functions: SVM and softmax. You will use three different sets of weights W, each of which will result in a different set of scores s for four image examples. You have to determine which set of weights results in the smallest SVM loss, and which set results in the smallest softmax loss. The weights and inputs are in this file. The first image (x

Write a script hw11p_part3.m to compute the losses, and include it in your submission. Also include two helper functions, SVM_loss(scores, correct_index) and softmax_loss(scores, correct_index), which include code to compute the L

If you're checking your code against the slides from class, note that in the example from class (slide 49 of slide deck 20), the correct answer, -log(0.13), is 2.0402; this uses the natural logarithm.
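Although the deliverables are MATLAB functions, the two losses can be sketched in Python/NumPy. Both take the score vector for one example and the index of its correct class, matching the requested `SVM_loss(scores, correct_index)` and `softmax_loss(scores, correct_index)` signatures; summing (or averaging, per the assignment's convention) over the four examples gives the total loss for a weight set.

```python
import numpy as np

def svm_loss(scores, correct_index):
    """Multiclass SVM (hinge) loss with margin 1 for a single example."""
    margins = np.maximum(0, scores - scores[correct_index] + 1)
    margins[correct_index] = 0  # the correct class contributes no margin term
    return margins.sum()

def softmax_loss(scores, correct_index):
    """Cross-entropy loss of the softmax of the scores (natural log)."""
    shifted = scores - scores.max()  # shift for numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[correct_index]
```

As a sanity check, scores whose softmax assigns probability 0.13 to the correct class give a softmax loss of -log(0.13) = 2.0402, matching the slide example.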

In this part, you will compute the numerical gradient for the first weight vector from the previous part. Write a simple script hw11p_part4.m to loop over the dimensions of the weight vector and numerically compute the derivative for that dimension. Then stack the derivatives together, and output the resulting vector as the gradient. Use the SVM loss to compute the loss for that weight vector over all examples. Use h=0.0001.

After you compute the gradient, also show the result of a weight update with learning rate of 0.001.
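The loop over dimensions can be sketched in Python/NumPy as follows (the deliverable itself is the MATLAB script hw11p_part4.m). The quadratic loss below is only a stand-in for the actual total SVM loss over the provided examples, whose weights and data live in the assignment file; the gradient routine and the update step are the parts that carry over.

```python
import numpy as np

def numerical_gradient(loss_fn, w, h=1e-4):
    """Forward-difference gradient: grad_i = (L(w + h*e_i) - L(w)) / h."""
    grad = np.zeros_like(w)
    base = loss_fn(w)  # loss at the unperturbed weight vector
    for i in range(w.size):
        w_step = w.copy()
        w_step[i] += h         # perturb one dimension at a time
        grad[i] = (loss_fn(w_step) - base) / h
    return grad                # stacked per-dimension derivatives

# Stand-in loss; replace with the total SVM loss over all examples.
loss = lambda w: np.sum(w ** 2)

w = np.array([1.0, -2.0, 0.5])
g = numerical_gradient(loss, w)   # approximately 2*w for this toy loss
w_updated = w - 0.001 * g         # gradient-descent step, learning rate 0.001
```

The assignment's h = 0.0001 is the default here; a central difference, (L(w + h*e_i) - L(w - h*e_i)) / (2h), would be more accurate but is not what the forward-difference recipe above computes.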

Technicalities: Make W