CS 1501, Algorithm Implementation
Fall 2013, Term 2141

This page lists basic information on the topics we have covered during each recitation. It is meant to help you review what we have learned.

Recitation Date Topics
13 Fri 6 Dec Dynamic programming for matrix-chain multiplication; the 0/1 Knapsack problem.
Fri 29 Nov No Class: Thanksgiving Recess
12 Fri 22 Nov Fast modular arithmetic; modular exponentiation and its role in RSA encryption.
11 Fri 15 Nov Ford–Fulkerson augmenting-path algorithm for network flow; maximum flow, minimum cut, and the max-flow/min-cut theorem.
10 Fri 8 Nov Knuth–Morris–Pratt string searching; Boyer–Moore string searching.
9 Fri 1 Nov Vitter (update) algorithm for adaptive Huffman coding; Lempel–Ziv–Welch (LZW) compression and decompression.
Optional reading: The Wikipedia page for LZW notes that “Further refinements include reserving a code to indicate that…allows the table to be reinitialized after it fills up, which lets the encoding adapt to changing patterns in the input data. Smart encoders can monitor the compression efficiency and clear the table whenever the existing table no longer matches the input well.”
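For reference, here is a minimal Java sketch of LZW compression (not the code we wrote in class) that assumes 8-bit input characters and a fixed 4096-entry code table; when the table fills, it is simply cleared and reseeded, which is the idea behind the refinement quoted above. A real encoder would also emit a reserved "clear code" so the decoder can reset its own table at the same point.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LZWSketch {
    // Compresses `input` into a list of integer codes. Assumes 8-bit characters
    // and a 12-bit (4096-entry) code table.
    static List<Integer> compress(String input) {
        final int MAX_CODES = 4096;
        Map<String, Integer> table = new HashMap<>();
        int nextCode = initTable(table);

        List<Integer> output = new ArrayList<>();
        String current = "";
        for (char c : input.toCharArray()) {
            String extended = current + c;
            if (table.containsKey(extended)) {
                current = extended;                  // keep extending the match
            } else {
                output.add(table.get(current));      // emit code for longest known prefix
                if (nextCode < MAX_CODES) {
                    table.put(extended, nextCode++); // learn the new string
                } else {
                    // Table full: clear and reseed it so coding can adapt to new patterns.
                    // A full implementation would also emit a reserved "clear code" here
                    // so the decoder resets its table in step.
                    nextCode = initTable(table);
                }
                current = String.valueOf(c);
            }
        }
        if (!current.isEmpty()) output.add(table.get(current));
        return output;
    }

    // Seeds the table with all 256 single characters; returns the next free code.
    static int initTable(Map<String, Integer> table) {
        table.clear();
        for (int i = 0; i < 256; i++) table.put(String.valueOf((char) i), i);
        return 256;
    }

    public static void main(String[] args) {
        System.out.println(compress("TOBEORNOTTOBEORTOBEORNOT"));  // prints the code sequence
    }
}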
8 Fri 25 Oct n-ary Huffman coding; Rice–Golomb coding variation.
Optional reading: The algorithm that we followed for the second problem results in codewords of size e or e + 1 for an alphabet of m symbols, where 2^e < m ≤ 2^(e+1). This is actually a variation of a Rice–Golomb coding algorithm with a parameter M = m. Since encoding symbols in this way always results in a quotient of 0, we ignore encoding the quotient part and instead focus only on the remainder part. Of course, this works best when we know which symbols are less frequent and thus should get the (e + 1)-bit codes.
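The following is a minimal Java sketch of this remainder-only encoding (not the exact code from recitation). It assumes the symbols are numbered 0 through m − 1 with the most frequent symbols given the lowest numbers, so they receive the shorter e-bit codes; here e is computed as floor(log2 m), which matches the definition above whenever m is not an exact power of two.

public class TruncatedBinary {
    // Returns the codeword for `symbol` (0 <= symbol < m) as a bit string.
    static String encode(int symbol, int m) {
        int e = 31 - Integer.numberOfLeadingZeros(m);  // e = floor(log2 m)
        int u = (1 << (e + 1)) - m;                    // how many symbols get the short code
        if (symbol < u) {
            return toBits(symbol, e);                  // short codeword: e bits
        } else {
            return toBits(symbol + u, e + 1);          // long codeword: e + 1 bits
        }
    }

    // Writes `value` as a fixed-width binary string of `width` bits.
    static String toBits(int value, int width) {
        StringBuilder bits = new StringBuilder();
        for (int i = width - 1; i >= 0; i--) bits.append((value >> i) & 1);
        return bits.toString();
    }

    public static void main(String[] args) {
        // With m = 5: e = 2, u = 3, so symbols 0-2 get 2-bit codes and 3-4 get 3-bit codes.
        for (int s = 0; s < 5; s++) System.out.println(s + " -> " + encode(s, 5));
    }
}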
7 Fri 18 Oct Huffman coding and Huffman trees (minimum variance Huffman trees).
6 Fri 11 Oct Review of Exam 1 solutions.
5 Fri 4 Oct Minimum spanning trees (MSTs); Prim’s algorithm (lazy vs eager versions); Kruskal’s algorithm; Reverse-delete algorithm.
Optional reading: Kruskal’s algorithm and the reverse-delete algorithm were both developed by Joseph Kruskal and first appeared in the same 1956 paper. Both are greedy algorithms for finding an MST, and each is simply the reverse of the other: Kruskal’s adds the least-cost edges that don’t form cycles, while reverse-delete removes the greatest-cost edges that don’t disconnect the graph.
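As a quick illustration of the first rule, here is a minimal Java sketch of Kruskal's algorithm using a simple union-find structure to detect cycles. The Edge class and the edge-array input format are assumptions made for this example, not code from class.

import java.util.Arrays;

public class KruskalSketch {
    static class Edge {
        final int u, v;
        final double weight;
        Edge(int u, int v, double weight) { this.u = u; this.v = v; this.weight = weight; }
    }

    // Follows parent links to the root of x's tree (no path compression, for brevity).
    static int find(int[] parent, int x) {
        while (parent[x] != x) x = parent[x];
        return x;
    }

    // Returns the total weight of an MST of a connected graph on n vertices.
    static double mstWeight(int n, Edge[] edges) {
        Arrays.sort(edges, (a, b) -> Double.compare(a.weight, b.weight));  // cheapest first
        int[] parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;

        double total = 0;
        int taken = 0;
        for (Edge e : edges) {
            int ru = find(parent, e.u), rv = find(parent, e.v);
            if (ru != rv) {               // endpoints in different trees, so no cycle is formed
                parent[ru] = rv;          // union the two trees
                total += e.weight;
                if (++taken == n - 1) break;  // a spanning tree has exactly n - 1 edges
            }
        }
        return total;
    }

    public static void main(String[] args) {
        Edge[] edges = {
            new Edge(0, 1, 4), new Edge(0, 2, 1),
            new Edge(1, 2, 2), new Edge(1, 3, 5), new Edge(2, 3, 8)
        };
        System.out.println(mstWeight(4, edges));  // 8.0
    }
}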
4 Fri 27 Sep Graph search paradigms (depth- and breadth-first search); Recursion in depth-first search (DFS); Queues in breadth-first search (BFS); Graph data structures (adjacency lists, strategies for disallowing duplicate edges and self-edges); Graph operations (graph join).
Optional reading: DFS does not actually require recursion, but can instead be implemented using a stack, much like BFS uses a queue. In effect, the only difference between DFS and BFS is that DFS visits the vertices most recently discovered, whereas BFS visits vertices in the order they were discovered.
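To make the point concrete, here is a minimal Java sketch of DFS written with an explicit stack instead of recursion; swapping the stack's push/pop for a queue's enqueue/dequeue turns it into BFS. The list-of-lists adjacency representation is just an assumption for this example.

import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

public class IterativeDFS {
    // Marks every vertex reachable from `source`.
    static boolean[] dfs(List<List<Integer>> adj, int source) {
        boolean[] visited = new boolean[adj.size()];
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(source);
        while (!stack.isEmpty()) {
            int v = stack.pop();            // take the most recently discovered vertex
            if (visited[v]) continue;       // a vertex may have been pushed more than once
            visited[v] = true;
            for (int w : adj.get(v)) {
                if (!visited[w]) stack.push(w);
            }
        }
        return visited;
    }

    public static void main(String[] args) {
        // Undirected graph with edges 0-1, 0-2, 1-3
        List<List<Integer>> adj = List.of(
            List.of(1, 2), List.of(0, 3), List.of(0), List.of(1));
        System.out.println(Arrays.toString(dfs(adj, 0)));  // [true, true, true, true]
    }
}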
3 Fri 20 Sep Ternary and R-way search tries; Binary heaps as priority queues; Array representation of a binary heap; Index arithmetic for mapping between the array and tree representations; Heap operations (insert, delete); Heap algorithms (bubble, sink).
2 Fri 13 Sep Digital search trees vs radix search tries (AKA Patricia tries or compact prefix trees); Asymptotic runtime analysis; Hashtables; Hash collisions; Linear probing; Double hashing; Considerations with deleting elements from hashtables.
Optional reading: Apart from linear probing and double hashing, there are many other ways to resolve or otherwise handle hashtable collisions, many of which make use of additional data structures to improve some aspect of performance.
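One such approach is separate chaining, where each table slot holds a small auxiliary list of the keys that hash to it. Below is a minimal Java sketch (illustrative only; the class and method names are made up for this example). Note that deletion is simpler here than in the probing schemes from recitation, since a key can just be removed from its bucket's list.

import java.util.LinkedList;

public class ChainedHashSet {
    private final LinkedList<String>[] buckets;

    @SuppressWarnings("unchecked")
    ChainedHashSet(int size) {
        buckets = new LinkedList[size];
        for (int i = 0; i < size; i++) buckets[i] = new LinkedList<>();
    }

    // Maps a key to a non-negative bucket index.
    private int index(String key) {
        return (key.hashCode() & 0x7fffffff) % buckets.length;
    }

    void add(String key) {
        LinkedList<String> bucket = buckets[index(key)];
        if (!bucket.contains(key)) bucket.add(key);   // colliding keys share the bucket's list
    }

    boolean contains(String key) {
        return buckets[index(key)].contains(key);
    }

    // No tombstones or rehashing needed: just remove the key from its bucket.
    void remove(String key) {
        buckets[index(key)].remove(key);
    }

    public static void main(String[] args) {
        ChainedHashSet set = new ChainedHashSet(8);
        set.add("heap"); set.add("trie"); set.add("hash");
        set.remove("trie");
        System.out.println(set.contains("heap") + " " + set.contains("trie"));  // true false
    }
}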
1 Fri 6 Sep Linear search in an arbitrary array; Binary search in a sorted array; Searching a linked list vs searching an array; Landau (Big O) notation review; Binary search trees; Balance in binary search trees (degenerate case is a linked list).
Optional reading: AVL trees are a common type of self-balancing binary search tree. By paying the cost of rebalancing on every insertion and deletion, they keep the tree's height logarithmic, so worst-case search drops from O(n) in the degenerate case to O(log n).