Cache Memory

The idea of cache memory is to minimize memory access time. When accessing main memory one may face long delays of 5 or even more cycles. However, if we keep some data in a local "cache" memory, we can access it much faster, usually in one clock cycle.

We can rely on data and instruction locality, which means that the next data item or instruction accessed is likely to be close to the previously accessed one. Thus we can load a block of memory into a "cache line" so that the CPU has fast access to it later.
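To make the mapping between memory blocks and cache lines concrete, here is a small behavioral sketch in Python (used purely as a modeling language; the line size and line count below are assumed example values, not parameters prescribed by this lab):

```python
# Behavioral sketch: splitting a word address into tag, index and offset
# for a direct-mapped cache. LINE_SIZE and NUM_LINES are assumed example
# values for illustration only.
LINE_SIZE = 4    # words per cache line
NUM_LINES = 8    # lines in the cache

def split_address(addr):
    """Return (tag, index, offset) for a word address."""
    offset = addr % LINE_SIZE                 # word within the cache line
    index = (addr // LINE_SIZE) % NUM_LINES   # which cache line to use
    tag = addr // (LINE_SIZE * NUM_LINES)     # remainder, kept in the tag table
    return tag, index, offset

# Consecutive addresses land in the same line: locality pays off.
print(split_address(0))   # (0, 0, 0)
print(split_address(3))   # (0, 0, 3)
```

Note that addresses 0 through 3 share one cache line, so after loading the block once, the next three accesses can be served from the cache.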

Note that a pipelined CPU has two ports for memory access: one for instructions and the other for data. Therefore you need two caches: an instruction cache and a data cache. The major difference between the two is that the data cache must be capable of performing both read and write operations, while the instruction cache needs to provide only read operations.

The cache memory controller has a memory for data storage and a control unit. The memory holds data fetched from the main memory or updated by the CPU. The control unit decides whether a memory access by the CPU is a HIT or a MISS, serves the requested data, loads data from and stores data to the main memory, and decides where to store data in the cache memory. Another common part of the cache memory is a tag table. It keeps track of which data is stored in which cache line; usually the cache controller stores part of the address in the tag table. You will use the following COELib component for the cache memory: RAM32x1. You are encouraged to use your own modules for larger memories built from this RAM module.
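To make the HIT/MISS decision and the role of the tag table concrete, the following is a behavioral sketch of a direct-mapped cache read, again in Python as a modeling language only (the real design is built from RAM32x1-based memories, and the sizes here are assumed example values):

```python
LINE_SIZE = 4   # words per line (assumed example value)
NUM_LINES = 8   # lines in the cache (assumed example value)

class DirectMappedCache:
    def __init__(self, main_memory):
        self.mem = main_memory     # stand-in for the slow main memory
        self.lines = [[0] * LINE_SIZE for _ in range(NUM_LINES)]  # data storage
        self.tags = [None] * NUM_LINES  # tag table: which block each line holds

    def read(self, addr):
        """Return (hit, data) for a word address."""
        offset = addr % LINE_SIZE
        index = (addr // LINE_SIZE) % NUM_LINES
        tag = addr // (LINE_SIZE * NUM_LINES)
        if self.tags[index] == tag:           # tag table matches: HIT
            return True, self.lines[index][offset]
        # MISS: fetch the whole block from main memory, update the tag table
        base = (addr // LINE_SIZE) * LINE_SIZE
        self.lines[index] = [self.mem[base + i] for i in range(LINE_SIZE)]
        self.tags[index] = tag
        return False, self.lines[index][offset]

mem = list(range(100))            # toy main memory contents
cache = DirectMappedCache(mem)
print(cache.read(5))   # (False, 5)  first access to the block: MISS
print(cache.read(6))   # (True, 6)   same cache line: HIT
```

In hardware the same decision is made by comparing the tag field of the CPU address against the tag table entry selected by the index field; the sketch above only models the logic, not the timing.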

There are many cache designs possible. Your cache memory:

More information on cache memories and controllers is available in the text book.

Suggestions on the design:

Good luck!