Computer Science

Problem 1: Seraph’s Staff Injured Merovingian

The L3 cache for a fast new machine with byte-addressable memory is 32M bytes, with a block size of 4K bytes. Answer the following 4-point questions about potential cache organizations for this machine, assuming that the address has 36 bits.

1.1 Diagram the address of a typical memory access, assuming that the organization of the cache is 8-way set associative. Label all parts of the address clearly with name and size.
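
As a quick arithmetic check (a sketch only, not the diagram the question asks for), the field widths follow directly from the stated parameters, assuming the usual tag / index / byte-offset split for a set-associative cache:

# Address breakdown for a 32 MB, 8-way set-associative cache
# with 4 KB blocks and 36-bit byte addresses.
from math import log2

cache_bytes = 32 * 2**20                               # 32 MB
block_bytes = 4 * 2**10                                # 4 KB
ways        = 8
addr_bits   = 36

offset_bits = int(log2(block_bytes))                   # 12
num_sets    = cache_bytes // (block_bytes * ways)      # 1024 sets
index_bits  = int(log2(num_sets))                      # 10
tag_bits    = addr_bits - index_bits - offset_bits     # 14

print(f"tag={tag_bits}  index={index_bits}  offset={offset_bits}")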

1.2 Suppose that your colleague modifies the cache organization so that you have 13 index bits. What is the organization of this new cache, assuming that no other cache parameters were altered? Be as specific as possible, and justify your work for full credit.
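
One way to check the arithmetic, assuming only the index width changed while capacity and block size stayed at 32 MB and 4 KB:

# 13 index bits with unchanged capacity and block size.
total_blocks = (32 * 2**20) // (4 * 2**10)   # 8192 blocks in the cache
num_sets     = 2**13                         # 13 index bits -> 8192 sets
ways         = total_blocks // num_sets      # 1 block per set
print(ways)                                  # 1, i.e. a direct-mapped organization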

1.3 Another colleague proposes a different modification to the 32MB cache. She raises the block size to 8K bytes and makes the cache fully associative and write-back. Draw a diagram of a typical cache slot or cache line associated with this configuration, being sure to label each field clearly with name and size.
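
A minimal sketch of the per-line field sizes, assuming the status bits are one valid bit and one dirty bit (the exact status fields expected may differ):

# Field sizes for one line of a fully associative, write-back cache
# with 8 KB blocks and 36-bit addresses.
block_bytes = 8 * 2**10
offset_bits = 13                      # log2(8 KB)
tag_bits    = 36 - offset_bits        # 23 (no index field when fully associative)
fields = {"valid": 1, "dirty": 1, "tag": tag_bits, "data (bits)": block_bytes * 8}
print(fields)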

1.4 Write an expression for the number of bits of overhead required to access information in the 32MB cache, as a function of t, the number of tag bits for a machine with 36 address bits. For this problem only, assume that the block size is 16K and that space is present for a dirty bit. To assist the grader, make sure you label all fields in the cache line that are considered part of the overhead. Note: since t is represented by a variable, you don’t have to know anything about the associativity of the cache to answer this question.
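
One way to organize the count, assuming the per-line overhead is the tag (t bits) plus one valid bit and the dirty bit, and that the data block itself is not counted as overhead:

# Overhead bits for the 32 MB cache with 16 KB blocks, as a function of t.
# Assumes per-line overhead = tag (t) + valid (1) + dirty (1).
def overhead_bits(t):
    num_lines = (32 * 2**20) // (16 * 2**10)   # 2048 lines
    return num_lines * (t + 2)                 # 2048 * (t + 2)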

1.5 We now move to the time associated with memory access. Write an expression for memory access time, given that the hit time (HT) is 2 clock cycles, the miss rate is 5% and the miss penalty is 800 clock cycles. This is all that you know about the behavior of the system.
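
For reference, the standard single-level model plugs in as follows (a sketch, assuming AMAT = hit time + miss rate * miss penalty):

# Single-level average memory access time.
HT, miss_rate, miss_penalty = 2, 0.05, 800
amat = HT + miss_rate * miss_penalty    # 2 + 0.05 * 800 = 42 clock cycles
print(amat)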

1.6-1.11 The next questions deal with a slightly more complex memory hierarchy having an L1 and an L2 cache, using a Harvard architecture model, which means that at least one level has separate data and instruction caches.

The performance parameters associated with a multi-level cache machine are given below. Note that only the top level cache is split into separate instruction and data memories. The miss penalties for each level account for the behavior of all deeper levels in the hierarchy. Please label your answer clearly to assist the grader.

L1 Access     L1 Hit Rate   L2 Hit Rate
Ins Fetch     0.90          0.80
Data Fetch    0.85          0.80

Cache Access   L2 Miss Penalty
Reads          75 CC
Writes         150 CC

1.8 Suppose that, on average, 30% of the instructions are data reads and 15% are data writes. Write an expression for the average memory access time if the hit time, HT1, is 5 CC for both instruction and data L1 caches, and the L2 hit time, HT2, is 15 CC.
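
One common way to set up the expression is sketched below, under two assumptions the problem does not state explicitly: every instruction performs exactly one instruction fetch, and instruction fetches and data reads incur the 75 CC read miss penalty while data writes incur the 150 CC write penalty.

# Sketch of the average memory access time for the split-L1 / unified-L2
# hierarchy in the table above, weighting fetches, reads, and writes by
# their relative frequencies (assumed: 1 fetch, 0.30 reads, 0.15 writes
# per instruction).
HT1, HT2 = 5, 15
l2_read_penalty, l2_write_penalty = 75, 150

def amat(l1_hit, l2_hit, l2_miss_penalty):
    return HT1 + (1 - l1_hit) * (HT2 + (1 - l2_hit) * l2_miss_penalty)

ifetch = amat(0.90, 0.80, l2_read_penalty)    # 8.0 CC
dread  = amat(0.85, 0.80, l2_read_penalty)    # 9.5 CC
dwrite = amat(0.85, 0.80, l2_write_penalty)   # 11.75 CC

accesses = 1 + 0.30 + 0.15                    # memory accesses per instruction
average  = (1 * ifetch + 0.30 * dread + 0.15 * dwrite) / accesses
print(round(average, 2))                      # roughly 8.7 CC per access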

1.9 Suppose that we add the assumption that a pipelined CPU is accessing memory. Is the estimate for the average memory access time in the previous problem overly optimistic, overly pessimistic, or just right? Please explain your answer for full credit.

1.10 Briefly explain why the hit rates for instructions and data differ in the L1 caches, but are identical in the L2 cache.

1.11 Why might the miss penalty associated with reads differ from that associated with writes? (One clearly explained answer will do here)

 
