Cache Coherence

Cache coherence ensures data consistency among the various memories in the system, i.e. the local cache of each processor and the main memory shared by the processors. It guarantees that every copy of a data block held in the processors' caches has a consistent value.

In this section, we will discuss the cache coherence problem and the protocols that resolve it.

Content: Cache Coherence in Computer Architecture

  1. Cache Coherence Problem
  2. Cache Coherence Protocols
  3. Key Takeaways

What is the Cache Coherence Problem?

In a multiprocessor environment, all the processors in the system share the main memory via a bus. Keeping a single common cache for all the processors would require a large cache, which would slow down the performance of the system.

For better performance, each processor has its own cache. Processors may share the same data block by keeping a copy of it in their caches. The figure below shows how processors P1, P3, and Pn each hold a copy of shared data block X of main memory in their caches.

Figure: Copies of shared data block X in the caches of P1, P3, and Pn

Suppose processor P1 modifies the copy of shared memory block X present in its cache. This results in data inconsistency: P1 now holds the modified copy of the block, say X1, while main memory and the other processors' caches still hold the old copy. This is the cache coherence problem.

The figure below shows the cache coherence problem in a multiprocessing environment.

Figure: The cache coherence problem

This cache coherence problem can be resolved using the protocols discussed below.

Cache Coherence Protocols

1. Write-Through Protocol

In the write-through protocol, when a processor modifies a data block in its cache, it immediately updates main memory with the new copy of that block. Main memory therefore always holds consistent data.

Figure: Write-through protocol
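To make the idea concrete, here is a minimal Python sketch of the write-through write path; the names main_memory, cache_p1, and write_through are made up for illustration and do not model real hardware. Every write updates the local cache copy and main memory at the same time.

main_memory = {"X": 0}           # shared main memory: block name -> value
cache_p1 = {}                    # private cache of processor P1

def write_through(cache, block, value):
    cache[block] = value         # update the local cache copy
    main_memory[block] = value   # immediately propagate the write to main memory

write_through(cache_p1, "X", 42)
print(cache_p1["X"], main_memory["X"])   # 42 42 -> main memory is never stale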

The write-through protocol has two versions:

  1. Updating the affected copies of shared data in the other processors' caches.
  2. Invalidating the affected copies of shared data in the other processors' caches.

Let us understand the first version where the affected copies of shared data in other caches are updated.

When a processor modifies a data block in its cache, it also updates main memory with the new copy of that block. However, the other processors in the system may also hold copies of the modified block, and those copies are now stale.

To update the copies in the other caches, the processor that modifies the data block broadcasts the modified data to all the processors in the system. On receiving the broadcast, each processor checks whether it has the same data block in its cache; if it does, it updates the contents of that block, otherwise it discards the broadcast data.

Now, let us see the second version, where the affected copies in the other processors' caches are invalidated.

Here too, when a processor modifies a data block in its cache, it updates main memory with the modified copy of that block. But in this case the modifying processor broadcasts an invalidation request to the other processors in the system, so that they invalidate the copies of the same data block in their caches.
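The following Python sketch contrasts the two versions; the Cache class and the write_through_broadcast helper are hypothetical names used only for illustration, not a real hardware interface. "update" refreshes the other copies with the new value, while "invalidate" makes the other caches drop their copies.

class Cache:
    def __init__(self, name):
        self.name = name
        self.lines = {}                      # block -> value

    def snoop(self, block, value, mode):
        if block in self.lines:              # only caches holding the block react
            if mode == "update":
                self.lines[block] = value    # version 1: refresh the stale copy
            else:
                del self.lines[block]        # version 2: invalidate the stale copy

def write_through_broadcast(writer, others, memory, block, value, mode):
    writer.lines[block] = value              # write to the local cache
    memory[block] = value                    # write-through to main memory
    for cache in others:                     # broadcast on the shared bus
        cache.snoop(block, value, mode)

memory = {"X": 0}
p1, p2, p3 = Cache("P1"), Cache("P2"), Cache("P3")
p2.lines["X"] = 0
p3.lines["X"] = 0                            # P2 and P3 already cached block X

write_through_broadcast(p1, [p2, p3], memory, "X", 7, mode="invalidate")
print(memory["X"], p2.lines, p3.lines)       # 7 {} {} -> the stale copies are gone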

2. Write-Back Protocol

This protocol permits a processor to modify a data block only after it acquires ownership of that block. Here is how this works.

Initially, main memory is the owner of all the data blocks, and it retains that ownership when a processor reads a data block and places a copy of it in its cache.

When a processor wants to modify a data block in its cache, it first has to become the exclusive owner of that block. To do so, it invalidates the copies of the block in the other caches by broadcasting an invalidation request to all processors. Once it is the exclusive owner, it can modify the data block.

Figure: Write-back protocol

If another processor wants to read this modified data block, it has to send a request to the current owner of that block.

The owner forwards the data to the requesting processor and to main memory. Main memory updates the contents of the modified block and reacquires ownership of it. Any subsequent request for this data block is then serviced by main memory.

If another processor in the system wishes to modify (write) the data block that has been modified, it sends a request to the current owner. The current owner transfers the data and ownership of the block to the requesting processor.

Now the requesting processor is the owner. It modifies the data block and also services other processors' requests for the block. Note that the modified block is not immediately written back to main memory; only the owner holds the up-to-date copy and is authorized to modify it.
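As a rough illustration of this ownership flow, here is a small Python sketch; the Block class and the request_ownership/read helpers are invented for the example and greatly simplify the real hardware. A processor must become the exclusive owner before writing, and memory is updated only when a read forces the owner to write the block back.

class Block:
    def __init__(self, value):
        self.value = value       # value held in main memory
        self.owner = "memory"    # main memory owns the block initially

def request_ownership(block, caches, requester):
    for name in list(caches):            # invalidate every other cached copy
        if name != requester:
            caches.pop(name)
    block.owner = requester              # requester is now the exclusive owner

def read(block, caches, requester):
    # a read is serviced by the current owner; ownership returns to memory
    if block.owner != "memory":
        block.value = caches[block.owner]    # owner writes the block back
        block.owner = "memory"
    caches[requester] = block.value

X = Block(0)
caches = {}                       # processor name -> cached value of block X
request_ownership(X, caches, "P1")
caches["P1"] = 99                 # P1 modifies its copy; memory still holds 0
read(X, caches, "P2")             # P2's read forces a write-back to memory
print(X.value, X.owner, caches)   # 99 memory {'P1': 99, 'P2': 99}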

3. Snoopy Protocol

In this multiprocessor environment, all the processors are connected to the memory modules via a single bus. Every transaction between a processor and a memory module, i.e. a read, write, or invalidation request for a data block, occurs over this bus.

If we attach a cache controller to every processor's cache in the system, it can snoop on all the transactions over the bus and take the appropriate action. The snoopy protocol is therefore a hardware solution to the cache coherence problem.

It is used in small multiprocessor systems, since large shared-memory multiprocessors are connected via an interconnection network rather than a single bus.

Figure: Snoopy cache protocol

Consider a scenario from the write-back protocol: a processor has just modified a data block in its cache and is the current owner of that block.

Now, if processor P1 wishes to modify that same data block, P1 broadcasts an invalidation request on the bus, becomes the owner of the block, and modifies it. The other processors that hold a copy of the same block snoop the bus and invalidate their copies (state I). Main memory is updated later using the write-back protocol.
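A toy Python sketch of bus snooping is shown below; the Bus and CacheController classes are illustrative only, and the write path skips the memory write-back for brevity. Every cache controller watches each transaction on the shared bus and invalidates its own copy when it sees an invalidation for a block it holds.

class Bus:
    def __init__(self):
        self.controllers = []

    def broadcast(self, sender, kind, block):
        for c in self.controllers:
            if c is not sender:
                c.snoop(kind, block)     # every other controller snoops the bus

class CacheController:
    def __init__(self, name, bus):
        self.name, self.bus, self.lines = name, bus, {}
        bus.controllers.append(self)

    def write(self, block, value):
        # announce the write so the other caches can invalidate their copies
        self.bus.broadcast(self, "invalidate", block)
        self.lines[block] = value        # this cache now owns the block

    def snoop(self, kind, block):
        if kind == "invalidate" and block in self.lines:
            del self.lines[block]        # drop the now-stale copy

bus = Bus()
p1, p2 = CacheController("P1", bus), CacheController("P2", bus)
p2.lines["X"] = 0            # P2 holds an old copy of X
p1.write("X", 5)             # P1's write is snooped by P2, which invalidates X
print(p1.lines, p2.lines)    # {'X': 5} {}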

4. Directory-Based Cache Coherence Protocol

The directory-based cache coherence protocol is a hardware solution to the cache coherence problem. It is implemented in large multiprocessor systems where the shared memory and the processors are connected using an interconnection network.

Figure: Directory-based protocol

A directory is implemented in each memory module of the multiprocessor system. The directory keeps a record, for each data block in that module, of which caches hold a copy of it and what actions have been taken on it. Due to their cost and complexity, directory-based cache coherence protocols are implemented only in large multiprocessor systems.
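The sketch below shows the key idea of a directory entry in Python; the directory dictionary and the dir_read/dir_write helpers are hypothetical names for illustration. The home memory module records which processors share each block, so invalidations are sent over the interconnection network only to the actual sharers instead of being broadcast to every processor.

directory = {"X": {"value": 0, "sharers": set()}}   # block -> state in its home module

def dir_read(block, processor):
    directory[block]["sharers"].add(processor)      # record the new sharer
    return directory[block]["value"]

def dir_write(block, processor, value):
    entry = directory[block]
    for sharer in entry["sharers"] - {processor}:
        print(f"send invalidate for {block} to {sharer}")   # targeted, not broadcast
    entry["sharers"] = {processor}                   # writer is now the only holder
    entry["value"] = value

dir_read("X", "P1")
dir_read("X", "P2")
dir_write("X", "P1", 3)      # only P2 receives an invalidation message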

5. MESI Protocol

MESI is a cache coherence protocol that ensures data consistency on a symmetric multiprocessor (SMP). The name MESI stands for the four states a data block can have in the caches of the processors of the multiprocessing system: Modified, Exclusive, Shared, and Invalid.

Let us discuss them one by one:

Modified (M): The data block in the cache has been modified, and the processor that modified it is the owner of that block. No other cache in the system holds a copy of this block.

The main memory copy of the same block does not yet contain the modified value. If the processor wants to modify the block again, it does not need to broadcast a request over the bus again.

Exclusive (E): When a processor wants to modify a data block in its cache, it broadcasts a request to invalidate the copies of the same block in the other caches.

The data block is now held only by the processor that wishes to modify it and by main memory. The processor is the exclusive owner of the block.

Shared (S): A data block in main memory is shared by several processors in the system, and all of them hold a valid copy of the block in their caches.

Invalid (I): The cache holds a data block whose contents are no longer valid. If the processor wants to read or write this block, it has to send a request to the current owner of the block.
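As a summary, here is a much-simplified Python sketch of the state transitions for a single cache line as seen by one cache; the event names and the transition table are illustrative and omit many details of the full MESI protocol.

MESI = {
    ("I", "local_read_others_have_copy"): "S",
    ("I", "local_read_no_other_copy"):    "E",
    ("I", "local_write"):                 "M",   # must first invalidate other copies
    ("E", "local_write"):                 "M",   # silent upgrade, no bus broadcast
    ("S", "local_write"):                 "M",   # broadcast an invalidation first
    ("M", "snooped_read"):                "S",   # supply the data, write back to memory
    ("M", "snooped_write"):               "I",
    ("S", "snooped_write"):               "I",
    ("E", "snooped_write"):               "I",
}

state = "I"
for event in ["local_read_no_other_copy", "local_write", "snooped_read"]:
    state = MESI[(state, event)]
    print(event, "->", state)      # I -> E -> M -> S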

Key Takeaways

  • Cache coherence ensures data consistency among all the memories in the system (the caches of the various processors and main memory).
  • Whenever a processor modifies a data block in its cache, the copies of the same block in the other caches and in memory are not automatically updated, so the other caches hold a stale copy of the block. This data inconsistency is the cache coherence problem.
  • We have protocols to maintain cache coherence in the system, such as the write-through, write-back, snoopy, directory-based, and MESI protocols.
  • Along with the protocols mentioned above, there are several other approaches to maintaining cache coherence in the system.

Cache coherence is thus one of the important properties a multiprocessor system must maintain; ignoring it leads to incorrect results.
