Cache memory is the fastest memory in a computer and improves the processing speed of the central processing unit (CPU). The cache memory stores the instructions and data that are frequently used by the processor.
In this section, we will discuss various features of cache memory along with the types of cache memory. We will wind up the discussion with the advantages and disadvantages of cache memory over the other memories of the computer.
Content: Cache Memory in Computer Architecture
- What is Cache Memory?
- Types of Cache Memory
- Advantages and Disadvantages
- Key Takeaways
What is Cache Memory?
Cache memory can be defined as memory that is smaller, faster and costlier than both the main memory and the secondary memory of the computer. Mostly the cache memory is contained inside the processor on the same integrated chip, though sometimes it is external to the processor.
Whenever the processor searches for content and finds it in the cache memory, it is termed a cache hit; if the content is not found in the cache memory, it is termed a cache miss.
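The hit/miss behaviour above can be sketched in a few lines of Python. This is a minimal illustration, not real hardware: the dictionaries stand in for the cache and main memory, and the addresses and contents are made up.

```python
# Minimal sketch of a cache lookup: a hit returns the cached value,
# a miss falls back to (much slower) main memory and caches a copy.
# The dicts and address values here are illustrative, not real hardware.

main_memory = {0x1000: "LOAD R1", 0x1004: "ADD R1, R2"}
cache = {}

def read(address):
    if address in cache:          # cache hit: content found in the cache
        return cache[address], "hit"
    value = main_memory[address]  # cache miss: fetch from main memory
    cache[address] = value        # keep a copy for future accesses
    return value, "miss"

print(read(0x1000))  # first access: miss
print(read(0x1000))  # repeated access: hit
```

Note that the very first access to any address is always a miss; the benefit appears only on repeated accesses.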
The processor is most efficient when its processing speed matches the access speed of the memory it refers to. Secondary memory has a slow access speed compared to the processor clock speed, which is why we use main memory, i.e. RAM.
The main memory is costlier, smaller and faster than secondary memory. Still, it does not match the clock speed of the processor. This is the reason why cache memory was required: its access time is much shorter than that of main memory, and it improves the computing speed of the processor.
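The speed-up a cache buys can be quantified with the standard average memory access time formula, AMAT = hit time + miss rate × miss penalty. The cycle counts below are illustrative assumptions, not measurements of any real CPU:

```python
# Average memory access time (AMAT) = hit_time + miss_rate * miss_penalty.
# All timings here are illustrative, not taken from any real processor.

hit_time = 1        # cycles to read the cache
miss_penalty = 100  # extra cycles to fetch from main memory on a miss
miss_rate = 0.05    # 5% of accesses miss the cache

amat = hit_time + miss_rate * miss_penalty
print(amat)  # 6.0 cycles on average, far better than 100 cycles per access
```

Even with a modest hit rate, the average access time stays close to the cache's speed rather than main memory's.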
At the start of any program, consider the cache to be empty. If the processor wants to execute the program, it has to bring all the program instructions and their associated data into the main memory.
Now, when execution starts, the instructions are brought into the processor chip one by one as they are accessed, and a copy of each instruction is placed into the cache memory. Some instructions require data, so the processor brings the data from the main memory to the processor chip, and a copy of it is also placed into the cache memory.
It often happens that some set of instructions in the program executes repeatedly because of loops; in such cases, the instructions can be fetched directly from the cache memory. Even the data that is required frequently for the execution of the instructions can be fetched from the cache memory, which increases the processing speed and thereby the efficiency of the processor.
Cache memory works on the principle of locality, or what we call the locality of reference. According to this principle, a computer program spends most of its time executing the same set of instructions repeatedly.
Executing the same set of instructions causes the program to access the same memory locations repeatedly. This is the case when the executing code has loops, nested loops or procedures that call each other over and over again.
This repeated execution of instructions is exhibited in two ways: temporal and spatial. The temporal aspect expresses that an instruction that was recently executed is likely to be executed again soon. The spatial aspect expresses that instructions close to the recently executed instruction are likely to be executed very soon.
The temporal aspect of the locality of reference suggests that an instruction, or the data associated with it, that is needed for the first time must be brought into cache memory, as it may be required again soon. The spatial aspect suggests that instead of bringing only the one item that is currently needed, several items at adjacent addresses, i.e. a set of contiguous address locations (a cache block), must be brought into the cache memory.
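The spatial aspect can be sketched as follows: on a miss, the cache loads the whole block of contiguous addresses around the requested one, so a sequential scan (as in a loop over an array) misses only once per block. The block size and address layout are illustrative assumptions:

```python
# Sketch of spatial locality: on a miss, fetch the whole cache block
# (a run of contiguous addresses), not just the requested word.
# BLOCK_SIZE and the address layout are illustrative assumptions.

BLOCK_SIZE = 4  # words per cache block
main_memory = {addr: f"word{addr}" for addr in range(64)}
cache = {}
misses = 0

def read(address):
    global misses
    if address not in cache:
        misses += 1
        base = (address // BLOCK_SIZE) * BLOCK_SIZE  # start of the block
        for a in range(base, base + BLOCK_SIZE):     # bring in the whole block
            cache[a] = main_memory[a]
    return cache[address]

for addr in range(8):  # sequential accesses, as in a loop over an array
    read(addr)
print(misses)  # 2 misses for 8 accesses: each miss also loads 3 neighbours
```

Without block loading, the same scan would miss on every access; with it, only the first access in each block misses.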
The cache memory sits between the comparatively larger and slower main memory and the faster processor. The presence of cache memory makes the main memory appear faster than its real speed.
The size of cache memory must be small, as large caches take more time to address and hence tend to be slower. There is one more criterion concerning the size of the cache: it must be small enough that the overall average cost per bit of the memory system stays close to the cost of main memory alone.
Types of Cache Memory
When the cache was introduced, systems used to have a single cache memory. Later, the use of multiple cache memories became standard. Implementing multiple caches in a system raises two design concerns: multilevel caches, and unified versus split caches.
Multilevel Caches
Nowadays it is possible to have the cache on the same chip as the processor, and this kind of cache is termed an on-chip cache. The on-chip cache lets the processor execute even faster because of its proximity.
Whenever the processor requests data and it is found in the on-chip cache, there is no need to access the buses. Eliminating the bus access leaves the buses free to support other transfers.
You may be thinking: if the on-chip cache is more efficient, do we even need an external cache? Most modern designs include an on-chip as well as an external cache.
Let us start with a two-level cache, where L1 is the internal or on-chip cache and L2 is an external cache. If the data requested by the processor is not found in the L1 on-chip cache, it is checked in the L2 cache, which provides the missing information much more quickly than main memory would.
To improve the performance of the two-level cache we can implement the L2 cache in two ways:
- Instead of using the system data bus to access the L2 cache, a separate data path can be used, reducing the burden on the data buses.
- In modern computer designs, processors are getting more compact and have incorporated the L2 cache on the processor's chip, thereby making the L2 cache an on-chip cache.
Now, as the L2 cache is implemented as an on-chip cache, a new level, the L3 cache, is introduced. The L3 cache is accessed over the external data buses.
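The multilevel lookup can be sketched like this: check L1, then L2, then L3, and finally main memory, copying the value into the faster levels on the way back. The dicts and addresses are illustrative; real caches have fixed sizes and eviction policies.

```python
# Sketch of a three-level cache lookup. Each miss at a level falls
# through to the next, and a fetched value is copied into every level
# so the next access is an L1 hit. Purely illustrative.

main_memory = {0x42: "data"}
l1, l2, l3 = {}, {}, {}

def read(address):
    if address in l1:
        return l1[address], "L1 hit"
    if address in l2:
        l1[address] = l2[address]                # promote into L1
        return l1[address], "L2 hit"
    if address in l3:
        l1[address] = l2[address] = l3[address]  # promote into L1 and L2
        return l1[address], "L3 hit"
    value = main_memory[address]                 # all levels missed
    l1[address] = l2[address] = l3[address] = value
    return value, "miss"

print(read(0x42))  # ('data', 'miss'): first access goes to main memory
print(read(0x42))  # ('data', 'L1 hit'): now served by the fastest level
```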
Unified Versus Split Caches
When the on-chip cache was introduced, a single cache used to store both data and instructions. Nowadays the cache is split into two: one cache dedicated to instructions and one dedicated to data. Both of these caches are at the L1 level. The processor accesses data from the L1 data cache and instructions from the L1 instruction cache.
Though split caches help improve the performance of the processor, unified caches have their own benefits:
- A unified cache can have a higher hit rate than a split cache of the same total size, because it balances its contents automatically: if the processor is accessing more instructions than data, the unified cache tends to fill up with instructions, and if it is accessing more data, it fills up with data.
- Only one cache needs to be designed.
Nowadays the caches at the L1 level are implemented as split caches, and the caches at higher levels are implemented as unified caches.
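A split L1 can be sketched by routing each access to the cache that matches its kind; a unified cache would simply be a single dictionary shared by both kinds of access. The addresses and contents here are illustrative assumptions:

```python
# Sketch of split L1 caches: instruction fetches go to the instruction
# cache, data accesses go to the data cache. A unified cache would be
# one dict shared by both kinds of access. Purely illustrative.

main_memory = {0x100: "ADD R1, R2", 0x200: 99}
l1_instruction = {}
l1_data = {}

def access(address, kind):
    # pick the L1 cache that matches the kind of access
    cache = l1_instruction if kind == "instruction" else l1_data
    if address in cache:
        return cache[address], "hit"
    cache[address] = main_memory[address]  # miss: fill from main memory
    return cache[address], "miss"

print(access(0x100, "instruction"))  # miss: fills the instruction cache
print(access(0x200, "data"))         # miss: fills the data cache
print(access(0x100, "instruction"))  # hit: served from the instruction cache
```

One practical benefit of the split design is that an instruction fetch and a data access can proceed in parallel, since they use separate caches.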
Advantages and Disadvantages
- It is faster than main memory and secondary memory.
- The on-chip or internal cache eliminates access to the data buses, thereby leaving the buses free for other transfers.
- It can be split into an instruction cache and a data cache.
- It is an expensive memory.
- It is of very small size.
Key Takeaways
- Cache memory is smaller, faster and costlier than main memory.
- When the processor finds the content to be fetched in the cache memory, we call it a cache hit; if the content is not found in the cache memory, it is termed a cache miss.
- It can be implemented at multiple levels, such as L1, L2 and L3, where L1 is the on-chip cache, L2 may be an on-chip or external cache, and L3 is an external cache.
- The on-chip cache is faster than the external cache.
- The cache can be implemented as a unified cache that holds both data and instructions together, or it can be split into an instruction cache and a data cache.
So this is all about cache memory. We have learned about the cache from almost all perspectives: its working, purpose, size and principles, as well as its types.