IBM SA14-2339-04 User's Manual

Chapter 4. Cache Operations
The PPC405 core incorporates two internal cache units: an instruction cache unit (ICU) and a data
cache unit (DCU). If instruction and data cache arrays are implemented, instructions and data can be
accessed in the caches much faster than in main memory. The PPC405B3 core has a 16KB
instruction cache array and an 8KB data cache array.
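As an illustration of what these array sizes imply, the sketch below derives the cache geometry from the array size and line size. The 32-byte (8-word) line size follows from the request granularities described later in this chapter; the two-way set associativity is an assumption made here for the example, not a statement from this paragraph.

```python
# Hypothetical sketch: derive cache geometry from the array size.
# Line size of 32 bytes (8 words) matches the line granularity this
# chapter describes; the associativity (ways=2) is an assumption.
def cache_geometry(size_bytes, line_bytes=32, ways=2):
    lines = size_bytes // line_bytes   # total cache lines in the array
    sets = lines // ways               # lines are grouped into sets
    return {"lines": lines, "sets": sets, "ways": ways}

icache = cache_geometry(16 * 1024)   # 16KB instruction cache array
dcache = cache_geometry(8 * 1024)    # 8KB data cache array
print(icache)   # 512 lines in 256 sets
print(dcache)   # 256 lines in 128 sets
```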
The ICU controls instruction accesses to main memory and, if an instruction cache array is
implemented, stores frequently used instructions to reduce the overhead of instruction transfers
between the instruction pipeline and external memory. Using the instruction cache minimizes access
latency for frequently executed instructions.
The DCU controls data accesses to main memory and, if a data cache array is implemented, stores
frequently used data to reduce the overhead of data transfers between the GPRs and external
memory. Using the data cache minimizes access latency for frequently used data.
The ICU features:
• Programmable address pipelining and prefetching for cache misses and non-cachable lines
• Support for non-cachable hits from lines contained in the line fill buffer
• Programmable non-cachable requests to memory as 4 or 8 words (half line or full line)
• Bypass path for critical words
• Non-blocking cache for hits during fills
• Flash invalidate (one instruction invalidates entire cache)
• Programmable allocation for fetch fills, enabling program control of cache contents using the icbt
instruction
• Virtually indexed, physically tagged cache arrays
• A rich set of cache control instructions
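The "virtually indexed, physically tagged" organization means the set index is taken from the virtual address while the tag compared on lookup comes from the translated physical address. The sketch below shows one way such an address split can work; the set count and line size are assumed example values, not figures stated in this feature list.

```python
# Hypothetical sketch of splitting an address into tag/index/offset
# for a virtually indexed, physically tagged cache. LINE_BYTES and
# SETS are assumed example values (32-byte lines, 256 sets).
LINE_BYTES = 32
SETS = 256

def split_address(addr):
    offset = addr & (LINE_BYTES - 1)     # byte within the cache line
    index = (addr >> 5) & (SETS - 1)     # set selector (virtual bits)
    tag = addr >> 13                     # compared against the stored
                                         # (physical) tag after translation
    return tag, index, offset

tag, index, offset = split_address(0x12345)
print(tag, index, offset)   # 9 26 5
```

Because the index bits here lie within the page offset of a typical 4KB-or-larger page, the same bits are identical in the virtual and physical address, which is what lets the lookup begin before translation completes.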
The DCU features:
• Address pipelining for line fills
• Support for load hits from non-cachable and non-allocated lines contained in the line fill buffer
• Bypass path for critical words
• Non-blocking cache for hits during fills
• Write-back and write-through write strategies controlled by storage attributes
• Programmable non-cachable load requests to memory as lines or words
• Handling of up to two pending line flushes
• Holding of up to three stores before stalling the core pipeline
• Physically indexed, physically tagged cache arrays
• A rich set of cache control instructions
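The write-back and write-through strategies listed above differ in when a store reaches memory: write-through sends every store to memory immediately, while write-back marks the line dirty and defers the memory write until the line is flushed or replaced. The sketch below contrasts the two for a single cache line; the class and function names are hypothetical, chosen only for this illustration.

```python
# Hypothetical sketch contrasting write-through and write-back
# policies on a single cache line. Memory writes are counted to
# show how the two strategies differ in bus traffic.
class Line:
    def __init__(self):
        self.data = 0
        self.dirty = False

def store(line, value, policy, mem_writes):
    line.data = value
    if policy == "write-through":
        mem_writes.append(value)       # every store goes to memory
    else:                              # write-back: defer the write
        line.dirty = True

def flush(line, mem_writes):
    if line.dirty:
        mem_writes.append(line.data)   # one write for the whole line
        line.dirty = False

wt_writes, wb_writes = [], []
wt, wb = Line(), Line()
for v in (1, 2, 3):
    store(wt, v, "write-through", wt_writes)
    store(wb, v, "write-back", wb_writes)
flush(wb, wb_writes)
print(len(wt_writes), len(wb_writes))   # 3 memory writes vs. 1
```

This is why the manual ties the choice of strategy to storage attributes: write-through keeps memory coherent at the cost of more memory traffic, while write-back reduces traffic but requires flush handling such as the two pending line flushes the DCU supports.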