On-chip cache design method with memory stack compilation and layout collaboration and on-chip cache

An on-chip cache and its design method, applied in static memory, computer-aided design, and CAD circuit design. The technology addresses problems such as excessively large timing margins, and achieves power-consumption optimization and area reduction.

Active Publication Date: 2022-04-15
NAT UNIV OF DEFENSE TECH

AI Technical Summary

Benefits of technology

This technology enables efficient use of on-chip memory by compiling each memory bank with knowledge of its position in the layout. Because the timing requirement for each bank is derived from its actual distance to the glue logic, a bank can be compiled to just meet the required speed rather than a worst-case target, improving power consumption and reducing area without compromising functionality or performance. The approach can be tailored to factors such as memory size and workload, making it suitable for a variety of computer systems.

Problems solved by technology

Existing on-chip cache design flows precompile the memory banks before placing them in the physical layout. Because compilation cannot account for a bank's eventual position, every bank must be compiled against the worst-case distance to the glue logic, leaving large timing margins for banks that end up close to it. These unnecessary margins translate into wasted power and area that a position-aware compilation flow could recover.



Embodiment Construction

[0052] In order to make the purpose, technical solution and advantages of the present application clearer, the present application will be further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application, and are not intended to limit the present application.

[0053] The traditional layout structure of an on-chip cache is shown in Figure 1. The main body is a storage array composed of memory banks; outside the storage array lies the glue logic, which performs a series of merging, selection, and registration operations or bus-protocol conversion, and finally outputs to the external unit or bus. Specifically, each memory bank may be an SRAM bank. Because the memory banks are large, their distance to the glue logic varies with position. As Figure 1 shows, the memory bank in the middle …
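The position-dependent distance described above can be illustrated with a toy sketch. The geometry here is an assumption for illustration, not taken from the patent: banks are placed on a grid and the glue logic is modeled as a single column at a fixed x coordinate, so each bank's wire distance to the glue logic depends on where the bank sits.

```python
def distance_to_glue(bank_xy, glue_x):
    """Horizontal wire distance (um) from a bank at (x, y) to a
    glue-logic column placed at x = glue_x (simplified 1-D model)."""
    return abs(glue_x - bank_xy[0])

# Eight banks in a 4 x 2 grid at 100 um pitch; glue logic at x = 400 um.
banks = [(x * 100, y * 100) for x in range(4) for y in range(2)]
dists = {b: distance_to_glue(b, glue_x=400) for b in banks}
# Banks in the leftmost column are 400 um from the glue logic,
# while banks in the rightmost column are only 100 um away.
```

A uniform compile target must assume the 400 um worst case, which is exactly the over-margining the patent aims to eliminate.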


Abstract

The invention relates to an on-chip cache design method with memory-stack compilation and layout collaboration, and to the resulting on-chip cache. The method comprises the following steps: perform pre-layout planning on the glue logic and mark the resulting region as the layout area; select a rendezvous point for the memory banks inside the layout area and use it as the reference point for calculating each bank's distance to the glue logic; for each memory bank, select the position in the non-layout area closest to the rendezvous point as that bank's layout position; calculate the timing requirement of the current memory bank at its layout position from the distance between that position and the rendezvous point; exhaustively compile the current memory bank to obtain a candidate set of compile configurations; and select from that set a configuration that meets the timing requirement. With this method, a memory bank's position information is taken into account while the bank is compiled, so the timing requirement for compilation can be formulated accurately, and a bank can be compiled whose speed meets the requirement at optimal power consumption.
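The selection step at the end of this flow can be sketched as follows. This is a minimal illustration under assumed names and numbers (`CompileConfig`, the delay/power values, and the linear wire-delay model are all hypothetical), not the patent's exact procedure: given a bank's layout position and the rendezvous point, derive the bank's timing budget, filter the exhaustively compiled candidate configurations by that budget, and pick the lowest-power survivor.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CompileConfig:
    """One candidate compile configuration for an SRAM bank,
    trading access delay against power (values illustrative)."""
    name: str
    access_delay_ns: float
    power_mw: float

def manhattan(p, q):
    """Wire distance between two layout points (x, y), in um."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def select_config(bank_pos, rendezvous, clock_period_ns,
                  wire_delay_ns_per_um, candidates):
    """Pick the lowest-power configuration whose access delay plus the
    wire delay from the bank's position to the rendezvous point still
    fits in the clock period."""
    wire_delay = manhattan(bank_pos, rendezvous) * wire_delay_ns_per_um
    budget = clock_period_ns - wire_delay  # per-bank timing requirement
    feasible = [c for c in candidates if c.access_delay_ns <= budget]
    if not feasible:
        raise ValueError("no compile configuration meets timing here")
    return min(feasible, key=lambda c: c.power_mw)
```

Under this model, a bank near the rendezvous point gets a generous budget and can take a slow, low-power configuration, while a distant bank is forced into a faster, higher-power one; only the banks that need speed pay for it.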


