Patent 11841803

Prior art

Earlier patents, publications, and products that may anticipate or render the claims unpatentable.

Analysis of Prior Art for U.S. Patent 11,841,803

Patent in Question: US 11,841,803 B2

  • Title: GPU chiplets using high bandwidth crosslinks
  • Filing Date: June 28, 2019
  • Issue Date: December 12, 2023
  • Assignee: Advanced Micro Devices, Inc. (Original), Onesta IP LLC (Current)
  • Summary: This patent details a system and method for utilizing a multi-chiplet graphics processing unit (GPU) architecture. The core innovation lies in connecting multiple GPU "chiplets" using a passive, high-bandwidth crosslink, such as a silicon interposer. This arrangement allows the multiple chiplets to function, and to be addressed by the central processing unit (CPU), as a single, monolithic GPU. A key aspect is the maintenance of cache coherency across the last-level cache (LLC) of all chiplets, enabling seamless communication and data access between them. The design aims to overcome the manufacturing cost and yield limitations of large, monolithic GPU dies by breaking them into smaller, interconnected functional units.
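
The unified-GPU abstraction summarized above can be illustrated with a toy model: the CPU addresses a single logical GPU, and requests are routed internally to the chiplet whose memory backs each address. This is purely an illustrative sketch, not the patent's implementation; the chiplet count, interleave granularity, and all names are hypothetical assumptions.

```python
# Illustrative model (not from the patent): four GPU chiplets presented to the
# CPU as one logical GPU, with physical addresses interleaved across chiplets.

NUM_CHIPLETS = 4
INTERLEAVE_BYTES = 256  # hypothetical interleave granularity


def owning_chiplet(address: int) -> int:
    """Map a physical address to the chiplet whose memory backs it."""
    return (address // INTERLEAVE_BYTES) % NUM_CHIPLETS


class LogicalGPU:
    """The CPU sees one GPU; routing to chiplets happens internally."""

    def __init__(self) -> None:
        self.chiplet_memory = [dict() for _ in range(NUM_CHIPLETS)]

    def write(self, address: int, value: int) -> None:
        self.chiplet_memory[owning_chiplet(address)][address] = value

    def read(self, address: int) -> int:
        return self.chiplet_memory[owning_chiplet(address)][address]


gpu = LogicalGPU()
gpu.write(0x1000, 42)   # the CPU addresses one GPU...
print(gpu.read(0x1000))  # ...which chiplet served it is invisible to the CPU
```

The point of the sketch is the single-entity view: nothing in the CPU-facing interface exposes which chiplet holds the data, which is the monolithic appearance the '803 patent claims.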

Potentially Relevant Prior Art

The following patent documents are cited as references in US 11,841,803 and have been analyzed for their potential to anticipate the claims under 35 U.S.C. § 102.


1. US 10,475,147 B2

  • Full Citation: US Patent 10,475,147 B2, "Multiple GPU graphics processing system," Arm Limited.
  • Publication Date: November 12, 2019 (Filed: February 12, 2016)
  • Brief Description: This patent describes a graphics processing system with multiple GPUs. It focuses on how rendering tasks are distributed and managed across these GPUs. The system includes a mechanism for one GPU to access the memory of another GPU to retrieve data needed for its rendering tasks. This is facilitated by a communication interface between the GPUs.
  • Potential Anticipation of Claims:
    • Claims 1 & 7: This reference discloses a system with multiple GPUs communicably coupled. While it doesn't explicitly use the term "chiplet" or "passive crosslink," the described architecture of interconnected GPUs performing a unified task is conceptually similar. The nature of the communication interface would be critical in determining direct anticipation. If the interface functions as a dedicated bus for inter-GPU communication, it could be argued that it anticipates the "passive crosslink" element.
    • Claims 8 & 9: The '147 patent discusses memory access between GPUs, which implies a need for some level of memory coherence or a unified memory view. While it may not detail a coherent last-level cache across all units in the same manner as the '803 patent, the fundamental concept of inter-GPU memory access to present a unified system is present.

2. US 2019/0123022 A1

  • Full Citation: US Patent Application Publication 2019/0123022 A1, "3D Compute Circuit with High Density Z-Axis Interconnects," Xcelsis Corporation.
  • Publication Date: April 25, 2019 (Filed: October 7, 2016)
  • Brief Description: This patent application focuses on the physical structure of multi-chip modules, specifically using high-density vertical interconnects (through-silicon vias or TSVs) to stack and connect multiple semiconductor dies. This "3D" stacking allows for high-bandwidth communication between the dies.
  • Potential Anticipation of Claims:
    • Claims 2, 3, & 10: This reference is highly relevant to the physical implementation claims of the '803 patent. It describes the use of interposers and high-density interconnects for chip-to-chip communication, which aligns with the "passive interposer die" and "PHY region" with conductor structures for chiplet-to-chiplet communications. The '022 application's focus on the physical linkage is a direct parallel to the structural aspects of the '803 patent's claims.

3. US 2007/0273699 A1

  • Full Citation: US Patent Application Publication 2007/0273699 A1, "Multi-graphics processor system, graphics processor and data transfer method," Nobuo Sasaki.
  • Publication Date: November 29, 2007 (Filed: May 24, 2006)
  • Brief Description: This application describes a multi-GPU system where multiple graphics processors are connected to a shared memory controller. It details a method for transferring data between the graphics processors through this shared controller to execute parallel processing tasks.
  • Potential Anticipation of Claims:
    • Claims 1 & 7: This reference clearly discloses a system with multiple graphics processors working in concert. The communication between the processors, arbitrated by a shared memory controller, serves a similar function to the "passive crosslink" in the '803 patent, which is to facilitate inter-chiplet communication. The likely distinction is that a shared memory controller is an active, logic-bearing component, whereas the '803 patent characterizes its crosslink as passive.
    • Claims 11, 12, & 16: The method of data transfer described in this application, where one processor requests data that may be held in the memory space of another, mirrors the method claims of the '803 patent. The process of routing a memory access request to the appropriate GPU and returning the data is a core concept in both.

4. US 2001/0005873 A1

  • Full Citation: US Patent Application Publication 2001/0005873 A1, "Shared memory multiprocessor performing cache coherence control and node controller therefor," Hitachi, Ltd.
  • Publication Date: June 28, 2001 (Filed: December 24, 1999)
  • Brief Description: This early reference describes a multiprocessor system with a focus on maintaining cache coherency across the different processors. It details a node controller that manages requests for data and ensures that all processors have a consistent view of the shared memory.
  • Potential Anticipation of Claims:
    • Claims 8 & 9: This reference is highly relevant to the claims concerning cache coherency. It directly addresses the problem of maintaining a unified and coherent cache across multiple processing units. While it discusses general-purpose processors rather than specifically GPU chiplets, the underlying method for achieving cache coherency in a multi-processor system is fundamental to the novelty claimed in the '803 patent.
    • Claims 11, 13, 14, & 15: The method of handling memory access requests by determining the location of the cached data and routing the request accordingly is a key part of this Hitachi application. This process is analogous to the '803 patent's method of a primary chiplet determining a "caching GPU chiplet" and routing the request.
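
The request-routing flow discussed above — determining which unit caches a line, then forwarding the request there — can be sketched with a minimal directory-style model. This is a hedged illustration under assumed semantics, not the mechanism disclosed in either the Hitachi application or the '803 patent; the CoherentLLC class and its fields are hypothetical.

```python
# Illustrative sketch (not either document's implementation): a primary unit
# consults a directory to find which chiplet's last-level cache holds a line,
# then routes the read request there; on a miss it fills from memory.

class CoherentLLC:
    def __init__(self, num_chiplets: int) -> None:
        self.num_chiplets = num_chiplets
        self.directory = {}  # line address -> chiplet currently caching it

    def request(self, requester: int, address: int):
        """Return (serving chiplet, path taken) for a read request."""
        caching = self.directory.get(address)
        if caching is None:                      # LLC miss: fill from memory
            self.directory[address] = requester  # requester now caches the line
            return requester, "memory-fill"
        if caching == requester:
            return requester, "local-hit"
        return caching, "remote-hit"             # forward over the interconnect


llc = CoherentLLC(num_chiplets=4)
print(llc.request(0, 0x40))  # first access: filled from memory by chiplet 0
print(llc.request(1, 0x40))  # line held by chiplet 0: remote hit
print(llc.request(0, 0x40))  # local hit on chiplet 0
```

The directory lookup is the step both documents share in spirit: a request is not broadcast but routed to the one unit known to hold the data, which is what makes the analogy to the '803 patent's "caching GPU chiplet" determination plausible.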

Disclaimer: This analysis provides an initial assessment of potentially relevant prior art and is not a formal legal opinion on the validity of US Patent 11,841,803. A thorough invalidity search and legal analysis by a qualified patent attorney would be required for a definitive conclusion.

Generated 5/13/2026, 12:12:41 AM