Patent 11841803
Obviousness
Combinations of prior art that suggest the claimed invention would have been obvious under 35 U.S.C. § 103.
Analysis of Obviousness for US Patent 11,841,803
An analysis of US Patent 11,841,803, titled "GPU chiplets using high bandwidth crosslinks," under 35 U.S.C. § 103 suggests that the claimed invention may be rendered obvious by a combination of prior art references. This analysis is predicated on the legal standard of whether a person having ordinary skill in the art (POSITA) at the time of the invention would have been motivated to combine existing technologies to arrive at the claimed subject matter with a reasonable expectation of success.
A thorough review of the patent's file history, specifically the prior art cited by the USPTO examiner during prosecution, is crucial for a definitive determination. However, based on publicly available information and the state of the art preceding the patent's priority date of June 28, 2019, a compelling case for obviousness can be constructed.
Summary of the Invention
US Patent 11,841,803 describes a system and method for a graphics processing unit (GPU) built from multiple smaller chips, or "chiplets." Key features of the invention include:
- A multi-chiplet GPU architecture: A GPU composed of an array of interconnected chiplets.
- A passive crosslink: A dedicated, passive interposer die for high-bandwidth communication between the GPU chiplets.
- A primary "host" chiplet: One GPU chiplet communicates directly with the central processing unit (CPU).
- Unified Cache Coherency: A last-level cache (LLC) that is coherent across all GPU chiplets, making the multi-chiplet array appear as a single, monolithic GPU to software.
- Dedicated PHY regions: Physical layer regions on the chiplets specifically designed for chiplet-to-chiplet communication.
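The claimed structure can be sketched as a simple software model. Everything below — the class names, the two-PHY default, and the wire-only crosslink behavior — is an illustrative assumption for exposition, not a detail taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class GpuChiplet:
    """One GPU chiplet in the array; is_primary marks the host chiplet
    that communicates directly with the CPU."""
    chiplet_id: int
    is_primary: bool = False
    phy_regions: int = 2  # dedicated chiplet-to-chiplet PHYs (assumed count)
    llc_slice: dict = field(default_factory=dict)  # this chiplet's share of the unified LLC

@dataclass
class PassiveCrosslink:
    """Passive interposer die: pure wiring with no active logic of its own."""
    connected: list = field(default_factory=list)

    def route(self, src: GpuChiplet, dst: GpuChiplet, payload):
        # A passive crosslink only carries signals; routing decisions live in
        # the chiplets' data fabric, so this models nothing more than a wire.
        assert src in self.connected and dst in self.connected
        return payload

def build_gpu(num_chiplets: int):
    """Assemble an array of chiplets on one shared passive crosslink,
    designating chiplet 0 as the primary (CPU-facing) chiplet."""
    chiplets = [GpuChiplet(i, is_primary=(i == 0)) for i in range(num_chiplets)]
    crosslink = PassiveCrosslink(connected=chiplets)
    return chiplets, crosslink
```

The key structural point the model captures is that the crosslink is passive: it connects every chiplet but makes no decisions, which is what distinguishes it from an active bridge die.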
Potential Obviousness Combinations
A person of ordinary skill in the art would likely have been motivated to combine teachings from prior art related to multi-chip modules (MCMs), passive interposer technologies, and existing GPU architectures with coherent memory systems.
Combination 1: A general-purpose multi-chip module patent combined with a patent on cache coherency in multi-processor systems.
Rationale: By 2019, the use of MCMs to create larger, more powerful processors from smaller, higher-yielding dies was a well-established concept in the semiconductor industry. Patents detailing the use of silicon interposers (a form of passive crosslink) to connect multiple dies were prevalent. For example, a reference teaching the assembly of multiple processing dies on a passive interposer for improved performance and yield would provide the foundational structure.
Motivation to Combine: A POSITA would be motivated to apply this MCM approach to GPUs to overcome the manufacturing yield and cost limitations of large monolithic GPU dies. As GPUs are inherently parallel processors, partitioning them into smaller, identical chiplets is a logical step. To make this partitioned GPU function as a single unit, a POSITA would naturally look to existing solutions for maintaining memory coherency in multi-processor systems. Prior art in the field of multi-core CPUs and server architectures extensively covers protocols and hardware for maintaining cache coherency across multiple processing units. Combining these two fields would be a predictable step to create a scalable and efficient multi-chiplet GPU.
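One well-established coherence mechanism from the multi-processor prior art is a directory that tracks, per cache line, which units hold a copy. The sketch below is a minimal directory-based scheme, not the protocol of any particular reference or of the '803 patent:

```python
class CoherenceDirectory:
    """Minimal directory-based cache coherence: for each cache line, track
    the set of chiplets holding a copy and which chiplet (if any) holds a
    modified (dirty) copy."""

    def __init__(self):
        self.sharers = {}  # line address -> set of chiplet ids with a copy
        self.owner = {}    # line address -> chiplet id holding a dirty copy

    def read(self, line, chiplet_id):
        """Record a shared copy; return the chiplet that must forward the
        dirty data, or None if memory can supply it."""
        source = self.owner.get(line)
        self.sharers.setdefault(line, set()).add(chiplet_id)
        return source

    def write(self, line, chiplet_id):
        """Take exclusive ownership; return the set of chiplets whose
        copies must be invalidated."""
        invalidated = self.sharers.get(line, set()) - {chiplet_id}
        self.sharers[line] = {chiplet_id}
        self.owner[line] = chiplet_id
        return invalidated
```

Extending such a directory across a high-bandwidth interposer is exactly the "predictable step" described above: the protocol is unchanged; only the physical distance between participants grows.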
Combination 2: A patent on 2.5D packaging with passive interposers and a publication detailing the architecture of a contemporary high-end GPU.
Rationale: 2.5D packaging, which involves placing multiple dies side-by-side on a silicon interposer, was a known technology for high-performance computing applications. Patents and publications from foundries and packaging companies would describe the physical implementation of such a system, including the use of through-silicon vias (TSVs) and micro-bumps for high-density interconnects, which are elements of the "passive crosslink" described in the '803 patent. High-end GPUs of the era already featured sophisticated memory hierarchies with multiple levels of cache and mechanisms for ensuring data consistency across their many processing cores.
Motivation to Combine: A POSITA, aware of the benefits of 2.5D packaging for high-bandwidth, low-latency communication, would be motivated to apply this technology to a GPU architecture. The goal would be to extend the existing on-chip memory system across multiple chiplets. The passive interposer provides the physical means for this extension. The challenge of maintaining a unified and coherent last-level cache across these chiplets is a direct and foreseeable problem that arises from this combination. The solution of extending the existing GPU's cache coherency protocols across the high-bandwidth passive interposer would be a straightforward engineering step for a skilled practitioner. The "scalable data fabric" described in the '803 patent likewise reflects a known concept for routing memory requests in such an environment.
Analysis of Claim Limitations
- Claim 1: This independent claim recites a system with a CPU coupled to a first GPU chiplet, which is in turn coupled to a second GPU chiplet via a "passive crosslink" for inter-chiplet communications. This fundamental structure would be rendered obvious by the combinations described above.
- Dependent Claims: Dependent claims that specify the passive crosslink as a "passive interposer die," the presence of "PHY regions," and a "unified cache memory" that is "coherent across all GPU chiplets" would also be obvious. The use of a passive interposer is inherent to 2.5D packaging. Dedicated PHYs are a standard requirement for any high-speed off-chip communication. The need for a coherent unified cache is a direct and necessary consequence of creating a multi-chiplet GPU that functions as a single logical unit, a problem for which solutions existed in the prior art.
- Method Claims: The method claims, which describe receiving a memory access request at a primary chiplet and routing it to a "caching GPU chiplet" via the passive crosslink, describe the standard operation of a distributed, coherent cache system. Once the hardware structure is deemed obvious, the method of its operation would also be considered obvious to a POSITA.
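The operation the method claims describe can be sketched in a few lines. The address-interleaved mapping of cache lines to chiplets below is one common way a distributed LLC assigns a "caching" chiplet to each address; it is an assumption for illustration, not the patent's specified mapping:

```python
NUM_CHIPLETS = 4   # assumed array size
LINE_BYTES = 64    # assumed cache-line size

def caching_chiplet_for(address: int) -> int:
    # Address-interleaved LLC: consecutive cache lines map to successive
    # chiplets, so each chiplet's LLC slice serves a share of the space.
    return (address // LINE_BYTES) % NUM_CHIPLETS

def handle_request(address: int, primary_id: int = 0):
    """The primary chiplet receives the memory access request, then either
    services it from its own LLC slice or forwards it over the passive
    crosslink to the chiplet whose slice caches that address."""
    target = caching_chiplet_for(address)
    if target == primary_id:
        return ("local", target)
    return ("crosslink", target)
```

Once the hardware is in place, this request flow is the only sensible way to operate it, which is the sense in which the method claims follow from the system claims.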
Conclusion
While a definitive conclusion of obviousness requires analysis of the specific prior art cited during the patent's examination, a strong prima facie case can be made that the claims of US Patent 11,841,803 would have been obvious to a person of ordinary skill in the art at the time of the invention. The motivation to combine known multi-chip module and interposer technologies with established principles of cache coherency from multi-processor architectures to create a scalable GPU is a logical and predictable progression of the state of the art. The claimed invention appears to be a successful implementation of this combination, but one that may not rise to the level of non-obviousness required for patentability.
Generated 5/13/2026, 12:13:00 AM