Patent 11328206

Prior art

Earlier patents, publications, and products that may anticipate or render the claims unpatentable.


Analysis of Prior Art Cited for US Patent 11,328,206

This analysis details the prior art references cited by the USPTO examiner during the prosecution of US Patent 11,328,206. Each reference is assessed for its potential to anticipate the independent claims of the '206 patent under 35 U.S.C. § 102. The priority date for the '206 patent is June 16, 2016. All cited references predate this.


1. US Patent 9,984,270 B2: "Neural network-based processor and method of operation"

  • Full Citation: US 9,984,270 B2, "Neural network-based processor and method of operation," assigned to International Business Machines Corporation.
  • Publication Date: May 29, 2018 (Filed: July 1, 2015). The filing date precedes the '206 patent's priority date.
  • Brief Description: This patent describes a processor that uses a neural network to predict future instructions. The neural network is trained on instruction traces and can predict instruction types, addresses, and data values. The goal is to improve performance by pre-fetching and pre-executing instructions based on the neural network's predictions, thereby reducing pipeline stalls and improving resource utilization.
  • Potential Anticipation of Claims:
    • Claims 1 & 20: This reference is highly relevant to claims 1 and 20. It explicitly describes a method and a processor where a neural network receives processing data (instruction traces) to generate predictions about future processor operations. This aligns with the '206 patent's concept of using a DNN to receive "processing data" (such as instructions) and generate "predictions corresponding to a future state" to manage processor operations. The '270 patent's use of a neural network integrated within the processor to influence execution anticipates the core idea of a DNN-based control unit managing a datapath as described in claim 20.
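The prediction loop the '270 patent describes (a model trained on instruction traces that forecasts upcoming instructions so the front end can prefetch them) can be illustrated with a toy stand-in. Here a successor-frequency table replaces the neural network, and the opcodes and trace are hypothetical:

```python
from collections import Counter, defaultdict

class TracePredictor:
    """Toy stand-in for a trace-trained instruction predictor: learns
    next-instruction statistics from a trace and predicts the most
    likely successor, which a front end could speculatively prefetch."""

    def __init__(self):
        # previous opcode -> counts of the opcodes that followed it
        self.table = defaultdict(Counter)

    def train(self, trace):
        for prev, nxt in zip(trace, trace[1:]):
            self.table[prev][nxt] += 1

    def predict(self, opcode):
        followers = self.table.get(opcode)
        if not followers:
            return None  # no history: fall back to sequential fetch
        return followers.most_common(1)[0][0]

# Hypothetical instruction trace used as training data.
trace = ["load", "add", "store", "load", "add", "store", "load", "mul"]
p = TracePredictor()
p.train(trace)
print(p.predict("load"))   # "add" (seen twice after "load" vs. "mul" once)
print(p.predict("store"))  # "load"
```

A production predictor would learn from far richer features (addresses, data values, branch history), but the control flow is the same: observe the trace, update a learned model, and act on its prediction before the instruction is actually fetched.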

2. US Patent 10,984,336 B2: "Predicting a performance metric of a processor design"

  • Full Citation: US 10,984,336 B2, "Predicting a performance metric of a processor design," assigned to International Business Machines Corporation.
  • Publication Date: April 20, 2021 (Filed: March 31, 2016). The filing date precedes the '206 patent's priority date.
  • Brief Description: This patent discloses a method for predicting performance metrics (like power consumption or execution time) of a processor design without running full simulations. It uses a machine learning model, trained on data from previous processor designs and their performance, to predict the performance of a new design. The model takes microarchitectural parameters as input and outputs a predicted performance metric.
  • Potential Anticipation of Claims:
    • Claim 1: This reference touches upon elements of claim 1, particularly the use of a learned model to make predictions related to processor operations. It describes generating "predictions for use in generating control signals" and using outputs as "a set of design guidelines for creating a processor." However, it is focused on the design phase of a processor rather than the real-time operational management of a computing device using live sensor and processing data, which is central to the '206 patent. Therefore, it may not fully anticipate claim 1's real-time control aspects but is relevant to the design guideline output.
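The '336 patent's approach (a model trained on prior designs that predicts a performance metric from microarchitectural parameters, avoiding full simulation) reduces to a regression problem. A minimal sketch with ordinary least squares on a single illustrative parameter; the training points and units are invented for the example:

```python
def fit_linear(samples):
    """Ordinary least squares for y ~ a*x + b over (x, y) pairs; a
    minimal stand-in for the learned performance model."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical training data: (cache size in MB, measured power in W)
# gathered from previously simulated processor designs.
designs = [(1, 10.0), (2, 12.0), (4, 16.0), (8, 24.0)]
a, b = fit_linear(designs)

# Predict the metric for a new 6 MB design without running a simulation.
predicted_power = a * 6 + b
print(predicted_power)  # 20.0
```

The real model would take many microarchitectural parameters as input (pipeline depth, issue width, cache hierarchy) and could use any regressor; the point is that the prediction replaces a costly simulation run during design-space exploration.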

3. US Patent Application Publication 2015/0379440 A1: "System and method for workload-driven dynamic adaptation of a processing unit"

  • Full Citation: US 2015/0379440 A1, "System and method for workload-driven dynamic adaptation of a processing unit," assigned to Intel Corporation.
  • Publication Date: December 31, 2015 (Filed: June 30, 2014). This publication predates the '206 patent's priority date.
  • Brief Description: This application describes a system where a processing unit dynamically adapts its configuration based on the workload it is executing. It uses a machine learning model to analyze performance counters and other monitoring data (analogous to sensor and processing data) to identify the current workload type. Based on the identified workload, the system selects and applies an optimal hardware configuration (e.g., adjusting cache size, pipeline depth, or clock frequency) from a set of pre-defined configurations to improve performance or power efficiency.
  • Potential Anticipation of Claims:
    • Claims 1 & 14: This reference is highly relevant to claims 1 and 14. It discloses a system that receives "computing environment data" (performance counters), analyzes it with a machine learning model, and generates "control signals for managing one or more operations" (adapting the hardware configuration). The concept of learning from workload characteristics to enhance performance and efficiency is a core element shared with the '206 patent. It describes a system that optimizes operations based on outputs from a learned model, similar to the system claimed in claim 14.
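The '440 publication's control loop (classify the running workload from live performance counters, then apply one of a set of pre-defined hardware configurations) can be sketched as follows. The profiles, counter values, and configuration knobs are all illustrative, and a nearest-centroid match stands in for the machine learning model:

```python
PROFILES = {
    # workload label -> (cache miss rate, branch mispredict rate) centroid
    "memory_bound":  (0.30, 0.02),
    "compute_bound": (0.02, 0.01),
    "branchy":       (0.05, 0.15),
}

CONFIGS = {
    # pre-defined hardware configurations, one per workload type
    "memory_bound":  {"cache_kb": 2048, "freq_mhz": 1800},
    "compute_bound": {"cache_kb": 512,  "freq_mhz": 3200},
    "branchy":       {"cache_kb": 1024, "freq_mhz": 2400},
}

def classify(counters):
    """Nearest-centroid match of live counters to known workload profiles."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(counters, PROFILES[label]))
    return min(PROFILES, key=dist)

def adapt(counters):
    """Select the pre-defined configuration for the detected workload."""
    return CONFIGS[classify(counters)]

# High miss rate, few mispredicts: classified as memory-bound, so the
# system picks the larger-cache, lower-clock configuration.
print(adapt((0.28, 0.03)))
```

The published application describes richer monitoring inputs and configuration knobs (pipeline depth, cache sizing, clock frequency), but this is the shape of the claimed feedback loop: environment data in, learned classification, control signals out.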

4. Other Cited Non-Patent Literature

The file history also includes citations to academic papers, which are not detailed here but would have been used by the examiner to establish the state of the art regarding the use of machine learning and neural networks in computer architecture at the time of the invention. These papers often provide the foundational concepts that are later implemented in patented systems. For instance, research on using neural networks for branch prediction is a well-established field and would be relevant context for claim 20's focus on a DNN for processor control.
