Patent 11328206

Obviousness

Combinations of prior art that suggest the claimed invention would have been obvious under 35 U.S.C. § 103.



Obviousness Analysis of US Patent 11,328,206

This analysis evaluates the obviousness of the independent claims of US Patent 11,328,206 (the '206 patent) under 35 U.S.C. § 103. The core of this inquiry is whether the differences between the claimed invention and the prior art would have been obvious to a "person having ordinary skill in the art" (PHOSITA) before the effective filing date of the claimed invention. This requires not only finding the claimed elements in the prior art but also establishing a clear reason, or "motivation to combine," for uniting those elements.

A PHOSITA in this technical domain (computer architecture and machine learning) around the 2016 priority date would be a computer engineer or scientist with graduate-level education and experience in processor design, performance analysis, and the application of machine learning models to system optimization. Such a person is presumed to have knowledge of all relevant public prior art.

The primary prior art references considered are:

  • US 9,984,270 B2 ("'270 patent"): Teaches a processor with an integrated neural network that predicts future instructions from instruction traces to pre-fetch and pre-execute them.
  • US 2015/0379440 A1 ("'440 publication"): Discloses a system that uses a machine learning model to analyze performance counters, identify the current workload, and dynamically adapt hardware configurations (like clock speed or cache size) to optimize performance or power usage.
  • US 10,984,336 B2 ("'336 patent"): Describes using a machine learning model during the processor design phase to predict performance metrics.

Analysis of Independent Claim 1 (Method for Managing Operations)

Claim 1 outlines a method for managing a computing device using a deep neural network (DNN). The DNN receives inputs like sensor data and processing data to generate outputs such as control signals, predictions, and warnings to improve performance, efficiency, or security.

Obviousness Argument: Claim 1 is arguably obvious over the combination of the '440 publication and the '270 patent.

  • Mapping Claim Elements to Prior Art:

    • The '440 publication establishes the foundational method. It teaches using a machine learning model to receive "computing environment data" (performance counters, which are a form of sensor and processing data) and, in response, generate "control signals for managing one or more operations" (adapting hardware configurations) to enhance performance and efficiency. This directly addresses the core of Claim 1.
    • The '270 patent introduces the key element of prediction. It explicitly teaches using a neural network to generate "predictions corresponding to a future state" (predicting upcoming instructions). The '206 patent's inclusion of "DNN data" as an input is a predictable feature of advanced machine learning systems, where model outputs are often fed back as inputs for subsequent analyses.
  • Motivation to Combine: A PHOSITA would have been motivated to combine these teachings to create a more proactive and intelligent system. The '440 publication provides a reactive system that adapts to the current state. The '270 patent demonstrates the feasibility of a proactive system that anticipates future states. A skilled artisan would recognize that the adaptive system of '440 could be significantly improved by incorporating the predictive capabilities of '270. This would allow the system to anticipate future resource needs or bottlenecks and adjust hardware pre-emptively, rather than just reacting. This combination would be a logical step to solve a known problem: improving processor efficiency. The result would be a system that yields predictable results—enhanced performance and efficiency—by combining known elements.
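The reactive-plus-predictive control loop described above can be sketched in a few lines. This is a minimal illustration, not code from either reference: the counter fields, the linear extrapolation standing in for a trained neural network, and the frequency thresholds are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class PerfCounters:
    """Snapshot of the kind of performance counters the '440 publication reads."""
    ipc: float              # instructions retired per cycle
    cache_miss_rate: float  # fraction of accesses that miss

def predict_next_load(history: list[PerfCounters]) -> float:
    """Stand-in for a '270-style predictor: extrapolate upcoming demand
    from recent counter history (a real system would use a trained NN)."""
    if len(history) < 2:
        return history[-1].ipc
    # Simple linear extrapolation of IPC as a proxy for future demand.
    return history[-1].ipc + (history[-1].ipc - history[-2].ipc)

def control_signal(predicted_ipc: float) -> str:
    """Proactive policy: adjust the clock *before* the anticipated demand
    arrives, rather than after the counters already show a stall."""
    if predicted_ipc > 1.5:
        return "raise_frequency"
    if predicted_ipc < 0.5:
        return "lower_frequency"
    return "hold"

history = [PerfCounters(0.8, 0.02), PerfCounters(1.2, 0.03)]
print(control_signal(predict_next_load(history)))  # 1.2 + 0.4 = 1.6 -> raise_frequency
```

The point of the sketch is the division of labor the combination argument relies on: '440 supplies the counters-in, configuration-out loop, while '270 supplies the predictive step inserted between them.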


Analysis of Independent Claim 14 (System for Optimizing Operations)

Claim 14 describes a physical system, including a computing device and a DNN, that performs the method of Claim 1. A key feature is the system's ability to provide a "warning signal" based on predicted unexpected behavior.

Obviousness Argument: Claim 14 is arguably obvious over the combination of the '440 publication and the '270 patent, with the concept of a warning signal being an obvious addition.

  • Mapping Claim Elements to Prior Art:

    • The system architecture is taught by the combination of '440 (a system that adapts a processor based on a learned model) and '270 (a processor with an integrated neural network).
    • The generation of a "warning signal" for unexpected behavior (faults, exceptions) is a logical extension of a system that learns normal operational patterns. The '440 system learns workload characteristics to optimize performance. A PHOSITA would understand that a model trained on normal behavior could inherently be used for anomaly detection. Flagging a significant deviation from the learned norm as a potential fault or security threat would be a common-sense application of machine learning principles to enhance system reliability.
  • Motivation to Combine: The motivation is the same as for Claim 1—to enhance the reactive system of '440 with the predictive power shown in '270. The further motivation to add a warning signal stems from the desire for a more robust and secure computing system. Anomaly detection was a known problem in the field, and applying a model already monitoring system behavior to this task would have been a straightforward and predictable solution for a skilled artisan.
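The anomaly-detection reading of the "warning signal" limitation can be sketched as follows. The counter chosen, the mean/standard-deviation model of "normal," and the three-sigma threshold are hypothetical choices for illustration; they are not drawn from the '206 patent or the cited references.

```python
import statistics

def learn_normal(samples: list[float]) -> tuple[float, float]:
    """Training phase: capture the mean and spread of a counter
    (e.g. cache miss rate) observed under known-good workloads."""
    return statistics.mean(samples), statistics.stdev(samples)

def warning_signal(observed: float, mean: float, stdev: float,
                   k: float = 3.0) -> bool:
    """Flag a reading more than k standard deviations from the learned
    norm as potentially faulty or malicious behavior."""
    return abs(observed - mean) > k * stdev

mean, stdev = learn_normal([0.020, 0.022, 0.019, 0.021, 0.018])
print(warning_signal(0.021, mean, stdev))  # within the learned norm -> False
print(warning_signal(0.150, mean, stdev))  # far outside the norm   -> True
```

This is the sense in which the warning signal is argued to be a "common-sense application": the same statistics a '440-style optimizer already learns for tuning double as a baseline against which deviations can be flagged.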


Analysis of Independent Claim 20 (Processor with DNN Control Unit)

Claim 20 specifies a processor where the control unit itself includes a DNN. This DNN is trained on workloads, takes in real-time operational data, and outputs control signals to command the processor's datapath.

Obviousness Argument: Claim 20 is arguably obvious over the '270 patent alone or in view of the '440 publication.

  • Mapping Claim Elements to Prior Art:

    • The '270 patent is highly relevant as it describes a "neural network-based processor" where the neural network is integrated into the processor's control logic. It receives "instruction traces" (data related to processor operation) and its predictions are used to manage the execution pipeline (commanding the datapath). This directly maps to the core architecture of Claim 20.
    • The '440 publication further teaches using machine learning outputs to control a wider range of processor functions, such as power management (voltage/current adjustments), which are also managed by the control unit.
  • Motivation to Combine/Extend: The '270 patent already establishes the principle of embedding a neural network into a processor's control flow for a specific task (instruction prediction). A PHOSITA, seeing the success of this approach, would be motivated to generalize it. If a neural network can optimize one aspect of the datapath, it would be an obvious step to explore its use for other control functions traditionally handled by heuristic-based logic, such as branch prediction, cache pre-fetching, and dynamic power management (as suggested by '440). The motivation would be to create a more holistic, adaptive, and efficient control unit that can learn from complex workload patterns, a clear design incentive in the field of computer architecture. This represents a predictable evolution from a specialized neural network application to a more generalized DNN-based control unit as claimed.
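The step from heuristic control logic to learned control logic has a well-known concrete instance in computer architecture: the perceptron branch predictor, in which a tiny learned model inside the control unit replaces table-based heuristics. The sketch below shows the standard perceptron-predictor predict/train rule; the specific weights, history length, and threshold are illustrative and not taken from any of the patents discussed.

```python
def perceptron_predict(weights: list[int], history: list[int]) -> tuple[bool, int]:
    """Predict taken/not-taken as the sign of a dot product of learned
    weights with the global branch history (+1 = taken, -1 = not taken)."""
    y = weights[0] + sum(w * h for w, h in zip(weights[1:], history))
    return y >= 0, y

def perceptron_train(weights: list[int], history: list[int],
                     taken: bool, y: int, theta: int = 4) -> None:
    """Update weights on a misprediction or a low-confidence (|y| <= theta)
    prediction -- the standard perceptron-predictor training rule."""
    t = 1 if taken else -1
    if (y >= 0) != taken or abs(y) <= theta:
        weights[0] += t
        for i, h in enumerate(history):
            weights[i + 1] += t * h

# A branch that is always taken under a fixed history: the unit learns it.
weights, history = [0, 0, 0, 0], [1, 1, 1]
for _ in range(5):
    pred, y = perceptron_predict(weights, history)
    perceptron_train(weights, history, taken=True, y=y)
pred, y = perceptron_predict(weights, history)
print(pred, y)  # -> True 8
```

Generalizing from one such learned unit (branch prediction) to a DNN handling several control functions at once is the "predictable evolution" the obviousness argument for Claim 20 rests on.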

Generated 4/30/2026, 8:34:06 PM