Patent 8438120

Obviousness

Combinations of prior art that suggest the claimed invention would have been obvious under 35 U.S.C. § 103.


Obviousness Analysis (35 U.S.C. § 103)

Under 35 U.S.C. § 103, an invention is unpatentable if the differences between the claimed invention and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art (PHOSITA). The analysis considers the scope and content of the prior art, the differences between the prior art and the claims at issue, and the level of ordinary skill in the pertinent art.

Definition of a Person Having Ordinary Skill in the Art (PHOSITA)

At the time of the invention (priority date April 25, 2007), a PHOSITA in the field of machine learning and computational optimization would have a Master's degree or equivalent experience in computer science, electrical engineering, or a related field. This individual would be familiar with fundamental machine learning concepts (e.g., classifiers, hyperparameters, training/testing) and various optimization techniques, including statistical methods and evolutionary algorithms like genetic algorithms. They would have practical experience implementing and tuning machine learning models and would read and understand academic publications in the field, such as proceedings from major conferences like ICML.

Primary Obviousness Combination: De Boer et al. in view of General Knowledge of "Elitism" in Evolutionary Computation

A strong case for obviousness can be made by combining the teachings of De Boer et al., "A Tutorial on the Cross-Entropy Method," with the widely-known principle of "elitism" from the field of evolutionary computation.

1. Scope of De Boer et al. (2005):
As established in the prior art analysis, De Boer et al. is a foundational text describing the Cross-Entropy (CE) method for optimization. It explicitly teaches an iterative process for finding optimal parameters:

  • Drawing a random sample of candidate solutions (vectors).
  • Evaluating the performance of each sample.
  • Selecting a subset of the best-performing ("elite") samples from the current iteration.
  • Updating the sampling parameters based on this elite subset to guide the next iteration's search.
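The iterative loop described above can be sketched in Python. This is an illustrative sketch only: the one-dimensional Gaussian sampler, the toy objective, and all parameter values are assumptions chosen for brevity, not details taken from De Boer et al.

```python
import random
import statistics

def cross_entropy_search(score, mu=0.0, sigma=5.0, n_samples=50,
                         elite_frac=0.2, n_iters=30):
    """Sketch of the standard CE loop: sample, evaluate, select the
    elite subset from the CURRENT iteration, refit the sampler."""
    n_elite = max(2, int(n_samples * elite_frac))
    for _ in range(n_iters):
        # Draw a random sample of candidate solutions.
        samples = [random.gauss(mu, sigma) for _ in range(n_samples)]
        # Evaluate each candidate; keep the best-performing subset.
        elite = sorted(samples, key=score, reverse=True)[:n_elite]
        # Update the sampling parameters from this elite subset only;
        # nothing from earlier iterations is retained.
        mu = statistics.mean(elite)
        sigma = statistics.stdev(elite) + 1e-6  # avoid collapse to zero
    return mu

random.seed(0)
# Toy objective with its maximum at x = 3.
best = cross_entropy_search(lambda x: -(x - 3.0) ** 2)
```

Note that the updated `mu` and `sigma` depend only on the current iteration's elite samples, which is the property the next section turns on.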

2. The Missing Element:
The key distinction in claim 1 of the '120 patent is the specific step of selecting and using the single hyperparameter vector that produced the best result across the present and any previous iterations. De Boer et al. teach updating based on a percentage of the best samples from the current generation, not preserving the single best-ever solution found throughout the entire history of the search. If the random sampling in a subsequent iteration does not reproduce the previous best solution or find a better one, the standard CE algorithm as described by De Boer et al. can "forget" the best-so-far solution.

3. Motivation to Combine with Elitism:
The strategy of preserving the best-performing solution across all generations is a well-known and fundamental concept in the related field of evolutionary and genetic algorithms, where it is known as "elitism." By 2007, elitism was a standard, textbook technique used to ensure that a stochastic search algorithm does not discard the best solution found so far.

A PHOSITA, tasked with applying the CE method from De Boer et al. to a difficult optimization problem like hyperparameter tuning, would have been motivated to incorporate elitism for a clear and predictable reason: to guarantee convergence and prevent the loss of a high-performing solution. Stochastic algorithms like CE do not guarantee that each successive generation of samples will be better than the last. A run of bad "luck" in the random sampling could cause the algorithm to move away from a promising area of the search space.

Incorporating an elitist strategy—simply storing the best-so-far vector and its performance score in memory and carrying it over to the next iteration if no better solution is found—is a simple, logical, and almost trivial modification to prevent this known issue. A PHOSITA would see this not as an inventive leap, but as the application of a standard optimization heuristic to improve the robustness of the CE algorithm. There would be a reasonable expectation of success, as elitism was a proven method for improving the performance of similar population-based search algorithms.
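The modification described above amounts to a few added lines in the CE loop. The sketch below is hypothetical and uses the same assumed one-dimensional Gaussian setup as before; only the marked lines differ from the standard algorithm.

```python
import random
import statistics

def elitist_ce_search(score, mu=0.0, sigma=5.0, n_samples=50,
                      elite_frac=0.2, n_iters=30):
    """CE loop with an elitist modification: store the single
    best-so-far candidate and its score, carried across ALL
    iterations, so an unlucky sample can never discard it."""
    n_elite = max(2, int(n_samples * elite_frac))
    best_x, best_score = None, float("-inf")  # elitism: best-ever memory
    for _ in range(n_iters):
        samples = [random.gauss(mu, sigma) for _ in range(n_samples)]
        elite = sorted(samples, key=score, reverse=True)[:n_elite]
        # Elitism: compare this iteration's champion against the
        # best-ever candidate and keep whichever scores higher.
        if score(elite[0]) > best_score:
            best_x, best_score = elite[0], score(elite[0])
        mu = statistics.mean(elite)
        sigma = statistics.stdev(elite) + 1e-6
    return best_x, best_score

random.seed(0)
best_x, best_score = elitist_ce_search(lambda x: -(x - 3.0) ** 2)
```

The delta from the standard loop is a stored `(best_x, best_score)` pair and one comparison per iteration, which is the sense in which the modification is "almost trivial."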

Conclusion:
The invention claimed in US 8,438,120 would have been obvious to a PHOSITA. De Boer et al. teach all elements of the iterative, sampling-based optimization method except for the preservation of the single best-so-far solution across all iterations. This missing element is supplied by the well-known principle of elitism from evolutionary computation. A PHOSITA would have been motivated to combine the two to create a more robust and reliable optimization algorithm, with a high expectation of success. Therefore, claim 1, and by extension its dependent claims along with claims 12, 13, and 14, are rendered obvious by this combination.

Generated 4/30/2026, 11:58:35 PM