Patent 12,236,456
Derivative works
Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.
Defensive Disclosure and Prior Art Generation
Regarding U.S. Patent 12,236,456: "System and method for delivering targeted advertisements and/or providing natural language processing based on advertisements"
Publication Date: April 28, 2026
Author: Senior Patent Strategist and Research Engineer
Purpose: This document discloses a series of technical implementations, variations, and applications derived from the core inventive concepts of U.S. Patent 12,236,456. The intent of this publication is to place these derivative concepts into the public domain, thereby establishing them as prior art for the purposes of patentability analysis under 35 U.S.C. §§ 102 and 103.
Section 1: Component and Data Structure Substitutions
1.1 Neuromorphic Processor for Conversational Language Interpretation
Enabling Description: The method of claim 1 is implemented using a specialized neuromorphic processor, such as the Intel Loihi 2 or a comparable spiking neural network (SNN) hardware accelerator, to perform the functions of the Conversational Language Processor (120) and the Context Determination Module (130). The user-specific profile, built from tracked purchase opportunity interactions, is encoded as a set of synaptic weights and neuronal firing thresholds within the SNN. When a subsequent utterance is processed, the pre-existing state of the SNN (the profile) directly influences the pattern of spike propagation, thus altering the interpretation of the new utterance in an energy-efficient, parallelized manner. The tracked interaction (e.g., accepting a purchase) sends a training signal that adjusts the synaptic plasticity of the SNN via a spike-timing-dependent plasticity (STDP) learning rule.
Mermaid Diagram:
```mermaid
graph TD
    A[User Utterance 1] --> B{ASR Engine}
    B --> C[Recognized Text]
    C --> D["SNN Processor (Loihi 2)"]
    subgraph P["SNN State (User Profile)"]
        D -- Reads Current State --> D
    end
    D --> E[Context & Purchase Opportunity]
    E --> F[Deliver to User Device]
    F --> G{User Interaction}
    G -- STDP Learning Signal --> H(Update Synaptic Weights in SNN)
    A2[User Utterance 2] --> B
    B --> B2[Recognized Text 2]
    B2 --> D
    D -- Uses Updated Weights for Interpretation --> I[Interpreted Intent 2]
```
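The reward-modulated update described above can be sketched in plain Python. This is a toy stand-in, not the Loihi SDK: the `weights` dict plays the role of the synaptic weights (i.e., the user-specific profile), `interpret` scores an utterance by summed weight of its active tokens, and `stdp_update` is a simplified reward-gated potentiation rule rather than true spike-timing-dependent plasticity. All function and token names are hypothetical.

```python
# Toy reward-modulated weight update standing in for on-chip STDP.
# Tokens that were active when a purchase opportunity was accepted are
# potentiated, so the same tokens score higher on the next utterance.

LEARNING_RATE = 0.2

def interpret(utterance_tokens, weights):
    """Score an utterance by the summed synaptic weight of its active tokens."""
    return sum(weights.get(tok, 0.0) for tok in utterance_tokens)

def stdp_update(active_tokens, accepted, weights):
    """Potentiate (or depress) synapses for tokens active at interaction time."""
    delta = LEARNING_RATE if accepted else -LEARNING_RATE
    for tok in active_tokens:
        weights[tok] = weights.get(tok, 0.0) + delta
    return weights

weights = {"buy": 0.1, "coffee": 0.1}          # profile encoded as synapses
before = interpret(["buy", "coffee"], weights)
stdp_update(["buy", "coffee"], accepted=True, weights=weights)
after = interpret(["buy", "coffee"], weights)  # same utterance, higher score
```

The key property mirrored here is that the learning signal modifies the same state that interpretation reads, so personalization is immediate.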
1.2 Federated Learning Architecture for Profile Management
Enabling Description: The centralized user-specific profile is replaced with a federated learning (FL) architecture. The user's electronic device maintains a local Natural Language Understanding (NLU) model. After a user interacts with a purchase opportunity, the device does not transmit the raw interaction data. Instead, it computes a model update (e.g., a gradient vector) based on that interaction. This update, which represents the user's revealed preference, is encrypted and sent to a central server. The server aggregates updates from many users to create an improved global NLU model, which is then distributed back to the user devices. The critical interpretation of the next utterance uses the locally updated model before it is even sent for aggregation, ensuring immediate personalization without compromising raw data privacy.
Mermaid Diagram:
```mermaid
sequenceDiagram
    participant UserDevice
    participant FLS_Server
    UserDevice->>UserDevice: Receives Utterance, Interprets with Local_Model_V1
    UserDevice->>FLS_Server: Request Purchase Opportunity
    FLS_Server-->>UserDevice: Delivers Opportunity
    UserDevice->>UserDevice: User Interacts with Opportunity
    UserDevice->>UserDevice: Compute Gradient_Update based on interaction
    UserDevice->>UserDevice: Local_Model_V2 = Local_Model_V1 + Gradient_Update
    UserDevice->>FLS_Server: Send Encrypted Gradient_Update
    UserDevice->>UserDevice: Receives Utterance 2, Interprets with Local_Model_V2
    FLS_Server->>FLS_Server: Aggregate updates from many users
    FLS_Server-->>UserDevice: Distribute updated Global_Model_V2
```
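The on-device half of this loop can be sketched as follows. This is a minimal illustration under stated assumptions, not a real FL client: the "model" is a dict of per-category weights, the "gradient" is a hand-built delta, and encryption and aggregation are out of scope. The point illustrated is the data boundary: the raw interaction stays local, only the delta leaves the device, and the local model is updated before any round trip.

```python
# Minimal federated-update sketch: the raw interaction never leaves the
# device; only a gradient-like delta does, and the local model is updated
# immediately so the *next* utterance is interpreted with the new weights.

def compute_update(local_model, interaction):
    """Derive a model delta from one tracked interaction (raw data stays local)."""
    feature = interaction["category"]
    signal = 1.0 if interaction["accepted"] else -1.0
    return {feature: 0.1 * signal}

def apply_update(model, update):
    """Local_Model_V2 = Local_Model_V1 + Gradient_Update."""
    for key, value in update.items():
        model[key] = model.get(key, 0.0) + value
    return model

local_model = {"automotive": 0.0}
interaction = {"category": "automotive", "accepted": True,
               "raw_text": "bought brake pads"}   # never transmitted

update = compute_update(local_model, interaction)  # the only payload sent upstream
apply_update(local_model, update)                  # used for utterance 2 immediately
```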
Section 2: Operational Parameter and Scale Expansion
2.1 Industrial Control System for Automated Resupply
Enabling Description: The disclosed system is adapted for command and control of autonomous ground vehicles (AGVs) and robotic arms in a large-scale manufacturing facility. An array of noise-canceling microphones captures voice commands from floor managers. An utterance like, "Robot 7, inspect the A-pillar welding," is processed. The system, noting from sensor data that Robot 7's welding tip is near its operational limit, presents a "purchase opportunity" on the manager's tablet: "Robot 7 requires a new ER5356 welding tip within 5 hours. Order from Supplier A (2-hr delivery) or Supplier B (4-hr delivery)?" The manager's tapped selection is tracked. This interaction updates the manager's user-specific profile, so a subsequent ambiguous command like "Get that part ordered for the next robot" is interpreted by the NLU as specifically "Order an ER5356 welding tip from Supplier A."
Mermaid Diagram:
```mermaid
flowchart LR
    subgraph Factory Floor
        A["Manager Utterance: 'Inspect welding'"] -- Captured by --> B(Microphone Array)
    end
    subgraph Control Server
        B --> C{ASR}
        C --> D[NLU & Context Engine]
        D -- Fuses with --> E[SCADA/Robot Sensor Data]
        E --> F{Identify Resupply Need}
        F --> G[Generate Resupply Opportunity]
    end
    subgraph Manager Tablet
        G --> H("Present Options: Supplier A vs B")
        H -- User Selection --> I{Track Interaction}
    end
    I --> J("Update Manager's NLU Profile")
    K["Manager Utterance 2: 'Order that part'"] --> C
    D -- Uses Profile J to Disambiguate --> L["Execute Action: Order ER5356 from Supplier A"]
```
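The disambiguation step at the heart of this scenario can be sketched as a lookup against the tracked profile. This is an illustrative toy, not the described NLU engine: `track_interaction` and `interpret` are hypothetical names, and the string match stands in for real intent resolution. The entity values (ER5356 welding tip, Supplier A) come from the scenario above.

```python
# Sketch of profile-driven disambiguation: the manager's tapped selection is
# stored in the profile, then used to resolve a later ambiguous command.

def track_interaction(profile, part, supplier):
    """Record the selection made on the resupply opportunity."""
    profile["last_part"] = part
    profile["preferred_supplier"] = supplier
    return profile

def interpret(command, profile):
    """Resolve an ambiguous 'that part' reference against the stored profile."""
    if "that part" in command.lower() and "last_part" in profile:
        return {"action": "order",
                "part": profile["last_part"],
                "supplier": profile["preferred_supplier"]}
    return {"action": "clarify"}

profile = track_interaction({}, part="ER5356 welding tip", supplier="Supplier A")
intent = interpret("Get that part ordered for the next robot", profile)
```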
2.2 Voice-Assisted Control of Laboratory Nanofabrication
Enabling Description: The system is scaled down to operate in a nano-engineering laboratory for controlling atomic force microscopes (AFMs) or nano-assemblers. A researcher's voice command, "Begin assembly of the carbon lattice," is interpreted. The system cross-references the requested protocol with a database of available molecular components. It presents a "purchase opportunity" (a design choice) on a monitor: "Warning: Graphene source purity is 98.7%. Proceed, or use CNT source (99.9% purity, +12% cost)?" The researcher's voiced response, "Use the CNTs," is tracked. This choice updates the researcher's NLU profile. Later, when the researcher says, "Run the standard purification cycle," the system's NLU, informed by the profile, interprets "standard" to mean the higher-purity protocol associated with carbon nanotubes, not the default graphene protocol.
Mermaid Diagram:
```mermaid
graph TD
    A["Researcher: 'Begin assembly'"] --> B{Voice Interface}
    B --> C[NLU Processor]
    C -- Checks --> D[Component Database]
    D --> E{Purity/Cost Conflict Identified}
    E --> F["Display Choice: Graphene vs CNT"]
    F -- "Researcher: 'Use the CNTs'" --> G{Track Choice}
    G --> H["Update Researcher Profile: Prefers Purity"]
    I["Researcher: 'Run standard purification'"] --> B
    C -- Uses Profile H --> J["Interpret 'Standard' as CNT-specific Protocol"]
    J --> K[Execute AFM/Nano-assembler Commands]
```
Section 3: Cross-Domain Applications
3.1 Aerospace: Adaptive In-Cockpit Flight Assistant
Enabling Description: The system is integrated into a modern glass cockpit as a pilot's voice assistant. During flight, the pilot issues an utterance: "What's the weather like at KDEN?" The system retrieves and displays the weather. Concurrently, it analyzes the data and presents a "purchase opportunity" (a proactive safety choice): "Moderate turbulence is reported over the front range. Suggest vectoring 15 degrees south. Acknowledge?" The pilot's verbal confirmation, "Acknowledge, vector south," is tracked. This interaction updates the pilot's profile to indicate a preference for turbulence avoidance. On a subsequent flight, if the pilot says, "Plan our descent," the NLU model, now biased by the updated profile, will automatically interpret this command as "Plan our descent while prioritizing turbulence avoidance," and it will query for and favor routes with smoother air, even if slightly less fuel-efficient.
Mermaid Diagram:
```mermaid
sequenceDiagram
    participant Pilot
    participant Cockpit_VUI
    participant FMS as Flight Management System
    Pilot->>Cockpit_VUI: "Weather at KDEN?"
    Cockpit_VUI->>FMS: Request Weather Data
    FMS-->>Cockpit_VUI: Weather Data (contains turbulence)
    Cockpit_VUI->>Cockpit_VUI: Analyze, Generate Avoidance Choice
    Cockpit_VUI->>Pilot: Display/Voice: "Suggest vectoring south?"
    Pilot->>Cockpit_VUI: "Acknowledge"
    Cockpit_VUI->>Cockpit_VUI: Track Interaction, Update Pilot Profile (Prefers turbulence avoidance)
    Note right of Cockpit_VUI: Later in flight...
    Pilot->>Cockpit_VUI: "Plan our descent"
    Cockpit_VUI->>Cockpit_VUI: Interpret "descent" with Profile Bias
    Cockpit_VUI->>FMS: Request Descent Routes (Constraint: Minimize Turbulence)
    FMS-->>Cockpit_VUI: Provides Smoothest Route
```
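The profile bias on route selection amounts to a weighted scoring function, which can be sketched as follows. This is an illustrative toy, not avionics logic: the routes, costs, and the `turbulence_weight` parameter are all hypothetical, with the weight playing the role of the preference learned from the tracked "Acknowledge" interaction.

```python
# Sketch of the profile bias on descent planning: candidate routes are scored
# on fuel cost plus a turbulence penalty scaled by the pilot's learned
# preference. Zero weight reproduces the unbiased (fuel-optimal) choice.

def best_route(routes, turbulence_weight):
    """Pick the route minimizing fuel plus weighted turbulence penalty."""
    return min(routes, key=lambda r: r["fuel"] + turbulence_weight * r["turbulence"])

routes = [
    {"name": "direct",   "fuel": 100, "turbulence": 8},   # fuel-optimal, rough air
    {"name": "southern", "fuel": 110, "turbulence": 1},   # slightly longer, smooth
]

default_choice = best_route(routes, turbulence_weight=0.0)  # before the interaction
biased_choice = best_route(routes, turbulence_weight=5.0)   # after the profile update
```

With the bias applied, the smoother southern routing wins even though it is less fuel-efficient, matching the behavior described above.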
3.2 Precision Agriculture: Smart Irrigation and Resource Management
Enabling Description: The system is used by a farmer to manage a smart irrigation system. The farmer, viewing a field, says "Give me the moisture level for Zone 4." The system reports the data. Based on weather forecasts and soil conditions, it presents a "purchase opportunity": "A heatwave is expected in 48 hours. Pre-hydrate Zone 4 with 1 inch of water now (standard cost), or apply hydrogel amendment (premium cost)?" The farmer responds, "Apply the hydrogel." This interaction updates the farmer's profile with a preference for capital expenditure to mitigate risk. Later in the season, if the farmer gives an ambiguous command like "Get Zone 7 ready for the heat," the NLU will interpret this as "Apply hydrogel amendment to Zone 7," rather than the cheaper, less effective pre-hydration option.
Mermaid Diagram:
```mermaid
flowchart TD
    A["Farmer: 'Moisture Zone 4?'"] --> B{VUI}
    B --> C[Query Soil Sensors]
    C --> D[Fuse with Weather Forecast]
    D --> E{Generate Mitigation Opportunity}
    E --> F["Present Choice: Pre-hydrate vs Hydrogel"]
    F -- "Farmer: 'Apply hydrogel'" --> G(Track Interaction)
    G --> H("Update Farmer Profile: Prefers Risk Mitigation")
    I["Farmer: 'Ready Zone 7 for heat'"] --> B
    B -- NLU uses Profile H --> J["Interpret as 'Apply Hydrogel'"]
    J --> K[Activate Irrigation & Amendment Sprayers]
```
Section 4: Integration with Emerging Technologies
4.1 AI-Optimized Meta-Learning for Profile Adaptation
Enabling Description: The core system is wrapped by a higher-level meta-learning AI framework. This meta-AI does not process the user's utterance directly. Instead, it observes the performance of the primary NLU model. It monitors the "tracked interaction" data and correlates it with changes in NLU accuracy (e.g., reductions in clarification questions). The "purchase opportunity" itself can be an A/B test controlled by the meta-AI. Based on this analysis, the meta-AI adjusts the hyperparameters of the profile-building module. For example, it might learn that for this specific user, interactions with high-cost purchase opportunities should apply a 5x greater learning rate to the NLU model update than interactions with low-cost ones. This optimizes the personalization process itself.
Mermaid Diagram:
```mermaid
graph TD
    subgraph Core NLU Loop
        A[Utterance] --> B{NLU Model}
        B -- Uses --> C[User Profile]
        B --> D[Select Purchase Opp.]
        D --> E{User Interaction}
        E --> F{Profile Update Module}
        F -- Updates --> C
    end
    subgraph Meta-Learning AI
        E -- Observes --> G[Meta-AI Monitor]
        B -- Reports Accuracy --> G
        G --> H{Analyze Efficacy of Update}
        H --> I[Adjust Hyperparameters]
        I -- Modifies --> F
    end
```
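The two levers described above, a cost-scaled learning rate and a meta-rule that tunes the scaling factor, can be sketched as follows. This is a toy under stated assumptions: `BASE_LR`, the cost threshold, and the accuracy-driven adjustment rule are all hypothetical values chosen only to make the 5x example from the text concrete.

```python
# Sketch of the meta-AI's hyperparameter adjustment: the learning rate applied
# to a profile update is scaled by opportunity cost (the 5x factor from the
# text), and the meta-layer tunes that multiplier from observed NLU accuracy.

BASE_LR = 0.01

def effective_lr(opportunity_cost, high_cost_multiplier=5.0, threshold=100.0):
    """High-cost interactions carry a stronger learning signal."""
    return BASE_LR * (high_cost_multiplier if opportunity_cost >= threshold else 1.0)

def meta_adjust(multiplier, accuracy_before, accuracy_after):
    """Toy meta-rule: grow the multiplier if accuracy improved, shrink otherwise."""
    return multiplier * (1.1 if accuracy_after > accuracy_before else 0.9)

lr_low = effective_lr(opportunity_cost=10.0)    # low-cost opportunity
lr_high = effective_lr(opportunity_cost=500.0)  # high-cost opportunity, 5x rate
new_mult = meta_adjust(5.0, accuracy_before=0.82, accuracy_after=0.85)
```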
4.2 Blockchain-Based Self-Sovereign Profile and Verification
Enabling Description: The user-specific profile is implemented as a self-sovereign digital identity wallet based on the W3C DID (Decentralized Identifiers) standard. When a user interacts with a purchase opportunity from a vendor, the vendor issues a cryptographically signed Verifiable Credential (VC) to the user's wallet (e.g., "This user purchased product X on date Y"). This transaction is recorded on a permissioned blockchain for immutability. When the user issues a subsequent utterance to the NLU system, their device presents a Zero-Knowledge Proof (ZKP) to the NLU engine, proving they hold a relevant credential (e.g., "I can prove I have a credential related to 'automotive parts' from a trusted vendor") without revealing the specific credential itself. The NLU engine uses this verified "interest" as a high-confidence input to interpret the new utterance.
Mermaid Diagram:
```mermaid
sequenceDiagram
    participant UserWallet
    participant NLU_Engine
    participant Vendor
    participant Blockchain
    UserWallet->>Vendor: User completes purchase
    Vendor->>UserWallet: Issue Verifiable Credential (VC)
    Vendor->>Blockchain: Anchor hash of VC
    UserWallet->>NLU_Engine: User makes new utterance
    UserWallet->>NLU_Engine: Provide Zero-Knowledge Proof of holding relevant VC
    NLU_Engine->>NLU_Engine: Use ZKP result to inform interpretation
    NLU_Engine-->>UserWallet: Respond with interpreted action
```
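The proof step can be illustrated with a stdlib challenge-response sketch. To be clear: this is NOT a real zero-knowledge proof; a production system would use selective-disclosure signatures (e.g., BBS+) or zk-SNARKs, and a real VC carries a vendor signature rather than a shared HMAC key. The toy does preserve the property the text relies on: the wallet discloses only a category claim bound to a fresh challenge, never the credential itself.

```python
# Toy challenge-response stand-in for the ZKP step (NOT zero-knowledge in the
# cryptographic sense). The wallet proves it holds a vendor-issued credential
# for a category without sending the credential: only an HMAC over the
# engine's challenge leaves the wallet.
import hashlib
import hmac

VENDOR_KEY = b"vendor-signing-key"  # shared with the verifier in this toy only

def issue_credential(category):
    """Vendor tags the category; the wallet stores (category, tag)."""
    tag = hmac.new(VENDOR_KEY, category.encode(), hashlib.sha256).digest()
    return {"category": category, "tag": tag}

def prove(credential, challenge):
    """Wallet binds its credential tag to a fresh challenge nonce."""
    return hmac.new(credential["tag"], challenge, hashlib.sha256).hexdigest()

def verify(category, challenge, proof):
    """Engine recomputes the expected proof for the *claimed category* only."""
    expected_tag = hmac.new(VENDOR_KEY, category.encode(), hashlib.sha256).digest()
    expected = hmac.new(expected_tag, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, proof)

cred = issue_credential("automotive parts")
challenge = b"nonce-1234"
proof = prove(cred, challenge)
ok = verify("automotive parts", challenge, proof)  # category claim checks out
```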
Section 5: Inverse Operation and Safe Failure Modes
5.1 Privacy-Preserving "Amnesiac Mode"
Enabling Description: The system is designed with a stateful "privacy mode" which can be triggered by a keyword ("go private"), detection of a guest's voice via voiceprinting, or entry into a geofenced private area (e.g., a hospital). When in this mode, the feedback connection between tracking the user's interaction (step E in claim 1) and building or updating a user-specific profile (step F) is severed. The system can still present purchase opportunities, but a log of the interaction is either not kept or is explicitly firewalled from the NLU profile datastore. The system provides an audible or visual cue, such as "Amnesiac mode is on," to inform the user that their current interactions will not influence future NLU interpretations, thus providing a safe-fail mechanism for privacy.
Mermaid Diagram:
```mermaid
stateDiagram-v2
    [*] --> Normal_Mode
    Normal_Mode: User interactions update NLU Profile
    Amnesiac_Mode: User interactions DO NOT update NLU Profile
    Normal_Mode --> Amnesiac_Mode: Keyword ("go private") or Guest Voice Detected
    Amnesiac_Mode --> Normal_Mode: Keyword ("exit private") or Guest Voice Departs
```
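The severed feedback path reduces to a guarded no-op, which can be sketched as follows. The class and method names are hypothetical; only the keyword triggers and the audible cue text come from the description above.

```python
# Sketch of amnesiac mode: opportunities can still be presented and
# interacted with, but the step E -> step F link (interaction tracking ->
# profile update) is severed while the mode is active.

class Assistant:
    def __init__(self):
        self.profile = {}
        self.amnesiac = False

    def handle_keyword(self, utterance):
        """Toggle privacy mode and return the audible/visual cue, if any."""
        if "go private" in utterance:
            self.amnesiac = True
            return "Amnesiac mode is on"
        if "exit private" in utterance:
            self.amnesiac = False
            return "Amnesiac mode is off"
        return None

    def track_interaction(self, category):
        """Profile update is a no-op while amnesiac mode is active."""
        if not self.amnesiac:
            self.profile[category] = self.profile.get(category, 0) + 1

assistant = Assistant()
assistant.track_interaction("coffee")           # recorded normally
cue = assistant.handle_keyword("go private")
assistant.track_interaction("medication")       # NOT recorded
```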
Section 6: Combination with Open-Source Standards
6.1 Combination 1: DeepSpeech, RabbitMQ, and spaCy
- Enabling Description: A voice-based assistant is constructed where the speech recognition engine is a self-hosted instance of Mozilla's DeepSpeech. Recognized utterances are published as messages to a RabbitMQ message broker. A consumer service, written in Python, uses the spaCy NLP library to perform context determination and NLU. A user's profile is maintained as a custom extension attribute on spaCy's Doc object. When a user interacts with a purchase opportunity, the interaction data is sent to a separate topic in RabbitMQ. The spaCy consumer subscribes to this topic, and upon receiving an interaction message, it updates its statistical model weights for entity recognition or text classification before processing the next utterance from the primary topic. This creates a distributed, event-driven implementation of the core patent claim.
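The ordering guarantee in this design, interaction messages are consumed before the next utterance is interpreted, can be sketched with in-memory queues. This is a simulation, not the described deployment: stdlib `queue.Queue` objects stand in for the RabbitMQ topics (which would be wired with `pika`), and a weight dict stands in for the spaCy model, so the wiring names below are hypothetical.

```python
# In-memory sketch of the event-driven loop: interaction messages are drained
# from their "topic" and folded into the classifier weights *before* the next
# utterance is pulled from the primary "topic" and interpreted.
from queue import Queue

utterances, interactions = Queue(), Queue()     # stand-ins for RabbitMQ topics
weights = {"coffee": 0.0, "tea": 0.05}          # stand-in for the spaCy model

def drain_interactions():
    """Consume the interaction topic first, updating classifier weights."""
    while not interactions.empty():
        msg = interactions.get()
        weights[msg["label"]] = weights.get(msg["label"], 0.0) + 0.1

def consume_utterance():
    """Process one utterance with the freshest available weights."""
    drain_interactions()                        # update before interpreting
    text = utterances.get()
    label = max(weights, key=weights.get)       # toy text classification
    return text, label

interactions.put({"label": "coffee"})           # user accepted a coffee offer
utterances.put("the usual, please")
text, label = consume_utterance()
```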
6.2 Combination 2: Web Speech API, ActivityPub, and IndexedDB
- Enabling Description: A fully client-side, privacy-focused implementation is created for a web browser. The W3C Web Speech API is used for in-browser speech-to-text. The user's profile is stored locally in the browser's IndexedDB. A purchase opportunity is delivered as a message formatted using the W3C ActivityPub protocol, allowing for interactive, federated advertisements. A JavaScript Service Worker listens for both speech recognition results and user interactions with ActivityPub objects. When an interaction is detected (e.g., a click), the Service Worker directly updates the user profile object in IndexedDB. When the next speech recognition event occurs, the Service Worker intercepts the transcribed text, applies interpretation rules based on the updated IndexedDB profile, and then passes the refined intent to the web application.
6.3 Combination 3: Kubernetes, OpenRTB, and Apache Kafka
- Enabling Description: A highly scalable, cloud-native version of the system is built on Kubernetes. The purchase opportunity selection and delivery mechanism is fully compliant with the IAB's OpenRTB (Real-Time Bidding) 3.0 standard, treating the user's utterance context as the basis for a bid request. User interactions (clicks, conversions) are captured and streamed as events into an Apache Kafka topic. A Kafka Streams application consumes this event stream in real-time. This application maintains a state store (the user profile) and continuously updates an NLU model (e.g., a TensorFlow model). This updated model is immediately containerized and deployed back into the Kubernetes cluster, replacing the older model, ensuring a continuous integration/continuous deployment (CI/CD) pipeline for NLU personalization.
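The shape of the stateful stream-processing step can be sketched in Python. This is a stand-in, not Kafka Streams (which is a JVM library): a dict plays the state store, a per-user counter plays the NLU model, and the containerize-and-redeploy step is reduced to a snapshot function. All names below are hypothetical.

```python
# Stand-in for the Kafka Streams application: consume interaction events,
# fold them into a keyed state store (the user profile), and periodically
# export a snapshot, the point where the real system would retrain and
# redeploy the NLU model container.

state_store = {}  # user_id -> profile, standing in for the Streams state store

def process_event(event):
    """Handle one record from the interactions topic: update the user's profile."""
    profile = state_store.setdefault(event["user_id"],
                                     {"clicks": 0, "conversions": 0})
    profile[event["type"] + "s"] += 1
    return profile

def export_model(user_id):
    """Snapshot a profile, i.e., the input to the retrain/redeploy step."""
    return dict(state_store.get(user_id, {}))

for event in [{"user_id": "u1", "type": "click"},
              {"user_id": "u1", "type": "conversion"},
              {"user_id": "u2", "type": "click"}]:
    process_event(event)

snapshot = export_model("u1")
```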