Patent 12406663
Derivative works
Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.
Defensive Disclosure and Prior Art Generation for U.S. Patent 12,406,663
Publication Date: May 8, 2026
Reference ID: DDPUB-2026-0508-A
Subject Matter: Systems and methods for contextual routing of commands between a primary control interface (e.g., a vehicle) and a plurality of disparate, remote-controlled ecosystems (e.g., Smart Home, Industrial IoT). This document describes derivative works, alternative embodiments, and expansions of the core concepts disclosed in U.S. Patent 12,406,663 to establish prior art against future incremental patent claims in this domain.
Derivatives of Independent Claim 1 (System Claim)
The core claim describes a system with a recognition module (NLU), a connection manager, and a feedback loop for updating NLU models, all operating from a vehicle head unit to control a user's smart home. The following disclosures expand upon this foundation.
1. Component Substitution: Federated Edge-Based NLU Processing
- Enabling Description: This embodiment replaces the centralized, cloud-based NLU module with a federated learning architecture executing on edge devices. The in-vehicle head unit, the user's mobile device, and a home hub each contain a local instance of the NLU model. A command issued in the vehicle is processed by the local NLU model. Instead of sending raw utterance data to the cloud, only the resulting model update gradients (e.g., deltas representing learned associations) are encrypted and shared with a cloud-based aggregator. The aggregator compiles updates from all user devices to create an improved global model, which is then pushed back to the edge devices. This approach enhances privacy by keeping raw user data on-device and reduces latency for command interpretation. The connection manager function remains to route the post-interpretation command to the target ecosystem API.
- Mermaid Diagram:
```mermaid
sequenceDiagram
    participant V as Vehicle HU
    participant P as User's Phone
    participant H as Home Hub
    participant C as Cloud Aggregator
    participant E as Target Ecosystem API
    V->>V: User speaks command, local NLU processes it
    V->>C: Sends encrypted model update gradients
    P->>C: Periodically sends local learning gradients
    H->>C: Periodically sends local learning gradients
    C->>C: Aggregates gradients to create new global model
    C-->>V: Pushes updated global model
    C-->>P: Pushes updated global model
    C-->>H: Pushes updated global model
    V->>E: Connection Manager routes command to API
    E-->>V: Returns command feedback
    V->>V: Local NLU model is updated with feedback
```
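The aggregation step above can be sketched as a weighted federated-averaging (FedAvg-style) function. This is a minimal illustration, not the actual system: the function name, flat parameter-to-delta dictionaries, and sample-count weights are all assumptions for clarity.

```python
from typing import Dict, List


def federated_average(updates: List[Dict[str, float]],
                      weights: List[float]) -> Dict[str, float]:
    """Weighted average of per-device model deltas (FedAvg-style).

    Each update maps a parameter name to its local delta; `weights`
    are typically proportional to each device's local sample count.
    """
    total = sum(weights)
    aggregated: Dict[str, float] = {}
    for update, weight in zip(updates, weights):
        for name, delta in update.items():
            aggregated[name] = aggregated.get(name, 0.0) + delta * (weight / total)
    return aggregated
```

In this sketch the cloud aggregator would call `federated_average` on the encrypted gradients received from the vehicle head unit, phone, and home hub, then push the result back to each edge device.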
2. Cross-Domain Application: Surgical Operating Room (OR) Command Arbitration
- Enabling Description: The system is adapted for a surgical OR environment. The "head unit" is a sterile, microphone-equipped console. The "ecosystems" are disparate pieces of surgical equipment from different manufacturers (e.g., a da Vinci surgical robot, a Stryker endoscopy tower, a Philips patient monitoring system). A surgeon's spoken command, such as "increase insufflation pressure," is processed by an NLU module. The contextual data includes the current phase of the operation (e.g., "laparoscopic cholecystectomy - dissection phase") retrieved from the hospital's electronic health record (EHR) system. The NLU determines that during this phase, "insufflation pressure" refers to the CO2 insufflator managed by the endoscopy tower. The connection manager then formats the command into the proprietary protocol for the Stryker tower and transmits it. Feedback (e.g., confirmation of pressure change) is used to update the NLU models for surgical-phase-specific command routing.
- Mermaid Diagram:
```mermaid
flowchart TD
    A["Surgeon's Utterance: increase pressure"] --> B{NLU Module}
    C["EHR Context: Cholecystectomy - Dissection Phase"] --> B
    B --> D{Arbitration Engine}
    D -- "'Pressure' in this context refers to insufflator" --> E[Select Target: Stryker Endoscopy Tower]
    D -- "Not surgical robot" --> F((da Vinci Robot))
    D -- "Not patient monitor" --> G((Philips Monitor))
    E --> H[Connection Manager]
    H --> I[Format command for Stryker API]
    I --> J[Transmit to Endoscopy Tower]
    J --> K["Feedback: Pressure set to 15 mmHg"]
    K --> B
```
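The phase-aware arbitration can be sketched as a lookup keyed on the surgical phase and the ambiguous spoken term. The table contents, phase names, and device identifiers below are illustrative stand-ins, not taken from any vendor API.

```python
# Hypothetical phase-aware arbitration table: (surgical_phase, term) -> device.
ARBITRATION_TABLE = {
    ("dissection", "insufflation pressure"): "stryker_endoscopy_tower",
    ("dissection", "arm position"): "da_vinci_robot",
    ("closure", "alarm volume"): "philips_monitor",
}


def route_command(surgical_phase: str, term: str) -> str:
    """Resolve an ambiguous spoken term to a target device for the
    current surgical phase; raise if no mapping exists."""
    try:
        return ARBITRATION_TABLE[(surgical_phase, term)]
    except KeyError:
        raise LookupError(
            f"No device mapped for {term!r} in phase {surgical_phase!r}")
```

A production arbitration engine would learn these mappings from the feedback loop rather than hard-coding them, but the lookup structure is the same.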
3. Cross-Domain Application: Aerospace Flight Deck Subsystem Routing
- Enabling Description: The invention is applied to a commercial aircraft flight deck. The pilot's voice commands are captured by the avionics system. The "disparate ecosystems" are distinct flight-critical subsystems such as the Flight Management System (FMS, e.g., Honeywell), the satellite communication system (SATCOM, e.g., Inmarsat), and the cabin environmental controls (e.g., Liebherr). A command like "contact maintenance on the ground" is interpreted by the NLU. The contextual data is the flight phase (e.g., "en-route cruise over Atlantic"). The NLU module, trained on flight operations manuals, determines this command should be routed to the SATCOM system via an ACARS data link message. The connection manager formats the request and transmits it. If the command was "set cabin temperature to 22 degrees," the same system would route it to the environmental control subsystem.
- Mermaid Diagram:
```mermaid
graph LR
    subgraph Flight Deck
        A(Pilot Utterance)
    end
    subgraph Avionics Core
        B(NLU Module)
        C(Connection Manager)
    end
    subgraph Disparate Subsystems
        D[FMS]
        E[SATCOM]
        F[Cabin Control]
    end
    A --> B
    B -- "Context: Flight Phase" --> B
    B -- "Intent: Comms" --> C
    C --> E
    B -- "Intent: Navigation" --> C
    C --> D
    B -- "Intent: Environment" --> C
    C --> F
```
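The intent-to-subsystem routing can be sketched as a two-step classify-then-route function. The keyword rules, intent names, and subsystem identifiers below are hypothetical simplifications; a real avionics NLU would be trained on flight operations manuals as described above.

```python
from enum import Enum


class Intent(Enum):
    COMMS = "comms"
    NAVIGATION = "navigation"
    ENVIRONMENT = "environment"


# Illustrative subsystem endpoints; names are hypothetical.
SUBSYSTEM_FOR_INTENT = {
    Intent.COMMS: "satcom",
    Intent.NAVIGATION: "fms",
    Intent.ENVIRONMENT: "cabin_control",
}


def classify_intent(utterance: str) -> Intent:
    """Toy keyword classifier standing in for the trained NLU module."""
    text = utterance.lower()
    if "contact" in text or "message" in text:
        return Intent.COMMS
    if "temperature" in text or "cabin" in text:
        return Intent.ENVIRONMENT
    return Intent.NAVIGATION


def route(intent: Intent) -> str:
    """Connection-manager step: map a resolved intent to its subsystem."""
    return SUBSYSTEM_FOR_INTENT[intent]
```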
4. Integration with Emerging Tech: AI-Powered API Command Synthesis and Blockchain Auditing
- Enabling Description: This derivative integrates a large language model (LLM) and a blockchain ledger. When the NLU module identifies the target ecosystem, instead of using a pre-programmed command format, it passes the user's intent to a generative AI model. This AI model has been trained on the API documentation for hundreds of IoT ecosystems. It synthesizes the exact, syntactically correct API call (e.g., a complex JSON payload) required for that specific ecosystem in real-time. Simultaneously, the connection manager generates a transaction containing the original utterance, the identified intent, the target ecosystem, the synthesized command, and a timestamp. This transaction is hashed and recorded on a private, immutable blockchain (e.g., Hyperledger Fabric). The feedback from the ecosystem (success/failure) is recorded in a subsequent block, creating a secure, tamper-proof audit trail for all commands, critical for security and high-value asset control.
- Mermaid Diagram:
```mermaid
sequenceDiagram
    participant V as Vehicle Assistant
    participant NLU as NLU Module
    participant LLM as Generative AI
    participant CM as Connection Manager
    participant BC as Blockchain Ledger
    participant API as Target Ecosystem API
    V->>NLU: Utterance: "Make my house secure"
    NLU->>LLM: Intent: "lock_doors, arm_alarm"
    LLM->>CM: Synthesized Command (JSON payload)
    CM->>BC: Log TX: {utterance, intent, command, timestamp}
    CM->>API: Execute synthesized command
    API-->>CM: Feedback: {status: success}
    CM->>BC: Log TX: {feedback, timestamp}
    CM-->>V: Confirmation
```
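The tamper-evident audit trail can be illustrated with a hash-chained log, where each entry commits to the previous entry's hash. This is a minimal in-memory stand-in for the private blockchain (e.g., Hyperledger Fabric) described above, not an implementation of it; class and field names are assumptions.

```python
import hashlib
import json
from typing import Any, Dict, List


class AuditChain:
    """Append-only, hash-chained command log: each block's hash covers
    its payload and the previous block's hash, so any edit to an earlier
    block invalidates every later block."""

    def __init__(self) -> None:
        self.blocks: List[Dict[str, Any]] = []

    def append(self, payload: Dict[str, Any]) -> str:
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        record = {"payload": payload, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.blocks.append({**record, "hash": digest})
        return digest

    def verify(self) -> bool:
        prev = "0" * 64
        for block in self.blocks:
            record = {"payload": block["payload"], "prev": block["prev"]}
            digest = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()
            if block["prev"] != prev or digest != block["hash"]:
                return False
            prev = block["hash"]
        return True
```

In the full system, each command transaction (utterance, intent, synthesized command, timestamp) and its subsequent feedback would be appended as separate entries.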
5. Inverse/Failure Mode: Graceful Degradation to Local Deterministic Control
- Enabling Description: The system is designed for high-reliability environments where network connectivity may be intermittent (e.g., remote areas, underground tunnels). The system operates in two modes. In "Cloud-Connected Mode," it functions as described in the patent, using cloud-based NLU and AI. When connectivity is lost, it enters "Local Deterministic Mode." In this mode, the vehicle's head unit relies on a small, embedded speech recognition engine and a pre-defined, cached ruleset. This ruleset maps specific, simple phrases directly to commands for critical ecosystems (e.g., "house lockdown" maps to pre-authenticated API calls to both the door lock and security system ecosystems). No complex NLU or contextual interpretation occurs. The system only supports a limited vocabulary of 10-20 critical commands. When connectivity is restored, it automatically switches back to Cloud-Connected Mode and syncs any state changes.
- Mermaid Diagram:
```mermaid
stateDiagram-v2
    [*] --> Disconnected
    Disconnected --> Connected: Network Detected
    Connected --> Disconnected: Connection Lost
    state Connected {
        direction LR
        CloudNLU: Full-feature NLU
        Context: Real-time Data
        DynamicRouting: Route to any ecosystem
        CloudNLU --> Context
        Context --> DynamicRouting
    }
    state Disconnected {
        EmbeddedASR: Limited Vocabulary
        CachedRules: Pre-defined command map
        CriticalOnly: Supports only essential commands
        EmbeddedASR --> CachedRules
        CachedRules --> CriticalOnly
    }
```
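The two-mode dispatch logic can be sketched as follows. The cached ruleset contents and the return-value conventions are hypothetical; the cloud NLU path is deliberately not modeled here.

```python
# Hypothetical cached ruleset for Local Deterministic Mode: each phrase
# maps directly to pre-authenticated API calls (illustrative names only).
OFFLINE_RULESET = {
    "house lockdown": ["lock_api:lock_all", "security_api:arm_away"],
    "lights off": ["lighting_api:all_off"],
}


def handle_command(utterance: str, online: bool):
    """Dispatch a spoken command based on connectivity state.

    Online: defer to the full cloud NLU pipeline (not modeled here).
    Offline: only exact phrases in the cached ruleset are accepted.
    """
    phrase = utterance.strip().lower()
    if online:
        return ("cloud_nlu", phrase)
    calls = OFFLINE_RULESET.get(phrase)
    if calls is None:
        return ("rejected", phrase)  # outside the limited offline vocabulary
    return ("local_deterministic", calls)
```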
Derivatives of Independent Claim 3 (Method Claim)
The core claim outlines the method of receiving utterances and context, translating to text, determining a target, transmitting, and receiving confirmation.
1. Process Substitution: End-to-End Spoken Language Understanding (SLU)
- Enabling Description: This method variation replaces the distinct, sequential steps of Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU) with a single, unified end-to-end Spoken Language Understanding (SLU) model. The input to this model is the raw audio waveform of the user's utterance. The output is a structured intent object that directly includes the target ecosystem, the command, and its parameters (e.g., {'target': 'SimpliSafe_API', 'action': 'set_state', 'parameters': {'state': 'armed_away'}}). This eliminates the intermediate text representation, reducing potential errors from ASR transcription and lowering overall latency. The contextual data (location, time) is provided as an additional input vector to the SLU model during inference, allowing it to directly influence the audio-to-intent mapping.
- Mermaid Diagram:
```mermaid
flowchart TD
    subgraph Traditional Method
        A[Audio Waveform] --> B(ASR Module)
        B --> C["Text: arm the security system"]
        C --> D{NLU Module}
        E[Context Data] --> D
        D --> F[Structured Intent]
    end
    subgraph SLU Method
        G[Audio Waveform] --> H{End-to-End SLU Model}
        I[Context Data] --> H
        H --> J[Structured Intent]
    end
    F --> K((Transmit to Ecosystem))
    J --> K
```
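The structured intent object from the description can be modeled as a small typed container; the parser below is a sketch of how an SLU model's raw output would be validated before routing. The class and function names are assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class StructuredIntent:
    """Direct output of the end-to-end SLU model: no intermediate text."""
    target: str
    action: str
    parameters: Dict[str, str] = field(default_factory=dict)


def parse_slu_output(obj: dict) -> StructuredIntent:
    """Validate and wrap the SLU model's raw dictionary output."""
    return StructuredIntent(
        target=obj["target"],
        action=obj["action"],
        parameters=obj.get("parameters", {}),
    )
```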
2. Operational Parameter Expansion: High-Frequency Trading (HFT) Command Routing
- Enabling Description: The method is applied to a high-frequency trading environment where low latency is critical. A trader's spoken utterance (e.g., "sell fifty thousand at market") is captured. The "contextual data" is not geographic location but rather sub-second market data, including stock volatility, order book depth, and breaking news sentiment analysis scores from a live data feed. The method determines the "target ecosystem" to be one of several available execution algorithms (e.g., "Iceberg Algorithm," "TWAP Algorithm," "Aggressive Market-Taker Bot"). Based on the high volatility context, the NLU determines that the "Aggressive Market-Taker Bot" is the appropriate target to ensure immediate execution. The command is transmitted to that algorithmic trading system's API, and confirmation of the trade execution is received within milliseconds. The feedback loop trains the model to associate specific market conditions and utterance types with the best-performing execution algorithms.
- Mermaid Diagram:
```mermaid
graph TD
    A["Trader Utterance: sell 50k at market"] --> B{NLU}
    C["Market Data Stream: High Volatility, Low Liquidity"] --> B
    B --> D{Determine Target Algorithm}
    D -- "Volatility is high" --> E[Aggressive Market-Taker Bot]
    D -- "Volatility is low" --> F((TWAP Algorithm))
    E --> G[Transmit Order]
    G --> H["Confirmation: Fill at $123.45"]
    H --> B
```
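The context-driven target selection can be sketched as a threshold rule over normalized market features. The thresholds and algorithm names below are purely illustrative; in the described system this mapping would be learned from execution-quality feedback rather than fixed.

```python
def select_execution_algorithm(volatility: float, liquidity: float) -> str:
    """Pick a target execution algorithm from contextual market data.

    Inputs are assumed normalized to [0, 1]; thresholds are illustrative.
    """
    if volatility > 0.8:
        # Immediate execution matters more than price impact.
        return "aggressive_market_taker"
    if liquidity < 0.3:
        # Thin book: hide order size behind an iceberg.
        return "iceberg"
    return "twap"
```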
Combination Prior Art Scenarios
These scenarios describe the integration of the core invention of US 12,406,663 with existing, open-source standards, disclosing the combinations so that future incremental claims over such integrations are rendered obvious.
1. Combination with Matter Protocol
- Enabling Description: The system acts as a high-level "contextual bridge" for Matter-enabled devices. While the Matter standard provides a common IP-based application layer for device interoperability within a home "fabric," it does not specify how a user's ambiguous command should be routed. This invention is combined with Matter by having the in-vehicle NLU module resolve ambiguity and select a target Matter device or group. For example, the command "turn on the lights" from a vehicle approaching home would cause the Connection Manager to issue a standard Matter command to group.living_room_lights within the home's Matter fabric. The feedback is the standard success/failure code from the Matter command. The system learns user preferences for which lights to turn on when arriving home, a layer of intelligence not inherent in the Matter specification itself.
- Mermaid Diagram:
```mermaid
sequenceDiagram
    participant V as Vehicle Assistant
    participant NLU as Contextual NLU
    participant CM as Connection Manager (Matter Controller)
    participant M as Home Matter Fabric
    V->>NLU: Utterance: "turn on the lights"
    NLU->>NLU: Context: "approaching home, 7 PM"
    NLU->>CM: Intent: "activate_lights", Target: "group.entryway"
    CM->>M: Matter Command: write-attribute(endpoint=group.entryway, attribute=on-off, value=true)
    M-->>CM: Matter Status: SUCCESS
    CM-->>V: Confirmation
```
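The command the Connection Manager emits can be sketched schematically as below. The field names follow the diagram above, not the actual Matter SDK or data model; this is a shape illustration only.

```python
def build_matter_onoff_command(group: str, on: bool) -> dict:
    """Build a Matter-style On/Off group write (schematic only:
    field names mirror the diagram, not the real Matter data model)."""
    return {
        "endpoint": group,       # e.g. a Matter group within the fabric
        "cluster": "OnOff",
        "attribute": "on-off",
        "value": on,
    }
```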
2. Combination with MQTT (Message Queuing Telemetry Transport)
- Enabling Description: The Connection Manager is implemented as an intelligent MQTT client/broker. The vehicle assistant publishes all voice commands as messages to a generic MQTT topic, e.g., vehicle/vin123/commands/raw. The NLU/Connection Manager module subscribes to this topic. After processing the utterance and its context, it determines the target ecosystem. It then re-publishes a new, structured message to an ecosystem-specific topic (e.g., home/alexa/commands or home/security/commands). Dedicated bridges on the home network subscribe to these topics and translate the MQTT messages into API calls for non-native MQTT devices. The feedback loop is implemented by subscribing to response topics (e.g., home/alexa/responses), allowing the NLU model to be updated based on command success or failure.
- Mermaid Diagram:
```mermaid
flowchart LR
    subgraph Vehicle
        A[Utterance] --> B{Vehicle Assistant}
        B -- Publishes --> C("MQTT Topic: .../commands/raw")
    end
    subgraph CloudHub["Cloud/Hub"]
        D{NLU/Connection Manager} -- Subscribes --> C
        D -- Processes --> D
        D -- Publishes --> E("MQTT Topic: .../ecosystem_A/command")
        D -- Publishes --> F("MQTT Topic: .../ecosystem_B/command")
    end
    subgraph Home
        G[Ecosystem A Bridge] -- Subscribes --> E
        H[Ecosystem B Bridge] -- Subscribes --> F
        G --> I[Device A]
        H --> J[Device B]
    end
```
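The re-publish step can be sketched as a pure routing function that maps a raw command message plus its resolved intent to an ecosystem-specific topic and payload. The topic table is illustrative; a real deployment would hand the result to a standard MQTT client (e.g., paho-mqtt) for publishing.

```python
# Illustrative ecosystem-to-topic table; names are hypothetical.
ECOSYSTEM_TOPICS = {
    "lighting": "home/alexa/commands",
    "security": "home/security/commands",
}


def reroute(raw_topic: str, intent: dict):
    """Map a raw vehicle command to its ecosystem-specific MQTT topic.

    `intent` is the NLU output, assumed to carry at least the resolved
    ecosystem and action. Returns (outbound_topic, structured_payload).
    """
    out_topic = ECOSYSTEM_TOPICS[intent["ecosystem"]]
    payload = {"source": raw_topic, "action": intent["action"]}
    return out_topic, payload
```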
3. Combination with OAuth 2.0 and OpenID Connect
- Enabling Description: The system's user onboarding and authentication process is built entirely on the OAuth 2.0 and OpenID Connect (OIDC) standards. To link a new ecosystem (e.g., Google Home), the user initiates a process from the vehicle's head unit. The system acts as an OAuth 2.0 client and redirects the user (e.g., to their phone) to the ecosystem's authorization server. The user authenticates and grants permission. The ecosystem's server returns an authorization code, which the vehicle system's backend exchanges for an access token and a refresh token. The "authentication cache" described in the patent is specifically an encrypted database for storing these OAuth 2.0 refresh tokens. The Connection Manager attaches the valid access token to every API call sent to the target ecosystem, adhering to this universal standard for secure, delegated access. The user's identity across systems can be federated using their OIDC identity token.
- Mermaid Diagram:
```mermaid
sequenceDiagram
    participant User as User
    participant VA as Vehicle Assistant (Client)
    participant AS as Ecosystem Auth Server
    participant API as Ecosystem Resource API
    User->>VA: "Link my smart home"
    VA->>User: Redirect to AS for login/consent
    User->>AS: Logs in, gives consent
    AS->>VA: Returns Authorization Code
    VA->>AS: Exchanges Auth Code for Tokens
    AS->>VA: Returns Access Token + Refresh Token
    VA->>VA: Securely stores Refresh Token
    Note over VA, API: Later, for a command...
    VA->>API: API Request + Bearer Access Token
    API-->>VA: API Response
```
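The "authentication cache" behavior can be sketched as a token store that lazily refreshes an expired access token using the stored refresh token. This is an in-memory illustration only (a real cache would be encrypted at rest, per the description); the class name and the shape of the injected refresh callback are assumptions.

```python
import time
from typing import Callable, Optional, Tuple


class TokenCache:
    """Caches an OAuth 2.0 access token, refreshing it on expiry.

    `refresh_fn` stands in for the token-endpoint exchange: it takes a
    refresh token and returns (access_token, lifetime_seconds).
    """

    def __init__(self, refresh_fn: Callable[[str], Tuple[str, float]],
                 refresh_token: str) -> None:
        self._refresh_fn = refresh_fn
        self._refresh_token = refresh_token
        self._access_token: Optional[str] = None
        self._expires_at = 0.0

    def access_token(self, now: Optional[float] = None) -> str:
        """Return a valid access token, refreshing only when needed."""
        now = time.time() if now is None else now
        if self._access_token is None or now >= self._expires_at:
            self._access_token, ttl = self._refresh_fn(self._refresh_token)
            self._expires_at = now + ttl
        return self._access_token
```

The Connection Manager would call `access_token()` before each outbound API call and attach the result as a Bearer token, so token refresh is transparent to command routing.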