Patent 10347248
Derivative works
Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.
Defensive Disclosure and Prior Art Generation
Publication Date: April 26, 2026
Subject: Derivative Works and Obvious Implementations of U.S. Patent 10,347,248
This document serves as a defensive publication to disclose concepts, systems, and methods that build upon, extend, or are obvious variations of the invention described in U.S. Patent 10,347,248 ("System and method for providing in-vehicle services via a natural language voice user interface"). The purpose of this disclosure is to place these concepts into the public domain, thereby establishing them as prior art against future patent applications for similar inventions.
Part 1: Derivative Variations of Core Claims (Claims 1 & 13)
The following disclosures describe technical variations of a system where a vehicle's telematics unit receives a natural language voice request, determines the vehicle's current geographic location, and provides a location-dependent service.
Axis 1: Material & Component Substitution
1.1. GNSS-Denied Location Determination via Multi-Sensor Fusion
Enabling Description: This variation replaces the sole reliance on a Global Navigation Satellite System (GNSS) receiver for location determination with a multi-sensor fusion system. The telematics unit processor executes a Kalman filter algorithm that fuses data from an onboard Inertial Measurement Unit (IMU) providing acceleration and gyroscopic data, wheel tick sensors measuring rotation and thus distance traveled, and a Vehicle-to-Infrastructure (V2I) communication module capable of triangulating the vehicle's position from fixed, short-range radio beacons (e.g., IEEE 802.11p DSRC or C-V2X). When a natural language request is received in a GNSS-denied environment such as a tunnel or urban canyon, the system uses the fused dead-reckoning and beacon-based position as the "current location" to determine the response. For example, the request "Take the next exit" inside a tunnel is processed using the dead-reckoned position to identify the upcoming tunnel exit and provide correct guidance.
Diagram:
```mermaid
graph TD
    A[Voice Request] --> B{Telematics Unit}
    C[IMU Sensor Data] --> D[Kalman Filter]
    E[Wheel Tick Data] --> D
    F[V2I Beacon Signals] --> D
    G[GNSS Data] -- Optional --> D
    D --> H{Fused Vehicle Location}
    H -- Location Context --> B
    B --> I[Process Request]
    I --> J[Provide In-Vehicle Service]
```
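The fusion step can be sketched as a minimal one-dimensional Kalman filter: wheel-tick odometry dead-reckons the position forward, and an occasional V2I beacon fix corrects it. The noise variances and the sample data below are illustrative assumptions, not values from the patent.

```python
# Minimal 1-D Kalman filter sketch: dead-reckoned position (wheel ticks /
# IMU) is corrected by occasional V2I beacon fixes. Noise variances and
# the sample data are illustrative assumptions.

def kalman_fuse(x, p, delta_odom, q, beacon_pos=None, r=4.0):
    """One predict/update cycle. x: position estimate, p: its variance,
    delta_odom: distance from wheel ticks since the last cycle, q: process
    noise variance, beacon_pos: optional V2I fix with variance r."""
    # Predict: dead-reckon forward; uncertainty grows.
    x, p = x + delta_odom, p + q
    if beacon_pos is not None:
        # Update: blend in the beacon fix, weighted by relative variance.
        k = p / (p + r)                      # Kalman gain
        x, p = x + k * (beacon_pos - x), (1 - k) * p
    return x, p

x, p = 0.0, 1.0
# (odometry delta, optional beacon fix) per cycle, e.g. inside a tunnel.
for odom, beacon in [(10.1, None), (9.8, 20.5), (10.3, None), (9.9, 41.0)]:
    x, p = kalman_fuse(x, p, odom, q=0.5, beacon_pos=beacon)
```

Note how the position variance shrinks each time a beacon fix arrives, which is exactly why the fused estimate stays usable in GNSS-denied stretches.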
1.2. Edge-Native NLU Processing with Federated Learning
Enabling Description: This derivative replaces the dependency on a remote, cloud-based server for Natural Language Understanding (NLU) with a dedicated edge computing module within the telematics system, such as an NVIDIA Jetson AGX Orin or a Google Coral Edge TPU. The entire speech-to-text and NLU model runs locally, ensuring operation in areas without network connectivity and enhancing user privacy. To improve the model over time, a federated learning framework is employed. The local model is updated based on user interactions, and only the resulting non-user-specific model updates (gradients) are encrypted and periodically sent to a central server for aggregation into an improved global model. This new global model is then pushed to the vehicle fleet as a software update. The "current location" is used locally by the edge processor to ground the NLU interpretation, for example, by prioritizing local place names in the speech recognition grammar.
Diagram:
```mermaid
sequenceDiagram
    participant User
    participant Edge_NLU_Module
    participant Vehicle_Systems
    participant Central_FL_Server
    User->>Edge_NLU_Module: "Find a nearby EV charger"
    Edge_NLU_Module->>Vehicle_Systems: Get Current GPS Location
    Vehicle_Systems-->>Edge_NLU_Module: (Lat, Lon)
    Edge_NLU_Module->>Edge_NLU_Module: Process NLU with location context
    Edge_NLU_Module->>Vehicle_Systems: Command: Display EV chargers near (Lat, Lon)
    Note over Edge_NLU_Module, Central_FL_Server: Periodically and Anonymously
    Edge_NLU_Module->>Central_FL_Server: Send encrypted model gradients
    Central_FL_Server->>Central_FL_Server: Aggregate gradients into new global model
    Central_FL_Server-->>Edge_NLU_Module: Push updated global model
```
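The aggregation step of the federated learning loop can be sketched as federated averaging: each vehicle uploads only a model delta, and the server takes the element-wise mean. The model is a plain list of floats here, and the gradients and learning rate are illustrative assumptions.

```python
# Federated averaging sketch: vehicles send only model deltas (derived
# from gradients), never raw audio; the server averages them into one
# global update. Model = list of floats for illustration.

def local_update(model, gradient, lr=0.1):
    """Simulated on-device training step; returns the delta to upload."""
    return [-lr * g for g in gradient]

def federated_average(deltas):
    """Server-side aggregation: element-wise mean of the vehicle deltas."""
    n = len(deltas)
    return [sum(d[i] for d in deltas) / n for i in range(len(deltas[0]))]

global_model = [0.0, 0.0]
# Deltas from three vehicles (gradient values are illustrative).
deltas = [local_update(global_model, g)
          for g in ([1.0, 2.0], [3.0, 0.0], [2.0, 1.0])]
update = federated_average(deltas)
global_model = [w + u for w, u in zip(global_model, update)]
```

In a real deployment the deltas would additionally be encrypted and possibly noised (differential privacy) before upload, as the description above implies.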
Axis 2: Operational Parameter Expansion
2.1. Ruggedized System for Extreme Industrial Environments
Enabling Description: The system is designed for operation in extreme temperatures (-40°C to +85°C) and high-vibration environments typical of mining haul trucks or military vehicles. The telematics processor is a passively cooled, industrial-grade System-on-Chip (SoC) enclosed in a vibration-dampened, IP67-rated housing. The voice input is captured by a multi-microphone array implementing adaptive beamforming and a spectral subtraction algorithm. This algorithm uses a pre-calibrated noise profile of the vehicle's specific machinery (e.g., diesel engine, hydraulics) to filter out predictable, high-decibel background noise before the voice signal is passed to the NLU engine. A driver's request like "What's my current payload weight?" is processed against the vehicle's location on a geo-fenced mine site map to correlate it with data from onboard weigh scales for that specific zone.
Diagram:
```mermaid
flowchart LR
    subgraph Vehicle
        A[Voice Request] --> B(Microphone Array)
        C[Engine/Hydraulic Noise] --> B
        B -- Raw Audio --> D{Spectral Subtraction Filter}
        D -- Cleaned Audio --> E{NLU Processor}
        F["Onboard Sensors<br/>e.g., Payload Scale"] --> E
        G["RTK-GPS<br/>Mine Site Location"] --> E
        E --> H[Provide Auditory/Visual Response]
    end
```
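The noise-filtering stage can be sketched as per-bin magnitude subtraction: the pre-calibrated machinery profile is subtracted from each audio frame's magnitude spectrum, clamped to a small floor so no bin goes negative. The spectra below are illustrative numbers, not calibration data.

```python
# Spectral subtraction sketch: subtract a pre-calibrated machinery noise
# profile from each frame's magnitude spectrum, clamping to a small floor
# to avoid negative magnitudes. All values are illustrative.

def spectral_subtract(frame_mags, noise_profile, floor=0.01):
    """frame_mags / noise_profile: per-frequency-bin magnitude spectra."""
    return [max(m - n, floor) for m, n in zip(frame_mags, noise_profile)]

# Pre-calibrated diesel-engine noise profile (assumed, one value per bin).
noise = [0.8, 0.6, 0.3, 0.1]
# One observed frame: voice energy mixed with machinery noise.
frame = [0.9, 1.6, 0.4, 0.9]
clean = spectral_subtract(frame, noise)
```

A production filter would operate on overlapping windowed FFT frames and smooth the noise estimate over time; only the core subtraction is shown.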
2.2. Fleet-Level Natural Language Command and Control
Enabling Description: The concept is scaled from a single vehicle to a logistics fleet management platform. A central dispatcher issues a single natural language command, such as "Reroute all trucks in downtown Boston to avoid the marathon route and find alternative parking." A central processing system ingests this command. It first uses a geocoding service to define the "downtown Boston marathon route" as a set of polygons. It then queries the real-time location of every vehicle in the fleet, identifying the subset within those polygons. For each identified vehicle, it generates a specific, machine-readable command (e.g., a new route plan and a query to a parking availability API), which is transmitted to that vehicle's individual telematics unit. Each truck then receives and acts upon its unique instruction.
Diagram:
```mermaid
graph TD
    A[Dispatcher NL Request] --> B{Fleet Management NLU}
    B --> C{"Geofence Definition<br/>(e.g., Marathon Route)"}
    B --> D{Query Fleet Locations}
    D -- List of all truck locations --> E{Identify Trucks in Geofence}
    E -- Truck ID 1, 2, 3... --> F{Generate Individual Rerouting & Parking Commands}
    F -- Command for Truck 1 --> G1[Telematics Unit 1]
    F -- Command for Truck 2 --> G2[Telematics Unit 2]
    F -- Command for Truck N --> Gn[Telematics Unit N]
```
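The "identify trucks in geofence" step can be sketched with a standard ray-casting point-in-polygon test, followed by per-truck command generation. The polygon, fleet positions, and command schema are illustrative assumptions.

```python
# Fleet geofencing sketch: a ray-casting point-in-polygon test selects the
# trucks inside the dispatcher-defined zone, and one command is generated
# per selected truck. Coordinates and schema are illustrative.

def point_in_polygon(pt, poly):
    """Standard ray-casting test; poly is a list of (x, y) vertices."""
    x, y, inside = pt[0], pt[1], False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

zone = [(0, 0), (10, 0), (10, 10), (0, 10)]     # "marathon route" polygon
fleet = {"truck1": (5, 5), "truck2": (20, 3), "truck3": (2, 9)}
commands = {tid: {"action": "reroute", "avoid": zone}
            for tid, pos in fleet.items() if point_in_polygon(pos, zone)}
```

Real deployments would use geodesic polygons and a spatial index for large fleets; the selection logic is the same.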
Axis 3: Cross-Domain Application
3.1. Aerospace: Phase-Aware Cockpit Voice Assistant
Enabling Description: The system is implemented in an aircraft avionics suite. The "current location" context is expanded to include not just GPS coordinates but also altitude, airspeed, and flight phase (e.g., taxi, takeoff, cruise, approach). A pilot's command, "Show me the weather ahead," is interpreted based on the aircraft's current flight vector and altitude. The system requests and displays weather radar data (e.g., from a NEXRAD feed via datalink) for the flight path 100 nautical miles ahead at the current flight level, rather than ground-level weather. A command like "Configure for ILS runway two-seven right" would use the aircraft's proximity to a specific airport to automatically tune the NAV radios to the correct frequency for that instrument landing system approach.
Diagram:
```mermaid
stateDiagram-v2
    [*] --> Taxi
    Taxi --> Takeoff: "Cleared for takeoff"
    Takeoff --> Climb: "Positive rate, gear up"
    Climb --> Cruise: Reaches cruising altitude
    Cruise --> Descent: "Begin descent to flight level one-zero-zero"
    Descent --> Approach: "Cleared for approach"
    state Approach {
        direction LR
        [*] --> ILS_Capture
        ILS_Capture --> Landing_Config: Pilot command "Configure for ILS..."
        note right of ILS_Capture
            System uses location (proximity to airport)
            to auto-tune NAV radios and display approach
            plates for the correct runway.
        end note
    }
    Approach --> Landing
    Landing --> Taxi
```
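The phase-aware interpretation can be sketched as a lookup that turns the same utterance into a different weather query depending on flight phase, altitude, and track. The phase table and look-ahead distances are illustrative assumptions, not avionics values.

```python
# Phase-aware interpretation sketch: "Show me the weather ahead" resolves
# to a radar query along the current flight vector, with a look-ahead
# window keyed to flight phase. Table values are illustrative.

LOOKAHEAD_NM = {"taxi": 0, "takeoff": 10, "climb": 50,
                "cruise": 100, "descent": 50, "approach": 10}

def weather_query(utterance, phase, altitude_ft, track_deg):
    """Build a weather-radar request for the path ahead at flight level."""
    if "weather" not in utterance.lower():
        return None
    return {"type": "radar",
            "range_nm": LOOKAHEAD_NM[phase],
            "level_ft": altitude_ft,    # current flight level, not ground
            "bearing_deg": track_deg}

q = weather_query("Show me the weather ahead", "cruise", 35000, 270)
```

The same dispatch pattern extends to commands like the ILS configuration example, keyed on proximity to a known airport instead of phase alone.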
3.2. AgTech: Precision Farming Voice Command System
Enabling Description: The technology is integrated into the control system of an autonomous tractor equipped with a high-precision Real-Time Kinematic (RTK) GPS receiver providing centimeter-level accuracy. A farmer issues a command, "Switch to soybean seeding profile and begin pass in Field 7." The system uses the tractor's RTK-GPS location to confirm it is within the geofence of "Field 7" as defined in the farm management information system (FMIS). It then queries the FMIS for the specific soil type and prescribed seeding rate for that field, automatically adjusts the connected seeder's parameters, and engages the autosteer system to execute the pre-planned path for that field.
Diagram:
```mermaid
sequenceDiagram
    participant Farmer
    participant Tractor_Voice_System
    participant FMIS_Database
    participant Tractor_Controls
    Farmer->>Tractor_Voice_System: "Begin planting soybeans in Field 7"
    Tractor_Voice_System->>Tractor_Controls: Get RTK-GPS Location
    Tractor_Controls-->>Tractor_Voice_System: (Lat, Lon, Alt)
    Tractor_Voice_System->>FMIS_Database: Query for parameters at (Lat, Lon) within "Field 7"
    FMIS_Database-->>Tractor_Voice_System: Soil type, Seeding Rate, Path Plan
    Tractor_Voice_System->>Tractor_Controls: Command: Set Seeder(Rate), Engage Autosteer(Path)
```
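The confirm-then-configure flow can be sketched as a boundary check on the RTK fix followed by a parameter lookup in a stand-in FMIS record. The field bounds, crop profile, and seeding rate below are illustrative assumptions.

```python
# Precision-farming sketch: confirm the tractor's RTK fix lies inside the
# named field's boundary, then fetch that field's prescribed seeding
# parameters. FMIS records and rates are illustrative assumptions.

FMIS = {  # minimal stand-in for a farm management information system
    "Field 7": {"bounds": (47.10, 47.12, -122.30, -122.27),  # S, N, W, E
                "crop_profiles": {"soybean": {"rate_seeds_per_acre": 140000}}},
}

def start_pass(field, crop, rtk_fix):
    lat, lon = rtk_fix
    s, n, w, e = FMIS[field]["bounds"]
    if not (s <= lat <= n and w <= lon <= e):
        return {"error": "tractor is outside " + field}  # refuse to engage
    profile = FMIS[field]["crop_profiles"][crop]
    return {"seeder_rate": profile["rate_seeds_per_acre"],
            "autosteer": "engaged"}

cmd = start_pass("Field 7", "soybean", (47.11, -122.28))
```

A real FMIS would prescribe variable rates per soil zone rather than one rate per field; the geofence gate is the safety-relevant part.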
Axis 4: Integration with Emerging Tech
4.1. AI-Driven Proactive Itinerary Management
Enabling Description: The system is integrated with an AI-powered predictive engine that analyzes the user's calendar, traffic data, and historical travel patterns. The system does not wait for a voice request. Upon starting the vehicle, the AI engine determines the most probable destination (e.g., "Office" for a weekday morning). It uses the vehicle's location and real-time traffic data from an API (e.g., Google Maps, Waze) to calculate the ETA. If the ETA is later than the start of the first calendar appointment, the system proactively initiates a dialogue: "Good morning. Traffic is heavy on I-5. Your ETA to the office is 9:15 AM, which is after your 9:00 AM meeting. Would you like me to join the meeting audio call via your phone and send a 'running late' message?"
Diagram:
```mermaid
flowchart TD
    A[Vehicle Start] --> B{AI Predictive Engine}
    B -- Polls --> C[Calendar API]
    B -- Polls --> D[User Travel History]
    B --> E{Determine Probable Destination}
    F[Vehicle GPS] --> G{Traffic API}
    E --> G
    G --> H{Calculate ETA}
    C -- Meeting Time --> I{Compare ETA vs. Appointment}
    H --> I
    I -- If ETA > Appointment Time --> J(Proactive Voice Prompt)
    J --> K[User Response]
```
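The decision step, comparing the traffic-adjusted ETA against the first appointment and triggering the dialogue only when the driver will be late, can be sketched as follows. The departure time, drive time, and prompt wording are illustrative assumptions.

```python
# Proactive-prompt sketch: compare the traffic-adjusted ETA against the
# first calendar appointment; return a dialogue trigger only when late.
# Times and prompt wording are illustrative.

from datetime import datetime, timedelta

def proactive_check(depart, drive_minutes, first_meeting):
    eta = depart + timedelta(minutes=drive_minutes)
    if eta <= first_meeting:
        return None                        # on time: stay silent
    return ("Your ETA is {}, after your {} meeting. "
            "Join the call and send a 'running late' message?"
            .format(eta.strftime("%H:%M"), first_meeting.strftime("%H:%M")))

prompt = proactive_check(datetime(2026, 4, 27, 8, 30), 45,
                         datetime(2026, 4, 27, 9, 0))
```

Returning `None` when on time matters: a proactive assistant that speaks unnecessarily trains users to ignore it.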
4.2. Blockchain-Verified Logistics and Handover
Enabling Description: This variation is for supply chain and logistics. The vehicle telematics system includes a cryptographic wallet and client for a permissioned blockchain (e.g., Hyperledger Fabric). When the driver arrives at a pickup location and gives the voice command, "Log pickup of pallet A-7," the system uses its precise GPS location to confirm it is at the correct, geo-fenced warehouse. It then queries the manifest for an item matching "pallet A-7" scheduled for that location. The system generates a transaction on the blockchain, recording the item ID, timestamp, and GPS coordinates. The warehouse operator confirms the handover on their own device, which digitally co-signs the transaction, creating an immutable, auditable record of the chain of custody transfer.
Diagram:
```mermaid
graph TD
    subgraph Vehicle
        A["Driver: 'Log pickup of pallet A-7'"] --> B{Telematics NLU}
    end
    subgraph Warehouse
        C[Operator Device]
    end
    subgraph Blockchain
        D[Distributed Ledger]
    end
    B -- GPS, Request --> E{Transaction Proposal}
    E --> C
    C -- Operator Confirms --> F{Digital Signature}
    E -- Vehicle Signature --> G{Signed Transaction}
    F -- Operator Signature --> G
    G --> H{Commit to Ledger}
    H --> D
```
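The co-signed custody record can be sketched as a hash-chained block carrying both parties' signatures. HMAC stands in for the asymmetric signatures a permissioned blockchain would actually use, and the keys and record fields are illustrative.

```python
# Chain-of-custody sketch: both parties "sign" the pickup record and the
# block hash chains to the previous block. HMAC stands in for real
# asymmetric signatures; keys and record data are illustrative.

import hashlib, hmac, json

def sign(key, record):
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def make_block(prev_hash, record, vehicle_key, operator_key):
    block = {"prev": prev_hash, "record": record,
             "sig_vehicle": sign(vehicle_key, record),
             "sig_operator": sign(operator_key, record)}
    # The block hash covers the record, both signatures, and the link
    # to the previous block, making tampering detectable.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

record = {"item": "pallet A-7", "ts": "2026-04-26T10:00:00Z",
          "gps": [47.6, -122.3]}
block = make_block("0" * 64, record, b"vehicle-key", b"operator-key")
```

On a platform like Hyperledger Fabric the endorsement policy, not application code, would enforce that both signatures are present before commit.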
Axis 5: The "Inverse" or Failure Mode
5.1. Graceful Degradation to On-Device Command Grammar
Enabling Description: The system is designed for high availability and safe failure. Under normal operation with a stable 5G/6G connection, it uses a powerful cloud-based NLU service for a full conversational experience. The system continuously monitors network latency and packet loss. If these metrics exceed a predefined threshold for more than a few seconds, it determines the network is unreliable. It then seamlessly transitions to a "limited mode." In this mode, it loads a smaller, on-device speech recognition grammar that only recognizes a fixed set of critical commands (e.g., "Navigate to home," "Call contact [name]," "Increase temperature"). It announces the state change to the user: "Network connection is poor. Switching to basic commands." The location determination remains fully functional via the onboard GNSS receiver, ensuring core navigation commands are always available. When the network connection becomes stable again, it reverts to full NLU mode.
Diagram:
```mermaid
stateDiagram-v2
    state "Full NLU Mode (Cloud)" as Full
    state "Limited Mode (On-Device)" as Limited
    [*] --> Full
    Full --> Limited: Network Unreliable
    Limited --> Full: Network Stable
    state Full {
        direction LR
        [*] --> Awaiting_Request
        Awaiting_Request --> Processing_Request: NL Voice Request
        Processing_Request --> Cloud_NLU: Send Audio
        Cloud_NLU --> Processing_Request: Return Intent
        Processing_Request --> Executing_Service
        Executing_Service --> Awaiting_Request
    }
    state Limited {
        direction LR
        [*] --> Awaiting_Command
        Awaiting_Command --> Processing_Command: Fixed Voice Command
        Processing_Command --> On_Device_Grammar: Match Command
        On_Device_Grammar --> Processing_Command: Return Match
        Processing_Command --> Executing_Service_Limited
        Executing_Service_Limited --> Awaiting_Command
    }
```
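The mode switch can be sketched as a two-state monitor driven by latency and packet-loss thresholds. The thresholds and samples are illustrative assumptions, and the "for more than a few seconds" debounce window described above is omitted for brevity.

```python
# Graceful-degradation sketch: a monitor switches between cloud NLU and an
# on-device grammar based on latency/packet-loss thresholds. Thresholds
# and samples are illustrative; debouncing is omitted for brevity.

LATENCY_MS_MAX, LOSS_MAX = 300, 0.05

class NluModeMonitor:
    def __init__(self):
        self.mode = "full"                 # cloud NLU by default

    def observe(self, latency_ms, packet_loss):
        bad = latency_ms > LATENCY_MS_MAX or packet_loss > LOSS_MAX
        if bad and self.mode == "full":
            self.mode = "limited"          # fall back to on-device grammar
        elif not bad and self.mode == "limited":
            self.mode = "full"             # network recovered
        return self.mode

m = NluModeMonitor()
# (latency_ms, packet_loss) samples: good, bad, bad, good.
modes = [m.observe(*s) for s in [(80, 0.01), (900, 0.2), (700, 0.1), (60, 0.0)]]
```

A production monitor would require the condition to persist across several samples before switching, and would announce each transition to the user as described above.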
Part 2: Combination with Open-Source Standards
2.1. Combination with Automotive Grade Linux (AGL) and Geoclue
- Enabling Description: The method described in patent 10,347,248 is implemented as a software service within the Automotive Grade Linux (AGL) operating system. The Natural Language Voice User Interface is an AGL application that binds to the AGL application framework. Upon receiving a voice utterance, the application makes an asynchronous D-Bus call to the `org.freedesktop.GeoClue2.Manager` service, a standard component for location services in modern Linux systems. GeoClue provides the vehicle's current location, which the application then uses to contextualize the NLU processing and fulfill the request via other AGL services (e.g., the navigation or media player application). This makes the invention a predictable integration module for any vehicle manufacturer using the open-source AGL platform.
2.2. Combination with the RISC-V Instruction Set Architecture
- Enabling Description: The telematics processor that executes the claimed method is built upon the open-source RISC-V instruction set architecture (ISA). To accelerate the NLU processing, the processor core implements a custom instruction set extension for vector processing and dot-product operations, which are fundamental to running neural network inference for speech recognition and language models. The software instructions stored on the computer-readable medium, as claimed in the patent, are compiled specifically for this RISC-V core with the custom extensions. This combination discloses the implementation of the patent's method on an open, customizable hardware standard, making the specific hardware/software co-design obvious to one skilled in the art of embedded systems design.
2.3. Combination with the MQTT Protocol for IoT Integration
- Enabling Description: The system is integrated into a broader vehicle IoT architecture using the open-source MQTT (Message Queuing Telemetry Transport) protocol. The telematics unit acts as an MQTT client. When a voice request is received and interpreted, the system publishes a JSON-formatted message to a specific MQTT topic, for example, `vehicle/vin123/voice/intent`. The message payload contains the user's intent, the extracted entities, and the location context (e.g., `{"intent": "find_parking", "location": {"lat": 47.6, "lon": -122.3}}`). Any other system in the vehicle or in the cloud that is subscribed to this topic (e.g., a parking service module, a fleet management dashboard) can then act on this information. This decouples the voice system from the service-providing system and makes the invention a component in a standardized, message-based architecture.
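The topic and payload construction described above can be sketched as follows; a real deployment would hand the result to an MQTT client library (e.g., paho-mqtt's `client.publish(topic, payload)`), so only message construction is shown here.

```python
# MQTT message sketch: build the voice-intent topic and JSON payload
# described in the disclosure. Only construction is shown; publishing
# would be done by an MQTT client library such as paho-mqtt.

import json

def build_intent_message(vin, intent, entities, lat, lon):
    topic = "vehicle/{}/voice/intent".format(vin)
    payload = json.dumps({"intent": intent,
                          "entities": entities,
                          "location": {"lat": lat, "lon": lon}})
    return topic, payload

topic, payload = build_intent_message("vin123", "find_parking", {}, 47.6, -122.3)
```

Putting the VIN in the topic path lets subscribers filter per vehicle with a wildcard subscription such as `vehicle/+/voice/intent`.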