Patent 11573939
Derivative works
Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.
Defensive Disclosure and Prior Art Generation
Publication Date: May 13, 2026
Subject: Derivative Works and Obvious Implementations of Predictive, Hierarchical Text Input on Consumer Electronics
Reference Patent: US 11,573,939 B2 ("the '939 patent")
Technology Area: Human-Computer Interaction, Database Search, Consumer Electronics UI/UX.
This document discloses a series of technical implementations, variations, and combinations that build upon the core principles of the '939 patent. The purpose of this disclosure is to place these variations into the public domain, thereby establishing them as prior art for any future patent applications in this domain. A person having ordinary skill in the art (POSITA) in software engineering, embedded systems, and user interface design would find these variations to be logical and obvious extensions of the referenced art.
Axis 1: Material & Component Substitution
Derivative 1.1: System with Haptic Feedback Remote Control
Enabling Description: This variation replaces the standard remote control keypad with a device incorporating a programmable haptic engine, such as a Linear Resonant Actuator (LRA) or an eccentric rotating mass (ERM) motor. The television's computer processor is configured to communicate with the remote over a low-latency bidirectional protocol (e.g., Bluetooth 5.2 with Isochronous Channels). As the user navigates the circular menu, the processor sends specific haptic commands corresponding to UI events. For example, moving focus from one menu segment to another triggers a short, sharp "click" vibration (waveform ID 0x01), while attempting to navigate past the end of a list triggers a longer, soft "buzz" (waveform ID 0x02). Selecting an item triggers a distinct confirmation pulse. This provides non-visual feedback, improving usability for visually impaired users or in low-light conditions. The haptic waveform library is stored on the remote's microcontroller and triggered by commands from the TV's processor.
Architectural Diagram:
sequenceDiagram
    participant User
    participant HapticRemote
    participant TV_Processor
    participant TV_Display
    User->>HapticRemote: Presses 'Right' on D-Pad
    HapticRemote->>TV_Processor: Sends Input_Event (Keycode: D-PAD_RIGHT)
    TV_Processor->>TV_Display: Update UI: Move focus in circular menu
    TV_Processor->>HapticRemote: Send Haptic_Command (Waveform_ID: 0x01, Duration: 50ms)
    HapticRemote->>User: Play "click" vibration
    User->>HapticRemote: Presses 'Select'
    HapticRemote->>TV_Processor: Sends Input_Event (Keycode: SELECT)
    TV_Processor->>TV_Display: Update UI: Confirm selection, load next menu
    TV_Processor->>HapticRemote: Send Haptic_Command (Waveform_ID: 0x0A, Duration: 150ms)
    HapticRemote->>User: Play "confirmation" vibration
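Illustrative Code Sketch: a minimal, software-only model of the TV-side event-to-haptic mapping described above. The HapticLink class is a hypothetical stand-in for the bidirectional BLE transport; the waveform IDs come from the enabling description, while the "buzz" duration is an assumed value.

# Sketch of the TV-side logic mapping circular-menu UI events to haptic
# commands (Derivative 1.1). HapticLink is a hypothetical transport stub.
from dataclasses import dataclass
from enum import Enum


class UiEvent(Enum):
    FOCUS_MOVED = "focus_moved"      # focus shifted to an adjacent segment
    END_OF_LIST = "end_of_list"      # navigation past the last segment
    ITEM_SELECTED = "item_selected"  # user confirmed a selection


@dataclass(frozen=True)
class HapticCommand:
    waveform_id: int   # index into the waveform library on the remote's MCU
    duration_ms: int


# Event-to-waveform mapping per the description; END_OF_LIST duration assumed.
HAPTIC_MAP = {
    UiEvent.FOCUS_MOVED: HapticCommand(waveform_id=0x01, duration_ms=50),    # "click"
    UiEvent.END_OF_LIST: HapticCommand(waveform_id=0x02, duration_ms=120),   # "buzz"
    UiEvent.ITEM_SELECTED: HapticCommand(waveform_id=0x0A, duration_ms=150), # confirm
}


class HapticLink:
    """Hypothetical bidirectional link to the remote (e.g., over BLE)."""

    def send(self, command: HapticCommand) -> None:
        # A real implementation would serialize onto an isochronous channel;
        # here we just log the outgoing command.
        print(f"-> remote: waveform=0x{command.waveform_id:02X}, "
              f"duration={command.duration_ms}ms")


def on_ui_event(event: UiEvent, link: HapticLink) -> None:
    """Forward the haptic command associated with a UI event, if any."""
    command = HAPTIC_MAP.get(event)
    if command is not None:
        link.send(command)


if __name__ == "__main__":
    link = HapticLink()
    on_ui_event(UiEvent.FOCUS_MOVED, link)    # user moves right in the menu
    on_ui_event(UiEvent.ITEM_SELECTED, link)  # user confirms a segment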
Derivative 1.2: System with Gaze-Tracking and Voice Command Input
Enabling Description: The remote control keypad is substituted with a multi-modal input system. A compact, low-power infrared (IR) camera module and an IR LED array are integrated into the television's bezel. A dedicated co-processor or a software module running on the main processor executes a gaze-tracking algorithm (e.g., based on a Convolutional Neural Network like GazeNet) to determine the user's point of regard on the screen in real-time. As the user's gaze dwells on a segment of the circular menu for a predetermined threshold (e.g., 500 ms), that segment is highlighted. Final selection is triggered by a voice command (e.g., "Select," "Choose," "OK") captured by a far-field microphone array and processed by an onboard natural language understanding (NLU) engine. This implementation frees the user from a physical input device.
Data Flow Diagram:
graph TD
    A[IR Camera Module] --> B{Gaze Tracking Processor};
    C[Far-Field Mic Array] --> D{NLU Engine};
    B --> E{Main TV Processor};
    D --> E;
    E --> F[UI Renderer];
    F --> G[Television Display];
    subgraph "Input Subsystem"
        A; C;
    end
    subgraph "Processing Subsystem"
        B; D; E; F;
    end
    subgraph "Output"
        G;
    end
    E -- Gaze Coordinates --> F;
    E -- Voice Command Confirmation --> F;
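Illustrative Code Sketch: a minimal model of the dwell-threshold highlighting loop. The gaze-sample source and the eight-segment menu geometry are assumptions; the 500 ms dwell threshold is taken from the enabling description.

# Sketch of dwell-based segment highlighting for Derivative 1.2.
import math

DWELL_THRESHOLD_S = 0.5  # 500 ms, per the enabling description
NUM_SEGMENTS = 8         # assumed segment count for the circular menu


def segment_for_gaze(x: float, y: float, cx: float, cy: float) -> int:
    """Map a gaze point to one of NUM_SEGMENTS pie-slice segments around
    the menu center (cx, cy)."""
    angle = math.atan2(y - cy, x - cx) % (2 * math.pi)
    return int(angle / (2 * math.pi / NUM_SEGMENTS))


def dwell_loop(gaze_samples, cx=960.0, cy=540.0):
    """Consume (timestamp, x, y) gaze samples; yield a segment index once
    gaze has dwelled on it for DWELL_THRESHOLD_S."""
    current, dwell_start = None, None
    for ts, x, y in gaze_samples:
        seg = segment_for_gaze(x, y, cx, cy)
        if seg != current:
            current, dwell_start = seg, ts   # gaze moved; restart timer
        elif ts - dwell_start >= DWELL_THRESHOLD_S:
            yield seg                        # highlight this segment
            dwell_start = ts                 # avoid re-firing every frame


if __name__ == "__main__":
    # Synthetic 60 Hz samples: gaze parked to the right of screen center.
    samples = [(i / 60.0, 1400.0, 540.0) for i in range(60)]
    for seg in dwell_loop(samples):
        print(f"highlight segment {seg}; awaiting voice command to select")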
Axis 2: Operational Parameter Expansion
Derivative 2.1: System for High-Frequency Trading (HFT) Data Navigation
Enabling Description: The invention is scaled to operate in a high-performance computing environment for financial data analysis. The "database" is a live, in-memory tick database (e.g., kdb+) storing petabytes of market data. The "television display" is a multi-monitor trading station with a 240Hz refresh rate. The "processor" is a server-grade CPU coupled with an FPGA for hardware-accelerated search. The hierarchical index tree is loaded into the FPGA's block RAM. User input from a specialized keypad is processed with sub-millisecond latency. The circular menu is rendered as a radial graph showing asset classes (Equities, Futures, FX). Successive selections narrow the context to specific exchanges, then to specific ticker symbols. The system resolves a complete selection path in under 50 microseconds to enable real-time drill-down into market microstructure data during active trading.
State Diagram:
stateDiagram-v2
    [*] --> Idle
    Idle --> Navigating_Asset_Class: Input Event
    Navigating_Asset_Class --> Navigating_Exchange: Selection Confirmed
    Navigating_Exchange --> Navigating_Ticker: Selection Confirmed
    Navigating_Ticker --> Displaying_Data: Selection Confirmed
    Displaying_Data --> Idle: Timeout or 'Back' Event
    note right of Navigating_Asset_Class
        FPGA traverses root nodes of index tree. Latency < 10 µs.
    end note
    note right of Displaying_Data
        Stream real-time Level 2 data for selected ticker.
    end note
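Illustrative Code Sketch: a software-only model of the Asset Class → Exchange → Ticker drill-down. In the described system this index tree resides in FPGA block RAM; the instruments and venues below are illustrative placeholders.

# Software model of the hierarchical index for Derivative 2.1.
INDEX_TREE = {
    "Equities": {
        "NYSE": ["IBM", "GE", "JPM"],
        "NASDAQ": ["AAPL", "MSFT", "NVDA"],
    },
    "Futures": {
        "CME": ["ES", "NQ", "CL"],
    },
    "FX": {
        "EBS": ["EUR/USD", "USD/JPY"],
    },
}


def resolve(path):
    """Walk the index tree along a selection path; each element of `path`
    corresponds to one circular-menu selection."""
    node = INDEX_TREE
    for selection in path:
        if isinstance(node, dict):
            node = node[selection]
        else:
            raise KeyError(f"{selection!r} is below a leaf level")
    return node


if __name__ == "__main__":
    # Each call corresponds to one rendered circular menu.
    print(sorted(INDEX_TREE))               # level 1: asset classes
    print(sorted(resolve(["Equities"])))    # level 2: exchanges
    print(resolve(["Equities", "NASDAQ"]))  # level 3: tickers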
Axis 3: Cross-Domain Application
Derivative 3.1 (Aerospace): Cockpit Avionics Fault Isolation System
Enabling Description: The system is implemented on an ARINC 661-compliant Cockpit Display System (CDS). The "processor" is a DAL-A certified flight computer. The "remote control" is a Hands On Throttle-And-Stick (HOTAS) control, specifically a 4-way hat switch plus a selection button. The "database" is the aircraft's Central Maintenance Computer (CMC) fault log, which is continuously updated by the Aircraft Condition Monitoring System (ACMS). When a fault is annunciated, the pilot can activate the search interface on a Multi-Function Display (MFD). The circular menu presents top-level ATA chapters (e.g., ATA 27 - Flight Controls, ATA 29 - Hydraulic Power). Subsequent selections navigate through systems and subsystems to the specific Line Replaceable Unit (LRU) reporting the fault, and finally display the relevant electronic checklist (ECL) procedure.
Architectural Diagram:
graph TD
    subgraph HOTAS
        A[4-Way Hat Switch]
        B[Select Button]
    end
    subgraph "DAL-A Flight Computer"
        C[ARINC 661 Server]
        D[Fault Isolation Logic]
        E[CMC Database I/F]
    end
    subgraph "Cockpit Displays"
        F[Multi-Function Display]
    end
    G[Central Maintenance Computer]
    A --> D
    B --> D
    D <--> C
    C --> F
    D <--> E
    E <--> G
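Illustrative Code Sketch: a minimal model of the chapter → system → LRU drill-down over active fault records. The fault entries and checklist references below are fabricated placeholders, not drawn from any real CMC.

# Sketch of the fault-isolation drill-down for Derivative 3.1.
from dataclasses import dataclass


@dataclass(frozen=True)
class Fault:
    ata_chapter: str    # e.g., "ATA 29 - Hydraulic Power"
    system: str         # e.g., "Green System"
    lru: str            # Line Replaceable Unit reporting the fault
    ecl_procedure: str  # electronic checklist reference


ACTIVE_FAULTS = [
    Fault("ATA 29 - Hydraulic Power", "Green System", "EDP 1", "ECL-29-11-01"),
    Fault("ATA 27 - Flight Controls", "Spoilers", "SEC 2", "ECL-27-64-03"),
]


def menu_levels(faults, *selections):
    """Return the circular-menu entries for the next level, given the
    selections made so far (chapter, then system, then LRU)."""
    keys = ("ata_chapter", "system", "lru")
    filtered = [f for f in faults
                if all(getattr(f, k) == s for k, s in zip(keys, selections))]
    if len(selections) < len(keys):
        return sorted({getattr(f, keys[len(selections)]) for f in filtered})
    return [f.ecl_procedure for f in filtered]  # leaf: checklist to display


if __name__ == "__main__":
    print(menu_levels(ACTIVE_FAULTS))                              # chapters
    print(menu_levels(ACTIVE_FAULTS, "ATA 29 - Hydraulic Power"))  # systems
    print(menu_levels(ACTIVE_FAULTS, "ATA 29 - Hydraulic Power",
                      "Green System", "EDP 1"))                    # ECL ref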
Derivative 3.2 (AgTech): Field-Deployable Genetic Sequencer Interface
Enabling Description: The system is applied to a portable, ruggedized DNA/RNA sequencer used for in-field crop or livestock pathogen identification. The "display" is a 5-inch sunlight-readable LCD. The "remote control" is a set of environmentally sealed physical buttons (Up, Down, Left, Right, Select). The "database" is a specialized, compressed database of known pathogen genomes (e.g., stored on a local SSD). A field technician uses the interface to identify a sample. The first menu layer allows selection of the organism type ("Virus," "Bacteria," "Fungus"). Subsequent selections narrow down by family, genus, and finally present a list of probable species matches based on the sequencing run, with confidence scores. This allows for rapid, on-site diagnostics without requiring a laptop or extensive bioinformatics training.
Flowchart:
graph TD
    Start((Start Sequencing Run)) --> A{Load Pathogen DB};
    A --> B{Display Organism Menu};
    B --> C{User Selects Type};
    C --> D{Filter DB by Type};
    D --> E{Display Family Menu};
    E --> F{User Selects Family};
    F --> G{Filter DB by Family};
    G --> H{...Further Navigation...};
    H --> I{Display Probable Species};
    I --> J((Show Results & Confidence));
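Illustrative Code Sketch: a minimal model of the successive-filtering flow above. The pathogen records and confidence scores are fabricated placeholders standing in for the compressed on-device database.

# Sketch of the type -> family -> species filtering for Derivative 3.2.
PATHOGEN_DB = [
    {"type": "Fungus", "family": "Mycosphaerellaceae",
     "species": "Zymoseptoria tritici", "confidence": 0.93},
    {"type": "Fungus", "family": "Pucciniaceae",
     "species": "Puccinia graminis", "confidence": 0.41},
    {"type": "Virus", "family": "Potyviridae",
     "species": "Potato virus Y", "confidence": 0.12},
]


def next_menu(records, **selected):
    """Filter the DB by the selections made so far and return the entries
    for the next menu level (type -> family -> ranked species)."""
    hits = [r for r in records
            if all(r[k] == v for k, v in selected.items())]
    for level in ("type", "family"):
        if level not in selected:
            return sorted({r[level] for r in hits})
    # Leaf level: probable species matches, ranked by confidence.
    return sorted(((r["species"], r["confidence"]) for r in hits),
                  key=lambda sc: -sc[1])


if __name__ == "__main__":
    print(next_menu(PATHOGEN_DB))                 # organism types
    print(next_menu(PATHOGEN_DB, type="Fungus"))  # families
    print(next_menu(PATHOGEN_DB, type="Fungus",
                    family="Mycosphaerellaceae")) # ranked species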
Axis 4: Integration with Emerging Tech
Derivative 4.1 (AI Integration): Context-Aware Predictive Menu
Enabling Description: This variation integrates the system with a lightweight, on-device AI model (e.g., a distilled Transformer or a Gated Recurrent Unit network) running on a neural processing unit (NPU) within the TV's SoC. The model is trained on the user's viewing history, search queries, time of day, and even the genre of the currently playing content (obtained via audio or video content recognition). The processor queries the AI model to obtain a ranked list of predicted "next query fragments." Instead of presenting a static or simple frequency-ranked list of identifier parts, the circular menu is dynamically populated with the highest-probability fragments from the AI model. For example, if the user is watching a cooking show, the system pre-populates the menu with fragments such as "Recipe," "Chicken," "Oven," and "Italian."
Sequence Diagram:
sequenceDiagram
    participant User
    participant TV_Processor
    participant AI_NPU
    participant TV_Display
    User->>TV_Processor: Initiates Search
    TV_Processor->>AI_NPU: Request Predictions(Context: 'Cooking Show')
    AI_NPU-->>TV_Processor: Return Ranked List: ['Recipe', 'Chicken', 'Oven']
    TV_Processor->>TV_Display: Generate Circular Menu with AI-ranked items
    User->>TV_Processor: Selects 'Recipe'
    TV_Processor->>AI_NPU: Request Predictions(Context: 'Cooking Show', Prefix: 'Recipe')
    AI_NPU-->>TV_Processor: Return Ranked List: [' for pasta', ' easy', ' vegetarian']
    TV_Processor->>TV_Display: Generate next Circular Menu
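Illustrative Code Sketch: the processor/NPU handshake modeled in software. A toy conditional-frequency table stands in for the on-device Transformer/GRU; the contexts, prefixes, and scores are illustrative assumptions.

# Sketch of the prediction-driven menu population for Derivative 4.1.
TOY_MODEL = {
    ("Cooking Show", ""): [("Recipe", 0.42), ("Chicken", 0.21),
                           ("Oven", 0.11), ("Italian", 0.08)],
    ("Cooking Show", "Recipe"): [(" for pasta", 0.35), (" easy", 0.30),
                                 (" vegetarian", 0.18)],
}


def predict_fragments(context: str, prefix: str = "", k: int = 4):
    """Return the top-k predicted next query fragments for this context,
    mimicking the Request Predictions call in the sequence diagram."""
    candidates = TOY_MODEL.get((context, prefix), [])
    return [frag for frag, _score in
            sorted(candidates, key=lambda c: -c[1])[:k]]


def populate_circular_menu(context: str, prefix: str = "") -> None:
    """Render one menu level from the model's ranked fragments."""
    for slot, fragment in enumerate(predict_fragments(context, prefix)):
        print(f"segment {slot}: {fragment!r}")


if __name__ == "__main__":
    populate_circular_menu("Cooking Show")                   # initial menu
    populate_circular_menu("Cooking Show", prefix="Recipe")  # after selection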
Axis 5: The "Inverse" or Failure Mode
Derivative 5.1: Graceful Degradation for Networked Database Search
Enabling Description: The system is designed for a media player that primarily searches a cloud-based content library but maintains a small local cache of content or metadata. A network monitoring daemon runs on the processor, constantly checking network latency and connectivity. If the connection to the cloud database is lost or exceeds a latency threshold (e.g., >1000ms), the system enters a "degraded" state. In this state, the search and selection logic is re-routed to a secondary index tree representing only the locally cached content. The UI renderer is simultaneously instructed to apply a different visual theme to the circular menu (e.g., changing its color to amber and displaying a "Local Content Only" watermark), clearly indicating the limited functionality to the user. When network connectivity is restored, the system seamlessly transitions back to using the primary cloud index.
State Diagram:
stateDiagram-v2
    [*] --> Online
    Online --> Degraded : Network Loss / High Latency
    Degraded --> Online : Network Restored
    note right of Online
        Searching full cloud database
    end note
    note right of Degraded
        Searching local cache only
    end note
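Illustrative Code Sketch: a minimal model of the failover routing. The probe function is a hypothetical hook into the network monitoring daemon; the 1000 ms threshold comes from the enabling description.

# Sketch of the index-failover logic for Derivative 5.1.
LATENCY_THRESHOLD_MS = 1000.0  # per the enabling description

CLOUD_INDEX = {"source": "cloud", "theme": "default"}
LOCAL_INDEX = {"source": "local cache", "theme": "amber + watermark"}


def probe_cloud_latency_ms() -> float:
    """Hypothetical network-daemon probe; returns measured round-trip
    latency in ms, or float('inf') if the cloud DB is unreachable."""
    return float("inf")  # simulate a network outage for the demo


def select_active_index():
    """Route search to the cloud index when healthy, else degrade to the
    local-cache index and signal the UI to re-theme the circular menu."""
    latency = probe_cloud_latency_ms()
    if latency > LATENCY_THRESHOLD_MS:
        print("degraded: Local Content Only (amber theme)")
        return LOCAL_INDEX
    return CLOUD_INDEX


if __name__ == "__main__":
    index = select_active_index()
    print(f"searching {index['source']} with theme {index['theme']}")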
Combination Prior Art Scenarios with Open-Source Standards
Scenario 1: Combination with WebThings API (W3C Standard)
- Description: The television acts as a "Web Thing" gateway, exposing its search functionality via the W3C Web of Things (WoT) Thing Description standard. The selection system of the '939 patent is used to navigate and control other devices in the WoT network. A user can navigate through a menu of available "Things" (e.g., "Living Room Light," "Thermostat"), and subsequent selections map to "Actions" defined in that Thing's Description (e.g., "toggle," "setTemperature"). The selection of an action and its parameters is performed using the circular menu interface. This combines the patented UI method with an open standard for IoT interoperability.
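Illustrative Code Sketch: mapping W3C WoT Thing Descriptions onto circular-menu levels. The two Thing Descriptions below are minimal hand-written examples, not normative documents.

# Sketch of Thing/Action menu generation for Scenario 1.
import json

THING_DESCRIPTIONS = [json.loads(td) for td in ("""
{
  "title": "Living Room Light",
  "actions": {
    "toggle": {"description": "Flip the light on/off"}
  }
}
""", """
{
  "title": "Thermostat",
  "actions": {
    "setTemperature": {
      "input": {"type": "number", "minimum": 10, "maximum": 30}
    }
  }
}
""")]


def things_menu(tds):
    """Level 1: one circular-menu segment per discovered Thing."""
    return [td["title"] for td in tds]


def actions_menu(tds, thing_title):
    """Level 2: the Actions defined in the selected Thing Description."""
    td = next(t for t in tds if t["title"] == thing_title)
    return sorted(td.get("actions", {}))


if __name__ == "__main__":
    print(things_menu(THING_DESCRIPTIONS))                 # level 1: Things
    print(actions_menu(THING_DESCRIPTIONS, "Thermostat"))  # level 2: Actions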
Scenario 2: Combination with Kodi™ Media Center (Open Source Software)
- Description: The patented search and selection method is implemented as an add-on for the open-source Kodi media center software. The add-on hooks into Kodi's JSON-RPC API. It builds its hierarchical index tree by querying the Kodi library for media metadata (movies, TV shows, artists, etc.). The circular menu UI is rendered using Kodi's Python-based skinning engine. Selections made by the user are translated into JSON-RPC commands to filter the library or initiate playback. This constitutes a direct implementation of the invention within a major, pre-existing open-source platform.
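Illustrative Code Sketch: the add-on's library queries sent to Kodi's JSON-RPC endpoint. The host, port, and filter values are deployment-specific assumptions; this requires the `requests` package and Kodi's web server to be enabled.

# Sketch of the JSON-RPC drill-down queries for Scenario 2.
import requests

KODI_URL = "http://127.0.0.1:8080/jsonrpc"  # assumed default web-server port


def kodi_rpc(method: str, params: dict) -> dict:
    """Send one JSON-RPC request to Kodi and return its result object."""
    payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    response = requests.post(KODI_URL, json=payload, timeout=5)
    response.raise_for_status()
    return response.json().get("result", {})


def movies_for_menu(genre: str, title_prefix: str):
    """One circular-menu drill-down step: genre, then leading letter(s)."""
    movie_filter = {"and": [
        {"field": "genre", "operator": "contains", "value": genre},
        {"field": "title", "operator": "startswith", "value": title_prefix},
    ]}
    result = kodi_rpc("VideoLibrary.GetMovies",
                      {"filter": movie_filter, "properties": ["title"]})
    return [m["title"] for m in result.get("movies", [])]


def play(movieid: int) -> None:
    """Translate the final circular-menu selection into playback."""
    kodi_rpc("Player.Open", {"item": {"movieid": movieid}})


if __name__ == "__main__":
    for title in movies_for_menu("Action", "T"):
        print(title)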
Scenario 3: Combination with an Open-Source Voice Assistant (e.g., Mycroft)
- Description: The system is integrated with a Mycroft Core instance running on the television or a connected device. The circular menu navigation can be controlled via voice commands (e.g., "Mycroft, navigate right," "Mycroft, select item"). Furthermore, a Mycroft "skill" is created where the voice assistant can query the hierarchical index directly. A user could say, "Mycroft, search for action movies starting with T," and Mycroft would programmatically traverse the index tree ("Action" -> "T") and display the corresponding circular menu on the screen for the user to complete the selection with the remote control. This creates a hybrid voice/manual navigation system built on open-source AI components.
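Illustrative Code Sketch: a minimal skill following the MycroftSkill structure from mycroft-core. The intent file name, entity names, dialog name, and the display-bus message type are assumptions for illustration only.

# Sketch of the hybrid voice/manual search skill for Scenario 3.
from mycroft import MycroftSkill, intent_file_handler


class CircularMenuSearchSkill(MycroftSkill):

    @intent_file_handler("search.media.intent")
    def handle_search(self, message):
        # e.g., "search for action movies starting with T"
        genre = message.data.get("genre")    # -> "action"
        letter = message.data.get("letter")  # -> "T"
        self.speak_dialog("showing.results",
                          {"genre": genre, "letter": letter})
        # Hand the traversed path to the TV UI process, which renders the
        # corresponding circular menu for the user to finish with the remote.
        self.bus.emit(message.forward(
            "circularmenu.navigate", {"path": [genre, letter]}))


def create_skill():
    return CircularMenuSearchSkill()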