Patent 10372793
Derivative works
Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.
As a Senior Patent Strategist and Research Engineer, I have analyzed the core claims of U.S. Patent 10,372,793. This document constitutes a defensive disclosure of derivative works and improvements to establish prior art against future patent applications on similar technologies.
Publication Date: May 12, 2026
Subject: Defensive Disclosure and Prior Art for Two-Stage Graphical Hyperlink Navigation Systems
Derivative Works Based on U.S. Patent 10,372,793
The following disclosures describe variations, expansions, and alternative applications of a user interface pattern wherein a user interaction with a first set of textual hyperlink representations causes the display of a second set of graphical hyperlink representations.
1. Derivations via Component & Data Substitution
1.1. Vector-Based and Programmatic Graphical Cues
- Enabling Description: This variation replaces static, pre-downloaded raster image files (e.g., PNG, JPEG) for the second set of hyperlink representations with programmatically rendered vector graphics. The system pre-downloads lightweight Scalable Vector Graphics (SVG) definitions or a JSON object containing drawing commands. Upon a user's hover interaction over a textual category, a client-side rendering engine (such as a JavaScript library like D3.js or the browser's native <canvas> API) executes these commands to draw the graphical hyperlinks in real time. This method allows for dynamic styling (e.g., changing colors based on system state), perfect scaling to any resolution without pixelation, and significantly smaller data payloads compared to raster images. The rendering instructions for all graphical cues are fetched in a single initial data structure and stored in client-side memory.
- Mermaid Diagram: Data Flow
    sequenceDiagram
        participant User
        participant ClientBrowser as Browser
        participant RenderingEngine as JS Engine
        participant LocalCache as Client Cache
        User->>ClientBrowser: Hover over textual link 'Category A'
        ClientBrowser->>RenderingEngine: Trigger onHover event for 'Category A'
        RenderingEngine->>LocalCache: Request vector drawing instructions for 'Category A'
        LocalCache-->>RenderingEngine: Return JSON/SVG data
        RenderingEngine->>ClientBrowser: Execute drawing commands on HTML Canvas
        ClientBrowser->>User: Display rendered vector graphics
        User->>ClientBrowser: Click on a rendered graphic
        ClientBrowser->>User: Navigate to hyperlink destination
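The hover-to-render flow above can be sketched as a small command interpreter. The JSON command format and all function names here are illustrative assumptions, not taken from the patent:

```javascript
// Sketch (assumed data shape): a pre-downloaded manifest maps each textual
// category to a list of vector drawing commands for its graphical cues.
const drawingManifest = {
  "Category A": [
    { op: "beginPath" },
    { op: "arc", args: [16, 16, 12, 0, 2 * Math.PI] },
    { op: "fillStyle", value: "#3366cc" },
    { op: "fill" },
  ],
};

// Replays a command list against any CanvasRenderingContext2D-like object, so
// the same instructions can target <canvas>, an OffscreenCanvas, or a test double.
function renderCommands(ctx, commands) {
  for (const cmd of commands) {
    if ("value" in cmd) {
      ctx[cmd.op] = cmd.value;          // property assignment, e.g. fillStyle
    } else {
      ctx[cmd.op](...(cmd.args ?? [])); // method call, e.g. arc(...)
    }
  }
}

function onHover(ctx, category) {
  renderCommands(ctx, drawingManifest[category] ?? []);
}
```

Because the commands are plain data, a single initial fetch can carry the drawing instructions for every category, matching the "single initial data structure" described above.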
1.2. Haptic and Auditory Cue Integration for Non-Visual Interaction
- Enabling Description: This embodiment extends the user interaction model beyond purely visual cues. The "hover" event is mapped to multiple modalities. On a mobile device or any system with a haptic feedback engine (e.g., Apple's Taptic Engine), hovering over a textual link triggers a specific vibration pattern. Simultaneously, an audio engine plays a short, distinct sound cue. The display of graphical icons is then supplemented with a corresponding set of haptic and audio cues for each icon. For example, hovering over the "Settings" icon would produce a gear-like click sound and a sharp haptic pulse. This provides an accessibility enhancement and allows for operation in eyes-free contexts. The haptic definitions (e.g., frequency, amplitude, duration) and audio files are pre-downloaded along with the graphical assets.
- Mermaid Diagram: State Machine
    stateDiagram-v2
        [*] --> Idle
        Idle --> Hovering: MouseOver text link
        Hovering --> Idle: MouseOut text link
        state Hovering {
            [*] --> Rendering
            Rendering --> DisplayingGraphics: Graphics rendered
            Rendering --> PlayingAudio: Audio cue triggered
            Rendering --> TriggeringHaptics: Haptic pattern triggered
            DisplayingGraphics --> InteractionReady
            PlayingAudio --> InteractionReady
            TriggeringHaptics --> InteractionReady
        }
        InteractionReady --> Navigating: Click on graphic
        Navigating --> [*]: Navigation complete
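A minimal sketch of the multimodal dispatch, with output channels injected so that hardware lacking a haptic engine simply skips that cue. All names and the cue definitions are hypothetical:

```javascript
// Sketch: a multimodal cue registry. Each icon id maps to pre-downloaded
// visual, audio, and haptic definitions, mirroring the description above.
const cueRegistry = {
  settings: {
    icon: "gear.svg",
    audio: "gear-click.ogg",
    haptic: { frequencyHz: 180, amplitude: 0.9, durationMs: 40 },
  },
};

// Fans a hover event out to whichever output channels the device supports.
// Returns the list of modalities that actually fired (useful for testing).
function dispatchHoverCues(iconId, channels, registry = cueRegistry) {
  const cue = registry[iconId];
  if (!cue) return [];
  const fired = [];
  // Each channel is optional, e.g. desktop hardware without a haptic engine.
  if (channels.drawIcon)  { channels.drawIcon(cue.icon);   fired.push("visual"); }
  if (channels.playAudio) { channels.playAudio(cue.audio); fired.push("audio"); }
  if (channels.vibrate)   { channels.vibrate(cue.haptic);  fired.push("haptic"); }
  return fired;
}
```

In a browser, playAudio might wrap the Web Audio API and vibrate might wrap navigator.vibrate; the dispatcher itself stays platform-neutral.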
2. Derivations via Operational Parameter Expansion
2.1. High-Density Navigation with Fisheye Lens UI
- Enabling Description: To manage an extremely large second set of graphical hyperlinks (e.g., thousands of items), this variation implements a fisheye lens or magnification-based user interface. Upon hovering over a textual category, the graphical cues are displayed in a compact grid or radial layout. The cue directly under the user's cursor, along with its immediate neighbors, is rendered at full size and high resolution. Cues further away from the cursor are rendered at a progressively smaller scale and lower level-of-detail (LOD). As the user moves the cursor across the set, the magnified "focal point" follows, smoothly scaling the graphical cues up and down. This allows a vast number of options to be presented in a limited screen area while ensuring the item of interest is always clear. All LOD versions of the graphics are pre-downloaded or generated on-the-fly from a single high-resolution source.
- Mermaid Diagram: Component Architecture
    graph TD
        A[User Input: Cursor Position] --> B{Layout Engine}
        C[Pre-downloaded Graphical Assets] --> B
        B --> D{Magnification Algorithm}
        D -- For each graphic --> E{Calculate Distance from Cursor}
        E --> F{Determine Scale & LOD}
        F --> G[Render Graphic Instance]
        G --> H[Composite View]
        H --> I[Display to User]
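The magnification step can be sketched as a pure function of cursor distance. The falloff constants and LOD thresholds below are illustrative choices, not values from the disclosure:

```javascript
// Sketch: scale falls off linearly with distance from the cursor, clamped to
// a minimum so far-away cues remain visible at reduced size.
function fisheyeScale(distancePx, { maxScale = 1.0, minScale = 0.25, radiusPx = 120 } = {}) {
  const t = Math.min(distancePx / radiusPx, 1); // 0 at the cursor, 1 at or beyond radiusPx
  return maxScale - (maxScale - minScale) * t;
}

// Picks one of the pre-downloaded level-of-detail variants for a given scale.
function lodForScale(scale) {
  if (scale > 0.75) return "high";
  if (scale > 0.4) return "medium";
  return "low";
}
```

On each mousemove the layout engine would evaluate fisheyeScale for every visible cue, so keeping it a cheap pure function matters for smooth focal-point tracking.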
2.2. Offline-First PWA Implementation for Intermittent Connectivity
- Enabling Description: This system is architected as a Progressive Web App (PWA) with an offline-first strategy. A service worker script is installed on the client's browser during the first visit. This service worker intercepts network requests and aggressively caches all necessary assets, including the HTML, CSS, JavaScript, and the data structure containing all textual and graphical hyperlink information. This data is stored persistently on the client device using the IndexedDB API. When the user interacts with a textual link, the service worker serves the corresponding graphical cues directly from the local IndexedDB, resulting in instantaneous display regardless of network connectivity. If the application is offline, navigation to external hyperlinks is queued and executed once connectivity is restored.
- Mermaid Diagram: Sequence Diagram
    sequenceDiagram
        participant User
        participant Browser
        participant ServiceWorker as Service Worker
        participant IndexedDB
        participant Network
        User->>Browser: Hover on text link
        Browser->>ServiceWorker: onHover event
        ServiceWorker->>IndexedDB: Fetch graphical assets for category
        IndexedDB-->>ServiceWorker: Return assets from local storage
        ServiceWorker-->>Browser: Provide assets
        Browser->>User: Display graphics instantly
        Note over User, Network: User is offline
        User->>Browser: Click on graphic
        Browser->>ServiceWorker: Request navigation
        ServiceWorker->>ServiceWorker: Queue navigation request
        Note over User, Network: User comes online
        ServiceWorker->>Network: Execute queued navigation
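The queued-navigation behavior can be sketched independently of the Service Worker APIs. The class and method names are hypothetical; in a real worker, the injected navigate callback would wrap something like clients.openWindow, and setOnline would be driven by online/offline events:

```javascript
// Sketch: while offline, clicked destinations are buffered; when connectivity
// returns, the queue is flushed in the order the clicks arrived.
class NavigationQueue {
  constructor(navigate) {
    this.navigate = navigate; // async side effect, e.g. open the destination URL
    this.online = true;
    this.pending = [];
  }

  // Called when the user clicks a graphical hyperlink.
  request(url) {
    if (this.online) this.navigate(url);
    else this.pending.push(url);
  }

  // Called from connectivity-change events.
  setOnline(online) {
    this.online = online;
    if (online) {
      const queued = this.pending.splice(0); // drain before navigating
      queued.forEach(url => this.navigate(url));
    }
  }
}
```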
3. Derivations via Cross-Domain Application
3.1. Aerospace: Avionics Sub-System Selection
- Enabling Description: In a glass cockpit avionics display, a pilot interacts with a primary flight display (PFD). Textual menu items for major systems ("NAV," "COMM," "SYS," "FPLN") are persistently displayed. The pilot uses a gaze-tracking sensor as a cursor. When the pilot's gaze fixates on ("hovers" over) the "COMM" text for more than 500ms, a secondary set of graphical icons appears, representing available communication radios (COMM1, COMM2, SAT, HF). Each icon visually indicates its current status (e.g., active frequency, signal strength). The pilot then selects a radio icon using a hands-on-throttle-and-stick (HOTAS) button, which brings up the tuning interface for that radio. All graphical icons are stored in the flight management computer's non-volatile flash memory.
- Mermaid Diagram: Flowchart
    graph TD
        A[Pilot Gaze Fixates on 'COMM'] --> B{Dwell Time > 500ms?}
        B -- Yes --> C[Display Radio Icons: COMM1, COMM2, SAT]
        B -- No --> A
        C --> D{HOTAS Button Press on Icon?}
        D -- Yes, on COMM1 --> E[Open COMM1 Tuning Interface]
        D -- No --> C
        C --> F[Gaze Moves Away]
        F --> G[Hide Radio Icons]
        G --> A
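The 500 ms dwell gate can be sketched as a small state holder fed gaze samples with explicit timestamps, which keeps the logic testable without a real eye tracker. The class name and sample-driven interface are assumptions:

```javascript
// Sketch: tracks how long the gaze has rested on one target and reports the
// target to expand once the dwell threshold is crossed.
class DwellDetector {
  constructor(thresholdMs = 500) {
    this.thresholdMs = thresholdMs;
    this.target = null;
    this.since = 0;
  }

  // Feed one gaze sample; returns the menu item to expand, or null.
  update(gazeTarget, nowMs) {
    if (gazeTarget !== this.target) {
      this.target = gazeTarget; // gaze moved: restart the dwell timer
      this.since = nowMs;
      return null;
    }
    if (gazeTarget !== null && nowMs - this.since >= this.thresholdMs) {
      return gazeTarget; // e.g. 'COMM' -> show the COMM1/COMM2/SAT/HF icons
    }
    return null;
  }
}
```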
3.2. AgTech: IoT Sensor Data Visualization on Field Maps
- Enabling Description: A farm operator views a digital map of their property on a tablet. Each field is labeled with textual identifiers ("Field A," "Orchard B"). When the operator taps and holds ("hovers") on the text "Field A," a palette of graphical icons appears overlaid on that field. These icons represent the types of IoT sensors deployed there: a water droplet for soil moisture, a leaf for nutrient levels, and a camera for recent drone imagery. Each icon is dynamically updated to reflect the latest sensor reading (e.g., the droplet icon is blue if moisture is adequate, red if low). Tapping an icon navigates to a detailed time-series data dashboard for that specific sensor. The icon set is standard, but each icon's data-driven state is updated via a real-time MQTT data stream.
- Mermaid Diagram: Architecture
    erDiagram
        FARM {
            string Name
        }
        FIELD {
            string ID
            string Name
        }
        SENSOR_TYPE {
            string ID
            string IconSVG
        }
        SENSOR_INSTANCE {
            string ID
            string LastReading
        }
        FARM ||--o{ FIELD : has
        FIELD ||--|{ SENSOR_INSTANCE : contains
        SENSOR_INSTANCE }|--|| SENSOR_TYPE : is_of_type

and the UI interaction as a separate flowchart:

    graph LR
        A(Tap-Hold on FIELD ID) --> B{Fetch SENSOR_INSTANCEs}
        B --> C{For each, get SENSOR_TYPE}
        C --> D{Render IconSVG with LastReading}
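The data-driven icon state can be sketched as a pure mapping from the latest reading. The 30% threshold, the topic layout in the comment, and all names are illustrative assumptions:

```javascript
// Sketch: fixed icon set, data-driven state.
const ICON_BY_SENSOR = { moisture: "droplet", nutrients: "leaf", imagery: "camera" };

// Maps the latest soil-moisture reading (percent) onto the droplet icon's
// color, matching the blue-adequate / red-low behavior described above.
function iconStateForMoisture(readingPct) {
  return {
    icon: ICON_BY_SENSOR.moisture,
    color: readingPct >= 30 ? "blue" : "red", // 30% cutoff is an assumed threshold
  };
}

// A typical MQTT subscription would re-render the overlay on each message, e.g.:
//   client.subscribe("farm/fieldA/moisture");
//   client.on("message", (topic, payload) =>
//     updateOverlay(iconStateForMoisture(Number(payload))));
```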
3.3. Augmented Reality for Industrial Maintenance
- Enabling Description: A technician wearing an AR headset (e.g., a HoloLens) views a physical manufacturing robot. The AR system uses computer vision to recognize the machine and overlays textual labels on its core components ("Actuator Arm," "Control Panel"). When the technician's gaze remains fixed on the "Control Panel" label, a contextual menu of graphical icons materializes in 3D space next to the label. The icons represent available actions: a wrench for maintenance logs, a graph for live diagnostics, and a book for the operating manual. The technician performs an air-tap gesture on the graph icon, causing a real-time plot of the robot's power consumption to be displayed as a floating hologram.
- Mermaid Diagram: Sequence Diagram
    sequenceDiagram
        participant Technician
        participant AR_Headset as AR Headset
        participant CV_Engine as Computer Vision Engine
        participant CMS_Backend as Content Management System
        AR_Headset->>CV_Engine: Stream video of robot
        CV_Engine-->>AR_Headset: Identify 'Control Panel' at (x,y,z)
        AR_Headset->>Technician: Display text label 'Control Panel'
        Technician->>AR_Headset: Gaze-hover on label
        AR_Headset->>CMS_Backend: Request contextual actions for 'Control Panel'
        CMS_Backend-->>AR_Headset: Return actions [Maintenance, Diagnostics, Manual] with icons
        AR_Headset->>Technician: Display graphical icons in 3D space
        Technician->>AR_Headset: Perform 'Air Tap' gesture on 'Diagnostics' icon
        AR_Headset->>Technician: Display holographic diagnostics graph
4. Derivations via Integration with Emerging Technologies
4.1. AI-Personalized Graphical Link Ranking
- Enabling Description: This system integrates a client-side machine learning model (e.g., TensorFlow.js) to personalize the second set of graphical links. While the pool of all possible graphical links for a category is pre-downloaded, the specific set displayed, and their order, is determined in real-time by the AI model. The model is trained on the user's past click-through behavior, time of day, and current browsing context. For example, when a user hovers over "Business News," the model predicts which three to five publications they are most likely to click on at that moment and displays only those logos, ranked by probability. This reduces clutter and adapts the interface to individual user habits.
- Mermaid Diagram: Data Flow
    graph TD
        A[User Hovers on 'Business News'] --> B{Trigger Event}
        B --> C[Gather Context: Time, Past Clicks]
        D[Pre-downloaded Pool of All Logos] --> E{Client-Side AI Model}
        C --> E
        E --> F[Generate Ranked List of Logos]
        F --> G[Display Top 5 Logos]
        G --> H{User Clicks Logo}
        H --> I[Navigate to Destination]
        H --> J[Feed Click Data Back to AI Model for Retraining]
        J --> E
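The ranking step can be sketched with the model's per-logo click probabilities abstracted into a plain score map; a real deployment would obtain these scores from a TensorFlow.js model fed the user's context:

```javascript
// Sketch: pick the top-N logos from the pre-downloaded pool, ordered by the
// model's predicted click probability. Unknown logos default to a zero score.
function rankLogos(logoPool, scores, topN = 5) {
  return [...logoPool] // copy so the pre-downloaded pool is never reordered in place
    .sort((a, b) => (scores[b.id] ?? 0) - (scores[a.id] ?? 0))
    .slice(0, topN);
}
```

Keeping scoring separate from rendering means the same pool can be re-ranked cheaply whenever the context (time of day, recent clicks) changes.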
4.2. Blockchain-Verified Authenticity Cues
- Enabling Description: In an e-commerce or digital asset platform, this UI pattern is used to convey authenticity. A user hovers over a product category, such as "Luxury Watches." The system displays the logos of various authorized dealers. Each logo is a graphical hyperlink, and overlaid on its corner is a dynamic badge. Upon hover, the client's browser initiates a call to a smart contract on a public blockchain (e.g., Ethereum) using the dealer's public key. If the smart contract confirms the dealer's "authorized" status is valid and current, the badge on the logo turns green. If not, it turns red or disappears. This provides real-time, tamper-proof verification of an entity's credentials directly within the navigation element.
- Mermaid Diagram: Sequence Diagram
    sequenceDiagram
        participant User
        participant Browser
        participant DApp_Frontend as DApp Frontend
        participant Blockchain_Node as Blockchain Node
        User->>Browser: Hover over 'Luxury Watches'
        Browser->>DApp_Frontend: Display dealer logos with 'Pending' badge
        DApp_Frontend->>Blockchain_Node: Call smartContract.isAuthorized(dealer_ID)
        Blockchain_Node-->>DApp_Frontend: Return boolean status
        alt Status is True
            DApp_Frontend->>Browser: Update badge for dealer_ID to 'Verified Green'
        else Status is False
            DApp_Frontend->>Browser: Update badge for dealer_ID to 'Unverified Red'
        end
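The badge update can be sketched with the smart-contract call injected as a stand-in; a real DApp would issue it through a library such as ethers.js. The isAuthorized method name comes from the diagram above, while the fail-to-red policy on RPC errors is an added assumption:

```javascript
// Sketch: resolve a dealer's badge state from an on-chain authorization check.
// `contract` is any object exposing an async isAuthorized(dealerId) method.
async function verifyDealerBadge(contract, dealerId) {
  try {
    const ok = await contract.isAuthorized(dealerId);
    return ok ? "verified-green" : "unverified-red";
  } catch {
    // Treat node/RPC failures as unverified rather than showing a stale green badge.
    return "unverified-red";
  }
}
```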
5. Combination with Open-Source Standards
5.1. Combination with Web Components Standard
- Enabling Description: The entire two-stage navigation element is implemented as a self-contained, reusable Web Component named <hover-menu>. It is defined using the Custom Elements API. The textual categories are passed declaratively as child elements, and a manifest-url attribute points to a JSON file containing the graphical link data. All internal structure, styling, and logic (including the hover-to-display mechanism and pre-caching) are encapsulated within the component's Shadow DOM. This makes the element portable across any web framework and immune to CSS conflicts from the parent page.
- Example Usage:
    <hover-menu manifest-url="/data/nav-links.json">
      <h3 slot="category">World News</h3>
      <h3 slot="category">Business News</h3>
      <h3 slot="category">Stock Research</h3>
    </hover-menu>

- Mermaid Diagram: Class Diagram
    classDiagram
        class HTMLElement {
            <<interface>>
        }
        class HoverMenuElement {
            +manifestUrl: string
            #shadowRoot: ShadowRoot
            #attachShadow()
            #connectedCallback()
            #fetchManifest()
            #render()
        }
        HTMLElement <|-- HoverMenuElement
5.2. Combination with ActivityPub Protocol
- Enabling Description: In a client for a decentralized social network (Fediverse), user profiles display a list of textual metadata (e.g., "Interactions," "Following," "Followers"). When a viewer hovers their cursor over the "Interactions" text, the client displays a set of standardized graphical icons representing common ActivityPub actions: "Reply," "Boost," "Like," "Mention." These icons are not hyperlinks to websites but are functional UI elements. Clicking the "Boost" icon constructs a valid ActivityPub Announce activity in JSON-LD format and POSTs it to the viewer's outbox endpoint on their home server, effectively sharing the profile.
- Mermaid Diagram: Flowchart
    graph TD
        A[Hover on 'Interactions' text] --> B[Display graphical action icons: Reply, Boost, Like]
        B --> C{User clicks 'Boost' icon}
        C --> D[Construct ActivityPub 'Announce' Object]
        D --> E{JSON-LD Payload}
        E --> F[POST to user's outbox URL]
        F --> G[Action is federated]
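Constructing the Announce activity can be sketched directly from the description. The URLs are placeholders, and the public addressing in the to field is a common Fediverse convention rather than something the text requires:

```javascript
// Sketch: build the JSON-LD Announce activity triggered by the 'Boost' icon.
function buildAnnounce(actorUrl, objectUrl) {
  return {
    "@context": "https://www.w3.org/ns/activitystreams",
    type: "Announce",
    actor: actorUrl,   // the viewer performing the boost
    object: objectUrl, // the profile (or object) being shared
    to: ["https://www.w3.org/ns/activitystreams#Public"],
  };
}

// The client would then POST this payload to the viewer's outbox, e.g.:
//   fetch(outboxUrl, {
//     method: "POST",
//     headers: { "Content-Type": "application/activity+json" },
//     body: JSON.stringify(buildAnnounce(viewerActorUrl, profileUrl)),
//   });
```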
5.3. Combination with GraphQL
- Enabling Description: This variation uses GraphQL to optimize data fetching. Instead of pre-downloading all graphical links for all categories, the initial page load fetches only the textual category data. The client application then uses a GraphQL client (like Apollo Client) to intelligently prefetch data for the graphical links. It can use heuristics, such as prefetching the links for the first three categories, or more advanced techniques like link prefetching on mousedown or when the link enters the viewport. The key is that the onHover event triggers the display of data that is already present in the client's normalized cache, providing the same perceived performance as the original "pre-download" method but with a more flexible, on-demand data-fetching architecture that reduces initial load time.
- Mermaid Diagram: Component Interaction
    graph TD
        subgraph Initial Load
            A[Page Render] --> B[GraphQL Query: GetCategories]
            B --> C[Store Categories in Cache]
            C --> D[Render Textual Links]
        end
        subgraph User Interaction
            D -- Hover --> E{Trigger onHover}
            E --> F[GraphQL Query: GetGraphicsForCategory]
            F -- Data served from cache if available --> G[Display Graphical Links]
            E -- Heuristic Prefetching --> F
        end
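The cache-first behavior can be sketched without a specific GraphQL client. Names here are hypothetical; in an Apollo deployment, the normalized cache would play the role of the Map and fetchGraphics would be a GetGraphicsForCategory query:

```javascript
// Sketch: prefetching fills a per-category cache ahead of the hover, so the
// onHover path is usually a synchronous cache hit.
class GraphicsCache {
  constructor(fetchGraphics) {
    this.fetchGraphics = fetchGraphics; // async category => [graphical links]
    this.cache = new Map();
  }

  // Heuristic prefetch (e.g. first three categories, mousedown, viewport entry).
  async prefetch(category) {
    if (!this.cache.has(category)) {
      this.cache.set(category, await this.fetchGraphics(category));
    }
  }

  // Hover handler: instant when prefetched, network round trip otherwise.
  async onHover(category) {
    if (this.cache.has(category)) return this.cache.get(category);
    await this.prefetch(category);
    return this.cache.get(category);
  }
}
```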