Patent 10,403,051

Derivative works

Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.

DEFENSIVE DISCLOSURE AND PRIOR ART

Reference: Methods and Systems based on US Patent 10,403,051
Publication Date: May 9, 2026
Field: Augmented Reality, Computer Graphics, Network Computing

This document discloses derivative inventions, expansions, and combinations of the concepts described in US Patent 10,403,051 ("Interference based augmented reality hosting platforms"). The purpose of this disclosure is to place these concepts in the public domain, thereby establishing them as prior art for any future patent applications.

Core Technology Background

The core technology involves an Augmented Reality (AR) hosting platform that determines which AR objects to display and how to display them based on a calculated "interference" among elements in a real-world scene. Properties of recognized real-world objects and existing virtual objects are used in a function to derive an interference value. This value dictates whether a new AR object's presence is enhanced (constructive interference) or suppressed (destructive interference).
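
A minimal sketch of this core loop, under stated assumptions: the additive weighting and the names below are illustrative, not the patent's notation. Each recognized element contributes a signed amount toward an AR object's presence, and the net sum enhances or suppresses the object.

    def interference(element_saliences, affinities):
        """element_saliences: {element: salience}; affinities: {element: signed weight}
        toward one AR object. Positive terms are constructive, negative destructive."""
        return sum(s * affinities.get(e, 0.0) for e, s in element_saliences.items())

    def presence(base, score):
        """Clamp the interference-adjusted presence to [0, 1]."""
        return max(0.0, min(1.0, base + score))

    scene = {"storefront": 0.9, "people_talking": 0.6}
    coupon_affinity = {"storefront": +0.5, "people_talking": -0.7}
    print(presence(0.5, interference(scene, coupon_affinity)))  # ~0.53: slight enhancement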


Derivative Disclosures

1. Material & Component Substitution

1.1. Specialized Co-Processor Architecture for Interference Calculation
  • Enabling Description: The system described in US 10,403,051 is implemented using a heterogeneous computing architecture to accelerate performance and reduce power consumption. The "object recognition engine" is offloaded from a general-purpose CPU to a dedicated Neuromorphic Processing Unit (NPU). The NPU utilizes a Spiking Neural Network (SNN) to perform low-latency, event-driven recognition of target objects from the mobile device's sensor stream. The resulting object attribute data is then passed to a Quantum Annealing co-processor. The "interference function" is formulated as a Quadratic Unconstrained Binary Optimization (QUBO) problem, where the attributes of scene elements are variables. The quantum annealer finds the ground state of this problem, which corresponds to the optimal set of AR object presentation parameters (e.g., visibility, scale, interaction availability), and returns this result to the hosting platform for transmission to the AR device. A minimal sketch of the QUBO formulation follows the diagram below.
  • Mermaid Diagram:
    graph TD
        A[Mobile Device: Sensor Stream] --> B{NPU Co-Processor};
        B -- Recognized Object Attributes --> C{Quantum Annealer};
        D[Interference Function as QUBO] --> C;
        C -- Optimal AR Parameters --> E[Hosting Platform];
        E --> F[AR Content Repository];
        F -- Relevant AR Objects --> E;
        E -- Configured AR Experience --> A;
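  • Illustrative Code Sketch (Python): A minimal, hypothetical sketch of the QUBO view of the interference function. The coefficient scheme and the brute-force solver are assumptions standing in for a quantum annealer, not the patent's formulation.
    # Binary variable x_i = 1 means AR object i is presented. Diagonal terms
    # encode per-object bias from scene attributes; off-diagonal terms encode
    # pairwise constructive (negative) or destructive (positive) interference.
    # An annealer would minimize x^T Q x; here we search exhaustively.
    from itertools import product

    def interference_qubo(biases, couplings):
        """Assemble QUBO coefficients: biases {i: float}, couplings {(i, j): float}."""
        Q = {(i, i): b for i, b in biases.items()}
        Q.update(couplings)
        return Q

    def ground_state(Q, n):
        """Exhaustively minimize x^T Q x over x in {0, 1}^n."""
        best, best_e = None, float("inf")
        for x in product((0, 1), repeat=n):
            e = sum(c * x[i] * x[j] for (i, j), c in Q.items())
            if e < best_e:
                best, best_e = x, e
        return best, best_e

    # Object 0 is favored by the scene; objects 0 and 1 destructively
    # interfere, so their co-presence is penalized.
    Q = interference_qubo({0: -2.0, 1: -1.0}, {(0, 1): +3.0})
    print(ground_state(Q, 2))  # ((1, 0), -2.0): show object 0, hide object 1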
    
1.2. Brain-Computer Interface (BCI) as a Scene Input
  • Enabling Description: The "digital representation of a scene" is augmented with neural data from a non-invasive BCI worn by the user (e.g., an electroencephalography headset). The BCI captures event-related potentials (ERPs), specifically the P300 component, which indicates cognitive recognition of a salient stimulus. When the user views the scene, the BCI data stream is synchronized with the camera feed. The hosting platform's object recognition engine uses both the visual data and the P300 signals to identify which real-world elements are not just present, but are also the focus of the user's attention. These neurologically flagged elements are given a significantly higher weight in the interference function, creating a user-salience-driven AR experience. For instance, two people talking may destructively interfere with a pop-up advertisement, but if the user's BCI indicates they are focusing on a product on a shelf, that object's weight constructively interferes to enhance product-related AR content. A minimal sketch of the salience weighting follows the diagram below.
  • Mermaid Diagram:
    sequenceDiagram
        participant User
        participant BCI_Headset
        participant AR_Device
        participant Hosting_Platform
    
        User->>AR_Device: Views Scene
        AR_Device->>Hosting_Platform: Transmits Video Stream
        User->>BCI_Headset: Neurological Response to Scene
        BCI_Headset->>Hosting_Platform: Transmits ERP Data (P300)
        Hosting_Platform->>Hosting_Platform: Correlate Video Objects with ERP Spikes
        Hosting_Platform->>Hosting_Platform: Calculate Salience-Weighted Interference
        Hosting_Platform->>AR_Device: Send AR Objects with adjusted presence
        AR_Device->>User: Display Personalized AR Experience
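  • Illustrative Code Sketch (Python): A minimal, hypothetical sketch of the salience-weighting step. The window, gain, and data shapes are assumptions; real ERP processing would involve artifact rejection and per-user calibration.
    WINDOW_S = 0.5       # assumed P300 latency band after fixation onset
    SALIENCE_GAIN = 3.0  # assumed weight boost for attended elements

    def salience_weights(detections, erp_events, base_weight=1.0):
        """detections: [(object_id, fixation_time)]; erp_events: [timestamp].
        An ERP spike within WINDOW_S of fixating an object marks it attended."""
        weights = {}
        for obj_id, t_fix in detections:
            attended = any(0.0 <= t_erp - t_fix <= WINDOW_S for t_erp in erp_events)
            weights[obj_id] = base_weight * (SALIENCE_GAIN if attended else 1.0)
        return weights

    # The shelf product was fixated at t=10.0 s and a P300 spike followed at
    # t=10.35 s, so its interference weight is boosted; the background
    # conversation stays at baseline.
    print(salience_weights([("shelf_product", 10.0), ("people_talking", 8.0)], [10.35]))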
    

2. Operational Parameter Expansion

2.1. Nanoscale AR for Molecular Dynamics Simulation
  • Enabling Description: The interference method is applied to real-time visualization of molecular dynamics simulations. The "AR device" is a high-resolution display linked to a simulation running on a supercomputing cluster. The "scene" is a 3D rendering of a molecular structure. The "elements" are individual atoms and functional groups. Their physical properties (e.g., electrostatic charge, van der Waals radius, instantaneous velocity, quantum spin state) serve as attributes for the interference calculation. The interference function models reaction probability. A region where atomic properties constructively interfere (e.g., favorable charge and proximity for a covalent bond) causes an AR overlay to appear, visualizing the probable new bond. Conversely, regions with repulsive forces destructively interfere, suppressing any visualization of potential bonds to reduce visual clutter for the researcher. A toy sketch of the pairwise interference score follows the diagram below.
  • Mermaid Diagram:
    flowchart LR
        subgraph Supercomputer
            A[Molecular Dynamics Simulation]
            B[Attribute Extractor: Charge, Spin, etc.]
            C[Interference Engine]
        end
        subgraph Visualization
            D[3D Molecular Scene]
            E[AR Overlay Renderer]
        end
        A --> B;
        B -- Atom Attributes --> C;
        C -- Interference Results --> E;
        D -- Base Scene --> E;
        E --> F[Researcher's Display];
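  • Illustrative Code Sketch (Python): A toy, hypothetical scoring of pairwise atomic interference used to gate bond overlays. The decay function, cutoff, and threshold are illustrative assumptions, not a physical force field.
    import math

    BOND_CUTOFF_A = 2.0   # assumed distance cutoff in angstroms
    SHOW_THRESHOLD = 0.2  # assumed gate for rendering an overlay

    def pair_interference(q1, q2, distance):
        """Toy score: attraction between opposite charges, decaying with distance."""
        if distance > BOND_CUTOFF_A:
            return 0.0
        return (-q1 * q2) * math.exp(-distance)  # positive for opposite charges

    def overlay_visible(q1, q2, distance):
        return pair_interference(q1, q2, distance) > SHOW_THRESHOLD

    print(overlay_visible(+0.8, -0.9, 1.2))  # True: render probable-bond overlay
    print(overlay_visible(+0.8, +0.9, 1.2))  # False: suppress to reduce clutter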
    
2.2. Industrial-Scale AR for Smart Factory Logistics
  • Enabling Description: In a smart factory, thousands of IoT sensors (LiDAR, UWB positioning, machine status APIs) provide a continuous data stream that forms the "digital representation of the scene." The hosting platform, running on an edge computing cluster, recognizes "elements" such as Autonomous Mobile Robots (AMRs), assembly stations, and inventory pallets. The interference function is a real-time logistics optimization algorithm. The velocity vector and destination of AMR-A (element 1) constructively interfere with the open status of Assembly Station-B (element 2) to enhance the visibility of a green, dashed AR line on the factory floor, visible to human supervisors via an AR headset. Simultaneously, the trajectory of AMR-C (element 3) creates destructive interference with that same path, which is either hidden or rendered red as a collision warning. A minimal sketch of the path-gating logic follows the diagram below.
  • Mermaid Diagram:
    stateDiagram-v2
        [*] --> Idle
        Idle --> Calculating: High-priority task arrives
        Calculating --> Rendering: Interference solution found
        Rendering --> Idle: AR overlay updated
        state Calculating {
            AMR_Data --> Interference_Calc
            Station_Data --> Interference_Calc
            Inventory_Data --> Interference_Calc
            Interference_Calc --> Solution_Found
        }
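  • Illustrative Code Sketch (Python): A minimal, hypothetical sketch of the path-overlay gating. Constant-velocity prediction, the clearance radius, and the horizon are assumptions; a production system would use the AMRs' actual planners.
    from dataclasses import dataclass

    CLEARANCE_M = 1.5  # assumed safety radius
    HORIZON_S = 5.0    # assumed look-ahead window

    @dataclass
    class AMR:
        name: str
        pos: tuple  # (x, y) in metres
        vel: tuple  # (vx, vy) in m/s

        def predicted(self, t):
            return (self.pos[0] + self.vel[0] * t, self.pos[1] + self.vel[1] * t)

    def path_overlay(mover, station_open, others):
        """Return the AR state of mover's guidance line to a station."""
        if not station_open:
            return "hidden"  # destructive: no open destination
        for other in others:
            for step in range(int(HORIZON_S * 10)):
                t = step / 10.0
                mx, my = mover.predicted(t)
                ox, oy = other.predicted(t)
                if ((mx - ox) ** 2 + (my - oy) ** 2) ** 0.5 < CLEARANCE_M:
                    return "red"  # destructive: predicted conflict on the path
        return "green"  # constructive: clear route to an open station

    amr_a = AMR("AMR-A", (0.0, 0.0), (1.0, 0.0))
    amr_c = AMR("AMR-C", (5.0, -5.0), (0.0, 1.0))  # crosses A's path at (5, 0)
    print(path_overlay(amr_a, station_open=True, others=[amr_c]))  # red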
    

3. Cross-Domain Application

3.1. Aerospace: Adaptive Pilot Interface
  • Enabling Description: An AR visor system in a cockpit uses the interference principle to manage pilot cognitive load. Real-world "elements" are derived from onboard systems: terrain data (from a terrain database), other aircraft (from TCAS), and weather systems (from NEXRAD data). The pilot's biometric state (from a heart rate and eye-tracking sensor) is also a key element. During normal flight, these elements interfere to provide a rich data overlay. In a critical situation (e.g., a GPWS "terrain, terrain" alert), the high-priority terrain element and the pilot's elevated heart rate create strong destructive interference against non-critical elements like distant aircraft or communication frequencies, causing their AR labels to fade out. This leaves a decluttered display showing only the most critical information for the immediate maneuver. A minimal sketch of the decluttering logic follows the diagram below.
  • Mermaid Diagram:
    classDiagram
        class CockpitARSystem {
            +processScene()
            -calculateInterference()
        }
        class SceneElement {
            <<abstract>>
            +attribute: vector
        }
        class Aircraft {
            +velocity
            +altitude
        }
        class Terrain {
            +elevation
            +proximity
        }
        class PilotBiometrics {
            +heartRate
            +cognitiveLoad
        }
        class ARObject {
            +presence: float
            +render()
        }
        CockpitARSystem *-- "many" SceneElement
        SceneElement <|-- Aircraft
        SceneElement <|-- Terrain
        SceneElement <|-- PilotBiometrics
        CockpitARSystem *-- "many" ARObject
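  • Illustrative Code Sketch (Python): A minimal, hypothetical sketch of the decluttering rule. The priority scale, stress proxy, and opacity mapping are assumptions, not avionics logic.
    def label_presence(elements, alert_active, heart_rate, resting_hr=60):
        """elements: {label: priority in [0, 1]} -> {label: opacity in [0, 1]}.
        During an alert, elevated heart rate destructively interferes with
        low-priority labels; critical labels keep full presence."""
        stress = min(max((heart_rate - resting_hr) / resting_hr, 0.0), 1.0)
        suppression = stress if alert_active else 0.0
        return {label: (1.0 if prio >= 0.8 else (1.0 - suppression) * prio)
                for label, prio in elements.items()}

    scene = {"terrain_warning": 1.0, "tcas_distant": 0.3, "comm_freq": 0.2}
    # GPWS alert with heart rate at 120 bpm: non-critical labels fade to 0.
    print(label_presence(scene, alert_active=True, heart_rate=120))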
    
3.2. AgTech: Differentiated Crop Treatment
  • Enabling Description: An AR overlay on a drone or smart glasses used by a farmer visualizes crop health. The "scene" is the field view. "Elements" are individual plants, identified via image recognition, and sub-soil conditions reported by wireless moisture and nutrient sensors. The attributes are NDVI score (plant health), soil nitrogen level, and moisture level. The interference function is designed to identify plants requiring intervention. A low NDVI score (element 1 attribute) and low soil moisture (element 2 attribute) constructively interfere to produce a bright blue AR water-droplet icon over that specific plant. A neighboring plant with a high NDVI score destructively interferes, suppressing any icon over it, so the farmer is shown only actionable information. A minimal sketch of the thresholding follows the diagram below.
  • Mermaid Diagram:
    graph TD
        A[Drone Camera Feed] --> C{Object Recognition Engine};
        B[Soil Sensor Network Data] --> C;
        C -- Plant Health & Soil Attributes --> D{Interference Calculator};
        D -- Result: Show/Hide Icon --> E[AR Overlay Generator];
        E --> F[Farmer's AR Display];
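  • Illustrative Code Sketch (Python): A minimal, hypothetical sketch of the per-plant thresholding. The NDVI and moisture cutoffs are illustrative assumptions.
    NDVI_LOW = 0.4       # assumed plant-stress threshold
    MOISTURE_LOW = 0.25  # assumed volumetric water content threshold

    def crop_icon(ndvi, moisture):
        """Low health and low moisture constructively interfere -> show icon."""
        if ndvi < NDVI_LOW and moisture < MOISTURE_LOW:
            return "water_droplet"  # actionable: render blue droplet overlay
        return None                 # destructive: suppress clutter

    print(crop_icon(ndvi=0.31, moisture=0.18))  # water_droplet
    print(crop_icon(ndvi=0.72, moisture=0.18))  # None (healthy plant)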
    

4. Integration with Emerging Technologies

4.1. AI-Driven Reinforcement Learning for Interference Function
  • Enabling Description: The "interference function" is not a static, developer-defined set of rules but is instead a neural network model trained via reinforcement learning (RL). The hosting platform observes user interactions with the presented AR objects (e.g., clicks, dismissals, gaze duration) as feedback. The RL agent's state is the set of all element attributes in the scene, its action is the set of parameters for rendering AR objects, and its reward is a function of positive user engagement. Over time, the agent learns a policy that dynamically adjusts the interference calculation to maximize user engagement. For example, it might learn that for a specific user, the presence of a pet (real-world element) should destructively interfere with work-related AR notifications, a rule that was not explicitly programmed. A minimal bandit-style sketch of the learning loop follows the diagram below.
  • Mermaid Diagram:
    sequenceDiagram
        participant User
        participant AR_Device
        participant RL_Agent
    
        loop Training Loop
            AR_Device->>RL_Agent: Observe Scene State (S)
            RL_Agent->>RL_Agent: Choose Action (A) - Set AR params
            RL_Agent->>AR_Device: Configure AR Overlay
            User->>AR_Device: Interact with AR Objects
            AR_Device->>RL_Agent: Report Interaction as Reward (R)
            RL_Agent->>RL_Agent: Update Policy based on (S, A, R)
        end
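  • Illustrative Code Sketch (Python): A minimal, hypothetical stand-in for the RL agent using a contextual-bandit update. The state encoding, action set, and learning rate are assumptions; a production system would use a full RL algorithm.
    import random
    from collections import defaultdict

    ALPHA, EPSILON = 0.1, 0.2  # assumed learning and exploration rates
    Q = defaultdict(float)     # learned value of (state, action) pairs

    def choose(state, actions):
        """Epsilon-greedy action selection over AR rendering decisions."""
        if random.random() < EPSILON:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    def update(state, action, reward):
        """Move the value estimate toward the observed engagement reward."""
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])

    # One interaction: a pet is in the scene, a work notification is shown,
    # and the user dismisses it; suppression is reinforced over time.
    state = ("pet_present",)
    action = choose(state, ["show_work_notice", "suppress_work_notice"])
    update(state, action, reward=-1.0 if action == "show_work_notice" else 0.0)
    print(dict(Q))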
    
4.2. Blockchain-Verified Digital Twins for AR Objects
  • Enabling Description: The system is integrated with a blockchain to ensure the authenticity of objects in high-value contexts like supply chain management or art exhibitions. Every real-world item has a corresponding "digital twin" represented as a Non-Fungible Token (NFT) on a public ledger (e.g., using the ERC-1155 standard). The AR device recognizes a real-world object (e.g., a pharmaceutical bottle via its QR code) and queries the blockchain for its corresponding NFT. The NFT's metadata, which contains immutable attributes like manufacturing date and chain of custody, is pulled into the interference calculation. An authentic product's NFT record provides strong constructive interference for an "Authentic" AR label. A product with a missing or fraudulent blockchain history generates destructive interference, suppressing the "Authentic" label and instead displaying a "Warning: Unverified" AR object. A minimal sketch of the authenticity gating follows the diagram below.
  • Mermaid Diagram:
    erDiagram
        REAL_WORLD_OBJECT {
            string objectID PK
        }
        AR_PLATFORM {
            string sessionID PK
        }
        BLOCKCHAIN_NFT {
            string objectID PK
            string authentic_attributes
        }
        AR_OBJECT {
            string objectID
            string visual_representation
        }
        REAL_WORLD_OBJECT ||--o{ AR_PLATFORM : "is recognized by"
        AR_PLATFORM ||--|{ BLOCKCHAIN_NFT : "verifies against"
        AR_PLATFORM ||--|{ AR_OBJECT : "generates based on interference"
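  • Illustrative Code Sketch (Python): A minimal, hypothetical sketch of the authenticity gate. fetch_nft_metadata and the metadata fields are placeholders for a real ledger query (e.g., reading ERC-1155 token metadata), not an actual blockchain client.
    def fetch_nft_metadata(object_id, ledger):
        """Placeholder for a blockchain query; returns None if no twin exists."""
        return ledger.get(object_id)

    def authenticity_label(object_id, ledger):
        meta = fetch_nft_metadata(object_id, ledger)
        if meta and meta.get("custody_chain_intact"):
            return "Authentic"        # constructive interference with the label
        return "Warning: Unverified"  # destructive: suppress the label

    ledger = {"bottle-042": {"manufactured": "2025-11-02",
                             "custody_chain_intact": True}}
    print(authenticity_label("bottle-042", ledger))  # Authentic
    print(authenticity_label("bottle-999", ledger))  # Warning: Unverified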
    

5. The "Inverse" or Failure Mode

5.1. Graceful Degradation Mode for Low-Power/Low-Bandwidth Operation
  • Enabling Description: When the AR device detects that operating conditions have crossed predefined thresholds (e.g., battery below 15% or network latency above 500 ms), it enters a "Graceful Degradation Mode." It sends a flag to the hosting platform. In response, the platform switches from a full interference calculation to a simplified, pre-computed context model. Object recognition is limited to a small set of high-priority markers (e.g., QR codes for navigation). The interference function is bypassed; instead, a simple lookup table maps the recognized marker to a single, static AR object. The AR object itself is a low-polygon model with a basic texture, transmitted in a highly compressed format (e.g., Draco) to minimize bandwidth. This fail-safe ensures that essential functionality is maintained while conserving resources. A minimal sketch of the mode switch and lookup-table bypass follows the diagram below.
  • Mermaid Diagram:
    stateDiagram-v2
        state "Full Experience Mode" as Full
        state "Graceful Degradation Mode" as Degraded
    
        [*] --> Full
        Full --> Degraded: Battery < 15% OR Latency > 500ms
        Degraded --> Full: Battery > 20% AND Latency < 300ms
    
        state Full {
            FullRecognition: Full Scene Analysis
            FullInterference: Multi-element Calculation
            FullContent: High-fidelity 3D Models
        }
        state Degraded {
            DegradedRecognition: QR Code Only
            DegradedInterference: Bypassed (Lookup Table)
            DegradedContent: Compressed Low-poly Models
        }
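  • Illustrative Code Sketch (Python): A minimal, hypothetical sketch of the mode switch with hysteresis, using the thresholds from the state diagram, plus the lookup-table bypass. Marker names and the asset file are assumptions.
    def next_mode(mode, battery_pct, latency_ms):
        """Enter degraded mode below 15% battery or above 500 ms latency;
        recover only above 20% and below 300 ms (hysteresis)."""
        if mode == "full" and (battery_pct < 15 or latency_ms > 500):
            return "degraded"
        if mode == "degraded" and battery_pct > 20 and latency_ms < 300:
            return "full"
        return mode

    # Degraded mode bypasses the interference function entirely: a static
    # table maps high-priority markers to single, pre-compressed assets.
    MARKER_TABLE = {"QR:gate_12": "nav_arrow_lowpoly.drc"}

    def resolve_ar(mode, marker, full_pipeline):
        if mode == "degraded":
            return MARKER_TABLE.get(marker)  # static, low-bandwidth asset
        return full_pipeline(marker)         # full interference calculation

    print(next_mode("full", battery_pct=12, latency_ms=80))        # degraded
    print(resolve_ar("degraded", "QR:gate_12", full_pipeline=None))  # nav_arrow_lowpoly.drc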
    

Combination Prior Art with Open-Source Standards

  1. Combination with WebXR and A-Frame: An implementation where the hosting platform acts as a backend API for a web-based AR application built with the A-Frame framework. The device, running a standard web browser, uses the WebXR Device API to access the camera. A JavaScript client captures video frames and sends them to the server. The server, implementing the interference-based logic, responds with a JSON object describing the scene's relevant AR objects and their interference-modulated properties (e.g., {"object_id": "coupon_1", "scale": "1.5", "opacity": "0.9"}). The A-Frame application dynamically creates or updates <a-entity> components in the DOM, binding their properties (e.g., scale, material.opacity) to the values received from the server, thus rendering the interference-based experience entirely within web standards. The combined sketch after this list illustrates the server side of this contract.

  2. Combination with OpenCV and ONNX: The object recognition engine is a standardized, portable deep learning model in the Open Neural Network Exchange (ONNX) format. This allows the recognition workload to run on various hardware (server-side CPU, GPU, or even client-side via ONNX.js). The mobile device captures an image, and a pre-trained ONNX model (e.g., YOLOv7) performs object detection. The resulting bounding boxes and class probabilities are sent to the hosting platform. This standardized payload of recognized objects is then used as the input for the interference function, decoupling the core interference logic from a specific computer vision implementation. The combined sketch after this list shows such a payload feeding the interference function.

  3. Combination with RDF, SPARQL, and Schema.org: The attributes of real-world and virtual elements are structured using the Schema.org vocabulary and serialized as RDF triples. A real-world business recognized in a scene is an RDF node of type schema:LocalBusiness with properties like schema:geo and schema:openingHours. The user's profile is also an RDF graph with schema:Person properties. The interference function is a set of SPARQL CONSTRUCT queries run against this combined graph. A query might construct a new "AR Offer" node only if a schema:Product's properties align with a schema:Person's interests and the schema:LocalBusiness is currently open, demonstrating a formal, semantic, and standardized method for calculating interference. A minimal rdflib sketch closes this section.
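
A combined sketch of combinations 1 and 2, under stated assumptions: a detection payload of the kind an ONNX-format detector emits (bounding boxes plus class probabilities) feeds the interference function, and the output is the JSON document an A-Frame/WebXR client would bind to <a-entity> properties. The endpoint, the weighting scheme, and any field names beyond the {"object_id", "scale", "opacity"} example above are illustrative, not a normative API.

    import json

    def interference_response(detections, candidates, cutoff=0.1):
        """detections: [{"class": str, "confidence": float, "bbox": [x1, y1, x2, y2]}]
        candidates:  [{"object_id": str, "base_presence": float,
                       "weights": {class_name: multiplier}}]"""
        ar_objects = []
        for obj in candidates:
            presence = obj["base_presence"]
            for det in detections:
                # Each recognized element scales the object's presence up
                # (constructive) or down (destructive), weighted by the
                # detector's confidence in that element.
                w = obj["weights"].get(det["class"], 1.0)
                presence *= 1.0 + (w - 1.0) * det["confidence"]
            if presence > cutoff:  # destructive interference below the cutoff
                ar_objects.append({"object_id": obj["object_id"],
                                   "scale": round(presence * 1.5, 2),
                                   "opacity": round(min(presence, 1.0), 2)})
        return json.dumps({"ar_objects": ar_objects})

    detections = [{"class": "storefront", "confidence": 0.92,
                   "bbox": [120, 80, 340, 400]}]
    candidates = [{"object_id": "coupon_1", "base_presence": 0.7,
                   "weights": {"storefront": 1.4}}]
    print(interference_response(detections, candidates))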

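A minimal sketch of combination 3 using the rdflib library: the interference function as a SPARQL CONSTRUCT query over a Schema.org-typed graph. The ar: vocabulary and the toy graph are assumptions; only the Schema.org terms are standard.

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    SCHEMA = Namespace("https://schema.org/")

    g = Graph()
    biz = URIRef("https://example.org/scene/cafe1")
    user = URIRef("https://example.org/user/alice")
    g.add((biz, RDF.type, SCHEMA.LocalBusiness))
    g.add((biz, SCHEMA.keywords, Literal("coffee")))
    g.add((user, RDF.type, SCHEMA.Person))
    g.add((user, SCHEMA.knowsAbout, Literal("coffee")))

    # Constructive interference: an AR offer node is constructed only when
    # a person's interest matches the recognized business's keywords.
    result = g.query("""
        PREFIX schema: <https://schema.org/>
        PREFIX ar: <https://example.org/ar#>
        CONSTRUCT { ?biz ar:hasOffer ar:Offer1 . }
        WHERE {
            ?biz  a schema:LocalBusiness ; schema:keywords   ?topic .
            ?user a schema:Person        ; schema:knowsAbout ?topic .
        }
    """)
    print(len(result.graph))  # 1 -> the offer triple was constructed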