Patent 11100163

Derivative works

Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.



Defensive Disclosure Document

Publication Date: May 9, 2026
Subject: Derivative Works and Improvements for Location-Based Information and Advertising Systems
Reference Patent: U.S. Patent 11,100,163 ("Photographic memory")

This document is intended to enter the public domain as prior art. It discloses a series of derivative works, improvements, and alternative embodiments related to the system and method described in U.S. Patent 11,100,163. The purpose of this disclosure is to preclude patentability of these and other obvious variations.


Analysis of Core Claim (based on Claim 1 of US 11,100,163)

The fundamental inventive concept revolves around a system that:

  1. Receives a location from a mobile electronic device.
  2. Stores location-based travel information and advertisements in a database.
  3. Retrieves location-based travel information based on the device's location.
  4. Retrieves a location-based advertisement based on both the device's location and at least one spoken keyword.
  5. Presents the retrieved advertisement to the user.
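
The five steps above can be sketched as a minimal server-side flow. This is an illustrative Python sketch, not the patented implementation: the in-memory dictionaries, the region names, and the `region_for` geofencing rule are all assumptions standing in for the claimed database and location logic.

```python
# Minimal sketch of the five-step flow of Claim 1, assuming an in-memory
# ad database keyed by (region, keyword). All names here are illustrative.

AD_DATABASE = {
    ("downtown", "coffee"): "Ad: Joe's Espresso, 2 blocks north",
    ("downtown", "pizza"): "Ad: Luigi's, free delivery",
}
TRAVEL_INFO = {
    "downtown": "Next shuttle departs in 5 minutes",
}

def region_for(lat, lon):
    # Step 1: the server receives raw coordinates and maps them to a
    # coarse region (a stand-in for whatever geofencing the system uses).
    return "downtown" if 40.70 <= lat <= 40.80 else "elsewhere"

def handle_request(lat, lon, spoken_keyword):
    region = region_for(lat, lon)
    # Step 3: retrieve travel information by location alone.
    travel = TRAVEL_INFO.get(region)
    # Step 4: retrieve an advertisement by location AND spoken keyword.
    ad = AD_DATABASE.get((region, spoken_keyword))
    # Step 5: return both for presentation on the device.
    return travel, ad

travel, ad = handle_request(40.75, -74.0, "coffee")
```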

Derivative Variations

Axis 1: Material & Component Substitution

Derivative 1.1: Solid-State Lidar for Location and Environmental Context

  • Enabling Description: The system replaces GPS-based location data with high-resolution, low-power solid-state LiDAR sensors embedded in the mobile device. These sensors provide hyper-precise (sub-centimeter) location data and generate a real-time 3D point cloud of the immediate environment. The server receives this point cloud along with the spoken keyword. The "location-based advertisement" is then selected based not only on the user's geographic coordinates but also on objects recognized in their immediate vicinity (e.g., recognizing a specific car model could trigger an ad for a competing brand's nearby dealership). The advertisement retrieval algorithm uses a 3D object-recognition neural network to process the point cloud in conjunction with the speech-to-text output.
  • Mermaid Diagram:
    sequenceDiagram
        participant User
        participant MobileDevice
        participant Server
        participant AdDatabase
        User->>MobileDevice: Speaks keyword "car"
        MobileDevice->>MobileDevice: Captures audio and LiDAR point cloud
        MobileDevice->>Server: Transmits audio and point cloud data
        Server->>Server: Processes audio to text ("car")
        Server->>Server: Processes point cloud (recognizes competitor vehicle model)
        Server->>AdDatabase: Queries for ads based on lat/long, keyword "car", and competitor model
        AdDatabase-->>Server: Returns targeted dealership ad
        Server-->>MobileDevice: Sends advertisement
        MobileDevice-->>User: Displays ad
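
The selection logic in this derivative can be sketched as follows. The object-recognition step is mocked (a real system would run a 3D neural network on the point cloud), and the ad table, region name, and `BrandX`/`BrandY` labels are hypothetical.

```python
# Sketch of ad selection keyed on location, keyword, and an object
# recognized in the LiDAR point cloud; recognition itself is mocked.

ADS = {
    ("midtown", "car", "BrandX Sedan"): "Ad: BrandY dealership, 0.5 mi",
    ("midtown", "car", None): "Ad: generic car wash coupon",
}

def recognize_object(point_cloud):
    # Placeholder for a 3D object-recognition model; returns a label,
    # or None when nothing is recognized.
    return "BrandX Sedan" if len(point_cloud) > 100 else None

def select_ad(region, keyword, point_cloud):
    label = recognize_object(point_cloud)
    # Prefer the object-specific ad; fall back to location + keyword only.
    return ADS.get((region, keyword, label)) or ADS.get((region, keyword, None))

dense_cloud = [(0.0, 0.0, float(i)) for i in range(500)]
ad = select_ad("midtown", "car", dense_cloud)
```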
    

Derivative 1.2: Piezoelectric Microphones for Ultrasonic Keyword Triggering

  • Enabling Description: The system replaces standard MEMS microphones with piezoelectric transducers capable of detecting both audible speech and inaudible ultrasonic frequencies. Advertisers can embed ultrasonic beacons in their physical locations (e.g., retail stores, billboards). The mobile device continuously monitors for these beacons. When a beacon is detected, the device enters a "high-alert" state for keyword recognition. A spoken keyword in the presence of an ultrasonic beacon triggers a higher-priority ad request to the server, which cross-references the beacon's unique ID with the keyword and the device's GPS location to serve a hyper-contextual advertisement. This reduces power consumption, since full speech recognition runs only while a beacon is present.
  • Mermaid Diagram:
    stateDiagram-v2
        [*] --> Idle
        Idle --> ListeningForBeacon: Power On
        ListeningForBeacon --> KeywordMonitoring: Ultrasonic Beacon Detected
        KeywordMonitoring --> Idle: Beacon Lost
        KeywordMonitoring --> AdRetrieval: Spoken Keyword Detected
        AdRetrieval --> AdPresented: Ad Received from Server
        AdPresented --> Idle: Ad Displayed/Finished
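
The state machine above can be sketched as a transition table. Event names mirror the diagram; the table itself is an illustrative encoding, not firmware.

```python
# Sketch of the beacon-gated state machine from the diagram above.

TRANSITIONS = {
    ("Idle", "power_on"): "ListeningForBeacon",
    ("ListeningForBeacon", "beacon_detected"): "KeywordMonitoring",
    ("KeywordMonitoring", "beacon_lost"): "Idle",
    ("KeywordMonitoring", "keyword_detected"): "AdRetrieval",
    ("AdRetrieval", "ad_received"): "AdPresented",
    ("AdPresented", "ad_finished"): "Idle",
}

def run(events, state="Idle"):
    for event in events:
        # Unknown (state, event) pairs leave the state unchanged.
        state = TRANSITIONS.get((state, event), state)
    return state

final = run(["power_on", "beacon_detected", "keyword_detected", "ad_received"])
```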
    

Derivative 1.3: Graphene-Based RF Antenna for Low-Power Geolocation

  • Enabling Description: This variation replaces the standard GPS chipset with a flexible, graphene-based radio-frequency (RF) antenna. The antenna passively harvests energy from ambient RF signals (e.g., Wi-Fi, cellular) and determines location by triangulating and fingerprinting those signals against a known database of RF sources. This component substitution dramatically reduces the device's power consumption for location tracking. The server-side logic remains similar, but the location data received is based on RF signal-strength vectors rather than satellite time-of-flight data. Location-based advertisement retrieval is then triggered by the combination of this low-power location data and a spoken keyword.
  • Mermaid Diagram:
    flowchart TD
        A[Mobile Device with Graphene Antenna] --> B{Scan Ambient RF Signals};
        B --> C[Transmit Signal Fingerprint to Server];
        D[User Speaks Keyword] --> E{Process Audio};
        E --> F[Transmit Keyword to Server];
        G[Server] --> H{Receive Fingerprint and Keyword};
        H --> I[Compare Fingerprint to RF Map Database];
        I --> J[Determine Location];
        J --> K{Query Ad Database with Location + Keyword};
        K --> L[Retrieve Targeted Ad];
        L --> M[Send Ad to Mobile Device];
        M --> N[Display Ad];
        C --> H;
        F --> H;
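
The fingerprint-matching step on the server can be sketched as a nearest-neighbor search over surveyed signal-strength vectors. The survey points, access-point IDs, and RSSI values below are invented for illustration.

```python
# Sketch of fingerprint-based localization: match the device's RSSI
# vector against a surveyed RF map by Euclidean distance.
import math

RF_MAP = {
    "cafe_corner": {"ap1": -40, "ap2": -70, "ap3": -85},
    "park_gate":   {"ap1": -80, "ap2": -45, "ap3": -60},
}

def distance(reading, reference):
    aps = set(reading) | set(reference)
    # Missing APs are treated as a very weak -100 dBm reading.
    return math.sqrt(sum(
        (reading.get(ap, -100) - reference.get(ap, -100)) ** 2 for ap in aps
    ))

def locate(reading):
    # Nearest neighbor over the surveyed fingerprints.
    return min(RF_MAP, key=lambda loc: distance(reading, RF_MAP[loc]))

loc = locate({"ap1": -42, "ap2": -72, "ap3": -83})
```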
    

Axis 2: Operational Parameter Expansion

Derivative 2.1: Nanoscale Acoustic Sensor Network

  • Enabling Description: The invention is scaled down to a network of nanoscale acoustic sensors deployed within a microfluidic chip for lab-on-a-chip applications. The "mobile device" is a diagnostic instrument. The "location" is a specific coordinate on the microfluidic chip. Spoken keywords are replaced by specific acoustic signatures generated by chemical reactions at different locations on the chip. When a target reaction's acoustic signature is detected at a specific location, the system retrieves "advertisements" which are actually commands to dispense a different reagent at an adjacent location on the chip to initiate a subsequent reaction.
  • Mermaid Diagram:
    graph LR
        subgraph MicrofluidicChip
            A(Location X1:Y1) -- a_signature1 --> B(Acoustic Sensor);
            C(Location X2:Y2) -- a_signature2 --> B;
        end
        B --> D{Diagnostic Instrument Server};
        D -- "signature1 at X1:Y1" --> E{Reagent Database};
        E -- "Retrieve command for Reagent B" --> D;
        D --> F(Reagent Dispenser);
        F -- "Dispense Reagent B at X2:Y2" --> C;
    

Derivative 2.2: Planetary-Scale Operation for Asteroid Mining

  • Enabling Description: The system is scaled up for use in autonomous asteroid mining operations. The "mobile electronic devices" are a fleet of autonomous mining drones. "Location" is determined by a deep-space network providing coordinates relative to the asteroid's surface. "Spoken keywords" are replaced by spectral analysis readings from the drones' sensors (e.g., a reading indicating high concentrations of platinum). When a drone reports a "keyword" (high platinum reading) at a specific "location", the central server retrieves "advertisements" which are updated mining priority maps and trajectory commands for other nearby drones, directing them to the resource-rich location.
  • Mermaid Diagram:
    sequenceDiagram
        participant DroneA
        participant CentralServer
        participant DroneB
        participant DroneC
        DroneA->>DroneA: Scans asteroid surface at Loc1
        DroneA->>CentralServer: Reports high platinum signature (keyword) at Loc1
        CentralServer->>CentralServer: Validates reading and updates resource map
        CentralServer->>DroneB: Sends new trajectory command to Loc1
        CentralServer->>DroneC: Sends new trajectory command to Loc1
        DroneB->>DroneB: Adjusts course to Loc1
        DroneC->>DroneC: Adjusts course to Loc1
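
The dispatch rule in this embodiment can be sketched as follows; the threshold value, fleet names, and command tuple format are illustrative assumptions.

```python
# Sketch of the dispatch logic: when a drone reports a resource
# "keyword" above threshold at a location, the server issues trajectory
# commands to every other drone in the fleet.

PLATINUM_THRESHOLD = 0.8  # illustrative minimum spectral score

def handle_report(fleet, reporter, location, platinum_score):
    # Validate the reading, then command the rest of the fleet to the site.
    if platinum_score < PLATINUM_THRESHOLD:
        return []
    return [(drone, "goto", location) for drone in fleet if drone != reporter]

commands = handle_report(["A", "B", "C"], "A", (12.5, -3.1), 0.93)
```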
    

Axis 3: Cross-Domain Application

Derivative 3.1: Aerospace - Predictive Maintenance in Jet Engines

  • Enabling Description: An array of acoustic sensors is placed within a jet engine nacelle. The "location" is the specific sensor's position within the engine. The "spoken keyword" is a specific acoustic frequency pattern that is a known precursor to turbine blade fatigue. When a sensor detects this pattern at its location, a maintenance alert ("advertisement") is transmitted to the flight crew and ground control, providing the exact location of the potential fault and suggesting immediate inspection or a reduction in engine power.
  • Mermaid Diagram:
    flowchart TD
        A[Acoustic Sensor at Turbine Stage 3] --> B{Monitoring for Fatigue Signature};
        B -- Signature Detected --> C[Onboard Avionics Server];
        C --> D{Query Maintenance Database};
        D -- "Signature + Location = High Priority Alert" --> C;
        C --> E[Transmit Alert to Cockpit Display];
        C --> F[Transmit Alert via Satellite to Ground Control];
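
One plausible stdlib-only detector for a narrow-band precursor tone is the Goertzel algorithm, sketched below. The 3.2 kHz target frequency, sample rate, and power threshold are illustrative, not engine-specific, and the Goertzel choice is this document's assumption rather than anything stated in the patent.

```python
# Sketch: detect a narrow-band fatigue-precursor tone in a sensor frame
# using the Goertzel algorithm (single-bin DFT power estimate).
import math

def goertzel_power(samples, sample_rate, target_hz):
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest DFT bin
    w = 2 * math.pi * k / n
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # Squared magnitude of the target bin.
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

SAMPLE_RATE = 48000
FATIGUE_HZ = 3200.0  # hypothetical precursor frequency

def fatigue_alert(samples, threshold=1e6):
    return goertzel_power(samples, SAMPLE_RATE, FATIGUE_HZ) > threshold

tone = [math.sin(2 * math.pi * FATIGUE_HZ * t / SAMPLE_RATE) for t in range(4800)]
silence = [0.0] * 4800
alert_on_tone, alert_on_silence = fatigue_alert(tone), fatigue_alert(silence)
```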
    

Derivative 3.2: AgTech - Pest Detection in Precision Agriculture

  • Enabling Description: An agricultural drone is the "mobile device," equipped with hyperspectral cameras and microphones. The "location" is the GPS coordinate within a large field. The "spoken keyword" is a combination of the specific bio-acoustic signature of a harmful insect (e.g., locust) and a corresponding stress signature in the vegetation's spectral reflectance. When the drone detects this combined signature at a location, the server retrieves an "advertisement" which is a precise prescription map for a targeted pesticide application, dispatched to an autonomous sprayer drone.
  • Mermaid Diagram:
    graph TD
        subgraph Farm Field
            Drone[Survey Drone] -- Scans at GPS_Coord --> Data{Collects Acoustic & Spectral Data};
        end
        Data --> Server{AgCloud Server};
        Server --> Analysis{Correlate Data: Locust sound + Plant stress signature?};
        Analysis -- Yes --> Action[Query Action Database];
        Action --> Prescription[Generate Targeted Spray Prescription for GPS_Coord];
        Prescription --> Sprayer[Dispatch Sprayer Drone];
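
The combined trigger can be sketched as an AND-fusion rule: a prescription is generated only when both signals exceed their thresholds at the same coordinate. Threshold values and the prescription format are illustrative.

```python
# Sketch of the combined acoustic + spectral trigger for a targeted
# spray prescription. Both conditions must hold simultaneously.

def needs_spraying(locust_acoustic_score, plant_stress_index,
                   acoustic_min=0.7, stress_min=0.5):
    return locust_acoustic_score >= acoustic_min and plant_stress_index >= stress_min

def prescription(gps_coord, acoustic_score, stress_index):
    if needs_spraying(acoustic_score, stress_index):
        return {"target": gps_coord, "action": "targeted_spray"}
    return None  # no combined signature: no spraying at this coordinate

rx = prescription((51.5, -0.1), acoustic_score=0.82, stress_index=0.61)
```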
    

Derivative 3.3: Consumer Electronics - Smart Home Proactive Assistance

  • Enabling Description: The system is integrated into a home's local network of smart speakers and sensors. The "mobile device" is any smart speaker. The "location" is the room the speaker is in (e.g., "Kitchen"). The "spoken keyword" is a non-command phrase indicating intent, like "Wow, it's getting dark early." The home automation server receives this keyword and location, cross-references it with the time of day and the status of the home's smart blinds, and retrieves an "advertisement" which is a proactive action: automatically closing the kitchen blinds and turning on the lights.
  • Mermaid Diagram:
    sequenceDiagram
        participant User
        participant KitchenSpeaker
        participant HomeServer
        participant SmartBlinds
        participant SmartLights
        User->>KitchenSpeaker: "Wow, it's getting dark early."
        KitchenSpeaker->>HomeServer: Transmits text="...", location="Kitchen"
        HomeServer->>HomeServer: Analyzes intent, checks time & device status
        HomeServer->>SmartBlinds: Command: Close blinds
        HomeServer->>SmartLights: Command: Turn on lights
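
The proactive rule can be sketched as a function of the utterance, room, time, and device state. The keyword match, cutoff hour, and action tuples are illustrative stand-ins for the server's intent analysis.

```python
# Sketch of the proactive-assistance rule: a non-command utterance plus
# room, time, and blind state selects home-automation actions.

def proactive_actions(utterance, room, hour, blinds_open):
    actions = []
    # Illustrative intent match: "dark" mentioned in the evening.
    if "dark" in utterance.lower() and hour >= 17:
        if blinds_open:
            actions.append((room, "blinds", "close"))
        actions.append((room, "lights", "on"))
    return actions

acts = proactive_actions("Wow, it's getting dark early.", "Kitchen", 18, True)
```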
    

Axis 4: Integration with Emerging Tech

Derivative 4.1: AI-Driven Predictive Ad Retrieval

  • Enabling Description: An AI model on the server analyzes a user's historical location data, speech patterns, and time-of-day habits. The system moves from being reactive to predictive. Based on the AI's prediction that a user is likely to mention a keyword (e.g., "coffee") when they are near a specific location at a certain time, the server pre-fetches a relevant advertisement from the database and caches it on the mobile device. When the user then speaks the keyword, the ad is presented instantly from the local cache, eliminating network latency.
  • Mermaid Diagram:
    flowchart TD
        A[AI Model on Server] --> B{Analyze User History: Location, Speech, Time};
        B --> C{Predict: User likely to say 'coffee' at 8 AM near location X};
        C --> D[Pre-fetch coffee ad for location X];
        D --> E[Push ad to mobile device cache];
        F[User at 8 AM near location X] --> G{Speaks "coffee"};
        G --> H{Mobile Device};
        H --> I[Instantly display ad from local cache];
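
The pre-fetch mechanism can be sketched as a device-side cache; the prediction step is mocked, and the fallback sentinel is an illustrative convention.

```python
# Sketch of the predictive cache: the server pushes a predicted ad to
# the device ahead of time; on a keyword match the device serves it
# locally with no network round trip.

class DeviceCache:
    def __init__(self):
        self._ads = {}

    def prefetch(self, keyword, ad):
        # Server-initiated push of a predicted ad.
        self._ads[keyword] = ad

    def on_keyword(self, keyword):
        # Cache hit: instant local presentation. Miss: go to the server.
        return self._ads.get(keyword, "FETCH_FROM_SERVER")

cache = DeviceCache()
# Mocked prediction: user likely to say "coffee" near location X at 8 AM.
cache.prefetch("coffee", "Ad: Bean There, 100 m ahead")
shown = cache.on_keyword("coffee")
```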
    

Derivative 4.2: IoT Sensor Fusion for Keyword Context

  • Enabling Description: The mobile device acts as a gateway for a user's personal IoT sensor data (e.g., heart rate from a smartwatch, ambient temperature from a sensor). When the user speaks a keyword, this contemporaneous IoT data is bundled with the location and audio data and sent to the server. For example, the spoken keyword "water" combined with a high heart rate and high ambient temperature at a park location would cause the server to retrieve an advertisement for a nearby store selling cold sports drinks, rather than a generic ad for bottled water.
  • Mermaid Diagram:
    graph TD
        subgraph UserContext
            A(Smartwatch) -- Heart Rate: 120bpm --> C(Mobile Device);
            B(Environment Sensor) -- Temp: 85F --> C;
        end
        C -- Location: Park & Keyword: "Water" --> D{Server};
        D --> E{Fuse Data: High HR + High Temp + "Water" in Park};
        E --> F[Query Ad DB for 'cold sports drink'];
        F --> G[Retrieve and Send Ad];
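
The fusion step can be sketched as a rule that maps the same keyword to different ad categories depending on the bundled IoT readings. Thresholds and category names are illustrative.

```python
# Sketch of IoT sensor fusion: heart rate and temperature recontextualize
# the spoken keyword before the ad-database query.

def ad_category(keyword, heart_rate_bpm, temp_f):
    # Exerting user on a hot day: upsell a cold sports drink.
    if keyword == "water" and heart_rate_bpm > 100 and temp_f > 80:
        return "cold_sports_drink"
    if keyword == "water":
        return "bottled_water"
    return "generic"

category = ad_category("water", heart_rate_bpm=120, temp_f=85)
```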
    

Derivative 4.3: Blockchain for Ad Verification and User Rewards

  • Enabling Description: Every ad request (comprising a timestamp, anonymized location hash, and keyword category) is recorded as a transaction on a private blockchain. When an ad is successfully presented to the user, this is also recorded. This provides an immutable, auditable trail for advertisers to verify ad delivery. Furthermore, users are rewarded with cryptocurrency tokens, recorded on the blockchain, for opting in and interacting with the advertisements, creating a transparent and verifiable rewards system.
  • Mermaid Diagram:
    sequenceDiagram
        participant MobileDevice
        participant Server
        participant Blockchain
        participant Advertiser
        MobileDevice->>Server: Request ad (LocationHash, Keyword)
        Server->>Blockchain: Record AdRequest Transaction
        Server->>MobileDevice: Send Ad
        MobileDevice->>Server: Report AdPresented
        Server->>Blockchain: Record AdPresented Transaction
        Blockchain->>MobileDevice: Issue UserReward Token
        Advertiser->>Blockchain: Audit Ad Delivery Ledger
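
The auditable trail can be sketched as a hash-chained ledger, a minimal stand-in for a real private blockchain: each record commits to its predecessor, so any tampering is detectable on audit. The record fields are illustrative.

```python
# Sketch of the ad-delivery ledger as a SHA-256 hash chain.
import hashlib
import json

def append(ledger, record):
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"record": record, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    ledger.append({**body, "hash": digest})

def audit(ledger):
    prev = "0" * 64
    for entry in ledger:
        body = {"record": entry["record"], "prev": entry["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False  # chain broken or record altered
        prev = entry["hash"]
    return True

ledger = []
append(ledger, {"type": "AdRequest", "loc_hash": "abc1", "kw": "food"})
append(ledger, {"type": "AdPresented", "loc_hash": "abc1"})
valid = audit(ledger)
```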
    

Axis 5: The "Inverse" or Failure Mode

Derivative 5.1: Privacy-Preserving Low-Power Mode

  • Enabling Description: The system is designed to operate in a "privacy-first" limited functionality mode. In this mode, all audio processing is done locally on the device using a low-power neural processing unit (NPU). Only a non-reversible hash of a recognized keyword, combined with a quantized (low-precision) location, is sent to the server. The server can match this anonymized data to a broad category of advertisement (e.g., "food" in "downtown area") but receives no personally identifiable information. This allows for coarse-grained ad targeting while protecting user privacy and operating with minimal battery drain. The device is designed to fail-safe into this mode if it detects a potential network security threat.
  • Mermaid Diagram:
    flowchart TD
        A[Mobile Device] --> B{Continuous On-Device Keyword Spotting};
        B -- Keyword "Pizza" detected --> C[Generate Keyword Hash: 0xAbC...];
        A --> D{GPS};
        D --> E[Quantize Location: 40.7N, 74W -> "Midtown"];
        F[Send Anonymized Packet] --> G[Server];
        C --> F;
        E --> F;
        G --> H{Query Ad DB with Hash + Quantized Location};
        H --> I[Retrieve Generic "Midtown Pizza" Ad];
        I --> A;
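
The anonymized packet can be sketched as a one-way keyword hash plus a location snapped to a coarse grid. The grid size and salt handling are illustrative; a deployed system would rotate salts server-side.

```python
# Sketch of the privacy-preserving packet: hashed keyword plus
# quantized (low-precision) location; neither is reversible to the
# exact utterance or position.
import hashlib

def keyword_hash(keyword, salt="daily-rotating-salt"):
    # Non-reversible keyword digest; the salt here is a placeholder.
    return hashlib.sha256((salt + keyword.lower()).encode()).hexdigest()

def quantize(lat, lon, cell_deg=0.1):
    # Snap coordinates to a ~11 km grid so exact position is never sent.
    return (round(lat / cell_deg) * cell_deg, round(lon / cell_deg) * cell_deg)

packet = {"kw": keyword_hash("pizza"), "cell": quantize(40.7412, -73.9896)}
```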
    

Combination Prior Art Scenarios

1. Combination with the WebRTC Open Standard:

  • Description: The system leverages the WebRTC (Web Real-Time Communication) standard to create a peer-to-peer network between users in close proximity. A user's spoken keyword ("sushi") is broadcast directly to nearby devices via a local WebRTC data channel. A participating local restaurant's device (e.g., a tablet) can receive this anonymous broadcast and respond directly with a peer-to-peer "advertisement" in the form of a digital coupon, bypassing a central server for hyper-local ad delivery. This combines the patent's concept with an open standard for decentralized communication.

2. Combination with the GTFS (General Transit Feed Specification) Open Standard:

  • Description: The system integrates with publicly available GTFS data from a city's transit authority. When a user is at a bus stop (location) and speaks a keyword related to their destination ("library"), the server retrieves the relevant travel information (the next bus arrival time from the GTFS feed) and also retrieves a location-based advertisement for a coffee shop located next to the destination library, whose location is also known from open mapping data.
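
The GTFS lookup can be sketched as a next-arrival query over a stop_times-style table. The rows below are invented, not a real feed; times use the GTFS "HH:MM:SS" convention, which permits hours of 24 or more for trips running past midnight.

```python
# Sketch of the GTFS step: given a stop and the current time, find the
# next scheduled arrival from a stop_times-style table.

STOP_TIMES = [
    {"stop_id": "STOP_42", "route": "Bus 7", "arrival_time": "08:05:00"},
    {"stop_id": "STOP_42", "route": "Bus 7", "arrival_time": "08:25:00"},
    {"stop_id": "STOP_99", "route": "Bus 3", "arrival_time": "08:10:00"},
]

def to_seconds(hms):
    h, m, s = (int(p) for p in hms.split(":"))
    return h * 3600 + m * 60 + s  # GTFS allows h >= 24 for late trips

def next_arrival(stop_id, now_hms):
    now = to_seconds(now_hms)
    upcoming = [r for r in STOP_TIMES
                if r["stop_id"] == stop_id and to_seconds(r["arrival_time"]) >= now]
    return min(upcoming, key=lambda r: to_seconds(r["arrival_time"]), default=None)

nxt = next_arrival("STOP_42", "08:10:00")
```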

3. Combination with the RISC-V Open Instruction Set Architecture (ISA):

  • Description: The mobile device's processor is based on the open-source RISC-V ISA. A custom instruction is added to the processor design specifically for "Acoustic Keyword Hashing." This hardware-level instruction allows the device to listen for spoken keywords and convert them into a secure hash using extremely low power, a task that would be less efficient in software. This custom, open-standard-based hardware directly enables the "Privacy-Preserving Low-Power Mode" described in Derivative 5.1, combining the patent's method with an open hardware standard for efficient implementation.
