Patent 9,602,649

Derivative works

Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.

Defensive Disclosure and Prior Art Generation

Document ID: DD-20260426-09602649
Publication Date: April 26, 2026
Subject Patent: U.S. Patent 9,602,649 B2 ("Event disambiguation")
Purpose: This document is intended to enter the public domain as prior art. It discloses a series of derivative inventions, technical variations, and new applications related to the core concepts described in U.S. Patent 9,602,649. The disclosures herein are intended to be enabling for a Person Having Ordinary Skill in the Art (PHOSITA), thereby rendering obvious or non-novel any future patent claims that substantially recite the concepts described.


Axis 1: Material & Component Substitution

Derivative 1.1: Proximity Disambiguation using Non-Audible Acoustic Spectrums

  • Enabling Description: This variation replaces the standard audible-spectrum microphones (recording units) with ultrasonic or infrasonic transducers. The "sensory identifier" is a specific high-frequency ultrasonic chirp (>25 kHz) or a low-frequency infrasonic pulse (<20 Hz) emitted by one of the devices. The devices then record the ambient ultrasonic or infrasonic "background noise" (e.g., from HVAC systems, machinery, or structural vibrations) within a time interval relative to the trigger. The comparison unit performs a cross-correlation on these non-audible frequency samples. This method is advantageous in environments with high audible noise or where a silent trigger is required. The decision unit confirms proximity only if the spectral fingerprints in the non-audible domain are sufficiently similar. A code sketch of the comparison step follows the diagram below.

  • Diagram:

    sequenceDiagram
        participant DeviceA
        participant DeviceB
        DeviceA->>DeviceA: Emit Ultrasonic Chirp (Trigger >25kHz)
        Note over DeviceA, DeviceB: Both devices detect chirp
        DeviceA->>DeviceA: Record ambient ultrasonic noise (t-1 to t-0.1)
        DeviceB->>DeviceB: Record ambient ultrasonic noise (t-1 to t-0.1)
        DeviceA->>DeviceB: Transmit ultrasonic noise sample
        DeviceB->>DeviceB: Compare Sample_A and Sample_B
        alt Samples are >95% correlated
            DeviceB->>DeviceB: Proximity Confirmed
        else Samples are not correlated
            DeviceB->>DeviceB: Proximity Failed
        end
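
  • Code Sketch: A minimal NumPy sketch of the comparison and decision steps, assuming both devices supply time-aligned noise recordings at the same sample rate. The 0.95 threshold mirrors the ">95% correlated" branch in the diagram; everything else (function names, full-mode correlation) is an illustrative assumption, not taken from the patent.

    import numpy as np

    def similarity(sample_a: np.ndarray, sample_b: np.ndarray) -> float:
        """Peak normalized cross-correlation of two equal-rate recordings."""
        n = min(len(sample_a), len(sample_b))
        a = sample_a[:n] - np.mean(sample_a[:n])
        b = sample_b[:n] - np.mean(sample_b[:n])
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom == 0.0:
            return 0.0
        # Full cross-correlation tolerates small clock offsets between devices.
        corr = np.correlate(a, b, mode="full")
        return float(np.max(np.abs(corr)) / denom)

    def proximity_confirmed(sample_a, sample_b, threshold: float = 0.95) -> bool:
        return similarity(sample_a, sample_b) >= threshold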
    

Derivative 1.2: Hybrid Sensing using Inertial and Acoustic Components

  • Enabling Description: The detection of the sensory identifier is decoupled from the acoustic recording medium. The "detection unit" is implemented as a high-sensitivity MEMS accelerometer. The trigger event is a physical tap, but it is detected as a specific shock and vibration signature by the accelerometer, not as an acoustic event by the microphone. This provides a more power-efficient and precise trigger, as the accelerometer can be in a low-power listening state. Upon accelerometer trigger, the standard audible-spectrum microphone (the "recording unit") is activated to capture the ambient background audio for comparison, as described in the original patent. This hybrid approach prevents the trigger sound itself from contaminating the ambient background sample. A tap-detection code sketch follows the diagram below.

  • Diagram:

    flowchart TD
        subgraph DeviceA
            A1(Low-Power MEMS Accelerometer) --> A2{Tap Detected?};
            A2 -- Yes --> A3(Activate Microphone);
            A3 --> A4(Record Ambient Audio Sample);
            A4 --> A5(Transmit Sample);
        end
        subgraph DeviceB
            B1(Low-Power MEMS Accelerometer) --> B2{Tap Detected?};
            B2 -- Yes --> B3(Activate Microphone);
            B3 --> B4(Record Ambient Audio Sample);
        end
        A5 --> B6(Comparison Unit on DeviceB);
        B4 --> B6;
        B6 --> B7{Samples Match?};
        B7 -- Yes --> B8(Proximity Confirmed);
        B7 -- No --> B9(Proximity Denied);
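
  • Code Sketch: An illustrative software tap detector for the accelerometer trigger. The 2.5 g shock threshold and 50 ms ring-down window are assumed tuning values; a real MEMS part would typically raise a hardware tap-detect interrupt while the main processor sleeps.

    import numpy as np

    G = 9.81  # gravity, m/s^2

    def detect_tap(accel_xyz: np.ndarray, sample_rate_hz: int,
                   shock_threshold_g: float = 2.5,
                   ringdown_ms: float = 50.0) -> bool:
        """True if the samples contain a short, sharp shock signature.

        accel_xyz: shape (N, 3) accelerometer samples in m/s^2. A tap is a
        single spike that decays quickly, unlike sustained vibration
        (walking, machinery), which stays elevated.
        """
        mag = np.linalg.norm(accel_xyz, axis=1)
        peak = int(np.argmax(mag))
        if mag[peak] < shock_threshold_g * G:
            return False
        window = int(sample_rate_hz * ringdown_ms / 1000.0)
        tail = mag[peak + window : peak + 2 * window]
        if tail.size == 0:
            return False  # not enough samples to verify the ring-down
        # Require the signal to settle back near 1 g after the window.
        return float(np.mean(tail)) < 1.5 * G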
    

Axis 2: Operational Parameter Expansion

Derivative 2.1: Nanoscale Proximity Detection for Swarm Robotics in Viscous Fluids

  • Enabling Description: This variation scales the invention down to the micro/nanoscale for coordinating swarms of robotic agents in a liquid medium. The "devices" are micro-electromechanical systems (MEMS) or nanobots. The "audio signal" is replaced by high-frequency phonons (vibrational waves) propagating through the fluid, detected by piezoelectric nano-transducers. The "trigger" is a specific vibrational frequency pattern generated by a lead bot. The comparison unit analyzes the recorded background thermal and mechanical vibrations in the fluid from a common time interval. A high correlation confirms that the bots are operating within the same localized fluid dynamic environment, enabling coordinated action without direct communication. A fingerprint code sketch follows the diagram below.

  • Diagram:

    stateDiagram-v2
        [*] --> Idle
        Idle --> Triggered: Leader Bot emits phonon pulse
        Triggered --> Recording: Record background fluidic vibrations
        Recording --> Comparing: Exchange vibration fingerprints
        Comparing --> Proximate: If cross-correlation >= threshold
        Comparing --> NotProximate: If cross-correlation < threshold
        Proximate --> CoordinatedAction
        NotProximate --> Idle
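
  • Code Sketch: The "exchange vibration fingerprints" step, assuming each agent transmits a compact log-band-energy spectrum rather than a raw waveform (bandwidth matters at this scale). The band count and the cosine-similarity threshold are illustrative assumptions.

    import numpy as np

    def band_fingerprint(vibration: np.ndarray, n_bands: int = 16) -> np.ndarray:
        """Compress a vibration recording into per-band log energies."""
        spectrum = np.abs(np.fft.rfft(vibration)) ** 2
        bands = np.array_split(spectrum, n_bands)
        return np.log1p(np.array([band.sum() for band in bands]))

    def fingerprints_match(fp_a: np.ndarray, fp_b: np.ndarray,
                           threshold: float = 0.9) -> bool:
        """Cosine similarity of two fingerprints against a fixed threshold."""
        cos = float(np.dot(fp_a, fp_b) /
                    (np.linalg.norm(fp_a) * np.linalg.norm(fp_b)))
        return cos >= threshold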
    

Derivative 2.2: Proximity Disambiguation in Extreme High-Noise Industrial Environments

  • Enabling Description: The method is adapted for use in environments with ambient noise levels exceeding 120 dB, such as steel foundries or engine test cells. The recording units use directional microphones or microphone arrays with adaptive noise cancellation (ANC) capabilities. The trigger is a multi-tone audio sequence with frequencies chosen to be outside the dominant spectrum of the industrial noise. Before comparison, the audio samples are processed by a digital signal processor (DSP), which applies a pre-characterized noise profile of the environment to filter out predictable, high-amplitude background noise. The comparison is then performed on the residual, non-stationary acoustic signals, which are more indicative of a shared local environment. A spectral-subtraction code sketch follows the diagram below.

  • Diagram:

    graph LR
        A[Device 1: Raw Audio Sample] --> B(DSP Filter 1);
        C[Device 2: Raw Audio Sample] --> D(DSP Filter 2);
        E[Pre-characterized Noise Profile] --> B;
        E --> D;
        B -- Filtered Sample 1 --> F(Comparison Unit);
        D -- Filtered Sample 2 --> F;
        F --> G{Similarity > Threshold?};
        G -- Yes --> H[Confirm Proximity];
        G -- No --> I[Reject Proximity];
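
  • Code Sketch: One plausible realization of the "DSP Filter" blocks: classical spectral subtraction of a pre-characterized noise magnitude spectrum, written as a minimal NumPy sketch. The frame layout and the over-subtraction factor alpha are assumed tuning choices, not from the patent.

    import numpy as np

    def spectral_subtract(frames: np.ndarray, noise_profile: np.ndarray,
                          alpha: float = 1.0) -> np.ndarray:
        """Remove a known noise spectrum from windowed audio frames.

        frames:        shape (n_frames, frame_len) time-domain frames.
        noise_profile: mean magnitude spectrum of the environment,
                       measured offline (length frame_len // 2 + 1).
        Returns frames containing mostly the residual, non-stationary
        signal that feeds the proximity comparison.
        """
        spectra = np.fft.rfft(frames, axis=1)
        mags, phases = np.abs(spectra), np.angle(spectra)
        # Over-subtract by alpha; clamp at zero to avoid negative magnitudes.
        residual = np.maximum(mags - alpha * noise_profile, 0.0)
        return np.fft.irfft(residual * np.exp(1j * phases), axis=1)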
    

Axis 3: Cross-Domain Application

Derivative 3.1: Aerospace - Drone Swarm Formation Integrity

  • Enabling Description: This invention is applied to verify the spatial formation of an autonomous drone swarm. A designated "leader" drone executes a specific, abrupt maneuver (e.g., a rapid propeller pitch modulation) that creates a unique acoustic signature. This signature is the "sensory identifier" trigger. All drones in the swarm detect this signature. They then compare samples of the ambient aerodynamic and atmospheric noise recorded in a time interval immediately preceding the trigger. A high correlation of this "wind noise" profile confirms that the drones are flying in close formation through the same air mass, allowing the swarm to validate its integrity against GPS spoofing or sensor drift. A leader-side comparison sketch follows the diagram below.

  • Diagram:

    sequenceDiagram
        participant LeaderDrone
        participant SwarmDrone1
        participant SwarmDrone2
        LeaderDrone->>LeaderDrone: Execute Propeller Pitch Modulation (Trigger)
        SwarmDrone1->>SwarmDrone1: Detect Trigger
        SwarmDrone2->>SwarmDrone2: Detect Trigger
        SwarmDrone1->>LeaderDrone: Send pre-trigger wind noise sample
        SwarmDrone2->>LeaderDrone: Send pre-trigger wind noise sample
        LeaderDrone->>LeaderDrone: Compare noise samples from Drone1, Drone2
        alt High Correlation
            LeaderDrone->>SwarmDrone1: Formation Integrity OK
            LeaderDrone->>SwarmDrone2: Formation Integrity OK
        else Low Correlation
            LeaderDrone->>LeaderDrone: Alert! Formation Breach
        end
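
  • Code Sketch: The leader-side check, reusing the peak normalized cross-correlation from the Derivative 1.1 sketch. The 0.9 threshold and the dictionary-based API are illustrative assumptions.

    import numpy as np

    def _similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Peak normalized cross-correlation (as in the Derivative 1.1 sketch)."""
        n = min(len(a), len(b))
        a = a[:n] - a[:n].mean()
        b = b[:n] - b[:n].mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom == 0.0:
            return 0.0
        return float(np.max(np.abs(np.correlate(a, b, "full"))) / denom)

    def check_formation(leader_sample: np.ndarray, drone_samples: dict,
                        threshold: float = 0.9) -> dict:
        """Map each drone ID to True if its pre-trigger wind noise matches."""
        return {drone_id: _similarity(leader_sample, s) >= threshold
                for drone_id, s in drone_samples.items()}

    # A formation-breach alert fires if any drone falls below the threshold:
    # status = check_formation(leader_wav, {"drone1": wav1, "drone2": wav2})
    # breach = not all(status.values())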
    

Derivative 3.2: AgTech - Dynamic Livestock Grouping and Health Monitoring

  • Enabling Description: The devices are smart ear tags attached to cattle in a large feedlot. A central control system broadcasts a specific ultrasonic chirp over a targeted area, which serves as the trigger. The tags that detect the chirp then compare their recorded ambient audio from the moments just before the trigger. This audio contains sounds of movement, mastication, and vocalizations. By identifying tags with highly similar audio backgrounds, the system can dynamically identify which animals are clustered together. This data is used to model social behavior, track disease vectors, or confirm that a specific group of animals has visited a feeding or watering station. A grouping code sketch follows the diagram below.

  • Diagram:

    erDiagram
        FARM_AREA ||--|{ CATTLE_TAG : contains
        CATTLE_TAG {
            string TagID
            audio AudioSample
            datetime TriggerTimestamp
        }
        CATTLE_GROUP ||--|{ CATTLE_TAG : groups
        CATTLE_GROUP {
            int GroupID
            string Status
        }
        FARM_AREA {
            string AreaID
            string Location
        }
        CATTLE_GROUP }o--|| FARM_AREA : located_in
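
  • Code Sketch: The dynamic-grouping step as connected components over a pairwise-similarity graph, implemented with union-find. The similarity callable and the 0.9 threshold are illustrative assumptions.

    from itertools import combinations

    def group_tags(fingerprints: dict, similar, threshold: float = 0.9) -> list:
        """Cluster ear tags whose audio fingerprints are pairwise similar.

        fingerprints: tag_id -> fingerprint; similar: (fp, fp) -> float.
        Returns a list of sets of tag IDs, one per inferred cluster.
        """
        parent = {tag: tag for tag in fingerprints}

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path compression
                x = parent[x]
            return x

        for a, b in combinations(fingerprints, 2):
            if similar(fingerprints[a], fingerprints[b]) >= threshold:
                parent[find(a)] = find(b)  # union the two clusters

        groups = {}
        for tag in fingerprints:
            groups.setdefault(find(tag), set()).add(tag)
        return list(groups.values())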
    

Derivative 3.3: Consumer Electronics - Zero-Configuration Smart Home Room Grouping

  • Enabling Description: The method is used to automatically assign new smart home devices (e.g., bulbs, plugs) to a "room" controlled by a smart speaker (e.g., Amazon Echo, Google Home). During setup, the user places the new device in the desired room and initiates a pairing mode. The smart speaker in that room emits a distinct audio chirp (the trigger). Any new devices that hear the chirp then compare their recorded ambient room audio (e.g., background conversation, television sound, air conditioner hum) with the audio recorded by the smart speaker. If the audio environments match, the speaker automatically configures the new device as part of its room group, eliminating the need for manual configuration via an app. A chirp-detection code sketch follows the diagram below.

  • Diagram:

    flowchart TD
        A[User starts pairing mode for New Smart Bulb] --> B(Smart Speaker emits 'Room-ID' chirp);
        C[New Smart Bulb] -- Hears Chirp --> D(Record Ambient Audio);
        E[Smart Speaker] -- Hears Chirp --> F(Record Ambient Audio);
        D -- Bulb's Audio Sample --> G[Comparison on Speaker];
        F -- Speaker's Audio Sample --> G;
        G --> H{Samples Match?};
        H -- Yes --> I[Speaker auto-assigns Bulb to 'Living Room' group];
        H -- No --> J[Pairing Failed];
    

Axis 4: Integration with Emerging Tech

Derivative 4.1: AI-Driven Adaptive Proximity Detection

  • Enabling Description: The "comparison unit" and "decision unit" are replaced by a trained machine learning model, such as a Siamese neural network. The network is trained on pairs of audio snippets from thousands of different environments. It learns to produce a "similarity score" that is far more robust than simple cross-correlation. Furthermore, the model can classify the acoustic environment (e.g., "office," "street," "vehicle") and dynamically adjust the similarity threshold required for a positive match. This AI-driven approach allows the system to perform reliably in a wide range of contexts without manual tuning. The "detection unit" can also be AI-based, trained to recognize complex events like a specific spoken phrase or a musical cue as a trigger.

  • Diagram:

    graph TD
        subgraph Training_Phase
            A[Paired Audio Samples] --> B(Siamese Network);
            C[Environment Labels] --> B;
            B --> D[Trained Similarity Model];
        end
        subgraph Inference_Phase
            E[Device 1 Audio Sample] --> F(Trained Similarity Model);
            G[Device 2 Audio Sample] --> F;
            F --> H[Similarity Score];
            F --> I[Environment Classification];
            I --> J{Adjust Threshold};
            J & H --> K{Score > Dynamic Threshold?};
            K --> L[Proximity Decision];
        end
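
  • Code Sketch: A minimal Siamese similarity model in PyTorch. The precomputed 128-dimensional spectral feature per snippet, the layer sizes, and the embedding dimension are illustrative assumptions; the derivative only requires some learned similarity function.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SiameseEncoder(nn.Module):
        def __init__(self, feat_dim: int = 128, embed_dim: int = 32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(feat_dim, 64), nn.ReLU(),
                nn.Linear(64, embed_dim),
            )

        def forward(self, x_a: torch.Tensor, x_b: torch.Tensor) -> torch.Tensor:
            # Both branches share weights; similarity is the cosine of the
            # two embeddings, in [-1, 1].
            e_a = F.normalize(self.net(x_a), dim=-1)
            e_b = F.normalize(self.net(x_b), dim=-1)
            return (e_a * e_b).sum(dim=-1)

    # Training would use a contrastive or binary cross-entropy loss over
    # pairs labeled same-environment / different-environment; at inference
    # the score is compared against the dynamically adjusted threshold.
    score = SiameseEncoder()(torch.randn(1, 128), torch.randn(1, 128))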
    

Derivative 4.2: Blockchain-Verified Proximity for Chain of Custody

  • Enabling Description: This variation provides an immutable, auditable record of a proximity event. When two devices (e.g., a courier's smartphone and a high-value package's smart tag) successfully confirm proximity using the audio disambiguation method, they perform a cryptographic handshake. They collaboratively generate a hash of the shared ambient audio fingerprint, the trigger timestamp, and their unique digital identifiers. This hash is then submitted as a transaction to a distributed ledger (blockchain). This creates a tamper-proof "proof-of-proximity" record, which can be used to automate and secure chain-of-custody handoffs, verifying that two assets were in the same place at the same time. A record-hashing sketch follows the diagram below.

  • Diagram:

    sequenceDiagram
        participant CourierDevice
        participant PackageTag
        participant Blockchain
        CourierDevice->>PackageTag: Initiate Proximity Check (Tap Trigger)
        Note over CourierDevice, PackageTag: Exchange & Compare Ambient Audio
        PackageTag-->>CourierDevice: Proximity Confirmed
        CourierDevice->>CourierDevice: Generate Hash(AudioFP, Timestamp, IDs)
        PackageTag->>PackageTag: Generate Hash(AudioFP, Timestamp, IDs)
        Note over CourierDevice, PackageTag: Verify Hashes Match
        CourierDevice->>Blockchain: Submit Transaction(ProximityRecordHash)
        Blockchain-->>CourierDevice: Transaction Confirmed
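
  • Code Sketch: The record both devices hash before submission, as a minimal Python sketch. The field names and the canonical-JSON choice are assumptions; the ledger client itself is out of scope.

    import hashlib
    import json

    def proximity_record_hash(audio_fingerprint: bytes, trigger_ts_ms: int,
                              device_ids: list) -> str:
        """SHA-256 over a canonical encoding of the proximity event."""
        record = {
            "fp": audio_fingerprint.hex(),
            "ts": trigger_ts_ms,
            "ids": sorted(device_ids),  # order-independent across devices
        }
        # Canonical serialization so both devices produce identical bytes.
        canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

    # Each side computes the digest independently; equality is verified
    # before the courier device submits it as a blockchain transaction.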
    

Axis 5: The "Inverse" or Failure Mode

Derivative 5.1: Graceful Degradation for Low-Confidence Proximity

  • Enabling Description: This version of the invention degrades gracefully rather than failing outright. The decision unit operates on a tiered confidence model; a decision-function code sketch follows the diagram below.

    • Tier 1 (High Confidence): Sensory identifiers match AND ambient audio correlation is 98% or higher. All device functions are enabled.
    • Tier 2 (Medium Confidence): Identifiers match, but audio correlation is at least 90% and below 98% (e.g., due to echo or obstruction). The devices pair in a "limited-functionality" mode, allowing only low-sensitivity data exchange and displaying a warning to the user.
    • Tier 3 (No Confidence): Identifiers match, but audio correlation is below 90%. Pairing fails completely.

    This tiered model prevents a total failure in ambiguous situations and provides a more nuanced security model.
  • Diagram:

    stateDiagram-v2
        state "Matching" as M
        M --> HighConfidence: Audio Corr. >= 98%
        M --> MediumConfidence: 90% <= Audio Corr. < 98%
        M --> Failed: Audio Corr. < 90%
        state "High Confidence" as HighConfidence {
            [*] --> FullFunctionality
        }
        state "Medium Confidence" as MediumConfidence {
            [*] --> LimitedFunctionality
            LimitedFunctionality --> UserAlert
        }
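
  • Code Sketch: A direct encoding of the tiered decision unit, using the thresholds from the description. The Tier enum and function names are illustrative, not from the patent.

    from enum import Enum

    class Tier(Enum):
        FULL_FUNCTIONALITY = "high confidence"
        LIMITED_FUNCTIONALITY = "medium confidence"  # low-sensitivity data only
        PAIRING_FAILED = "no confidence"

    def decide_tier(identifiers_match: bool, audio_corr: float) -> Tier:
        """Map a correlation score to the tiered confidence model."""
        if not identifiers_match or audio_corr < 0.90:
            return Tier.PAIRING_FAILED
        if audio_corr < 0.98:
            return Tier.LIMITED_FUNCTIONALITY  # also surfaces a user warning
        return Tier.FULL_FUNCTIONALITY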
    

Combination Prior Art with Open Standards

Combination 1: WebRTC (Web Real-Time Communication)

  • Enabling Description: The method is implemented within a web browser using standard WebRTC APIs to establish a secure, peer-to-peer data connection. A web application on a first device uses the getUserMedia() API to access the microphone and the Web Audio API to generate a trigger tone. A second device, running the same web app, also uses getUserMedia() to listen for the trigger and record ambient audio. The audio samples are exchanged over the initial, untrusted WebRTC data channel. Only after the ambient audio is verified as matching is the data channel "promoted" to a trusted state for exchanging sensitive information, thus securing a standard peer-to-peer web protocol against remote man-in-the-middle attacks.
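
  • Code Sketch: The browser flow in the description uses getUserMedia() and the Web Audio API; as a server-side analogue, this sketch uses aiortc (a Python WebRTC implementation) to show only the gating logic: the data channel starts untrusted, carries the fingerprint exchange, and is "promoted" once the fingerprints match. Signaling setup is omitted, and fingerprint_matches(), local_fingerprint, and handle_sensitive_payload() are hypothetical placeholders.

    import hmac
    from aiortc import RTCPeerConnection

    local_fingerprint = b"\x00" * 32  # placeholder: derived from recorded ambient audio
    channel_trusted = False           # application-level flag, not a WebRTC concept

    def fingerprint_matches(remote_fp: bytes, local_fp: bytes) -> bool:
        # Placeholder comparison; a real system would cross-correlate audio
        # features, not require byte-for-byte equality.
        return hmac.compare_digest(remote_fp, local_fp)

    def handle_sensitive_payload(message: bytes) -> None:
        pass  # application traffic, only reachable after promotion

    pc = RTCPeerConnection()
    channel = pc.createDataChannel("ambient-audio-verify")

    @channel.on("message")
    def on_message(message: bytes) -> None:
        global channel_trusted
        if not channel_trusted:
            # Until promotion, the only accepted payload is the peer's
            # ambient-audio fingerprint.
            if fingerprint_matches(message, local_fingerprint):
                channel_trusted = True  # promote: sensitive traffic allowed
            return
        handle_sensitive_payload(message)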

Combination 2: Bluetooth Beacons (Eddystone/iBeacon)

  • Enabling Description: The sensory identifier is provided by a standard Bluetooth Low Energy (BLE) beacon (e.g., Eddystone or iBeacon). Multiple smartphones in the vicinity receive the beacon's universally unique identifier (UUID) broadcast. The reception of the UUID acts as the trigger. The phones then use the '649 method to compare ambient audio. A server can then conclude with high certainty that all phones with matching audio fingerprints were not just within the ~50m range of the beacon, but were in the same specific acoustic space (e.g., standing together in a specific room or aisle where the beacon is located), providing a hyper-location context that BLE alone cannot.
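
  • Code Sketch: Parsing the beacon frame whose reception acts as the trigger. The byte layout follows Apple's published iBeacon format (company ID 0x004C, type 0x02, length 0x15); how the raw advertisement bytes are obtained from the BLE scanning stack is assumed and out of scope.

    import struct
    import uuid

    def parse_ibeacon(mfg_data: bytes):
        """Return (uuid, major, minor, tx_power) or None if not an iBeacon.

        mfg_data: manufacturer-specific AD payload, beginning with the
        little-endian company ID (0x004C for Apple).
        """
        if len(mfg_data) < 25 or mfg_data[:4] != b"\x4c\x00\x02\x15":
            return None
        beacon_uuid = uuid.UUID(bytes=mfg_data[4:20])
        major, minor = struct.unpack(">HH", mfg_data[20:24])
        (tx_power,) = struct.unpack("b", mfg_data[24:25])  # signed dBm at 1 m
        return beacon_uuid, major, minor, tx_power

    # On receipt of the expected UUID, the phone timestamps the trigger and
    # freezes the preceding ambient-audio buffer for the '649-style comparison.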

Combination 3: NTP (Network Time Protocol)

  • Enabling Description: For a group of networked devices that are not in physical contact, the trigger is a globally synchronized time event. All devices use the open NTP standard to synchronize their internal clocks with a high degree of precision (<10 ms). A pre-agreed future timestamp (e.g., 14:00:00.000 UTC) is designated as the trigger. At precisely this moment, all participating devices save a fingerprint of the audio recorded in the preceding second. They then exchange these fingerprints over the network. This allows a server to identify clusters of devices that share a common acoustic environment without any physical interaction or audible trigger, based purely on a shared, precisely timed event.
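
  • Code Sketch: The synchronized trigger, sketched with ntplib (an open-source Python NTP client). The 10 ms polling interval is an assumed value, and capture_last_second_fingerprint() is a hypothetical stand-in for the device's audio pipeline.

    import time
    import ntplib

    def wait_for_trigger(trigger_unix_ts: float,
                         ntp_server: str = "pool.ntp.org") -> None:
        """Sleep until the agreed UTC instant, in NTP-corrected time."""
        offset = ntplib.NTPClient().request(ntp_server, version=3).offset
        while True:
            remaining = trigger_unix_ts - (time.time() + offset)
            if remaining <= 0:
                return
            time.sleep(min(remaining, 0.01))  # assumed 10 ms polling interval

    def capture_last_second_fingerprint() -> bytes:
        raise NotImplementedError  # hypothetical: device audio pipeline

    def run(trigger_unix_ts: float) -> bytes:
        wait_for_trigger(trigger_unix_ts)
        # At the trigger instant, freeze the preceding second of audio and
        # return its fingerprint for exchange and server-side clustering.
        return capture_last_second_fingerprint()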
