Patent 8374358

Derivative works

Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.

Defensive Disclosure Document for US Patent 8,374,358

Publication Date: May 9, 2026
Subject: Derivative Works and Obvious Variations of a Method for Determining a Noise Reference Signal for Noise Compensation and/or Noise Reduction.
Reference Patent: US 8,374,358 B2 ("the '358 patent")

This document discloses a series of methods, systems, and applications that build upon, vary, or represent alternative embodiments of the core inventive concept described in the '358 patent. The purpose of this disclosure is to place these variations into the public domain, thereby establishing them as prior art for future patent applications. The core concept of the '358 patent involves using two separate adaptive filters on two audio signals and combining their outputs to generate a noise reference signal, where the filters are adapted to minimize a wanted signal component.
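
The core concept can be sketched in code. The following is a minimal, illustrative simulation, not the patent's actual implementation: two FIR filters H1 and H2 are driven by an NLMS-style update that minimizes the combined output U while only the wanted signal is present, so that U becomes free of the wanted component. The filter length, step size, toy propagation paths, and the anchored first tap of H1 (used here to rule out the trivial all-zero solution; the patent's actual constraint may differ) are all assumptions of this sketch.

```python
# Minimal sketch: dual adaptive filters whose difference forms the noise
# reference U, adapted NLMS-style to null the wanted-signal component.
import math, random

L = 4                              # taps per filter (assumption)
MU = 0.5                           # NLMS step size (assumption)
EPS = 1e-8

h1 = [1.0] + [0.0] * (L - 1)       # first tap anchored (assumption, see above)
h2 = [0.0] * L

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def step(buf1, buf2):
    """One sample: U = h1.buf1 - h2.buf2; NLMS update drives U toward 0."""
    u = dot(h1, buf1) - dot(h2, buf2)
    norm = dot(buf1, buf1) + dot(buf2, buf2) + EPS
    g = MU * u / norm
    for i in range(1, L):          # h1[0] stays anchored
        h1[i] -= g * buf1[i]
    for i in range(L):             # opposite sign: h2 enters U with '-'
        h2[i] += g * buf2[i]
    return u

# Toy wanted signal reaching the two inputs through different fixed gains.
random.seed(0)
b1, b2 = [0.0] * L, [0.0] * L
u = 0.0
for n in range(5000):
    s = math.sin(0.3 * n) + 0.3 * random.uniform(-1, 1)   # "speech"
    b1 = [1.0 * s] + b1[:-1]       # path to input 1 (gain 1.0)
    b2 = [0.7 * s] + b2[:-1]       # path to input 2 (gain 0.7)
    u = step(b1, b2)

print(round(u, 4), round(h2[0], 3))
```

After convergence U is nulled on wanted-only input (H2's leading tap approaches 1/0.7, the relative path gain), so any uncorrelated noise passing through the same structure survives into U as the noise reference.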


Axis 1: Material & Component Substitution

Derivative 1.1: Analog Implementation with MEMS Piezoelectric Adaptive Filters

  • Enabling Description: The digital signal processor (DSP) and discrete adaptive filters of the '358 patent are replaced with an analog, low-power implementation using Micro-Electro-Mechanical Systems (MEMS). Two MEMS microphone inputs feed their signals to two corresponding MEMS adaptive filters. Each filter comprises a silicon cantilever array coated with a piezoelectric film (e.g., lead zirconate titanate, PZT). A low-power microcontroller executes the adaptation logic by applying a variable DC bias voltage to the cantilevers. This voltage modifies the stiffness and resonant frequency of each cantilever, thereby changing the filter's overall transfer function. The analog outputs from the two MEMS filters are then differenced by an operational amplifier to produce the noise reference signal. The microcontroller calculates the adaptation criterion (e.g., minimizing output energy during speech) and updates the control voltages accordingly. This architecture significantly reduces power consumption and latency, making it suitable for battery-powered hearables and edge devices.
  • Mermaid Diagram:
    graph TD
        subgraph System Architecture
            Mic1[MEMS Mic 1] --> AF1[MEMS Piezoelectric Filter 1];
            Mic2[MEMS Mic 2] --> AF2[MEMS Piezoelectric Filter 2];
            AF1 --> Sub[Analog Subtractor];
            AF2 --> Sub;
            Sub --> NRS[Noise Reference Signal];
            NRS --> MCU[Microcontroller Unit Adaptation Logic];
            MCU -->|Control Voltage V1| AF1;
            MCU -->|Control Voltage V2| AF2;
        end
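
A hedged sketch of the microcontroller's adaptation logic for this derivative: because the MEMS filters are analog, the MCU cannot compute coefficient gradients directly, so a perturb-and-observe (coordinate descent) search over the two bias voltages is a natural fit. The `filter_gain` voltage-to-gain model and all numeric values below are invented stand-ins for the real piezoelectric response.

```python
# Perturb-and-observe search over two bias voltages, minimizing the measured
# energy of the analog subtractor output while the wanted signal is active.
import random

def filter_gain(v):
    """Hypothetical bias-voltage -> passband-gain model (assumption)."""
    return 0.5 + 0.1 * v

def output_energy(v1, v2, samples):
    """Energy of the subtractor output over one block of wanted signal;
    the 0.7 factor is a toy relative path gain between the two mics."""
    g1, g2 = filter_gain(v1), filter_gain(v2)
    return sum((g1 * s - g2 * 0.7 * s) ** 2 for s in samples)

random.seed(1)
block = [random.uniform(-1, 1) for _ in range(64)]   # wanted-signal block
v1, v2, dv = 2.0, 0.0, 0.05                          # start points, step

for _ in range(300):
    best = (output_energy(v1, v2, block), v1, v2)
    for d1, d2 in ((dv, 0), (-dv, 0), (0, dv), (0, -dv)):
        e = output_energy(v1 + d1, v2 + d2, block)
        if e < best[0]:
            best = (e, v1 + d1, v2 + d2)
    _, v1, v2 = best                                 # keep best perturbation

e_final = output_energy(v1, v2, block)
print(round(e_final, 6))
```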
    

Derivative 1.2: Optical Correlator for Ultrafast Filter Adaptation

  • Enabling Description: The digital adaptation algorithm (e.g., NLMS) is replaced by an optical processing unit for near-instantaneous computation of filter coefficients. The two input audio signals are converted to optical signals via acousto-optic modulators (AOMs). These modulated light beams are passed through spatial light modulators (SLMs) which physically represent the filter coefficients H1 and H2. The resulting beams are interfered on a photodetector array, yielding the noise reference signal. A portion of the input and output light is diverted to an optical correlator, which uses Fourier-transforming lenses to compute the cross-correlation between the noise reference and input signals. The resulting optical pattern provides the error gradient used to update the SLMs, thereby adapting the filters. This method is suited for extremely wideband signals where digital adaptation would be a computational bottleneck.
  • Mermaid Diagram:
    flowchart LR
        subgraph Signal Path
            A[Audio In 1] --> AOM1[Acousto-Optic Modulator 1]
            B[Audio In 2] --> AOM2[Acousto-Optic Modulator 2]
            AOM1 --> SLM1[Spatial Light Modulator H1] --> BeamCombiner
            AOM2 --> SLM2[Spatial Light Modulator H2] --> BeamCombiner
            BeamCombiner --> Photodetector --> NRS[Noise Ref Out]
        end
        subgraph Adaptation Path
            NRS --> OptCorr[Optical Correlator]
            AOM1 --> OptCorr
            AOM2 --> OptCorr
            OptCorr -->|Control Signal| SLM1
            OptCorr -->|Control Signal| SLM2
        end
    

Axis 2: Operational Parameter Expansion

Derivative 2.1: Cryogenic Superconducting Implementation for Quantum-Level Sensing

  • Enabling Description: For applications requiring extreme sensitivity, such as quantum computing or deep-space radio astronomy, the system is implemented using superconducting electronics operating at cryogenic temperatures (e.g., 4 Kelvin). The inputs are detected by superconducting sensors (e.g., SQUIDs). The adaptive filters are constructed from Josephson junction arrays, with coefficients stored as magnetic flux quanta in superconducting loops. The adaptation algorithm is performed by a quantum annealing processor that adjusts the flux quanta to find the global minimum of the wanted signal power. The subtraction is executed by a Superconducting Quantum Interference Filter (SQIF). This implementation eliminates thermal noise, achieving signal-to-noise ratios impossible at room temperature.
  • Mermaid Diagram:
    sequenceDiagram
        participant Sensor1 as Superconducting Sensor 1
        participant Sensor2 as Superconducting Sensor 2
        participant JJA1 as Josephson Junction Array (H1)
        participant JJA2 as Josephson Junction Array (H2)
        participant SQIF as Superconducting Subtractor
        participant QAP as Quantum Annealing Processor
    
        Sensor1->>JJA1: Signal 1
        Sensor2->>JJA2: Signal 2
        JJA1->>SQIF: Filtered Signal 1
        JJA2->>SQIF: Filtered Signal 2
        SQIF-->>QAP: Noise Reference Signal (U)
        QAP->>JJA1: Update Flux Quanta (Coefficients)
        QAP->>JJA2: Update Flux Quanta (Coefficients)
    

Derivative 2.2: Industrial-Scale Application for Seismic Wave Cancellation

  • Enabling Description: The principle is scaled up for geophysical applications. A primary seismometer array (first signal) is placed at a monitoring site, and a secondary array (second signal) is placed near a known noise source (e.g., a highway). The multichannel time-series data from these arrays are processed on a high-performance computing (HPC) cluster. The dual adaptive filters are high-order IIR filters designed to model the complex transfer functions of seismic waves through heterogeneous geological strata. The system adapts the filters to generate a "noise reference wavefield," effectively canceling the correlated noise from the urban source to enhance the detection of faint earthquake signals or clandestine underground tests. The adaptation uses a block-based Recursive Least Squares (RLS) algorithm to handle the long impulse responses inherent in seismic data.
  • Mermaid Diagram:
    graph TD
        A["Seismometer Array 1 (Primary)"] --> A_data;
        B["Seismometer Array 2 (Secondary)"] --> B_data;
        subgraph HPC [High-Performance Cluster]
            A_data[Data from Array 1] --> H2[Adaptive Filter H2];
            B_data[Data from Array 2] --> H1[Adaptive Filter H1];
            H2 --> Sub[Combiner];
            H1 --> Sub;
            Sub --> NRS[Noise Reference Wavefield];
            NRS --> Adapt[Adaptation Logic];
            Adapt --> H1;
            Adapt --> H2;
        end
        HPC --> Output[Geophysical Analysis];
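
The RLS adaptation named above can be illustrated with a compact, single-channel version of the exponentially weighted recursion (the production system would be block-based and multichannel; the filter length, forgetting factor, and toy "geology" impulse response here are assumptions of this sketch):

```python
# Exponentially weighted RLS identifying a short noise-propagation path.
import random

L, LAM = 3, 0.99              # filter length, forgetting factor (assumptions)
w = [0.0] * L                 # filter weights
# Inverse correlation matrix P, initialized to (1/delta) * I.
P = [[100.0 if i == j else 0.0 for j in range(L)] for i in range(L)]

def rls_step(x, d):
    """One RLS update: x = input tap vector, d = desired sample."""
    global w, P
    Px = [sum(P[i][j] * x[j] for j in range(L)) for i in range(L)]
    denom = LAM + sum(x[i] * Px[i] for i in range(L))
    k = [v / denom for v in Px]                      # gain vector
    e = d - sum(w[i] * x[i] for i in range(L))       # a priori error
    w = [w[i] + k[i] * e for i in range(L)]
    P = [[(P[i][j] - k[i] * Px[j]) / LAM for j in range(L)]
         for i in range(L)]
    return e

# Identify a toy 'geology' path from the reference array to the site.
random.seed(2)
h_true = [0.5, -0.3, 0.2]     # made-up impulse response
buf = [0.0] * L
for _ in range(2000):
    buf = [random.uniform(-1, 1)] + buf[:-1]
    d = sum(h_true[i] * buf[i] for i in range(L))
    rls_step(buf, d)

print([round(c, 3) for c in w])
```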
    

Axis 3: Cross-Domain Application

Derivative 3.1: AgTech - Filtering EMI from Soil Moisture Sensor Data

  • Enabling Description: In precision agriculture, the method is used to remove electrical noise from soil moisture probe readings. A primary sensor (S1) is a capacitance probe whose signal contains the true moisture value plus electromagnetic interference (EMI) from irrigation pumps. A secondary sensor (S2) is a simple antenna or induction coil placed to primarily capture this ambient EMI. The dual adaptive filter algorithm, running on the sensor node's embedded controller, processes the signals from S1 and S2. It adapts filters H1 and H2 to generate a noise reference signal U where the probe's own operating signal is nulled out. This pure EMI reference is then subtracted from the primary probe reading to yield a more accurate soil moisture measurement.
  • Mermaid Diagram:
    graph TD
        subgraph In-Field Sensor Node
            Moisture[Soil Moisture] --> Probe[Capacitive Probe S1]
            EMI[Electrical Noise] --> Probe
            EMI --> Antenna[EMI Antenna S2]
            Probe -.->|Capacitive Coupling| Antenna
            Probe --> FilterH2[Adaptive Filter H2]
            Antenna --> FilterH1[Adaptive Filter H1]
            FilterH1 -->|"-"| Subtractor
            FilterH2 -->|"+"| Subtractor
            Subtractor --> NoiseRef[EMI Reference]
            Probe -->|"+"| FinalSub[Final Subtraction]
            NoiseRef -->|"- adaptively filtered"| FinalSub
            FinalSub --> CleanData[Accurate Moisture Reading]
        end
    

Derivative 3.2: Consumer Electronics - Isolating Fetal Heartbeat

  • Enabling Description: The technique is applied in an at-home fetal heartbeat monitor to separate the faint fetal heartbeat from the overpowering maternal heartbeat. Two acoustic sensors are placed on the abdomen. Sensor 1 (S1) is positioned to best capture the fetal heartbeat. Sensor 2 (S2) is positioned to capture the maternal heartbeat as strongly as possible. Relative to the audio use case, the roles are inverted: the maternal heartbeat is what the reference signal should contain, while the fetal heartbeat is the component to be nulled from it. The filters H1 and H2 adapt to create a "maternal heartbeat reference signal" (U) by minimizing the fetal component within it. This clean reference of the mother's heartbeat is then adaptively subtracted from S1 to isolate and enhance the much fainter fetal heartbeat.
  • Mermaid Diagram:
    sequenceDiagram
        participant MH as Maternal Heart
        participant FH as Fetal Heart
        participant S1 as Sensor 1 (Fetal Focus)
        participant S2 as Sensor 2 (Maternal Focus)
        participant Processor
        participant Output
    
        MH->>S1: Strong Signal
        MH->>S2: Very Strong Signal
        FH->>S1: Faint Signal
        FH->>S2: Very Faint Signal
    
        S1->>Processor: Process with Filter H1
        S2->>Processor: Process with Filter H2
        Processor->>Processor: Combine to create Maternal Heartbeat Reference (U)
        Note right of Processor: Filters adapted to minimize Fetal Heartbeat in U
        Processor->>Processor: Subtract U from S1
        Processor->>Output: Isolated Fetal Heartbeat
    

Axis 4: Integration with Emerging Tech

Derivative 4.1: AI-Driven Contextual Adaptation Control

  • Enabling Description: A machine learning model, such as a convolutional neural network (CNN), is used to supervise the adaptation process. The raw audio signals are fed into the CNN, which is pre-trained to classify the acoustic environment ('car', 'cafe') and signal content ('speech', 'music', 'transient'). The CNN's output is a control vector that dynamically adjusts the parameters of the dual-filter system's adaptation logic. For example, it can change the adaptation step-size, increase filter length in reverberant conditions, or freeze adaptation entirely during double-talk or when only stationary noise is present. This AI supervisor makes the noise cancellation far more robust to real-world conditions.
  • Mermaid Diagram:
    flowchart TD
        Mic1 --> DualFilter[Dual Adaptive Filter System];
        Mic2 --> DualFilter;
        Mic1 --> CNN[Context-Aware CNN];
        Mic2 --> CNN;
        CNN -- "Control Vector (μ, Filter Length, etc.)" --> DualFilter;
        DualFilter --> NoiseRef;
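
Since the CNN itself is a standard classifier, the distinctive part of this derivative is the mapping from its labels to the adaptation control vector. A minimal sketch follows, with a stubbed-out classifier and illustrative, untuned table values:

```python
# Mapping CNN context labels to the dual-filter adaptation control vector.
# All table entries are illustrative assumptions, not tuned parameters.

CONTROL_TABLE = {
    # (environment, content) -> (step size mu, filter length, adapt?)
    ("car", "speech"):      (0.05, 128, True),
    ("car", "noise_only"):  (0.20, 128, True),
    ("cafe", "speech"):     (0.02, 256, True),   # reverberant: longer filter
    ("cafe", "transient"):  (0.00, 256, False),  # freeze during transients
}
DEFAULT = (0.01, 128, True)

def classify(frame):
    """Stub standing in for the pre-trained CNN (hypothetical labels)."""
    return ("cafe", "transient")

def control_vector(env, content):
    """Map the CNN's labels to the adaptation control vector."""
    return CONTROL_TABLE.get((env, content), DEFAULT)

env, content = classify(frame=None)
mu, filt_len, adapt = control_vector(env, content)
print(env, content, mu, filt_len, adapt)
```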
    

Derivative 4.2: Distributed Noise Cancellation via IoT Network

  • Enabling Description: The two audio inputs are sourced from spatially separate IoT devices. A user's wearable device provides the first microphone signal (S1), containing their voice. A network of stationary IoT sensors (e.g., smart speakers) in the environment provides the second audio signal (S2). An edge or cloud server receives both streams, using Network Time Protocol (NTP) for synchronization. The server selects the IoT sensor closest to the user as S2 and executes the dual-filter algorithm. This use of spatially diverse signals allows for superior noise reference generation, which is then used to clean the user's voice for a command or call.
  • Mermaid Diagram:
    graph TD
        subgraph Edge_Cloud
            Processor(Dual Filter Processor)
        end
        subgraph Smart_Environment
            User[User with Wearable Mic S1] --> |Audio Stream 1| Processor
            IoT1[IoT Device Mic S2] --> |Audio Stream 2| Processor
            IoT2[IoT Device]
            IoT3[IoT Device]
        end
        Processor --> CleanAudio[Cleaned User Speech]
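
A sketch of the server-side source selection in this derivative: among the NTP-synchronized IoT streams, the device nearest the user is chosen as S2. The device positions and the Euclidean distance metric are illustrative assumptions.

```python
# Selecting the closest IoT microphone to serve as the second input S2.
import math

def nearest_device(user_pos, devices):
    """devices: {device_id: (x, y)}; returns the id closest to the user."""
    return min(devices, key=lambda d: math.dist(user_pos, devices[d]))

devices = {"speaker-kitchen": (5.0, 1.0),   # hypothetical floor plan
           "speaker-hall":    (2.0, 2.0),
           "thermostat":      (9.0, 9.0)}
s2_source = nearest_device((1.5, 2.5), devices)
print(s2_source)
```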
    

Derivative 4.3: Blockchain for Verifiable Noise Cancellation Provenance

  • Enabling Description: For secure or forensic applications, a blockchain provides an immutable audit trail of the noise cancellation process. At each time block, a hash is generated from the raw input signals, the filter coefficient vectors for H1 and H2, and the final output signal. This hash, along with a timestamp and device identifiers, is committed as a transaction on a private blockchain. This allows any third party to independently verify that the noise cancellation was performed correctly using the recorded coefficients and that the wanted signal was not tampered with, which is critical for the admissibility of recorded evidence.
  • Mermaid Diagram:
    stateDiagram-v2
        [*] --> Processing
        Processing --> Hashing: At each time block (k)
        state Hashing {
            direction LR
            S1_k: Raw Signal 1
            S2_k: Raw Signal 2
            H1_k: Filter 1 Coeffs
            H2_k: Filter 2 Coeffs
            SHA256: Hash Generation
            S1_k --> SHA256
            S2_k --> SHA256
            H1_k --> SHA256
            H2_k --> SHA256
        }
        Hashing --> Blockchain: Commit Transaction
        Blockchain --> Processing: Next time block (k+1)
        state Blockchain {
            direction LR
            Block_N: Hash_k, Timestamp
            Block_N --> Block_N_1
        }
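
The per-block hashing step can be sketched with only the standard library. The hash-linked list below stands in for a real blockchain commit, and a production system would also fold a timestamp and richer device metadata into each record:

```python
# Hash of everything that determines time block k, chained to the previous
# block's hash. Signal values and identifiers below are placeholders.
import hashlib, json

def block_hash(s1, s2, h1, h2, out, prev_hash, device_id="node-01"):
    """Canonical-JSON serialization hashed with SHA-256.
    (A real deployment would also include a timestamp field.)"""
    record = {"device": device_id, "prev": prev_hash,
              "s1": s1, "s2": s2, "h1": h1, "h2": h2, "out": out}
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

chain = ["0" * 64]                         # genesis placeholder
for k in range(3):
    s1, s2 = [0.1 * k, 0.2], [0.0, 0.3 * k]
    h1, h2 = [1.0, -0.5], [0.7, 0.1]
    out = [a - b for a, b in zip(s1, s2)]
    chain.append(block_hash(s1, s2, h1, h2, out, chain[-1]))

# Any verifier can recompute a block's hash from the recorded data;
# a mismatch means the record was tampered with.
print(chain[-1][:16])
```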
    

Axis 5: The "Inverse" or Failure Mode

Derivative 5.1: Failsafe Divergence Detection and Bypass Mode

  • Enabling Description: The system includes a monitoring module to ensure stability. This module continuously calculates the short-term energy of the output signal and checks the L2-norm of the filter coefficient vectors. If the output energy exceeds the input energy for a sustained period, or if the coefficient norms exceed a threshold, the filters are presumed to be unstable. Upon detection, a digital multiplexer immediately bypasses the entire noise cancellation block, routing the original primary signal directly to the output. An indicator flag is set, and the filter coefficients are reset before adaptation is carefully re-initiated.
  • Mermaid Diagram:
    graph TD
        S1[Primary Mic In] --> MUX[Bypass Mux]
        subgraph NC_Processor
            S1 --> DualFilterSystem
            S2[Secondary Mic In] --> DualFilterSystem
            DualFilterSystem --> NoiseRef
            S1 --> Subtractor
            NoiseRef --> Subtractor
            Subtractor --> CleanOut[Clean Audio Out]
            CleanOut --> DivergenceMonitor
            DualFilterSystem -- Filter Coeffs --> DivergenceMonitor
            DivergenceMonitor -- Instability_Detected! --> MUX
        end
        MUX -- select --> SystemOut[Final Audio Out]
        CleanOut --> MUX
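
A sketch of the monitoring module's decision logic, with illustrative thresholds and a sustain counter so that a single bad frame does not trip the bypass:

```python
# Failsafe monitor: trip the bypass mux when output energy or coefficient
# norms stay out of bounds for several consecutive frames.
import math

class DivergenceMonitor:
    def __init__(self, norm_limit=10.0, ratio_limit=1.0, sustain_frames=5):
        self.norm_limit = norm_limit     # max L2-norm of filter coefficients
        self.ratio_limit = ratio_limit   # max output/input energy ratio
        self.sustain = sustain_frames    # bad frames required before tripping
        self.bad_frames = 0
        self.bypass = False

    def check(self, in_frame, out_frame, coeffs):
        e_in = sum(x * x for x in in_frame) + 1e-12
        e_out = sum(x * x for x in out_frame)
        norm = math.sqrt(sum(c * c for c in coeffs))
        if e_out > self.ratio_limit * e_in or norm > self.norm_limit:
            self.bad_frames += 1
        else:
            self.bad_frames = 0          # require a *sustained* fault
        if self.bad_frames >= self.sustain:
            self.bypass = True           # route primary mic straight out
        return self.bypass

mon = DivergenceMonitor()
for _ in range(10):                      # healthy frames: no trip
    mon.check([1.0, -1.0], [0.1, 0.2], [0.5, 0.5])
tripped_early = mon.bypass
for _ in range(5):                       # sustained energy blow-up
    mon.check([1.0, -1.0], [5.0, 5.0], [0.5, 0.5])
print(tripped_early, mon.bypass)
```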
    

Derivative 5.2: Low-Power Mode with Frozen Coefficients

  • Enabling Description: For battery-constrained devices, the system features a dual-mode operation. It begins in a full-power 'Training Phase' where the dual adaptive filters converge. Once the rate of change of the filter coefficients drops below a delta threshold, the system enters a 'Low-Power Phase.' In this mode, the adaptation logic is power-gated, and the converged filter coefficients for H1 and H2 are frozen. The system now acts as a fixed spatial blocker, providing moderate noise reduction without the computational cost of continuous adaptation. A significant change in signal statistics or a user command can trigger a return to the 'Training Phase'.
  • Mermaid Diagram:
    stateDiagram-v2
        [*] --> Training
        Training: Full-power adaptation of H1, H2
        Training --> LowPower: Convergence criteria met
        LowPower: Adaptation logic disabled. H1, H2 are fixed.
        LowPower --> Training: Re-train trigger
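
The two-phase behavior maps naturally onto a small state machine. The thresholds below are illustrative assumptions:

```python
# Two-phase power controller: freeze adaptation once coefficient updates
# become small; re-train when signal statistics shift.
class PowerModeController:
    def __init__(self, delta_threshold=1e-3, retrain_threshold=0.5):
        self.delta_threshold = delta_threshold      # convergence criterion
        self.retrain_threshold = retrain_threshold  # re-train trigger level
        self.mode = "TRAINING"

    def update(self, coeff_delta, stat_change=0.0):
        """coeff_delta: norm of the last coefficient update;
        stat_change: detector output for shifted signal statistics."""
        if self.mode == "TRAINING" and coeff_delta < self.delta_threshold:
            self.mode = "LOW_POWER"   # freeze H1/H2, power-gate adaptation
        elif self.mode == "LOW_POWER" and stat_change > self.retrain_threshold:
            self.mode = "TRAINING"    # re-train trigger
        return self.mode

ctl = PowerModeController()
modes = [ctl.update(d) for d in (0.5, 0.1, 0.01, 0.0005)]  # converging
modes.append(ctl.update(0.0, stat_change=0.9))             # stats shift
print(modes)
```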
    

Combination Prior Art Scenarios with Open-Source Standards

1. Combination with WebRTC Standard for Echo-Robust Noise Suppression:

  • Enabling Description: A web browser implementing the WebRTC standard integrates the dual-filter method into its audio processing pipeline. The user's local microphone provides the first signal (S1). The audio stream being played out to the user's speakers (the far-end audio) is used as the second signal (S2). The dual-filter algorithm, implemented in WebAssembly, adapts to create a reference signal U that contains only the ambient background noise, having canceled out both the near-end user's speech and the far-end echo. This high-fidelity, echo-free noise reference is then used by a subsequent Wiener filter to clean the user's speech before transmission.

2. Combination with VAD from the Opus Codec to Gate Adaptation:

  • Enabling Description: The adaptation logic of the dual-filter system is gated by a Voice Activity Detection (VAD) module using the open-source algorithm from the Opus codec. The VAD analyzes the primary input signal and provides a binary output indicating speech presence. The adaptation step-size μ for filters H1 and H2 is set to its operational value only when the VAD indicates speech is present. When VAD indicates silence, μ is set to zero, freezing adaptation. This prevents the filters from erroneously adapting to changes in the background noise, significantly improving the system's robustness.
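
The gating rule itself is nearly a one-liner; the sketch below stubs out the VAD decision (the real detector would come from the Opus codebase), with an illustrative operational step size:

```python
# VAD-gated step size: adapt only while speech is present, freeze otherwise.
MU_OPERATIONAL = 0.1   # illustrative operational step size

def gated_step_size(vad_speech_present):
    """mu is the operational value only while the VAD flags speech."""
    return MU_OPERATIONAL if vad_speech_present else 0.0

# Precomputed flags standing in for the Opus-derived VAD output per frame.
frames = [True, True, False, True, False]
mus = [gated_step_size(v) for v in frames]
print(mus)
```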

3. Combination with SOFA (Spatially Oriented Format for Acoustics) Standard for Fast Initialization:

  • Enabling Description: In an augmented reality headset with a known microphone array geometry, the dual adaptive filters are initialized using Head-Related Transfer Functions (HRTFs) from a pre-loaded SOFA file. Instead of starting with zero coefficients, the filters H1 and H2 are initialized with coefficients derived from the HRTFs corresponding to a default "look direction." This provides the adaptation algorithm with a highly accurate starting point based on the known acoustics of the device, enabling dramatically faster convergence and more effective cancellation of off-axis noise.
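
A sketch of the warm-start step, with `load_default_hrtfs` as a stub standing in for a real SOFA-file reader and placeholder impulse-response values:

```python
# Initialize H1/H2 from HRTF-derived impulse responses instead of zeros.
def load_default_hrtfs():
    """Stub for a SOFA-file reader: returns the impulse responses for the
    default look direction at each microphone (placeholder values)."""
    return [1.0, 0.4, -0.1, 0.05], [0.8, 0.5, -0.2, 0.02]

def init_filters(length=4):
    """Seed the adaptive filters with the HRTF-derived coefficients."""
    ir1, ir2 = load_default_hrtfs()
    h1 = (ir1 + [0.0] * length)[:length]   # pad/truncate to filter length
    h2 = (ir2 + [0.0] * length)[:length]
    return h1, h2

h1, h2 = init_filters()
print(h1, h2)
```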
