Patent 10651866

Derivative works

Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.


Defensive Disclosure and Prior Art Generation for U.S. Patent 10,651,866

Publication Date: May 10, 2026
Field: Digital Signal Processing, Sensor Arrays, Beamforming
Technology: Fractional Time Delay in Oversampled Systems

This document discloses novel variations, applications, and integrations of the core technology described in U.S. Patent 10,651,866 (the '866 patent). The purpose is to place these concepts in the public domain, thereby establishing them as prior art against future patent applications on similar or incremental inventions. The core concept involves applying a controllable time delay to a digital signal in its oversampled state, prior to decimation and filtering to a baseband rate, to achieve high-resolution signal alignment.


Axis 1: Material & Component Substitution

1.1. FPGA-Based Reconfigurable Delay and Filter Block

  • Enabling Description: The time delay element (e.g., 808 in '866 patent) and the PDM receiver module (812) are implemented as a single, reconfigurable block on a Field-Programmable Gate Array (FPGA) or a System on a Chip (SoC) with an embedded FPGA. The delay is not a fixed-length buffer but a dynamically adjustable shift register synthesized in a hardware description language (VHDL or Verilog). The number of delay stages (N) is controlled by writing to a memory-mapped register on the FPGA. This same FPGA fabric also implements the CIC, half-band, and FIR filter stages. This allows for in-field updates to both the delay resolution and the filter characteristics, such as changing the decimation ratio or filter coefficients to adapt to different acoustic environments or sensor types without hardware redesign. The entire path from PDM input to PCM output exists as a single, customizable IP core.
  • Mermaid.js Diagram:
    graph TD
        A[Digital Sensor - PDM Output]
        subgraph FPGA_SoC [FPGA / SoC]
            C[PDM Input Interface] --> D["Programmable Shift Register (Delay Element)"];
            E[Control Register] -.->|Sets Delay 'N'| D;
            D --> F[CIC Filter Stage];
            F --> G[Half-Band Filter 1];
            G --> H[Half-Band Filter 2];
            H --> I[Programmable FIR Filter];
        end
        A --> C;
        I --> J[Baseband PCM Output];
        K[System Bus/CPU] --> E;
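
The register-controlled delay and decimation chain above can be sketched in software. The parameters below (a 16 kHz baseband rate, 64× oversampling) are illustrative examples, not values from the '866 patent, and a simple boxcar average stands in for the CIC/half-band/FIR stages:

```python
import numpy as np

def delay_and_decimate(pdm: np.ndarray, n_delay: int, r: int) -> np.ndarray:
    """Delay an oversampled stream by n_delay oversampling clocks, then decimate by r.

    The shift register is modeled as zero-prepending the stream; the boxcar
    average is a stand-in for the real CIC/half-band/FIR filter chain.
    """
    delayed = np.concatenate([np.zeros(n_delay), pdm])[:len(pdm)]
    trimmed = delayed[: (len(delayed) // r) * r]
    return trimmed.reshape(-1, r).mean(axis=1)

f_bb, r = 16_000, 64             # example baseband rate and oversampling ratio
resolution = 1.0 / (f_bb * r)    # alignment step: one oversampling clock period
print(f"delay resolution: {resolution * 1e9:.1f} ns")  # 976.6 ns at these rates
```

The point of delaying before decimation is visible in the last line: the alignment step is one oversampling clock period, about 0.98 µs here, versus a full 62.5 µs baseband sample period if the delay were applied after decimation.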
    

1.2. Switched-Capacitor Array for Analog PDM Delay

  • Enabling Description: Instead of delaying the digital bitstream, the one-bit PDM signal (which, at the electrical level, is a two-level waveform toggling at the oversampling rate) is passed through an analog delay line before being re-quantized. This delay line is composed of a series of switched-capacitor stages. A control voltage, set by a digital-to-analog converter (DAC) driven by the controller (e.g., 818 in '866 patent), adjusts the switching frequency of the MOSFETs in the capacitor array, thereby precisely controlling the group delay of the signal. This method avoids clock-domain-crossing issues in the digital path and can offer lower power consumption in certain implementations. The output of the switched-capacitor delay line is then fed into a simple comparator to restore a clean digital PDM signal, which is then processed by a standard PDM receiver.
  • Mermaid.js Diagram:
    sequenceDiagram
        participant DS as Digital Sensor
        participant SCA as Switched-Capacitor Array
        participant C as Comparator
        participant PDM_RX as PDM Receiver
        participant CTL as Controller
    
        DS->>+SCA: Oversampled PDM Signal (Analog Levels)
        CTL->>SCA: Set Control Voltage (for Delay)
        SCA->>-C: Time-Delayed PDM Signal
        C->>PDM_RX: Re-quantized Digital PDM
        Note right of PDM_RX: Decimation & Filtering
    

1.3. Optical Fiber Delay for RF-Oversampled Signals

  • Enabling Description: This variation replaces the digital sensor with a high-bandwidth electro-optical modulator that converts an analog RF signal from an antenna into a modulated light signal. The oversampling is performed in the optical domain. The time delay element is a variable-length fiber optic delay line. The length of the optical path is controlled using an optical switch matrix that routes the light through different lengths of spooled fiber optic cable. The delayed optical signal is then converted back to an electrical signal by a photodetector and processed by a high-speed ADC and digital decimator. This is applicable for phased-array radar or satellite communication systems where delays in the nanosecond range with picosecond resolution are required.
  • Mermaid.js Diagram:
    graph TD
        subgraph RF Frontend
            A[Antenna] --> B(Low-Noise Amplifier);
        end
        B --> C(Electro-Optical Modulator);
        subgraph Optical Delay Unit
            C --> D{Optical Switch Matrix};
            D -- Path 1 --> E1["Fiber Spool 1 (Δt1)"];
            D -- Path 2 --> E2["Fiber Spool 2 (Δt2)"];
            D -- Path N --> En["Fiber Spool N (Δt_n)"];
            E1 --> F(Optical Combiner);
            E2 --> F;
            En --> F;
        end
        G[Controller] --> D;
        F --> H(Photodetector);
        H --> I(High-Speed ADC & Decimator);
        I --> J[Baseband Digital Signal];
    

Axis 2: Operational Parameter Expansion

2.1. Nanoscale NEMS Resonator Array for Mass Spectrometry

  • Enabling Description: An array of nanoelectromechanical systems (NEMS) resonators is used, where each resonator's frequency shifts when a molecule adsorbs onto its surface. The frequency output of each resonator is an analog signal that is oversampled by a gigahertz-rate sigma-delta modulator. The resulting oversampled data streams are time-delayed with picosecond precision using cryogenic digital logic. By "beamforming" the data from the NEMS array, the system can spatially resolve and identify different molecular species landing on the array surface with extremely high precision, effectively creating a high-resolution chemical imaging system.
  • Mermaid.js Diagram:
    flowchart LR
        subgraph NEMS Array
            N1[Resonator 1] --> S1[ΣΔ Modulator 1];
            N2[Resonator 2] --> S2[ΣΔ Modulator 2];
            N3[Resonator 'n'] --> Sn[ΣΔ Modulator 'n'];
        end
        subgraph Cryogenic Processor
            S1 --> D1[ps-Delay 1];
            S2 --> D2[ps-Delay 2];
            Sn --> Dn[ps-Delay 'n'];
            D1 & D2 & Dn --> Sum(Arithmetic Combiner);
            Sum --> Filt(Decimator/Filter);
        end
        Filt --> Output[Mass/Location Data];
        Controller --> D1 & D2 & Dn;
    

2.2. Large-Scale Geophone Array for Seismic Imaging

  • Enabling Description: The system is applied to a geographically distributed array of hundreds of geophone sensors for oil and gas exploration. Each geophone station digitizes the analog seismic signal using an oversampling ADC (e.g., 24-bit, 256 ksps). The raw oversampled data is transmitted via a high-speed network to a central processing cluster. Within the cluster, programmable time delays, corresponding to integer and fractional parts of the final baseband sample period (e.g., 1 ms), are applied to the oversampled streams. These delays compensate for seismic wave propagation times through different geological strata. The delayed signals are then summed and decimated, allowing geophysicists to "steer" a listening beam deep into the Earth's crust to image subterranean structures with higher resolution than conventional methods.
  • Mermaid.js Diagram:
    graph TD
        G1[Geophone 1] --> ADC1(Oversampling ADC 1);
        G2[Geophone 2] --> ADC2(Oversampling ADC 2);
        Gn[Geophone 'n'] --> ADCn(Oversampling ADC 'n');
    
        subgraph CP [Central Processor]
            C1[Data In 1] --> D1[Delay Δt1];
            C2[Data In 2] --> D2[Delay Δt2];
            Cn[Data In 'n'] --> Dn[Delay Δtn];
            D1 --> S(Summer);
            D2 --> S;
            Dn --> S;
            S --> F(Decimator & Filter);
        end
    
        ADC1 -->|Network| C1;
        ADC2 -->|Network| C2;
        ADCn -->|Network| Cn;
        F --> Img[Seismic Image];
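
The central-processor step can be sketched as a minimal delay-and-sum routine, assuming per-channel delays expressed in whole oversampling clocks (interpolation for sub-clock fractions is omitted, and the boxcar average again stands in for the decimation filter):

```python
import numpy as np

def steer_beam(streams: list[np.ndarray], delays: list[int], r: int) -> np.ndarray:
    """Delay-and-sum in the oversampled domain, then decimate by r.

    streams: per-geophone oversampled records; delays: per-channel delay in
    oversampling clocks; r: decimation ratio. Illustrative sketch only.
    """
    n = min(len(s) for s in streams)
    acc = np.zeros(n)
    for s, d in zip(streams, delays):
        acc += np.concatenate([np.zeros(d), s])[:n]   # shift-register delay
    acc /= len(streams)                               # average across channels
    trimmed = acc[: (n // r) * r]
    return trimmed.reshape(-1, r).mean(axis=1)        # boxcar decimation
```

Steering the listening beam amounts to recomputing the `delays` list for a new propagation-time profile and rerunning the sum over the stored oversampled records.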
    

2.3. High-Temperature Turbine Vibration Monitoring

  • Enabling Description: A sensor array using piezoelectric accelerometers fabricated from Gallium Nitride (GaN) is mounted inside the hot section of a gas turbine engine, operating at temperatures exceeding 500°C. The analog signals are transmitted to a remote, cooler location where they are digitized by high-speed, oversampling ADCs. The fractional delay beamforming technique is used to isolate vibration signatures from specific individual turbine blades. By applying precise time shifts to the oversampled signals from sensors placed around the turbine casing, the system can focus on the acoustic and vibrational signature of a single blade as it rotates, enabling the detection of micro-cracks or fatigue far earlier than conventional system-wide vibration analysis.
  • Mermaid.js Diagram:
    stateDiagram-v2
        state "Turbine Hot Section" as Hot {
            direction LR
            S1 : Sensor 1 (GaN)
            S2 : Sensor 2 (GaN)
            Sn : Sensor n (GaN)
        }
        state "Remote Electronics Unit" as Cold {
            direction LR
            ADC1 : Oversampling ADC 1
            ADC2 : Oversampling ADC 2
            ADCn : Oversampling ADC n
            Delay1: Delay Unit 1
            Delay2: Delay Unit 2
            Delayn: Delay Unit n
            Beamformer: Sum & Decimate
    
            ADC1 --> Delay1
            ADC2 --> Delay2
            ADCn --> Delayn
            Delay1 --> Beamformer
            Delay2 --> Beamformer
            Delayn --> Beamformer
        }
        S1 --> ADC1 : Analog Signal
        S2 --> ADC2 : Analog Signal
        Sn --> ADCn : Analog Signal
        Beamformer --> Output : Blade Health Data
    

Axis 3: Cross-Domain Application

3.1. Aerospace: GPS Anti-Jamming with Controlled Reception Pattern Antenna (CRPA)

  • Enabling Description: A multi-element GPS antenna (CRPA) is used on an aircraft. The RF signal from each antenna element is down-converted, and the intermediate frequency (IF) signal is digitized using a high-rate oversampling ADC. To counteract jamming, the system identifies the angle of arrival of the jamming signal. A controller then calculates a set of precise fractional time delays for each channel. These delays are applied to the oversampled digital IF streams to align the jamming signals from each element with an inverted phase before they are summed. This creates a deep null in the antenna's reception pattern in the direction of the jammer, while desired GPS satellite signals from other directions are coherently summed for a processing gain. This significantly improves the resilience of GPS navigation in hostile electronic warfare environments.
  • Mermaid.js Diagram:
    flowchart TD
        A1[Antenna 1] --> M1(RF Mixer 1);
        A2[Antenna 2] --> M2(RF Mixer 2);
        An[Antenna n] --> Mn(RF Mixer n);
    
        M1 --> ADC1(Oversampling ADC 1);
        M2 --> ADC2(Oversampling ADC 2);
        Mn --> ADCn(Oversampling ADC n);
    
        ADC1 --> D1[Fractional Delay 1];
        ADC2 --> D2[Fractional Delay 2];
        ADCn --> Dn[Fractional Delay n];
    
        JDA[Jammer Direction Analyzer] --> Controller;
        Controller -- Delay & Weight Values --> D1;
        Controller -- Delay & Weight Values --> D2;
        Controller -- Delay & Weight Values --> Dn;
    
        D1 --> S{Summer};
        D2 --> S;
        Dn --> S;
    
        S --> DEC(Decimator/Filter) --> GPS[GPS Signal Processor];
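
The controller's delay calculation can be illustrated for a linear array and a plane wave arriving from a known angle; the element positions, angle, and oversampling clock below are hypothetical values chosen for the example, not figures from the disclosure:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def delay_counts(element_x_m: list[float], theta_deg: float, f_os_hz: float) -> list[int]:
    """Per-element delay, in oversampling clock cycles, aligning a plane wave
    arriving at angle theta across a linear array (illustrative geometry).

    element_x_m: element positions along the array axis in metres;
    f_os_hz: oversampling ADC clock in Hz.
    """
    tau = [x * math.sin(math.radians(theta_deg)) / C for x in element_x_m]
    t0 = min(tau)                           # shift so all delays are non-negative
    return [round((t - t0) * f_os_hz) for t in tau]
```

Inverting the phase of the aligned jammer contribution before the summer (a sign flip on the weighted channels) then places the null in the jammer's direction.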
    

3.2. AgTech: Synthetic Aperture Soil Penetrating Radar

  • Enabling Description: A tractor or autonomous rover is equipped with a linear array of ground-penetrating radar (GPR) transceivers. As the vehicle moves, each transceiver emits pulses and records the echoes. The received analog echo signals are digitized using oversampling converters. The data streams from the entire array are stored. In post-processing, fractional time delays are applied to the oversampled echo data from different spatial locations (i.e., from different points along the vehicle's path). This process, known as "delay-and-sum" beamforming in the synthetic aperture context, focuses the radar energy at specific depths and locations under the soil. This allows 3D mapping of soil moisture, root systems, or buried irrigation lines at a much higher resolution than a single GPR unit could provide.
  • Mermaid.js Diagram:
    sequenceDiagram
        participant Vehicle
        participant GPR_Array
        participant Data_Recorder
        participant Post_Processor
    
        loop For each position X
            Vehicle->>GPR_Array: Trigger Pulse at X
            GPR_Array-->>Data_Recorder: Record Oversampled Echo Data(X)
        end
    
        Post_Processor->>Data_Recorder: Retrieve all Echo Data
        Note over Post_Processor: Apply fractional time delays to Data(X_n) to focus on target (Y, Z)
        Post_Processor->>Post_Processor: Sum delayed signals & Decimate
        Post_Processor-->>Output: 3D Soil Map Image
    

3.3. Medical Imaging: Ultrasound Tomography

  • Enabling Description: In a medical ultrasound probe containing an array of hundreds of transducer elements, the received analog signals from each element are immediately digitized by an on-chip oversampling ADC. To form a high-resolution image, the raw oversampled data streams from all elements are processed in parallel. A powerful FPGA or GPU applies dynamic, programmable fractional time delays to each channel. The delays are calculated based on the geometry of the transducer array and the desired focal point within the patient's body. By sweeping the focal point rapidly, the system can reconstruct a complete 2D or 3D image. Applying delays in the oversampled domain, rather than the baseband, allows for much finer focusing (sub-sample resolution), leading to sharper images with fewer artifacts, improving diagnostic accuracy.
  • Mermaid.js Diagram:
    graph TD
        subgraph Ultrasound Probe
            T1[Transducer 1] --> ADC1(Oversample ADC);
            T2[Transducer 2] --> ADC2(Oversample ADC);
            Tn[Transducer n] --> ADCn(Oversample ADC);
        end
        subgraph IP ["Image Processor (FPGA/GPU)"]
            ADC1 --> D1[Dynamic Delay Δt1];
            ADC2 --> D2[Dynamic Delay Δt2];
            ADCn --> Dn[Dynamic Delay Δtn];
            D1 & D2 & Dn --> Sum(Beamforming Adder);
            Sum --> Filt(Filter/Decimator/Envelope Detector);
        end
        Filt --> Img[Image Reconstruction];
        Controller[Focal Point Controller] --> D1 & D2 & Dn;
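
For the focal-point case, each channel's delay follows from its path-length difference to the focus; a sketch assuming a linear array and a nominal soft-tissue sound speed of 1540 m/s (all values illustrative, not taken from the patent):

```python
import math

V_TISSUE = 1540.0  # nominal speed of sound in soft tissue, m/s

def focal_delays(element_x_m: list[float], x_f: float, z_f: float,
                 f_os_hz: float) -> list[int]:
    """Per-element receive delays, in oversampling clocks, focusing a linear
    transducer array on the point (x_f, z_f). Illustrative geometry only."""
    # Path length from each element (at depth 0) to the focal point.
    path = [math.hypot(x - x_f, z_f) for x in element_x_m]
    longest = max(path)
    # Channels nearer the focus wait longer so all arrivals line up.
    return [round((longest - p) / V_TISSUE * f_os_hz) for p in path]
```

Sweeping the focal point is just re-evaluating this function per imaging line, which is why the delays in the diagram are labeled dynamic.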
    

Axis 4: Integration with Emerging Tech

4.1. AI-Driven Adaptive Beamforming for Speech Separation

  • Enabling Description: The system is used in a smart speaker with a microphone array. A deep neural network (DNN) runs on a dedicated AI accelerator chip. The DNN receives the beamformed baseband audio output and a 'cacophony score' representing the noise level. The network's output layer directly controls the fractional delay values (N) for each oversampled microphone channel. The system is trained via reinforcement learning, where the reward function is maximizing the signal-to-interference-plus-noise ratio (SINR) of a target speaker's voice. The AI learns to dynamically steer the beam and even create nulls in the direction of competing speakers or noise sources in real-time, far more effectively than a traditional fixed-algorithm beamformer.
  • Mermaid.js Diagram:
    flowchart LR
    MicArray -->|Oversampled Signals| DelayBlock[Programmable Delay Elements];
    DelayBlock --> Beamformer[Sum & Decimate];
    Beamformer -->|Baseband Audio| DNN;
    Beamformer -->|Baseband Audio| SINR_Calc[SINR Calculator];
    
    subgraph AI Controller
        DNN[Deep Neural Network];
        SINR_Calc -->|Reward Signal| DNN;
    end
    
    DNN -->|New Delay Values| DelayBlock;
    Beamformer --> OutputAudio;
    

4.2. IoT-Networked Acoustic Monitoring for Predictive Maintenance

  • Enabling Description: An array of MEMS microphones is deployed across a factory floor, with each microphone system acting as an IoT node. Each node digitizes audio using oversampling and connects to a central server via a time-synchronized protocol like PTP (Precision Time Protocol). The server collects the raw, oversampled streams. To inspect a specific machine, an operator selects it on a dashboard. The server then calculates the required fractional time delays for the relevant microphone nodes to form an acoustic beam focused on that machine. The beamformed, decimated audio is then fed into a machine learning model trained to detect anomalies like bearing wear or motor imbalance. This allows for targeted, non-intrusive monitoring of equipment across a large, noisy facility.
  • Mermaid.js Diagram:
    graph TD
        subgraph IoT Nodes
            Node1[Mic 1 + ADC] -- PTP Sync & Oversampled Stream --> N(Network);
            Node2[Mic 2 + ADC] -- PTP Sync & Oversampled Stream --> N;
            NodeN[Mic N + ADC] -- PTP Sync & Oversampled Stream --> N;
        end
        subgraph Cloud/Server
            N --> DataIngest(Data Ingest);
            DataIngest --> DelayProcessor[Fractional Delay Processor];
            Dashboard[Operator Dashboard] -->|Target Machine Coords| DelayProcessor;
            DelayProcessor --> Beamformer[Sum & Decimate];
            Beamformer --> ML[Anomaly Detection Model];
        end
        ML --> Alert[Maintenance Alert];
    

Axis 5: The "Inverse" or Failure Mode

5.1. Graceful Degradation Mode for Hearing Aids

  • Enabling Description: A hearing aid uses a dual-microphone array with fractional delay beamforming to focus on a speaker in a noisy environment (e.g., a restaurant). A power management IC monitors the battery level. When the battery drops below a 20% threshold, the controller (818) switches the system to a "low-power" mode. In this mode, the oversampling clock frequency is halved (e.g., from 2.048 MHz to 1.024 MHz), the high-order FIR filter (328) is bypassed, and the programmable delay element (808) is set to a fixed, zero-delay value. The system then operates as a simple omnidirectional microphone pair, consuming significantly less power. While the advanced directional focus is lost, the user retains basic hearing assistance, extending the device's operational life until it can be recharged.
  • Mermaid.js Diagram:
    stateDiagram-v2
        [*] --> FullPower
        FullPower: High-Res Beamforming
        FullPower: f_SF = 2.048MHz
        FullPower: Full PDM Filtering
        FullPower: Programmable Fractional Delay
    
        FullPower --> LowPower : Battery < 20%
        LowPower --> FullPower : Charging
    
        LowPower: Omnidirectional
        LowPower: f_SF = 1.024MHz
        LowPower: Simplified Filtering
        LowPower: Delay Bypassed (Δt = 0)
    

5.2. Acoustic Null-Steering for Privacy Applications

  • Enabling Description: The invention is implemented in a conference room speakerphone to create a "cone of silence." Instead of summing the time-delayed signals to enhance a sound source (constructive interference), the system subtracts them or adjusts delays and gains to create destructive interference. An operator can define a spatial zone (e.g., a visitor's chair) where audio should not be picked up. The controller (904) calculates the Δt values for each microphone in the array that will cause signals originating from that zone to cancel each other out when combined. This creates a deep null in the microphone array's sensitivity pattern, ensuring that side conversations in the designated zone are not transmitted, thereby protecting privacy.
  • Mermaid.js Diagram:
    graph TD
        subgraph Mic Array
            M1[Mic 1]; M2[Mic 2]; Mn[Mic n];
        end
    
        subgraph Processor
            M1 --> D1[Delay Δt1];
            M2 --> D2[Delay Δt2];
            Mn --> Dn[Delay Δtn];
    
            D1 --> S(Arithmetic Unit);
            D2 --> S;
            Dn --> S;
        end
    
        Controller -- Control Signals --> D1;
        Controller -- Control Signals --> D2;
        Controller -- Control Signals --> Dn;
        Controller -- Mode: Null-Steering --> S;
        UserInput[User Defines Privacy Zone] --> Controller;
    
        S --> Output[Transmitted Audio];
        style M1 fill:#f9f,stroke:#333,stroke-width:2px
        style M2 fill:#f9f,stroke:#333,stroke-width:2px
        style Mn fill:#f9f,stroke:#333,stroke-width:2px
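
The destructive-interference principle reduces to a simple two-microphone toy model: if the unwanted source reaches mic A exactly n oversampling clocks before mic B, delaying A by n clocks and subtracting B cancels it. This is a hedged sketch; a real system would also need per-channel gain matching and sub-clock interpolation:

```python
import numpy as np

def null_pair(sig_a: np.ndarray, sig_b: np.ndarray, n_delay: int) -> np.ndarray:
    """Two-microphone null: delay channel A by the inter-mic propagation delay
    of the unwanted source (n_delay oversampling clocks) and subtract channel B.
    A source matching that delay cancels; other directions do not."""
    delayed = np.concatenate([np.zeros(n_delay), sig_a])[:len(sig_a)]
    return delayed - sig_b
```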
    

Combination Prior Art Scenarios

C.1. Integration with I2S/TDM Bus Standards

  • Enabling Description: The time delay element is embodied as a specialized bus interface peripheral for an audio Digital Signal Processor (DSP). The peripheral is designed to sit on a Time-Division Multiplexed (TDM) audio bus, which is an extension of the I2S standard used to carry multiple channels of PDM data on a single data line. The peripheral snoops the frame clock (FSYNC) to identify the start of each channel's data slot. It contains a separate programmable buffer for each channel (e.g., up to 8 channels for TDM-8). A controller writes the desired delay, in units of bit-clock cycles, to a set of registers corresponding to each audio channel. The peripheral outputs a new, time-aligned TDM stream where each channel has been individually delayed with fractional-baseband-period precision. This creates a standard-compliant hardware block for multi-channel PDM beamforming pre-processing.

C.2. Integration with AES69 (SOFA) File Format

  • Enabling Description: A method and system for embedding beamforming parameters directly into a Spatially Oriented Format for Acoustics (SOFA) file. A SOFA file traditionally stores Head-Related Transfer Functions (HRTFs) or microphone array impulse responses. This disclosure describes an extension to the SOFA specification by adding a new data type: OversampledFractionalDelay. For each microphone position defined in the file (SourcePosition), a corresponding OversampledFractionalDelay variable is stored. This variable contains two values: the oversampling-to-baseband ratio (R) and the number of oversampling clock cycles of delay (N). A SOFA-compliant renderer or audio engine would read these parameters and apply the specified delay Δt = N * (1 / (f_bb * R)) to the corresponding oversampled audio stream before decimation, allowing for the precise recreation of a pre-configured beamformed soundfield described by the SOFA file.
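
A renderer reconstructs the absolute delay from the stored (R, N) pair using the formula given above; a one-line check (function and parameter names are hypothetical, not part of the SOFA specification):

```python
def sofa_delay_seconds(n_cycles: int, f_bb_hz: float, ratio: int) -> float:
    """Reconstruct the delay encoded by an OversampledFractionalDelay pair:
    Δt = N * (1 / (f_bb * R)), i.e. N oversampling clock periods."""
    return n_cycles / (f_bb_hz * ratio)
```

Note that N = R recovers exactly one baseband sample period, and any 0 < N < R yields a sub-sample (fractional) delay, which is the resolution advantage the disclosure relies on.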

C.3. Integration with WebRTC Standard for Browser-Based Beamforming

  • Enabling Description: A method for enabling high-resolution beamforming in a web browser using the WebRTC API. A custom JavaScript AudioWorkletProcessor is defined. This processor receives multiple raw audio streams from a USB microphone array connected to the client machine. Although the browser's Web Audio API typically provides access only to baseband audio, this disclosure describes a custom hardware driver for the microphone array that exposes the raw, oversampled PDM streams to the worklet. The AudioWorkletProcessor then implements the fractional delay logic of the '866 patent in software (e.g., using a circular buffer in a SharedArrayBuffer). The controller is a JavaScript function that adjusts the delay based on user input or another algorithm (e.g., voice activity detection) to steer the beam during a WebRTC video conference, improving clarity without requiring native applications or specialized hardware on the receiving end.
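
The circular-buffer delay logic the worklet would run can be sketched as follows, with Python standing in for the JavaScript AudioWorkletProcessor (class and method names are illustrative; the real implementation would index into a SharedArrayBuffer):

```python
import numpy as np

class CircularDelay:
    """Per-channel delay on an oversampled stream via a circular buffer,
    modeled on the software approach described above (illustrative sketch)."""

    def __init__(self, max_delay: int):
        self.buf = np.zeros(max_delay + 1)  # ring buffer, one slot of headroom
        self.write = 0

    def process(self, block: np.ndarray, n_delay: int) -> np.ndarray:
        """Write each incoming sample, read back n_delay samples behind."""
        out = np.empty_like(block)
        size = len(self.buf)
        for i, sample in enumerate(block):
            self.buf[self.write] = sample
            out[i] = self.buf[(self.write - n_delay) % size]
            self.write = (self.write + 1) % size
        return out
```

Steering the beam during a call amounts to the controller function updating `n_delay` per channel between processing blocks.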
