Patent 6813742
Derivative works
Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.
Defensive Disclosure: Architectures and Methods for Iterative Signal Decoding
Publication Date: May 9, 2026
Abstract: This document discloses various implementations, applications, and extensions of iterative decoding systems, particularly those employing multiple soft-in/soft-out (SISO) decoders in a pipelined and circular configuration. The disclosed variations are intended to enter the public domain to serve as prior art for future patent applications in the fields of digital communications, signal processing, and related technologies. The following disclosures build upon the core concepts found in U.S. Patent 6,813,742.
I. Derivative Embodiments
Axis 1: Material & Component Substitution
1.1. Quantum-Assisted Hybrid Decoder
- Enabling Description: This embodiment replaces the classical soft-decision decoders described in U.S. Patent 6,813,742 with hybrid quantum-classical modules. The architecture maintains two serially coupled decoding units operating in a circular, iterative fashion. Within each unit, a classical digital pre-processor calculates the branch metrics from the received soft-symbol inputs. These metrics are then mapped into a Quadratic Unconstrained Binary Optimization (QUBO) problem, where the trellis state transitions are represented by binary variables. A dedicated Quantum Annealing unit is used to find the ground-state solution to the QUBO problem, which corresponds to the maximum a posteriori probability path through the trellis. A classical post-processor converts the annealing result back into extrinsic information (log-likelihood ratios) for the subsequent decoding stage. The interleaver and de-interleaver memory modules are implemented with Magnetoresistive RAM (MRAM) to reduce static power consumption and provide non-volatility for state-saving operations.
- Mermaid Diagram:
graph TD
    subgraph "Hybrid Decoder A"
        A1[Classical Pre-Processor] --> A2{QUBO Formulation}
        A2 --> A3[Quantum Annealer]
        A3 --> A4[Classical Post-Processor]
    end
    subgraph "Hybrid Decoder B"
        B1[Classical Pre-Processor] --> B2{QUBO Formulation}
        B2 --> B3[Quantum Annealer]
        B3 --> B4[Classical Post-Processor]
    end
    subgraph Memory
        M1["Interleaver Memory (MRAM)"]
        M2["De-interleaver Memory (MRAM)"]
    end
    Input[Received Signal] --> A1
    A4 --> M1
    M1 --> B1
    B4 --> M2
    M2 -- Feedback --> A1
    B4 --> Decoded_Output[Hard Decision Output]
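The trellis-to-QUBO mapping described above can be sketched in miniature. This is a toy model, not the patented method: one-hot binary variables select a state per trellis step, branch metrics become linear rewards, and a penalty enforces the one-hot constraint; a brute-force search stands in for the quantum annealer. The names build_qubo and brute_force_anneal are illustrative.

```python
# Toy sketch of the branch-metric -> QUBO mapping described above.
import itertools

def build_qubo(branch_metrics, penalty=10.0):
    """Map per-step state metrics to a QUBO whose ground state picks the
    best state at each trellis step. branch_metrics[t][s] is the metric
    (higher = more likely) for state s at step t."""
    n_steps, n_states = len(branch_metrics), len(branch_metrics[0])
    var = lambda t, s: t * n_states + s       # flatten (t, s) to one index
    Q = {}
    for t, row in enumerate(branch_metrics):
        for s, m in enumerate(row):
            # Linear term: reward a likely state, plus the linear part of
            # the one-hot penalty expansion (sum_s x_s - 1)^2.
            Q[(var(t, s), var(t, s))] = -m - penalty
            for s2 in range(s + 1, n_states):
                Q[(var(t, s), var(t, s2))] = 2.0 * penalty
    return Q, n_steps * n_states

def brute_force_anneal(Q, n_vars):
    """Stand-in for the quantum annealer: exhaustively minimise the QUBO."""
    def energy(x):
        return sum(c * x[i] * x[j] for (i, j), c in Q.items())
    return min(itertools.product((0, 1), repeat=n_vars), key=energy)

# Two trellis steps, two states; state 1 then state 0 are most likely.
Q, n = build_qubo([[0.2, 1.5], [2.0, 0.1]])
solution = brute_force_anneal(Q, n)   # one-hot per step: (0, 1, 1, 0)
```

The ground state selects exactly one state variable per step, matching the maximum-metric path that the annealing hardware would return.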
1.2. Neuromorphic Spiking Network Decoder
- Enabling Description: This variation implements the Soft-In/Soft-Out (SISO) decoders using a Spiking Neural Network (SNN) on a neuromorphic processor. The iterative decoding algorithm is mapped to a recurrent SNN architecture. Trellis states from the Log-MAP algorithm are represented by distinct clusters of spiking neurons. Branch metrics derived from the input signal are encoded as input spike trains, where the spike frequency is proportional to the corresponding symbol probability. The recursive state metric calculation is performed through the temporal integration of spikes by the neuron clusters. The Add-Compare-Select (ACS) function is realized using winner-take-all (WTA) inhibitory circuits that ensure only the most likely path (neuron cluster) continues to fire. The extrinsic information passed between the two neuromorphic decoders is encoded as output spike patterns. This asynchronous, event-driven approach drastically reduces power consumption compared to a clocked, digital ASIC implementation.
- Mermaid Diagram:
sequenceDiagram
    participant R as Received Signal
    participant SNN_A as Neuromorphic Decoder A
    participant MEM_I as Interleaver Memory
    participant SNN_B as Neuromorphic Decoder B
    participant MEM_DI as De-interleaver Memory
    R->>SNN_A: Input Spike Trains (Systematic Info)
    Note over SNN_A: Neuron clusters compute forward/backward passes
    SNN_A->>MEM_I: Store Extrinsic Info (Encoded Spike Patterns)
    MEM_I->>SNN_B: Provide Interleaved Extrinsic Info
    R->>SNN_B: Input Spike Trains (Parity Info)
    Note over SNN_B: Neuron clusters compute forward/backward passes
    SNN_B->>MEM_DI: Store De-interleaved Extrinsic Info
    loop Iterations
        MEM_DI->>SNN_A: Feedback Extrinsic Info
        SNN_A->>MEM_I: Update Extrinsic Info
        MEM_I->>SNN_B: Update Extrinsic Info
        SNN_B->>MEM_DI: Update Extrinsic Info
    end
    SNN_B-->>Decoded_Output: Final Decision
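The rate-coding scheme above (probability encoded as spike frequency) can be sketched as follows. This is a deterministic toy with evenly spaced spikes rather than the Poisson trains a real neuromorphic substrate would use; the function names are illustrative.

```python
# Toy sketch of rate coding for the spiking decoder: a symbol probability
# is encoded as spike frequency over a fixed time window.
def rate_encode(probability, window_steps=100):
    """Return a binary spike train whose firing rate over the window is
    proportional to the input probability."""
    n_spikes = round(probability * window_steps)
    if n_spikes == 0:
        return [0] * window_steps
    interval = window_steps / n_spikes
    spike_times = {int(i * interval) for i in range(n_spikes)}
    return [1 if t in spike_times else 0 for t in range(window_steps)]

def rate_decode(spike_train):
    """Recover the probability estimate from the spike frequency."""
    return sum(spike_train) / len(spike_train)

train = rate_encode(0.25)   # 25 spikes in a 100-step window
```

Downstream neuron clusters integrating such trains recover the underlying probability simply by counting spikes, which is the basis of the temporal-integration state-metric computation described above.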
Axis 2: Operational Parameter Expansion
2.1. Cryogenic Superconducting Decoder for Terabit Communication
- Enabling Description: This embodiment describes the decoder architecture implemented with superconducting logic, such as Rapid Single Flux Quantum (RSFQ) circuits, to achieve operational clock frequencies exceeding 100 GHz. The entire baseband processor, including the two SISO decoders and memory modules, is designed to operate at cryogenic temperatures (e.g., 4 Kelvin). The Log-MAP algorithm's adders, comparators, and selectors are built from Josephson junction-based logic gates. Data is represented by the propagation of single magnetic flux quanta. This design is intended for extreme-bandwidth applications, such as deep-space communication links or terrestrial terabit-per-second backhaul networks, where real-time decoding is required for data rates far beyond the capabilities of conventional CMOS technology.
- Mermaid Diagram:
graph TD
    subgraph "Cryocooler (4K)"
        subgraph RSFQ_Decoder_A
            direction LR
            BM_A[Branch Metric] --> FACS_A[Forward ACS]
            BM_A --> BACS_A[Backward ACS]
            FACS_A & BACS_A --> LMAP_A[Log-MAP Calc]
        end
        subgraph RSFQ_Decoder_B
            direction LR
            BM_B[Branch Metric] --> FACS_B[Forward ACS]
            BM_B --> BACS_B[Backward ACS]
            FACS_B & BACS_B --> LMAP_B[Log-MAP Calc]
        end
        subgraph Superconducting_Memory
            MEM_I[Interleaver RAM]
            MEM_DI[De-interleaver RAM]
        end
        RF_Input(Terabit RF In) --> RSFQ_Decoder_A
        LMAP_A --> MEM_I
        MEM_I --> RSFQ_Decoder_B
        LMAP_B --> MEM_DI
        MEM_DI -- Feedback --> RSFQ_Decoder_A
        LMAP_B --> Decoded_Output(Terabit Data Out)
    end
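The adders, comparators, and selectors referenced above implement the Log-MAP recursion's core operation. As a software reference model (the RSFQ embodiment realizes this in Josephson-junction logic), the Add-Compare-Select step with its Jacobian "max-star" correction can be sketched as:

```python
# Software reference model of the Log-MAP ACS unit described above.
import math

def max_star(a, b):
    """Exact Log-MAP ACS: log(e^a + e^b) = max(a, b) + log(1 + e^-|a-b|)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-Log-MAP approximation: drops the correction term."""
    return max(a, b)

# Forward state-metric update for one trellis state with two incoming
# branches (predecessor metric + branch metric on each path).
alpha = max_star(2.0 + 0.5, 1.0 + 1.8)   # compare path metrics 2.5 vs 2.8
```

The correction term log(1 + e^-|a-b|) is what distinguishes the full Log-MAP algorithm from the cheaper Max-Log-MAP approximation mentioned elsewhere in this disclosure.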
2.2. Radiation-Hardened Decoder for Extreme Environments
- Enabling Description: This version is designed for operation in high-radiation environments, such as satellite avionics or planetary rovers. The decoder is fabricated on a Silicon-On-Insulator (SOI) process using radiation-hardened-by-design (RHBD) principles. All sequential and combinational logic within the Log-MAP decoders is implemented with Triple-Modular Redundancy (TMR), where each gate is triplicated and its output is determined by a majority voter circuit to mitigate single-event upsets (SEUs). The interleaver and de-interleaver memory arrays are protected by built-in Error Detection and Correction (EDAC) logic (e.g., a Hamming code or BCH code) for each memory word, ensuring the integrity of the stored extrinsic information against single-event functional interrupts (SEFIs). The iterative nature of the decoder provides a further layer of resilience, as transient errors in one iteration can be corrected in subsequent passes.
- Mermaid Diagram:
stateDiagram-v2
    [*] --> Idle
    Idle --> Receiving: Decoder_Enable
    Receiving --> Decode_A: Block_Ready
    state Decode_A {
        direction LR
        TMR_BM: Branch Metric (TMR)
        TMR_SM: State Metric (TMR)
        TMR_LMAP: Log-MAP (TMR)
        TMR_BM --> TMR_SM
        TMR_SM --> TMR_LMAP
    }
    Decode_A --> Store_Interleaved: A_Done
    note right of Decode_A: All logic uses Triple Modular Redundancy
    Store_Interleaved --> Decode_B: Stored
    note left of Store_Interleaved: Memory uses EDAC codes
    state Decode_B {
        direction LR
        TMR_BM_B: Branch Metric (TMR)
        TMR_SM_B: State Metric (TMR)
        TMR_LMAP_B: Log-MAP (TMR)
        TMR_BM_B --> TMR_SM_B
        TMR_SM_B --> TMR_LMAP_B
    }
    Decode_B --> Store_Deinterleaved: B_Done
    Store_Deinterleaved --> Iteration_Check
    Iteration_Check --> Decode_A: Iterate
    Iteration_Check --> Hard_Decision: Max_Iterations_Reached
    Hard_Decision --> [*]: Output_Ready
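The TMR voting described above can be sketched as a bitwise 2-of-3 majority: each logic function is evaluated three times and a single-event upset in any one replica is outvoted. Function names here are illustrative.

```python
# Minimal sketch of the TMR majority voter described above.
def majority3(a, b, c):
    """Bitwise 2-of-3 majority vote over integer-encoded logic outputs."""
    return (a & b) | (a & c) | (b & c)

def tmr_eval(fn, *args):
    """Run three redundant copies of fn and vote on the results."""
    return majority3(fn(*args), fn(*args), fn(*args))

# An SEU that flips one bit in one replica's output is masked:
clean = 0b1011
upset = clean ^ 0b0100                    # bit flip in one replica
voted = majority3(clean, upset, clean)    # majority restores 0b1011
```

The same voter structure applies at gate granularity in the RHBD fabric; the EDAC-protected memories handle the complementary case of upsets in stored extrinsic information.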
Axis 3: Cross-Domain Application
3.1. Genomic Sequencing Error Correction
- Enabling Description: The iterative decoding architecture is applied to correct errors in raw data from Next-Generation Sequencing (NGS) of DNA or RNA. A DNA fragment is modeled as a message protected by a convolutional code, where the "channel" is the error-prone sequencing process. The systematic information is derived from a primary read of the fragment, while parity information is derived from a redundant paired-end read. The "soft information" input to the decoders is the per-base quality score (e.g., Phred score) provided by the sequencer. The pipelined decoders iteratively refine the probability of each base call (A, T, C, G), using the redundant information to resolve ambiguities and correct substitution errors. The interleaver helps mitigate the impact of burst errors common in some sequencing technologies. The final output is a high-confidence consensus sequence.
- Mermaid Diagram:
flowchart TD
    A[NGS Sequencer] --> B{"Raw Reads (Read 1 + Read 2) & Quality Scores"}
    B --> C[Soft Value Mapping]
    C -->|Systematic: Read 1, Parity: Read 2| D[SISO Decoder A]
    D --> E[Interleaver Memory]
    E --> F[SISO Decoder B]
    F --> G[De-interleaver Memory]
    G -- Iterative Feedback --> D
    F --> H{Consensus Sequence Generation}
    H --> I[High-Fidelity DNA Sequence]
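The "Soft Value Mapping" stage above follows directly from the Phred definition Q = -10 log10(p_error). A minimal sketch of that conversion, with an LLR form suitable as SISO decoder input (function names illustrative):

```python
# Sketch of the soft-value mapping step: Phred quality -> probability -> LLR.
import math

def phred_to_error_prob(q):
    """Phred definition: Q = -10 * log10(p_error), so p_error = 10^(-Q/10)."""
    return 10.0 ** (-q / 10.0)

def phred_to_llr(q):
    """LLR of 'base call correct' vs 'base call wrong' for one position."""
    p_err = phred_to_error_prob(q)
    return math.log((1.0 - p_err) / p_err)

p = phred_to_error_prob(30)   # Q30 corresponds to a 1-in-1000 error rate
```

Higher-quality bases thus enter the iterative decoder with larger-magnitude soft values, so the redundant paired-end read is weighted most heavily exactly where the primary read is least reliable.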
3.2. Predictive Maintenance in Industrial IoT
- Enabling Description: The decoder architecture is used to predict failures in industrial machinery by analyzing correlated sensor data. A machine's healthy operational state is modeled as a known state machine (the code's trellis). Correlated sensor streams, such as vibration data (systematic information) and acoustic emissions (parity information), are treated as noisy signals. The decoder system processes these streams in real-time. Deviations from the "healthy" signal pattern are treated as errors. The soft-decision extrinsic information that is iteratively passed between decoders represents the evolving probability of a fault condition. When the log-likelihood ratio of a fault state exceeds a predetermined threshold, a predictive maintenance alert is triggered.
- Mermaid Diagram:
graph LR
    subgraph Machine
        S1[Vibration Sensor]
        S2[Acoustic Sensor]
    end
    subgraph Predictive_Maintenance_Unit
        D1[SISO Decoder 1]
        D2[SISO Decoder 2]
        M1[Interleaver Memory]
        M2[De-Interleaver Memory]
    end
    subgraph ControlSystem
        A[Alert Dashboard]
    end
    S1 -- Systematic Data --> D1
    S2 -- Parity Data --> D1
    D1 --> M1
    M1 --> D2
    D2 --> M2
    M2 -- Feedback --> D1
    D2 -- Failure Probability --> A
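The thresholding rule in the description above can be sketched as follows. The threshold value, the LLR sequence, and the function name are illustrative placeholders; in the disclosed system the LLR sequence would come from the iterative decoder pair.

```python
# Sketch of the alert rule: trigger when the iteratively refined fault-state
# log-likelihood ratio crosses a predetermined threshold.
def fault_alert(fault_llrs, threshold=4.0):
    """Return the iteration index at which the fault LLR first exceeds
    the threshold, or None if it never does."""
    for iteration, llr in enumerate(fault_llrs):
        if llr > threshold:
            return iteration
    return None

# An LLR sequence converging toward the fault hypothesis over iterations:
alert_at = fault_alert([0.5, 1.2, 2.8, 4.6, 6.1])   # alert at iteration 3
```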
Axis 4: Integration with Emerging Tech
4.1. AI-Managed Adaptive Decoding
- Enabling Description: A reinforcement learning (RL) agent is integrated to dynamically manage the decoder's operational parameters. The RL agent monitors the communication channel's state (e.g., SNR) and the decoder's performance (e.g., extrinsic information convergence rate). Based on this state, the agent selects actions to optimize for a reward function balancing accuracy, latency, and power consumption. Actions include: (1) dynamically adjusting the number of decoding iterations, (2) selecting the optimal MAP approximation algorithm (e.g., Log-MAP vs. Max-Log-MAP), and (3) altering the bit-width of the soft-value quantization. This creates an intelligent decoder that adapts its resource usage to changing channel conditions in real-time.
- Mermaid Diagram:
flowchart TD
    subgraph Main_Decoder
        Input[Signal In] --> D_A[Decoder A]
        D_A <--> D_B[Decoder B]
        D_B --> Output[Signal Out]
    end
    subgraph RL_Controller
        Monitor[Monitor SNR, BER, Power]
        Agent{RL Agent}
        Action[Set Iterations, Algorithm, Quantization]
    end
    Monitor -- State --> Agent
    Agent -- Action --> Action
    Action --> D_A
    Action --> D_B
    Output -- Reward Signal --> Monitor
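The three action types listed above form a small discrete action space, so even a tabular policy suffices as a sketch. The states, actions, and learned values below are illustrative placeholders, not trained results.

```python
# Toy sketch of the RL controller's action space: pick an
# (iterations, algorithm, quantization-bits) tuple per channel state.
ACTIONS = [
    (8, "log-map", 6),       # high accuracy, high power
    (4, "log-map", 4),
    (2, "max-log-map", 3),   # low power, degraded accuracy
]

def greedy_policy(q_table, state):
    """Pick the action with the highest learned value for this state."""
    values = q_table[state]
    return ACTIONS[max(range(len(ACTIONS)), key=values.__getitem__)]

# An assumed learned table: at high SNR the cheap configuration suffices,
# at low SNR the agent spends power on full Log-MAP with more iterations.
q_table = {"high_snr": [0.2, 0.5, 0.9], "low_snr": [0.9, 0.4, 0.1]}
choice = greedy_policy(q_table, "high_snr")   # -> (2, "max-log-map", 3)
```

A deployed agent would add exploration (e.g. epsilon-greedy) and update the table from the reward signal shown in the diagram; the sketch shows only the action-selection step.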
4.2. Distributed Cooperative Decoding in IoT Mesh Networks
- Enabling Description: The decoder's components are distributed across two or more separate nodes in an IoT mesh network to enable cooperative decoding of weak signals. Node A receives the systematic portion of a signal and functions as Decoder A. After its processing pass, it transmits the resulting extrinsic information over the wireless mesh to Node B. Node B, which received the parity portion of the signal, uses this extrinsic information as an input for its decoding pass (as Decoder B). It then transmits its updated extrinsic information back to Node A. This "over-the-air" iterative exchange allows the nodes to jointly decode a signal that would be indecipherable to either node individually.
- Mermaid Diagram:
sequenceDiagram
    participant Source
    participant IoT_A
    participant IoT_B
    participant Sink
    Source->>IoT_A: Transmit Systematic Bits (Weak Signal)
    Source->>IoT_B: Transmit Parity Bits (Weak Signal)
    IoT_A->>IoT_A: Perform Decode Pass 1
    IoT_A->>IoT_B: Transmit Extrinsic Info via Mesh
    IoT_B->>IoT_B: Perform Decode Pass 2
    IoT_B->>IoT_A: Transmit Updated Extrinsic Info via Mesh
    Note over IoT_A, IoT_B: Iterations continue over the mesh link
    IoT_B->>Sink: Send Final Decoded Data
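The exchange above can be reduced to a toy model of extrinsic-information accumulation: each node holds a local LLR observation too weak to decide alone, and the round-trip exchange accumulates belief until a confidence threshold is met. A real deployment would run full SISO decoding passes at each node; the constant per-round contributions here are a simplification for illustration.

```python
# Toy model of the over-the-air iterative exchange between two mesh nodes.
def cooperative_decode(llr_a, llr_b, threshold=3.0, max_rounds=8):
    """Accumulate extrinsic LLRs exchanged between two nodes until the
    combined belief is decisive. Returns (bit, rounds) or (None, rounds)."""
    extrinsic = 0.0
    for round_num in range(1, max_rounds + 1):
        extrinsic += llr_a          # node A's pass, sent over the mesh
        extrinsic += llr_b          # node B's reply
        if abs(extrinsic) >= threshold:
            bit = 0 if extrinsic > 0 else 1
            return bit, round_num
    return None, max_rounds

# Each observation alone (LLR 0.8 and 1.1) is below the decision threshold;
# jointly the nodes converge after two exchange rounds.
bit, rounds = cooperative_decode(0.8, 1.1)
```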
Axis 5: The "Inverse" or Failure Mode
5.1. Graceful Degradation Low-Power Decoder
- Enabling Description: This embodiment is designed for power-constrained devices and features multiple operational modes for graceful degradation of performance. In a "full power" mode, it operates as described in U.S. Patent 6,813,742. When a power management unit signals a low-battery state, the decoder controller transitions to a "low power" mode. In this mode, it reduces the maximum number of iterations (e.g., from 8 to 2), switches from the computationally complex Log-MAP algorithm to the simpler Max-Log-MAP approximation, and reduces the quantization precision of the soft-information from 6 bits to 3 bits. In a "critical power" mode, one of the two decoder cores is power-gated, and the system performs a single-decoder iterative process at half the throughput, further reducing power consumption while maintaining a baseline communication link.
- Mermaid Diagram:
stateDiagram-v2
    state "Full Power Mode" as Full {
        [*] --> Iteration_Loop
    }
    note right of Full
        Algorithm: Log-MAP
        Iterations: 8
        Quantization: 6-bit
    end note
    state "Low Power Mode" as Low {
        [*] --> Iteration_Loop_Reduced
    }
    note right of Low
        Algorithm: Max-Log-MAP
        Iterations: 2
        Quantization: 3-bit
    end note
    state "Critical Power Mode" as Critical {
        [*] --> Single_Decoder_Loop
    }
    note right of Critical
        Architecture: Single Decoder
        Iterations: 1
    end note
    [*] --> Full: High_Battery
    Full --> Low: Low_Battery_Signal
    Low --> Full: Battery_Recharged
    Low --> Critical: Critical_Battery_Signal
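The mode table implied by the state diagram above can be sketched as a configuration lookup. The battery thresholds, the critical-mode quantization width, and the function name are illustrative assumptions; the full/low parameter sets come from the description.

```python
# Sketch of the graceful-degradation mode table and its selection logic.
POWER_MODES = {
    "full":     {"algorithm": "log-map",     "iterations": 8, "quant_bits": 6, "cores": 2},
    "low":      {"algorithm": "max-log-map", "iterations": 2, "quant_bits": 3, "cores": 2},
    "critical": {"algorithm": "max-log-map", "iterations": 1, "quant_bits": 3, "cores": 1},
}

def select_mode(battery_fraction):
    """Map the reported battery level to a degradation mode
    (threshold values are illustrative)."""
    if battery_fraction > 0.5:
        return POWER_MODES["full"]
    if battery_fraction > 0.1:
        return POWER_MODES["low"]
    return POWER_MODES["critical"]

mode = select_mode(0.3)   # low-power: Max-Log-MAP, 2 iterations, 3-bit
```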
II. Combination Prior Art with Open-Source Standards
1. Combination with GNU Radio:
- Enabling Description: The pipelined, circular decoder architecture is implemented as a C++ processing block within the GNU Radio open-source SDR framework. The block, named pipelined_turbo_decoder, exposes input ports for systematic and parity soft symbols and an output port for decoded bits. Internally, it instantiates and manages the two SISO decoders, memory buffers, and the iterative feedback loop as described in U.S. Patent 6,813,742. This makes the architecture readily available as a standard component for developers using GNU Radio, thereby placing the implementation in the public domain for SDR applications.
2. Combination with RISC-V ISA Extension:
- Enabling Description: The core computational kernels of the Log-MAP algorithm are defined as a custom instruction set extension for the open-source RISC-V architecture. This extension includes custom instructions such as ACS.FWD Rdest, Rs1, Rs2, which executes a full forward Add-Compare-Select operation in a single cycle, and LMAP.CALC Rdest, Rs1, Rs2, Rs3, which computes the a posteriori probability. A RISC-V processor core designed with this extension can execute the iterative decoding algorithm with high efficiency, with control flow managed in software and the intensive computations handled by the custom hardware instructions. This standardizes the core method as a public feature of a RISC-V communications processor.
3. Combination with WebRTC Forward Error Correction:
- Enabling Description: The iterative decoding method is implemented as a Forward Error Correction (FEC) mechanism within the open-source WebRTC standard. A WebAssembly (WASM) module containing a complete implementation of the dual-decoder architecture is specified as a standard FEC option for the WebRTC data channel. When a web browser negotiates a WebRTC session that uses this FEC scheme, it loads and executes the WASM module to perform iterative decoding on incoming data packets. This integrates the patented method directly into an open web standard, making it a publicly available technique for any developer of real-time web applications.