US Patent 6,604,216

Derivative works

Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.


Defensive Disclosure Document

Publication Date: 2026-05-12
Title: Advanced Methods for Adaptive Incremental Redundancy in Variable Rate Communication Systems and Cross-Domain Applications
Reference Art: US Patent 6,604,216

This document discloses a series of derivative works, improvements, and alternative embodiments related to the core concepts of US Patent 6,604,216. The purpose of this disclosure is to place these concepts into the public domain so that they serve as prior art against future patent applications claiming such incremental improvements. The core concept involves encoding a data block into a mother code word, reordering the bits of this code word into a single prioritized sequence, and transmitting variable-length subsequences from that sequence to flexibly match the capacity of a communication channel.


Derivative Variations on Core Claims

Axis 1: Component and Algorithm Substitution

1.1. Dynamic Ordering Vector Generation via Channel-Adaptive Neural Network
  • Enabling Description: The static, predefined "ordering vector" is replaced with a dynamic vector generator implemented as a lightweight convolutional neural network (CNN) or recurrent neural network (RNN) at the transmitter's physical layer. This network takes real-time channel state information (CSI), including frequency-selective fading profiles, signal-to-interference-plus-noise ratio (SINR), and Doppler spread as input. It outputs an optimized ordering vector for each transmission time interval (TTI). The network is trained offline to prioritize bits from the mother code word that provide the most error-correcting power for the predicted channel state at the time of reception. For example, in a channel with high frequency selectivity, the AI model will generate a vector that heavily interleaves bits from different parts of the mother codeword to combat burst errors in specific subcarriers.
  • Mermaid Diagram:
    graph TD
        A[Mother Code Word] --> C{"Dynamic Ordering Vector Generator (AI Model)"};
        B[Real-time CSI] --> C;
        C --> D[Reordered Mother Code Word];
        D --> E{Subsequence Selector};
        F[Available Channel Bandwidth] --> E;
        E --> G[Modulated Subsequence for TX];
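  • Illustrative Sketch (Python): A minimal sketch of this flow. The trained CNN/RNN is replaced by a toy scoring function that maps each bit to a subcarrier and scores it by that subcarrier's predicted SINR; all function names are illustrative, not part of the reference art.

```python
# Sketch of channel-adaptive ordering-vector generation. A toy scoring
# function stands in for the trained network described above.

def score_bits(sinr_per_subcarrier, n_bits):
    """Toy stand-in for the AI model: per-bit usefulness scores."""
    n_sc = len(sinr_per_subcarrier)
    return [sinr_per_subcarrier[i % n_sc] for i in range(n_bits)]

def ordering_vector(sinr_per_subcarrier, n_bits):
    """Permutation of bit indices, highest predicted usefulness first."""
    scores = score_bits(sinr_per_subcarrier, n_bits)
    return sorted(range(n_bits), key=lambda i: scores[i], reverse=True)

def reorder(mother_codeword, vector):
    """Apply the ordering vector to the mother code word."""
    return [mother_codeword[i] for i in vector]

csi = [12.0, 3.5, 7.8, 1.2]   # predicted SINR per subcarrier (illustrative)
vec = ordering_vector(csi, 8)
# Bits on the strongest subcarriers (indices 0 and 4, then 2 and 6) come first.
```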
    
1.2. Mother Codeword Generation with Spatially-Coupled LDPC Codes
  • Enabling Description: The generic "coding circuit" is replaced with an encoder for Spatially-Coupled Low-Density Parity-Check (SC-LDPC) codes. These codes exhibit superior error-floor performance and near-capacity-approaching behavior. The "mother code word" generated is a single, very long SC-LDPC codeword. The reordering vector is specifically designed to align with the "windowed" decoding properties of SC-LDPC codes. The vector first lists all systematic bits, followed by parity bits from the initial "un-coupled" sections of the code's Tanner graph, and finally the parity bits from the "coupled" core, ordered by their check node degree. This structure allows a receiver to begin the iterative belief propagation decoding process with partial subsequences more effectively.
  • Mermaid Diagram:
    flowchart LR
        subgraph Transmitter
            Data[Digital Data Block] --> LDPCC[SC-LDPC Encoder];
            LDPCC --> MCW[SC-LDPC Mother Code Word];
            MCW --> Reorder[Reordering Circuit];
            Vector[SC-LDPC-Aware Ordering Vector] --> Reorder;
            Reorder --> RMCW[Reordered Mother Code Word];
            RMCW --> Select[Subsequence Selector];
        end
        subgraph Receiver
            RX[Received Subsequences] --> Combine[Combiner];
            Combine --> Decoder[SC-LDPC Windowed Decoder];
            Decoder --> DecodedData[Decoded Data Block];
        end
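  • Illustrative Sketch (Python): A minimal sketch of the SC-LDPC-aware ordering vector: systematic bits first, then uncoupled parity, then the coupled core ordered by check-node degree. Descending degree order is an assumed choice here, since the description fixes only that the coupled core is degree-ordered; the input groups are hypothetical and would come from the code's Tanner graph in a real encoder.

```python
# Sketch of the SC-LDPC-aware ordering vector described above.

def sc_ldpc_ordering(systematic, uncoupled_parity, coupled_parity_degree):
    """systematic / uncoupled_parity: lists of bit indices.
    coupled_parity_degree: dict of bit index -> check-node degree.
    Descending degree is a design choice, not fixed by the disclosure."""
    coupled = sorted(coupled_parity_degree,
                     key=coupled_parity_degree.get, reverse=True)
    return list(systematic) + list(uncoupled_parity) + coupled

vec = sc_ldpc_ordering(
    systematic=[0, 1, 2, 3],
    uncoupled_parity=[4, 5],
    coupled_parity_degree={6: 3, 7: 5, 8: 4},
)
# -> [0, 1, 2, 3, 4, 5, 7, 8, 6]
```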
    

Axis 2: Operational Parameter Expansion

2.1. Application in Terabit/s Free-Space Optical (FSO) Links
  • Enabling Description: The system is applied to a high-bandwidth FSO link between two fixed points (e.g., buildings or satellites). The "available gross rate channel" varies dramatically and rapidly due to atmospheric scintillation (turbulence). The transmitter uses the reordering method to adapt its transmission on a microsecond basis. The mother code word is generated using a large-block Polar Code. The ordering vector prioritizes bits corresponding to the most reliable sub-channels in the polar code construction. A high-speed FPGA selects a subsequence whose length precisely matches the predicted channel capacity for the next microsecond transmission window, determined by a laser-based channel probe. This allows the link to maintain maximum possible throughput despite the turbulence.
  • Mermaid Diagram:
    sequenceDiagram
        participant TX as FSO Transmitter
        participant Channel as Turbulent Atmosphere
        participant RX as FSO Receiver
        loop Microsecond Adaptation Cycle
            TX->>Channel: Send Laser Channel Probe
            Channel-->>TX: Return Scintillation Index
            TX->>TX: Select Subsequence Size to Match Capacity
            TX->>Channel: Transmit Modulated Optical Subsequence
            Channel->>RX: Deliver Attenuated/Distorted Subsequence
            RX-->>TX: Send NACK/ACK via low-rate RF channel
        end
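  • Illustrative Sketch (Python): The capacity-matched selection step of the adaptation loop can be sketched as a cursor over the reordered stream. Capacity prediction from the scintillation index is stubbed out; class and method names are illustrative.

```python
# Sketch of capacity-matched subsequence selection for the microsecond
# adaptation loop. A real link would derive predicted_capacity_bits from
# the laser probe measurement.

class SubsequenceSelector:
    def __init__(self, reordered_stream):
        self.stream = reordered_stream
        self.pos = 0  # cursor into the reordered mother code word

    def next_subsequence(self, predicted_capacity_bits):
        """Return the next subsequence, sized to the predicted capacity."""
        k = min(predicted_capacity_bits, len(self.stream) - self.pos)
        sub = self.stream[self.pos:self.pos + k]
        self.pos += k
        return sub

sel = SubsequenceSelector(list(range(10)))
a = sel.next_subsequence(4)   # calm window: 4 bits fit
b = sel.next_subsequence(2)   # turbulent window: only 2 bits fit
# a + b == [0, 1, 2, 3, 4, 5]
```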
    
2.2. Quantum Key Distribution (QKD) Error Reconciliation
  • Enabling Description: The technique is used for the error reconciliation phase in a QKD protocol like BB84. After the initial quantum transmission and sifting phase, Alice and Bob share a partially correlated string of bits (the "sifted key"). This sifted key is treated as a noisy version of Alice's original key (the "digital data block"). Alice encodes her key with an error-correcting code to create the "mother code word" of parity information. The "available gross rate channel" is the public, authenticated (but not confidential) classical channel. Alice sends subsequences of the reordered parity information. Bob uses these subsequences, along with his own sifted key, to correct errors. The reordering allows Alice to send the minimum amount of parity information required, preventing unnecessary information leakage about the final secret key. The process stops once Bob confirms his key matches Alice's.
  • Mermaid Diagram:
    stateDiagram-v2
        [*] --> Sifting: Alice & Bob perform quantum exchange and sift keys
        Sifting --> Encoding: Alice encodes her sifted key to get Mother Code Word (Parity)
        Encoding --> Reconciliation
        state Reconciliation {
            Alice_Sends: Alice sends subsequence of reordered parity bits
            Bob_Corrects: Bob uses subsequence to correct his key
            Bob_Corrects --> Alice_Sends: If errors remain (NACK)
            Bob_Corrects --> Success: If keys match (ACK)
        }
        Success --> Privacy_Amplification
        Privacy_Amplification --> [*]
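  • Illustrative Sketch (Python): The incremental parity-reveal loop can be sketched as follows. `try_decode` is a stub standing in for Bob's real error-correction step; here it simply succeeds once enough parity is available, to show only the control flow that minimises information leakage.

```python
# Sketch of the reconciliation loop: Alice reveals reordered parity in
# chunks until Bob acknowledges a successful decode.

def reconcile(reordered_parity, chunk_size, try_decode):
    """Reveal parity in chunks until decoding succeeds.
    Returns (bits_revealed, decoded_key_or_None)."""
    revealed = []
    pos = 0
    while pos < len(reordered_parity):
        revealed.extend(reordered_parity[pos:pos + chunk_size])
        pos += chunk_size
        key = try_decode(revealed)       # Bob's NACK/ACK decision
        if key is not None:
            return len(revealed), key    # ACK: stop, minimise leakage
    return len(revealed), None           # parity exhausted without success

# Stub decoder: succeeds once 5 parity bits have been revealed.
stub = lambda revealed: "KEY" if len(revealed) >= 5 else None
bits_leaked, key = reconcile([1, 0, 1, 1, 0, 0, 1, 0], 2, stub)
# bits_leaked == 6, key == "KEY"
```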
    

Axis 3: Cross-Domain Application

3.1. Incremental Transmission of Medical Imaging Data (DICOM)
  • Enabling Description: A large medical imaging study (e.g., a 10 GB multi-slice CT scan) in DICOM format is the "digital data block". It is encoded using a systematic Reed-Solomon code to generate a mother code word. A radiologist's viewing station with a variable-bandwidth connection (e.g., cellular) requests the study. The Picture Archiving and Communication System (PACS) server reorders the mother code word, prioritizing the DICOM metadata and low-resolution image layers first, followed by the redundancy data and high-resolution layers. The PACS server sends a first subsequence sized to the hospital's available outbound bandwidth. The viewing station can render a low-resolution preview almost instantly. As more bandwidth becomes available or the radiologist requests higher detail, the server sends additional subsequences from the reordered stream until the full, error-corrected, high-resolution study is available.
  • Mermaid Diagram:
    flowchart TD
        PACS[PACS Server] -- Encodes DICOM Study --> MCW[Mother Code Word];
        MCW -- "Prioritizes Metadata & Low-Res" --> RMCW[Reordered Mother Code Word];
        subgraph "Variable-Bandwidth WAN"
            RMCW -- Subsequence 1 --> Viewer;
            RMCW -- Subsequence 2 --> Viewer;
            RMCW -- "..." --> Viewer;
        end
        Viewer[Radiologist Viewer] -- Renders Low-Res Preview --> User;
        Viewer -- Combines & Decodes --> FullStudy[Display Full Resolution Study];
        User -- Requests More Detail --> Viewer;
        Viewer -- Sends NACK --> PACS;
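  • Illustrative Sketch (Python): The PACS-side reordering can be sketched by grouping mother-code-word bit indices per content class. The segment names and the priority list are illustrative, not taken from the DICOM standard.

```python
# Sketch of the PACS server's ordering vector: metadata and low-resolution
# layers first, then redundancy data, then high-resolution layers.

PRIORITY = ["metadata", "low_res", "parity", "high_res"]

def dicom_ordering(segment_bit_indices):
    """segment_bit_indices: dict of segment name -> list of bit indices
    into the mother code word. Returns the full ordering vector."""
    order = []
    for name in PRIORITY:
        order.extend(segment_bit_indices.get(name, []))
    return order

vec = dicom_ordering({
    "metadata": [0, 1],
    "low_res":  [2, 3, 4],
    "high_res": [8, 9],
    "parity":   [5, 6, 7],
})
# -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]: a preview is decodable from an early prefix
```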
    
3.2. Resilient Over-the-Air Updates for Automotive ECUs
  • Enabling Description: A firmware update for a vehicle's Electronic Control Unit (ECU) is the "digital data block." The update is critical and must not be corrupted. The automotive OEM's server encodes the firmware binary into a mother code word. The vehicle's Telematics Control Unit (TCU) downloads the update over a cellular connection, which is an "available gross rate channel" that varies with vehicle location and network congestion. The server reorders the mother code word using an ordering vector that prioritizes the bootloader and critical function libraries first, followed by less critical components and parity data. The TCU downloads the update in variable-sized subsequences whenever it has a stable connection (e.g., parked overnight). If a download is interrupted, it resumes by requesting the next subsequence from where it left off. This incremental and robust method ensures the update can be completed over many short, unreliable connection windows.
  • Mermaid Diagram:
    erDiagram
        OEM_SERVER {
            string FirmwareBinary
            string MotherCodeWord
            string ReorderedMotherCodeWord
        }
        VEHICLE_TCU {
            string StoredSubsequences
            bool UpdateComplete
        }
        OEM_SERVER ||--o{ VEHICLE_TCU : "sends subsequences to"
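  • Illustrative Sketch (Python): The TCU-side resume logic can be sketched as follows: the TCU tracks how much of the reordered stream it holds and, after an interruption, requests the next subsequence from that offset. Class and method names are illustrative.

```python
# Sketch of the resumable OTA download. Out-of-order pieces are rejected so
# the TCU always resumes from a known-good prefix of the reordered stream.

class ResumableUpdate:
    def __init__(self, total_bits):
        self.buf = []
        self.total = total_bits

    def resume_offset(self):
        """Offset the TCU requests after a dropped connection."""
        return len(self.buf)

    def accept(self, offset, subsequence):
        """Store a subsequence; reject anything that is not the next piece."""
        if offset != len(self.buf):
            return False          # gap: re-request from resume_offset()
        self.buf.extend(subsequence)
        return True

    def complete(self):
        return len(self.buf) >= self.total

tcu = ResumableUpdate(total_bits=6)
tcu.accept(0, [1, 0, 1])                     # first connection window
# ... connection drops ...
tcu.accept(tcu.resume_offset(), [0, 0, 1])   # resumes at offset 3
# tcu.complete() is now True
```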
    

Axis 4: Integration with Emerging Technology

4.1. AI-Optimized Reordering for Semantic Communication
  • Enabling Description: The system is integrated into a semantic communication framework. Instead of transmitting raw bits, the transmitter sends the latent space representation of data (e.g., an image) generated by a deep neural network. This latent representation is the "digital data block." It is encoded to create a "mother code word." A separate AI model at the transmitter analyzes the importance of each element in the latent representation to the final reconstructed quality. This AI model generates an "ordering vector" that prioritizes the most semantically significant elements. The receiver, which has the corresponding decoder network, can reconstruct a meaningful, high-quality version of the data even with only the initial subsequences. Subsequent subsequences add finer detail rather than just correcting random bit errors.
  • Mermaid Diagram:
    graph TD
        subgraph Transmitter
            A[Source Data] --> B["Semantic Encoder (NN)"];
            B --> C["Latent Representation (Data Block)"];
            C --> D[Channel Encoder];
            D --> E[Mother Code Word];
            F["Semantic Importance Analyzer (AI)"] --> G{Ordering Vector Generator};
            C --> F;
            E --> H[Reordering Circuit];
            G --> H;
            H --> I[Reordered Stream];
            I --> J[Subsequence TX];
        end
        subgraph Receiver
            K[Subsequence RX] --> L["Semantic Decoder (NN)"];
            L --> M[Reconstructed Data];
        end
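  • Illustrative Sketch (Python): A minimal sketch of the importance-driven ordering. The analyzer network is stubbed with absolute activation magnitude, a crude proxy; the disclosure assumes a trained saliency model here.

```python
# Sketch of semantic-importance ordering over a latent representation.
# |activation| stands in for the learned importance score.

def semantic_ordering(latent):
    """Permutation of latent-element indices, most important first."""
    return sorted(range(len(latent)), key=lambda i: abs(latent[i]), reverse=True)

latent = [0.1, -2.4, 0.9, 3.0]
vec = semantic_ordering(latent)
# -> [3, 1, 2, 0]: element 3 contributes most to reconstruction quality,
# so it lands in the earliest subsequences.
```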
    

Axis 5: The "Inverse" or Failure Mode

5.1. Graceful Degradation via Truncated Ordering Vectors
  • Enabling Description: For a power-constrained device (e.g., a battery-powered sensor), the transmitter stores multiple, pre-computed ordering vectors: V_full, V_medium, V_low_power. V_full provides the highest error correction. V_medium is a truncated version that omits the lowest-priority third of the parity bits. V_low_power is further truncated, containing only systematic bits and a small set of high-priority parity bits. When the device's power manager reports a battery level below a threshold (e.g., 20%), the transmitter switches from using V_full to V_low_power. It creates subsequences from this much shorter reordered mother code word. This reduces the computational load of the reordering/selection process and the transmission energy per packet, extending device life at the cost of reduced robustness. The receiver is notified of the vector change via a header field.
  • Mermaid Diagram:
    stateDiagram-v2
        state "High Power (>50%)" as High
        state "Medium Power (20-50%)" as Med
        state "Low Power (<20%)" as Low
    
        [*] --> High: Power On
        High --> Med: Battery Drain
        Med --> Low: Battery Drain
        Low --> Med: Charging
        Med --> High: Charging
        Low --> [*]: Power Off
    
        High: Use V_full
        Med: Use V_medium
        Low: Use V_low_power
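  • Illustrative Sketch (Python): The vector switch can be sketched as truncation of a single stored V_full. The 50% and 20% thresholds and the "drop the lowest-priority third of parity" rule follow the description; the 10% high-priority parity share for V_low_power is an assumed figure.

```python
# Sketch of the battery-threshold ordering-vector switch. V_medium and
# V_low_power are derived by truncating V_full's parity tail.

def select_vector(v_full, n_systematic, battery_pct):
    parity = v_full[n_systematic:]
    if battery_pct > 50:                       # V_full
        return v_full
    if battery_pct >= 20:                      # V_medium: drop lowest third of parity
        keep = len(parity) - len(parity) // 3
        return v_full[:n_systematic] + parity[:keep]
    # V_low_power: systematic bits plus a small high-priority parity set
    # (10% of parity here is an assumption, not from the description)
    keep = max(1, len(parity) // 10)
    return v_full[:n_systematic] + parity[:keep]

v_full = list(range(10))   # 4 systematic + 6 parity bits, priority order
full = select_vector(v_full, 4, 80)    # 10 bits
med = select_vector(v_full, 4, 35)     # 8 bits: 2 of 6 parity bits dropped
low = select_vector(v_full, 4, 10)     # 5 bits: systematic + 1 parity bit
```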
    

Combination Prior Art with Open-Source Standards

1. Integration with QUIC and Forward Error Correction (FEC)
  • Disclosure: A system where the reordering method of US 6,604,216 is implemented as a pluggable congestion control and reliability module for the QUIC transport protocol. An application data stream is treated as the "digital data block" and is encoded using a fountain code (e.g., RaptorQ) to create a virtually infinite "mother code word." The ordering vector is a simple sequential counter. When sending data, the QUIC endpoint determines the maximum packet size for the current path MTU. It creates a "subsequence" of that size from the encoded stream and sends it in a QUIC packet. If a packet is lost, the sender does not re-transmit the same data; instead, it sends the next subsequence in the sequence. The receiver can reconstruct the original stream as long as it receives any K' packets, where K' is slightly larger than the original number of source packets K. This integrates the flexible payload sizing of the patent with the loss resilience of fountain codes and the modern transport features of QUIC.
2. Integration with WebRTC for Adaptive Video Streaming
  • Disclosure: A video conferencing system using the WebRTC open standard. A single uncompressed video frame is the "digital data block." It is encoded using a scalable video coding (SVC) scheme (e.g., AV1-SVC), creating multiple layers of data (base layer, enhancement layers). The combination of all layers forms the "mother code word." An "ordering vector" is constructed that lists all bits from the base layer first, followed by the bits of the first enhancement layer, and so on. The WebRTC endpoint constantly estimates the available network bandwidth. For each transmission opportunity, it selects a "subsequence" from the reordered stream with a length that matches the available bandwidth. If bandwidth is low, it sends only the base layer. If bandwidth is high, it sends the base layer plus one or more enhancement layers in a single transport packet. This allows for seamless, granular adaptation of video quality without the overhead of managing separate streams for each quality level.
3. Integration with the 5G NR Physical Layer (3GPP TS 38.212)
  • Disclosure: A method for mapping data to physical resources in a 5G New Radio (NR) system. The 3GPP standards define a flexible physical layer where a user can be allocated a variable number of Physical Resource Blocks (PRBs). This variable allocation is the "available gross rate channel." The data for a transport block is encoded using the standardized LDPC encoder to create the "mother code word." The rate matching and HARQ functions are modified to use the reordering principle. A single reordered mother code word is created. When the MAC layer schedules the user with a certain number of PRBs for a slot, the physical layer simply takes the next subsequence of bits from the reordered stream that is exactly long enough to fill the allocated PRBs. This simplifies the rate-matching procedure, eliminating the need for complex puncturing/shortening calculations for every possible resource allocation size, and provides a more streamlined approach to incremental redundancy.
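The simplified rate matching in item 3 can be sketched as reading the next subsequence of the reordered LDPC stream sized exactly to the slot's PRB allocation. This is a minimal illustration: `bits_per_prb` is a stand-in for the per-PRB capacity, which in NR depends on modulation order, numerology, and overhead per 3GPP TS 38.212/38.214.

```python
# Sketch of PRB-fill rate matching: each slot consumes the next subsequence
# of the reordered mother code word, with no per-allocation puncturing
# or shortening calculations.

class PrbFillRateMatcher:
    def __init__(self, reordered_stream, bits_per_prb):
        self.stream = reordered_stream
        self.bits_per_prb = bits_per_prb
        self.pos = 0

    def fill_slot(self, n_prbs):
        """Return the bits for one slot's allocation of n_prbs."""
        k = min(n_prbs * self.bits_per_prb, len(self.stream) - self.pos)
        out = self.stream[self.pos:self.pos + k]
        self.pos += k
        return out

rm = PrbFillRateMatcher(list(range(20)), bits_per_prb=4)
slot1 = rm.fill_slot(2)   # 8 bits for a 2-PRB allocation
slot2 = rm.fill_slot(1)   # next 4 bits; incremental redundancy falls out free
# slot1 + slot2 == list(range(12))
```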
