Patent 10778989

Derivative works

Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.


Defensive Disclosure: Rolling Intra-Prediction Enhancements and Cross-Domain Applications

Publication Date: April 26, 2026
Reference Patent: US 10,778,989 ("the '989 patent")
Field: Digital Data Compression, Image and Video Coding, Signal Processing

This document discloses novel enhancements, applications, and combinations related to the "rolling intra prediction" methods described in US patent 10,778,989. The purpose of this disclosure is to place these concepts into the public domain, thereby establishing prior art against future patent applications claiming these or similar incremental innovations.


Derivative Variations on Core Claims

The following disclosures are derivative works based on the independent claims of the '989 patent.

Axis 1: Material & Component Substitution

Derivative 1.1: Neuromorphic Predictive Coding

  • Enabling Description: The prediction function, defined in the '989 patent as a linear weighted function, is replaced with a low-power Spiking Neural Network (SNN) implemented on a neuromorphic co-processor (e.g., an Intel Loihi or IBM TrueNorth architecture). The SNN is pre-trained to recognize textural and edge patterns. During encoding/decoding, the "mode" selects a specific pre-trained SNN model. The inputs to the SNN are not pixel intensity values but are instead temporal spike trains converted from the reference pixel intensities. The SNN's output spike train is then converted back into a predicted pixel value. This substitution replaces conventional arithmetic logic units (ALUs) with specialized, energy-efficient neuromorphic hardware, achieving the same predictive function with substantially lower power consumption.
  • Mermaid Diagram:
    graph TD
        A[Reference Pixels] --> B(Pixel-to-Spike Converter);
        B --> C{Spiking Neural Network};
        D[Mode Selection] --> C;
        C --> E(Spike-to-Pixel Converter);
        E --> F[Predicted Pixel];
        F -- feeds back as input --> A;
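
The pixel/spike conversion at the SNN boundary can be illustrated with a simple rate-coding scheme. This is a hypothetical sketch: the function names, the fixed 8-step window, and the rate-coding choice are assumptions for illustration, not details from the patent.

```python
def pixel_to_spike_train(intensity, window=8, max_val=255):
    # Rate coding: brighter pixels fire more spikes within a fixed window.
    # The n spikes are emitted at the start of the window for simplicity.
    n = round(intensity * window / max_val)
    return [1 if i < n else 0 for i in range(window)]

def spike_train_to_pixel(train, max_val=255):
    # Inverse conversion: map the spike count back to an intensity estimate.
    return round(sum(train) * max_val / len(train))
```

A real neuromorphic deployment would likely use temporal or population coding, but the round trip above shows the quantisation such a converter introduces at the prediction boundary.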
    

Derivative 1.2: Logarithmic Domain Prediction

  • Enabling Description: All pixel-level calculations for the prediction function are performed in the logarithmic domain instead of the linear domain. Reference and predicted pixel values are first converted to a logarithmic representation (e.g., base 2). The prediction function then becomes a series of additions and subtractions, which are computationally cheaper than the multiplications required by weighted-average functions in the linear domain. The final predicted value is converted back to the linear domain via an exponentiation operation. This is particularly effective for hardware with limited multiplier resources, such as low-cost microcontrollers.
  • Mermaid Diagram:
    flowchart LR
        subgraph Logarithmic Prediction Module
            A[Input Pixels] --> B{Logarithmic Converter};
            B --> C[Log-Domain Prediction Function];
            C --> D{Exponential Converter};
            D --> E[Output Predicted Pixel];
        end
        E -- rolling input --> A;
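
A minimal sketch of the conversion pipeline, assuming 8-bit samples and a +1 bias to keep log2(0) defined; both the bias and the base-2 choice are assumptions, since the patent does not fix them.

```python
import math

def log_domain_predict(left, top):
    # Convert reference pixels to the log2 domain; the +1 bias avoids log2(0).
    ll = math.log2(left + 1)
    lt = math.log2(top + 1)
    # Averaging in the log domain corresponds to a geometric-mean predictor
    # in the linear domain, computed with one addition and one halving.
    lp = (ll + lt) / 2
    # Exponentiate back to the linear domain and round to an integer sample.
    return round(2 ** lp - 1)
```

In fixed-point hardware the log and exponential steps would typically be small lookup tables, so the per-pixel cost reduces to table reads and additions.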
    

Derivative 1.3: Ternary and Quaternary Coefficient Prediction

  • Enabling Description: The prediction function's weighting factors are constrained to a ternary (-1, 0, 1) or quaternary (-2, -1, 1, 2) set. This eliminates the need for floating-point or complex integer multipliers. The prediction calculation is reduced to a series of register shifts (for powers of 2) and additions/subtractions. The encoder selects the mode that provides the best rate-distortion performance using this constrained coefficient set, slightly trading prediction accuracy for a significant reduction in computational complexity.
  • Mermaid Diagram:
    sequenceDiagram
        participant Encoder
        participant PredictionEngine
        Encoder->>PredictionEngine: Select Mode (Ternary Coefficients)
        PredictionEngine->>PredictionEngine: p(x,y) = ref(x-1,y) - ref(x-1,y-1)
        PredictionEngine-->>Encoder: Predicted Block
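
The multiplier-free evaluation can be sketched for the quaternary set {-2, -1, 1, 2}; the function name and argument layout are illustrative.

```python
def constrained_predict(refs, coeffs):
    # Coefficients restricted to {-2, -1, 1, 2}: a magnitude of 2 is a
    # one-bit left shift, so no general-purpose multiplier is needed.
    acc = 0
    for r, c in zip(refs, coeffs):
        term = r << 1 if abs(c) == 2 else r
        acc = acc + term if c > 0 else acc - term
    return acc
```

The ternary set {-1, 0, 1} is the same loop with the shift removed and zero-coefficient terms skipped.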
    

Axis 2: Operational Parameter Expansion

Derivative 2.1: Gigapixel Image Tiling Prediction

  • Enabling Description: For processing gigapixel-scale images (e.g., satellite imagery, digital pathology), the rolling intra-prediction is applied to macro-tiles (e.g., 8192x8192 pixels). The traversing order is defined by a Hilbert space-filling curve to maintain spatial locality. To manage memory, only a "rolling buffer" of the N most recently predicted rows and the M most recently predicted pixels in the current row are kept in active memory. This allows the prediction to propagate across massive image tiles with a fixed memory footprint, operating at an industrial scale.
  • Mermaid Diagram:
    stateDiagram-v2
        [*] --> TraversingTile
        TraversingTile --> PredictingPixel: Next pixel in Hilbert curve order
        PredictingPixel --> UpdatingBuffer: Store predicted pixel
        UpdatingBuffer --> PredictingPixel: Use buffered pixels for next prediction
        UpdatingBuffer --> TraversingTile: If end of row/column
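
The fixed-footprint "rolling buffer" of recent rows can be sketched with a bounded deque; the class name and interface are assumptions for illustration.

```python
from collections import deque

class RollingRowBuffer:
    # Keeps only the n_rows most recently predicted rows; older rows are
    # evicted automatically, so memory use is fixed regardless of tile size.
    def __init__(self, width, n_rows):
        self.width = width
        self.rows = deque(maxlen=n_rows)

    def push_row(self, row):
        assert len(row) == self.width
        self.rows.append(list(row))

    def get(self, rows_back, x):
        # rows_back = 1 is the most recent predicted row, 2 the one before.
        return self.rows[-rows_back][x]
```

Because a Hilbert-curve traversal keeps successive pixels spatially close, a small number of buffered rows covers the reference template for each new prediction.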
    

Derivative 2.2: Cryogenic Sensor Data Compression

  • Enabling Description: The rolling intra-prediction algorithm is implemented on an FPGA situated in a cryogenic environment (e.g., < 77 Kelvin) for compressing data from superconducting quantum sensors or deep-space telescopes. At these temperatures, thermal noise in the pixel data is minimal and exhibits different statistical properties. The prediction functions are optimized for this low-noise environment, using higher-order predictors that would be ineffective with room-temperature sensor data. The traversing order is aligned with the sensor's electronic readout sequence to use predicted pixel values as soon as they are generated.
  • Mermaid Diagram:
    graph TD
        subgraph Cryogenic Dewar
            A(Superconducting Sensor Array) --> B(FPGA Codec);
            B --> C{Rolling Intra-Prediction Module};
        end
        C -- optimized for low noise --> D[Predicted Pixel Stream];
        B -- feeds back predicted data --> C;
        D --> E(Compressed Data Output);
    

Derivative 2.3: High-Frequency Volumetric Data Prediction (4D)

  • Enabling Description: The rolling prediction is extended from 2D blocks to 3D blocks (X×Y×Z) for compressing time-series volumetric data, such as 4D medical scans (MRI, CT) or computational fluid dynamics simulations. The traversing order moves through the volume slice by slice (Z) and, within each slice, by a raster scan (X×Y). The prediction template is 3D, using reference pixels from previously predicted planes (Z-1) as well as adjacent pixels in the current plane. This exploits spatial redundancy across all three dimensions.
  • Mermaid Diagram:
    classDiagram
        class VolumetricCodec {
            +processVoxel(x, y, z)
        }
        class PredictionFunction3D {
            +calculatePrediction(referenceVoxels)
        }
        class RollingBuffer3D {
            +getVoxel(x, y, z)
            +setVoxel(x, y, z, value)
        }
        VolumetricCodec --> PredictionFunction3D : uses
        VolumetricCodec --> RollingBuffer3D : manages
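
A sketch of a 3D template predictor that uses the previous slice, assuming the volume is indexed as vol[z][y][x]; the weights are illustrative, since the patent defines neither the template nor its weighting.

```python
def predict_voxel(vol, x, y, z):
    # 3D template: previously processed neighbours in the current slice
    # plus the co-located voxel in the previous slice (z - 1).
    left = vol[z][y][x - 1] if x > 0 else 0
    up = vol[z][y - 1][x] if y > 0 else 0
    prev = vol[z - 1][y][x] if z > 0 else 0
    # Weighted average favouring the previous slice (illustrative weights).
    return (left + up + 2 * prev) // 4
```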
    

Axis 3: Cross-Domain Application

Derivative 3.1: Aerospace - Hypersonic Flow Field Prediction

  • Enabling Description: In computational fluid dynamics (CFD) for hypersonic vehicle design, the rolling prediction method is used to compress the massive datasets representing pressure and temperature fields. The simulation grid is treated as a 2D or 3D image. The prediction algorithm runs in-situ with the solver, compressing data for each time step. The traversing order follows the direction of the dominant shockwave, as the data values across the shock front are highly correlated. This allows for more efficient storage and post-processing of simulation runs.
  • Mermaid Diagram:
    flowchart TD
        A[CFD Solver Step n] --> B(Generate Flow Field Data);
        B --> C{Partition into Blocks};
        C --> D(Select Traversing Order along Shockwave);
        D --> E{Apply Rolling Intra-Prediction};
        E --> F[Store Compressed Field Data];
        F --> G(Solver Step n+1);
    

Derivative 3.2: AgTech - Soil Nutrient Map Compression

  • Enabling Description: Data from ground-penetrating radar and chemical sensors on automated farm equipment creates large, spatially correlated maps of soil nutrients (nitrogen, phosphorus, potassium). The rolling prediction method is used to compress these maps for transmission and storage. Since nutrient levels often form smooth gradients, the prediction functions are low-pass filters that propagate values smoothly across the map. The traversing order follows the path of the sensing vehicle to maximize the utility of recently predicted data points.
  • Mermaid Diagram:
    sequenceDiagram
        participant SensorVehicle
        participant OnboardCodec
        participant CloudStorage
        SensorVehicle->>OnboardCodec: Stream of Nutrient Data (N, P, K)
        OnboardCodec->>OnboardCodec: Apply Rolling Prediction along vehicle path
        OnboardCodec->>CloudStorage: Transmit Compressed Nutrient Map
    

Derivative 3.3: Consumer Electronics - Real-time E-ink Display Updates

  • Enabling Description: For partial updates on an E-ink display, the rolling prediction method is used to calculate the residual (the change between the old and new screen buffer). This is more efficient than transmitting the full updated block. The traversing order starts at the top-left corner of the changed region. The prediction function uses a simple predictor (e.g., p(x,y) = p(x-1, y)) to quickly generate a predicted state. Only the residual needs to be sent to the display controller, reducing bandwidth and power for screen updates.
  • Mermaid Diagram:
    stateDiagram-v2
        state "Partial Update" as Update {
            [*] --> Calculating_Residual
            Calculating_Residual --> Transmitting_Residual: Use Rolling Prediction
            Transmitting_Residual --> [*]
        }
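
The left-neighbour predictor and residual transmission can be sketched as a row-wise encode/decode pair; the function names are illustrative, and the seed pixel is assumed to come from the previous buffer state.

```python
def encode_row(row, seed):
    # p(x) = p(x - 1): each pixel is predicted from its left neighbour;
    # only the residual (difference) is sent to the display controller.
    residuals, prev = [], seed
    for px in row:
        residuals.append(px - prev)
        prev = px
    return residuals

def decode_row(residuals, seed):
    # The controller reconstructs the row by accumulating residuals.
    row, prev = [], seed
    for r in residuals:
        prev += r
        row.append(prev)
    return row
```

For a mostly unchanged region the residuals are mostly zero, which is exactly the case where transmitting residuals instead of the full block saves bandwidth and power.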
    

Axis 4: Integration with Emerging Tech

Derivative 4.1: AI-Driven Mode and Template Selection

  • Enabling Description: A convolutional neural network (CNN) analyzes the current block to be encoded. Based on its learned understanding of textures, edges, and patterns, the CNN directly outputs the optimal prediction mode, traversing order, and template shape for the rolling prediction algorithm. This replaces the brute-force rate-distortion optimization search, significantly speeding up the encoding process. The CNN's output is encoded as side information in the bitstream for the decoder.
  • Mermaid Diagram:
    graph TD
        A[Current Block] --> B(CNN Analyzer);
        B --> C[Optimal Mode];
        B --> D[Optimal Traversing Order];
        B --> E[Optimal Template];
        subgraph Encoder
            F{Rolling Prediction Engine}
        end
        C & D & E --> F;
    

Derivative 4.2: IoT Sensor Network Data Compression

  • Enabling Description: In a dense IoT sensor network (e.g., environmental monitoring), each sensor node uses rolling intra-prediction to compress its time-series data. The "block" is a 1D vector of recent sensor readings. The "neighboring reference samples" are the last readings from adjacent physical sensors, received via low-power radio. The prediction then "rolls" forward in time, predicting the sensor's next value based on its own previously predicted values and data from its neighbors. This exploits both temporal and spatial redundancy in the sensor field.
  • Mermaid Diagram:
    erDiagram
        SENSOR ||--o{ READING : has
        SENSOR }o--|| SENSOR : neighbors
        READING {
            string value
            datetime timestamp
        }
        SENSOR {
            int id
            string location
        }
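
One hypothetical sketch of the 1D spatio-temporal predictor: the node blends its own last predicted value with a neighbour's most recent reading using a shift instead of a multiply. The equal weighting is an assumption for illustration.

```python
def predict_next(own_last, neighbour_last):
    # Average the node's own previous predicted value and the latest
    # reading from an adjacent sensor; >> 1 halves without a multiplier,
    # which suits microcontroller-class IoT nodes.
    return (own_last + neighbour_last) >> 1
```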
    

Derivative 4.3: Blockchain-Verified Prediction

  • Enabling Description: In applications requiring verifiable data integrity (e.g., medical imaging, satellite surveillance), the predicted block is generated using rolling intra-prediction. A cryptographic hash of the final predicted block is then calculated and stored on a blockchain, along with the mode and reference sample data. A third-party decoder can independently regenerate the exact same predicted block using the public information and verify that its hash matches the one on the blockchain, proving that the prediction process was not tampered with.
  • Mermaid Diagram:
    sequenceDiagram
        participant Encoder
        participant Blockchain
        participant Decoder
        Encoder->>Encoder: Generate Predicted Block (p)
        Encoder->>Blockchain: Store HASH(p), mode, refs
        Decoder->>Blockchain: Retrieve mode, refs
        Decoder->>Decoder: Generate Predicted Block (p')
        alt HASH(p) == HASH(p')
            Decoder->>Decoder: Verification Success
        else
            Decoder->>Decoder: Verification Failure
        end
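
Hashing and verifying the predicted block can be sketched with SHA-256; the byte packing and the 8-bit-sample assumption are illustrative, not specified by the disclosure.

```python
import hashlib

def block_hash(block):
    # Hash the predicted block's samples (assumes 8-bit values per pixel).
    return hashlib.sha256(bytes(block)).hexdigest()

def verify(predicted_block, stored_hash):
    # A third party regenerates the block from the public mode and
    # reference data, then checks its hash against the anchored value.
    return block_hash(predicted_block) == stored_hash
```

Because the prediction process is deterministic given the mode and reference samples, any tampering with either input changes the regenerated block and therefore the hash.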
    

Axis 5: The "Inverse" or Failure Mode

Derivative 5.1: Graceful Degradation Mode

  • Enabling Description: The encoder/decoder monitors its own computational load or battery level. If the load exceeds a threshold (or the battery falls below one), it enters a "low-power" mode. In this mode, the set of available prediction modes is restricted to only the simplest functions (e.g., horizontal prediction, where p(x,y) = p(x-1,y)). Furthermore, the rolling mechanism is disabled; all pixels are predicted only from the external reference samples. This reduces prediction accuracy (increasing the residual size) but drastically cuts CPU cycles, preventing device overheating or battery drain under stress.
  • Mermaid Diagram:
    stateDiagram-v2
        state "High Performance" as HP
        state "Low Power" as LP
        [*] --> HP
        HP --> LP: CPU_Load > 85%
        LP --> HP: CPU_Load < 50%
        HP: Full set of prediction modes, Rolling enabled
        LP: Simplified modes, Rolling disabled
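
The load-driven mode switch uses hysteresis (two thresholds) so the codec does not oscillate at the boundary; the sketch below mirrors the thresholds in the diagram.

```python
def select_mode_set(cpu_load, current):
    # Hysteresis: switch down above 85% load, back up only below 50%.
    if current == "high" and cpu_load > 0.85:
        return "low"    # restricted modes, rolling disabled
    if current == "low" and cpu_load < 0.50:
        return "high"   # full mode set, rolling enabled
    return current
```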
    

Derivative 5.2: Safe-Failure Prediction for Safety-Critical Systems

  • Enabling Description: In an automotive or aviation vision system, if the decoder detects corrupted input data (e.g., a failed checksum on the bitstream for a block), it triggers a "safe-failure" prediction mode instead of crashing. In this mode, the entire block is filled by propagating the value of the top-left-most available valid pixel from a neighboring block. This creates a visually obvious, flat-colored block, which is preferable to a block of random noise or a system halt. This ensures the rest of the image frame can be decoded and presented, alerting the operator or an AI system to a data integrity issue in a predictable way.
  • Mermaid Diagram:
    flowchart TD
        A{Decode Block Header} --> B{Checksum OK?};
        B -- Yes --> C(Perform Rolling Prediction);
        B -- No --> D(Enter Safe-Failure Mode);
        D --> E(Fill block with neighbor pixel value);
        C --> F[Reconstruct Block];
        E --> F;
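
The safe-failure path can be sketched using CRC-32 as a stand-in for whatever checksum the bitstream carries; the checksum choice and function signature are assumptions.

```python
import zlib

def decode_block(payload, checksum, neighbour_pixel, h, w, rolling_decode):
    # On checksum failure, fill the block with one valid neighbouring pixel:
    # a flat, visually obvious block instead of random noise or a crash.
    if zlib.crc32(payload) != checksum:
        return [[neighbour_pixel] * w for _ in range(h)]
    return rolling_decode(payload, h, w)
```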
    

Combination Prior Art Scenarios

Combination 1: Integration with AV1 Video Codec

  • Description: The rolling intra-prediction method of the '989 patent is integrated into the open-source AV1 video codec as a new set of intra-prediction modes. The existing AV1 framework for signaling prediction modes is used to signal the selection of a "rolling" mode. The AV1 reference frame buffer provides the external "neighboring reference samples." The rolling prediction is implemented as a new function within the av1_predict_intra_block library call, selectable by a new mode enum. This combines the novel prediction-generation process with a standardized, widely adopted codec framework.

Combination 2: GStreamer Multimedia Framework Plugin

  • Description: A GStreamer plugin (gst-rollingpred) is created that implements the encoder and decoder described in the '989 patent. The plugin exposes standard GStreamer source and sink pads. When encoding, it accepts raw video frames and outputs a bitstream compliant with the patent's method. When decoding, it accepts the compliant bitstream and outputs raw video frames. This allows the rolling prediction codec to be used within any GStreamer-based application by simply linking the new element into a pipeline (e.g., gst-launch-1.0 filesrc location=video.raw ! videoconvert ! gst-rollingpred-enc ! filesink location=video.rll).

Combination 3: Implementation within FFmpeg

  • Description: The rolling intra-prediction codec is implemented within the open-source FFmpeg library. A new codec ID (AV_CODEC_ID_ROLLINGPRED) is defined. Encoder and decoder structures are added to libavcodec, implementing the logic from the '989 patent. This enables any application built on FFmpeg to encode or decode this format natively. The implementation uses FFmpeg's internal AVFrame structures for handling pixel data and AVPacket for the output bitstream, fully integrating the method into this ubiquitous multimedia processing tool.
