
Derivative works

Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.


Defensive Disclosure based on U.S. Patent 11,991,234

Publication Date: April 26, 2026
Disclosed by: [Inventor's Name], Senior Patent Strategist and Research Engineer

This document discloses technical variations and new applications of the methods, systems, and apparatuses described in U.S. Patent 11,991,234 ("Apparatus, system, and method for multi-bitrate content streaming"). The intent of this disclosure is to place these concepts into the public domain to serve as prior art against future patent applications that may claim these variations as novel inventions.


Core Concept 1: Streamlet Segmentation and Multi-Bitrate Set Generation

The foundational concept involves segmenting a media file into time-indexed "streamlets" and creating a "set" of these streamlets at various bitrates for each time index.

Derivative 1.1: Material & Component Substitution

  • Variation: Content-Defined Chunking for Streamlet Segmentation.

  • Enabling Description: Instead of segmenting media content into streamlets of fixed temporal duration (e.g., 2 seconds), segmentation points are determined by the content itself using content-defined chunking (CDC) algorithms, such as the Rabin-Karp rolling hash. A sliding window calculates a hash of the content, and a boundary is declared when the hash value meets a predefined condition (e.g., the lower N bits are zero). This results in variable-duration streamlets, but ensures that identical content segments (e.g., repeated scenes, intros) produce identical streamlets, enabling significant cross-file caching and storage deduplication efficiencies at the server and CDN level. The "set" for a given time index would then reference the appropriate content-defined chunk and provide multiple bitrate-encoded versions of that specific chunk.

  • Mermaid Diagram:

    graph TD
        A[Original Media Content] --> B{Rabin-Karp Rolling Hash};
        B --> C{Chunk Boundary Detection};
        C --> D[Variable-Duration Streamlets];
        D --> E{Encoding Module};
        E --> F[Set 1: Chunk_A_360p, Chunk_A_720p, Chunk_A_1080p];
        E --> G[Set 2: Chunk_B_360p, Chunk_B_720p, Chunk_B_1080p];
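
  • Code Sketch (Illustrative): The chunking step above can be sketched as follows, assuming a simple polynomial rolling hash; the window size, mask width, base, and modulus are illustrative choices, not parameters from the patent.

```python
WINDOW = 16            # bytes in the rolling-hash sliding window (assumed)
MASK = (1 << 12) - 1   # boundary when hash & MASK == 0 (~4 KiB average chunks)
BASE, MOD = 257, (1 << 31) - 1
POW_W = pow(BASE, WINDOW, MOD)  # weight of the byte that leaves the window

def chunk_boundaries(data: bytes) -> list[int]:
    """Return the end offsets of content-defined chunks in `data`."""
    boundaries = []
    h = 0
    for i, byte in enumerate(data):
        h = (h * BASE + byte) % MOD
        if i >= WINDOW:
            # Remove the contribution of the byte that slid out of the window.
            h = (h - data[i - WINDOW] * POW_W) % MOD
        if i >= WINDOW - 1 and (h & MASK) == 0:
            boundaries.append(i + 1)
    if boundaries and boundaries[-1] == len(data):
        return boundaries
    return boundaries + [len(data)]  # close the final partial chunk
```

Because boundaries depend only on window contents, identical content segments yield identical chunks regardless of their position in the file, which is what enables the deduplication described above.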
    
  • Variation: Perceptual Hashing for Segmentation.

  • Enabling Description: Segmentation points are determined by significant changes in the visual or auditory information, identified using perceptual hashing algorithms (e.g., pHash, aHash). The system generates a perceptual hash for each frame or audio block. When the Hamming distance between consecutive hashes exceeds a set threshold, it signifies a scene change, and a new streamlet boundary is created. This aligns segment boundaries with natural scene cuts, potentially improving the compression efficiency of each streamlet as they are more likely to be self-contained scenes.

  • Mermaid Diagram:

    sequenceDiagram
        participant CM as Content Module
        participant SM as Streamlet Module
        participant EM as Encoding Module
        CM->>SM: Raw Video/Audio Frames
        loop For Each Frame
            SM->>SM: Calculate pHash(frame_n)
            SM->>SM: Compare distance(pHash_n, pHash_n-1)
            opt Hamming Distance > Threshold
                SM->>SM: Mark Frame_n-1 as Streamlet Boundary
            end
        end
        SM->>EM: Perceptually-Defined Streamlets
        EM-->>EM: Generate Multi-Bitrate Sets
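
  • Code Sketch (Illustrative): The boundary-detection loop above can be sketched with a simple average hash (aHash); frames are represented as flat lists of grayscale pixel values, and the 64-pixel frame size and Hamming-distance threshold are illustrative assumptions.

```python
def ahash(pixels: list[int]) -> int:
    """Average hash: one bit per pixel, set if the pixel is above the mean."""
    mean = sum(pixels) / len(pixels)
    h = 0
    for p in pixels:
        h = (h << 1) | (1 if p > mean else 0)
    return h

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def find_boundaries(frames: list[list[int]], threshold: int = 10) -> list[int]:
    """Return indices of frames that start a new streamlet (scene cuts)."""
    hashes = [ahash(f) for f in frames]
    return [i for i in range(1, len(frames))
            if hamming(hashes[i], hashes[i - 1]) > threshold]
```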
    

Derivative 1.2: Operational Parameter Expansion

  • Variation: Nanosecond-Scale Streamlets for Low-Latency Industrial Control.

  • Enabling Description: The method is applied to stream high-frequency sensor data from industrial machinery, where latency is critical. Media content is defined as a stream of time-series data (e.g., pressure, vibration, temperature) sampled at MHz frequencies. "Streamlets" are generated at microsecond or nanosecond durations. The "multi-bitrate" sets are defined not by visual quality, but by data precision (e.g., 8-bit, 16-bit, 32-bit float representations) and sampling frequency (e.g., 1MHz, 500kHz, 100kHz). A remote control system can dynamically switch between these "bitrates" to balance real-time responsiveness with data fidelity based on network jitter and packet loss between the factory floor and the control center.

  • Mermaid Diagram:

    stateDiagram-v2
        [*] --> HighFidelity
        HighFidelity: 1MHz Sampling, 32-bit Precision
        HighFidelity --> MediumFidelity: Network Jitter > 5ms
        MediumFidelity: 500kHz Sampling, 16-bit Precision
        MediumFidelity --> LowFidelity: Packet Loss > 2%
        LowFidelity: 100kHz Sampling, 8-bit Precision
        LowFidelity --> MediumFidelity: Packet Loss < 1%
        MediumFidelity --> HighFidelity: Network Jitter < 2ms
        LowFidelity --> [*]: Connection Lost
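
  • Code Sketch (Illustrative): The state machine above can be sketched as a transition function over measured jitter and loss; the thresholds mirror the diagram, while the tier parameters are illustrative values.

```python
# Fidelity tiers for the industrial-telemetry stream (sampling rate and data
# precision stand in for video "bitrate"); values are illustrative.
TIERS = {
    "high":   {"sampling_hz": 1_000_000, "precision_bits": 32},
    "medium": {"sampling_hz": 500_000,   "precision_bits": 16},
    "low":    {"sampling_hz": 100_000,   "precision_bits": 8},
}

def next_tier(tier: str, jitter_ms: float, loss_pct: float) -> str:
    """One step of the adaptation state machine from the diagram."""
    if tier == "high" and jitter_ms > 5:
        return "medium"
    if tier == "medium":
        if loss_pct > 2:
            return "low"
        if jitter_ms < 2:
            return "high"
    if tier == "low" and loss_pct < 1:
        return "medium"
    return tier  # conditions not met: stay in the current tier
```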
    
  • Variation: Terabyte-Scale Streamlets for Genomic Data Processing.

  • Enabling Description: The "content" is a multi-terabyte genomic sequence file (e.g., a BAM or CRAM file). The file is logically segmented into "streamlets," where each streamlet represents a specific chromosome or a large genomic region. The "multi-bitrate" encoding corresponds to different levels of data annotation and compression. For example, a "low-bitrate" streamlet might contain only the raw sequence alignment data, while a "high-bitrate" version includes full quality scores, metadata tags, and variant call information. A distributed genomic analysis pipeline can request the appropriate "bitrate" for each chromosomal streamlet based on the computational requirements of a specific analysis step, minimizing data transfer across the computing cluster.

  • Mermaid Diagram:

    erDiagram
        GENOMIC_FILE ||--o{ CHROMOSOME_STREAMLET : contains
        CHROMOSOME_STREAMLET {
            string chromosome_id
            int start_position
            int end_position
        }
        CHROMOSOME_STREAMLET ||--|{ ENCODED_VERSION : has
        ENCODED_VERSION {
            string version_id
            string bitrate "Low | Medium | High"
            string data_content
        }
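
  • Code Sketch (Illustrative): The pipeline's per-step "bitrate" selection can be sketched as a lookup from analysis step to the cheapest sufficient annotation level; the step names and level semantics are assumptions for illustration.

```python
# Annotation levels, cheapest first: "low" is alignments only, "medium" adds
# quality scores, "high" adds full metadata tags and variant calls (assumed).
LEVELS = ["low", "medium", "high"]

STEP_REQUIREMENTS = {
    "coverage_depth": "low",          # needs alignments only
    "base_recalibration": "medium",   # needs quality scores
    "variant_annotation": "high",     # needs full tags and calls
}

def required_level(step: str) -> str:
    """Cheapest encoding that satisfies the step; default to full fidelity."""
    return STEP_REQUIREMENTS.get(step, "high")
```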
    

Derivative 1.3: Cross-Domain Application

  • Variation (Aerospace): Adaptive Streaming of Satellite Telemetry Data.

  • Enabling Description: A satellite in orbit generates vast amounts of telemetry data. The satellite-to-ground link has variable bandwidth due to atmospheric conditions and orbital position. The onboard computer acts as the content server, segmenting telemetry data into time-indexed streamlets. "Multi-bitrate sets" are created by applying different data compression algorithms (e.g., lossless vs. lossy) or by prioritizing data types (e.g., a "high-bitrate" streamlet contains all sensor data, while a "low-bitrate" streamlet contains only critical health and safety parameters). Ground control stations act as the client, dynamically requesting the highest fidelity streamlet that the current link conditions can support, ensuring that critical data is always received.

  • Mermaid Diagram:

    flowchart LR
        subgraph Satellite
            A[Sensor Data] --> B[Streamlet Module];
            B --> C{Encoding Module};
            C --> D[Set: Critical_Data_Only];
            C --> E[Set: All_Data_Compressed];
            C --> F[Set: All_Data_Raw];
        end
        subgraph Ground_Link
            G((Variable Bandwidth))
        end
        subgraph Ground_Station
            H[Client Module] --> I{Select Streamlet};
            I --> H;
        end
        D -- Downlink --> G;
        E -- Downlink --> G;
        F -- Downlink --> G;
        G -- Uplink Request --> H;
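
  • Code Sketch (Illustrative): The ground station's selection logic can be sketched as picking the highest-fidelity set that fits the measured link rate; the set names mirror the diagram, while the per-streamlet sizes are illustrative assumptions.

```python
# (set name, bytes per one-second streamlet), highest fidelity first; sizes
# are illustrative, not values from the patent.
SETS = [
    ("all_data_raw",        2_000_000),
    ("all_data_compressed",   500_000),
    ("critical_data_only",     50_000),
]

def select_set(link_bytes_per_s: float) -> str:
    """Pick the richest set the current downlink can carry in real time."""
    for name, size in SETS:
        if size <= link_bytes_per_s:
            return name
    return SETS[-1][0]  # always fall back to critical health/safety data
```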
    
  • Variation (AgTech): Multi-Spectral Drone Imagery Streaming.

  • Enabling Description: An agricultural drone captures multi-spectral imagery of a field for crop health analysis. The drone has a limited-bandwidth wireless link to a farmer's tablet. The drone's onboard processor segments the flight path's imagery into GPS-tagged streamlets. The "multi-bitrate" sets correspond to different image resolutions and spectral bands. A "low-bitrate" streamlet might be a low-resolution RGB thumbnail. A "medium-bitrate" streamlet could be a high-resolution NDVI (Normalized Difference Vegetation Index) band only. A "high-bitrate" streamlet would contain all captured spectral bands at full resolution. The farmer's tablet can request a low-quality preview in real-time during the flight and then automatically request higher-quality streamlets for specific areas of concern once the drone has better signal strength.

  • Mermaid Diagram:

    sequenceDiagram
        participant Drone
        participant Tablet
        Drone->>Tablet: Streamlet available at GPS_Coord_X
        activate Tablet
        Tablet-->>Drone: GET /streamlet_X_low_res.jpg
        activate Drone
        Drone-->>Tablet: [JPEG Data]
        deactivate Drone
        Tablet->>Tablet: User taps on area of interest
        Tablet-->>Drone: GET /streamlet_X_NDVI_band.tiff
        activate Drone
        Drone-->>Tablet: [TIFF Data]
        deactivate Drone
        deactivate Tablet
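
  • Code Sketch (Illustrative): The tablet's progressive-fetch behavior can be sketched as two request builders: low-resolution previews for every streamlet during flight, full NDVI bands only for areas the user marks. The URL patterns follow the diagram and are hypothetical.

```python
def preview_url(streamlet_id: str) -> str:
    """Real-time low-resolution preview requested for every streamlet."""
    return f"/streamlet_{streamlet_id}_low_res.jpg"

def detail_urls(marked: set[str], all_ids: list[str]) -> list[str]:
    """Full NDVI-band requests, queued only for user-flagged streamlets."""
    return [f"/streamlet_{sid}_NDVI_band.tiff"
            for sid in all_ids if sid in marked]
```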
    
  • Variation (Consumer Electronics): Adaptive Firmware Updates for Smart Home Devices.

  • Enabling Description: A manufacturer needs to deliver a large firmware update to millions of IoT devices (e.g., smart speakers) with varying network quality. The firmware binary is segmented into logical feature blocks (e.g., kernel, voice recognition model, security patch) which act as streamlets. The "multi-bitrate" versions are created using delta compression against previous firmware versions. A "low-bitrate" streamlet is a small delta patch for a device that is only one version behind. A "high-bitrate" streamlet is the full, uncompressed feature block for a new device or one with corrupted firmware. The device's update agent requests the most efficient "bitrate" (patch or full block) for each segment that its local storage and network connection can handle, making the update process more robust and efficient.

  • Mermaid Diagram:

    graph TD
        subgraph Update_Server
            A[Firmware v2.0] --> B{Segmentation Module};
            B --> C[Kernel Block];
            B --> D[Voice Model Block];
            C --> E{Delta Engine};
            E --> F[Kernel_Patch_v1.0_to_v2.0];
            E --> G[Kernel_Full_v2.0];
        end
        subgraph Smart_Speaker
            H{Update Agent};
            H -- "Firmware is v1.0" --> I[Request Kernel Patch];
            H -- "Firmware is corrupt" --> J[Request Kernel Full];
        end
        F --> I;
        G --> J;
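
  • Code Sketch (Illustrative): The update agent's per-block decision can be sketched as below; the patch catalog structure and artifact naming are assumptions for illustration.

```python
# Delta patches available on the server, keyed by (block, from_ver, to_ver);
# contents are illustrative.
PATCH_CATALOG = {
    ("kernel", "1.0", "2.0"),
    ("voice_model", "1.0", "2.0"),
}

def choose_artifact(block: str, current: str, target: str, intact: bool) -> str:
    """Request a small delta patch when possible, else the full block."""
    if intact and (block, current, target) in PATCH_CATALOG:
        return f"{block}_patch_{current}_to_{target}"
    # Corrupted local copy, or no patch path from this version: full block.
    return f"{block}_full_{target}"
```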
    

Core Concept 2: Distributed Encoding via Master/Host Bidding

This concept involves a master module assigning encoding jobs to host modules based on which host can complete the task the fastest, as determined by a "job completion bid."

Derivative 2.1: Integration with Emerging Tech

  • Variation: AI-Driven Predictive Bidding and Resource Allocation.

  • Enabling Description: The master module is replaced with an AI-driven scheduler. Instead of hosts submitting simple bids based on current load, the AI scheduler builds a predictive model for each host's performance based on historical data. The model considers variables such as the codec being used (H.264 vs. AV1), the resolution of the source streamlet, the time of day (to account for network load in a distributed system), and the host's hardware (CPU vs. GPU vs. FPGA). The "bid" becomes a prediction from the AI model (e.g., "Predicted completion time for job XYZ on Host A is 1.2s with 95% confidence"). The master assigns jobs not to the host that is currently idle, but to the host predicted to have the lowest completion time for that specific job type, significantly improving overall system throughput.

  • Mermaid Diagram:

    sequenceDiagram
        participant MasterAI as Master (AI Scheduler)
        participant HostA as Host A (GPU)
        participant HostB as Host B (CPU)
        MasterAI->>MasterAI: Receive New Streamlet Encoding Job (AV1, 4K)
        MasterAI->>MasterAI: Query Performance Model for Host A
        MasterAI->>MasterAI: Query Performance Model for Host B
        Note right of MasterAI: Model(A, AV1, 4K) -> 0.8s<br>Model(B, AV1, 4K) -> 3.5s
        MasterAI->>HostA: Assign Job(AV1, 4K)
        HostA-->>MasterAI: Acknowledge
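
  • Code Sketch (Illustrative): The assignment step can be sketched with the performance model reduced to a lookup of historical mean encode times per (host, codec, resolution); the timings are illustrative, not measurements.

```python
# Predicted seconds per streamlet, learned from history (values illustrative).
PERFORMANCE_MODEL = {
    ("host_a_gpu", "av1", "4k"): 0.8,
    ("host_b_cpu", "av1", "4k"): 3.5,
    ("host_a_gpu", "h264", "4k"): 0.5,
    ("host_b_cpu", "h264", "4k"): 1.1,
}

def assign(codec: str, resolution: str, hosts: list[str]) -> str:
    """Assign the job to the host with the lowest predicted completion time;
    hosts with no model for this job type are treated as unavailable."""
    return min(hosts,
               key=lambda h: PERFORMANCE_MODEL.get((h, codec, resolution),
                                                   float("inf")))
```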
    
  • Variation: Blockchain-Based Decentralized Encoding Marketplace.

  • Enabling Description: The master-host system is decentralized using a blockchain and smart contracts. A content publisher submits an encoding job (e.g., a manifest of source streamlets) to a smart contract. Independent "host" nodes on the network monitor this contract. Hosts stake cryptocurrency to place "bids," which are commitments to encode a streamlet for a certain price and within a certain time. The smart contract automatically assigns the job to the winning bid. Upon completion, the host submits a cryptographic proof of the encoded output (e.g., a hash of the resulting streamlets) to the smart contract. The contract verifies the proof and automatically releases payment to the host. This creates a trustless, global, and scalable encoding network where compute resources can be sourced from anyone.

  • Mermaid Diagram:

    flowchart TD
        A[Publisher] --> B{Submits Job to Smart Contract};
        B --> C{Encoding Job Listed on Blockchain};
        D[Host 1] -- "Stake 10 tokens" --> E{Bids on Job};
        F[Host 2] -- "Stake 12 tokens" --> E;
        E --> G{Smart Contract Assigns Job to Host 1};
        G --> H[Host 1 Encodes Streamlet];
        H --> I{Submits Proof of Encoded Output};
        I --> J{Smart Contract Verifies};
        J --> K[Payment Released to Host 1];
        J -- On Failure --> L[Stake Slashed];
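
  • Code Sketch (Illustrative): The settlement step can be sketched as a toy escrow that compares the hash of the delivered output against the claimed hash, releasing payment on a match and slashing the stake otherwise. Real smart-contract mechanics (consensus, gas, on-chain verification) are out of scope; the class and token amounts are assumptions.

```python
import hashlib

class EncodingEscrow:
    """Toy stand-in for the smart contract's verify-and-pay step."""

    def __init__(self, payment: int, stake: int):
        self.payment, self.stake = payment, stake
        self.settled = None

    def settle(self, encoded_output: bytes, claimed_hash: str) -> int:
        """Return tokens transferred to the host: +payment on a valid proof,
        -stake (slashed) when the output does not match the claimed hash."""
        actual = hashlib.sha256(encoded_output).hexdigest()
        self.settled = actual == claimed_hash
        return self.payment if self.settled else -self.stake
```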
    

Derivative 2.2: The "Inverse" or Failure Mode

  • Variation: Graceful Degradation Bidding for Low-Power Mode.
  • Enabling Description: In a power-constrained environment (e.g., a battery-powered mobile encoding truck), hosts can submit "degraded-quality" bids. A host might bid with a standard completion time for a two-pass VBR encode, but also submit a secondary, much faster bid for a single-pass CBR encode or an audio-only transcode. The master module, aware of the system's overall power budget, can choose to accept these lower-quality bids to conserve energy, dynamically trading encoding quality for operational longevity. This allows the system to continue operating in a limited-functionality mode rather than shutting down completely when power is low.
  • Mermaid Diagram:
    graph TD
        subgraph Master
            A[Job Queue]
        end
        subgraph Host["Host (Power: 25%)"]
            B{Bidding Module}
            B --> C["Bid 1: 2-Pass VBR (5s)"];
            B --> D["Bid 2: 1-Pass CBR (1.5s)"];
            B --> E["Bid 3: Audio-Only (0.5s)"];
        end
        A --> F{Master Scheduler};
        C --> F;
        D --> F;
        E --> F;
        F -- "Power > 50%" --> G[Accept Bid 1];
        F -- "Power < 50%" --> H[Accept Bid 2 or 3];
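
  • Code Sketch (Illustrative): The master's power-aware choice among tiered bids can be sketched as below; the 50% threshold follows the diagram, while the 10% cutoff between Bid 2 and Bid 3 is an added assumption.

```python
# Bids ordered best quality first: (encode mode, completion seconds); values
# mirror the diagram.
BIDS = [("two_pass_vbr", 5.0), ("one_pass_cbr", 1.5), ("audio_only", 0.5)]

def accept_bid(power_pct: float) -> str:
    """Accept the highest-quality bid the power budget permits."""
    if power_pct > 50:
        return BIDS[0][0]   # full-quality two-pass encode
    if power_pct > 10:      # assumed cutoff between degraded tiers
        return BIDS[1][0]   # degraded single-pass video
    return BIDS[2][0]       # last-resort audio-only transcode
```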
    

Combination Prior Art Scenarios

  1. Combination with MPEG-DASH and SRv6 Networking: The distributed master-host encoding system is used to generate MPEG-DASH compliant segments and manifests (MPDs). The location of the encoded streamlets (segments) is described in the MPD not with a standard URL, but with an SRv6 (Segment Routing over IPv6) locator. The client player, operating on an SRv6-aware network, can use this information to route its requests for segments over a specific, policy-defined network path (e.g., a low-latency path for high-bitrate segments). The master module, when assigning jobs, could also consider the network topology and place encoded streamlets on hosts that are topologically closer to the expected viewers, with this information encoded directly into the MPD via custom SRv6 metadata tags.

  2. Combination with Kubernetes and Prometheus: The entire master-host architecture is deployed on a Kubernetes cluster. The encoding hosts are worker pods. The "bidding" mechanism is implemented by a custom Kubernetes scheduler that queries a Prometheus monitoring instance. Each pod exposes metrics like cpu_cycles_per_encode, gpu_temp_celsius, and current_job_queue_length. Instead of a pod actively "bidding," the custom scheduler pulls these metrics and calculates an "availability score" for each pod, assigning the next streamlet encoding job (defined as a Kubernetes Job object) to the pod with the highest score. This fully automates the resource management using industry-standard, open-source cloud-native tools.

  3. Combination with WebAssembly (Wasm): The encoding logic is compiled into a WebAssembly module. The master module distributes not just a job ticket, but the Wasm binary of the specific encoder (e.g., a highly optimized AV1 encoder) to the host. This allows the system to be language-agnostic and secure. A host machine simply needs a Wasm runtime. This decouples the encoding logic from the host's environment and allows the master to dynamically push updated or specialized encoders to the fleet of hosts on a per-job basis. For instance, a job for an animated movie could be sent with a Wasm encoder specifically tuned for animation, while a live sports event job could be sent with an encoder tuned for high-motion, real-world footage.
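
The "availability score" from scenario 2 can be sketched as follows; the scoring formula and weights are assumptions for illustration, with the metric names taken from the scenario as they would be scraped from each pod.

```python
def availability_score(metrics: dict) -> float:
    """Rank a pod from its scraped metrics: higher is better. Penalize queue
    depth, GPU temperatures above 70C, and slow per-encode CPU cost; the
    weights here are illustrative, not tuned values."""
    return (1.0 / (1 + metrics["current_job_queue_length"])
            - 0.01 * max(0, metrics["gpu_temp_celsius"] - 70)
            - 1e-9 * metrics["cpu_cycles_per_encode"])

def pick_pod(pods: dict[str, dict]) -> str:
    """The custom scheduler assigns the next Job to the best-scoring pod."""
    return max(pods, key=lambda p: availability_score(pods[p]))
```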
