Patent 11677798

Derivative works

Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.

The following document is a defensive disclosure based on an analysis of U.S. Patent No. 11,677,798. It is intended to establish prior art against potential future patent applications that claim incremental improvements upon the core concepts of the '798 patent.

Note on Patent Status: As of the date of this document (April 26, 2026), U.S. Patent 11,677,798 is understood to have expired, with its term concluding on April 28, 2025. This disclosure, therefore, serves to preempt attempts to re-patent minor variations of its underlying technology and place them firmly in the public domain.

Defensive Disclosure: Derivative Embodiments for Distributed Media Encoding

This disclosure details a series of derivative implementations, extensions, and applications of a system for segmenting media content into "streamlets" and encoding them at multiple bitrates using a distributed master-host architecture where job allocation is determined by a bidding mechanism.


Axis 1: Material & Component Substitution

Variation 1.1: QUIC-Based Streamlet Transport and Bid Protocol

Enabling Description: This variation replaces the HTTP/TCP-based transport layer for both streamlet delivery and the master-host bidding communication with the QUIC protocol. The master module opens a single QUIC connection to each host but utilizes QUIC's multiplexed, independent streams within that connection. An "encoding job" is broadcast over a control stream, and each host returns its "bid" on a dedicated response stream. This approach mitigates head-of-line blocking: a delayed bid packet from one host does not stall the master's reception of bids from other hosts. Furthermore, hosts use QUIC's 0-RTT (Zero Round Trip Time) connection resumption to re-establish communication with the master instantly after a transient network failure, improving the robustness of the encoding farm. The bid itself is a structured binary message (e.g., a Protocol Buffer) containing estimated clock cycles, GPU memory allocation, and a confidence score; latency-sensitive bids may alternatively be carried in unreliable QUIC DATAGRAM frames rather than streams.

sequenceDiagram
    participant Master as Master Module
    participant HostA as Host A (QUIC Endpoint)
    participant HostB as Host B (QUIC Endpoint)

    Master->>HostA: Announce Job (Control Stream 1)
    Master->>HostB: Announce Job (Control Stream 1)
    Note right of HostA: Calculates bid based on local state.
    HostA-->>Master: Submit Bid A (Datagram)
    Note right of HostB: Network delay on Bid B.
    HostB-->>Master: Submit Bid B (Datagram)
    Note left of Master: Master receives Bid A first, can<br/>act without waiting for Bid B.
    Master->>HostA: Assign Job (Unidirectional Stream 3)
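A minimal Python sketch of the bid payload described above. The field names, widths, and the plain `struct` packing are illustrative assumptions (the text suggests a Protocol Buffer; a fixed binary layout is used here to keep the sketch self-contained):

```python
import struct
from dataclasses import dataclass

@dataclass
class Bid:
    """Hypothetical bid frame; fields mirror the description above."""
    host_id: int            # 32-bit host identifier
    est_clock_cycles: int   # estimated cycles to encode the streamlet
    gpu_mem_mb: int         # GPU memory the host would allocate
    confidence: float       # self-reported confidence in the estimate

    _FMT = "!IQIf"  # network byte order: u32, u64, u32, f32 (20 bytes)

    def pack(self) -> bytes:
        """Serialize to a compact frame suitable for a QUIC DATAGRAM."""
        return struct.pack(self._FMT, self.host_id, self.est_clock_cycles,
                           self.gpu_mem_mb, self.confidence)

    @classmethod
    def unpack(cls, data: bytes) -> "Bid":
        """Master-side decoding of a received bid frame."""
        return cls(*struct.unpack(cls._FMT, data))

wire = Bid(host_id=7, est_clock_cycles=1_200_000, gpu_mem_mb=512,
           confidence=0.5).pack()
assert Bid.unpack(wire) == Bid(7, 1_200_000, 512, 0.5)
```

The fixed-width layout keeps each bid inside a single datagram, which matters when bids are sent unreliably.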

Variation 1.2: Heterogeneous Compute Fabric with Resource-Specific Bidding

Enabling Description: The "host computing modules" are not uniform CPUs but a heterogeneous fabric of computing resources, including FPGAs (Field-Programmable Gate Arrays) with dedicated video encoding IP cores, GPUs with CUDA/OpenCL acceleration, and specialized ASICs (Application-Specific Integrated Circuits). The "bid" from a host is a multi-part message that specifies which resource is being offered. For example, an FPGA-equipped host might bid with { "resource": "fpga_h265_encoder", "estimated_latency_ms": 5, "power_draw_watts": 15 }, while a GPU host bids with { "resource": "gpu_cuda_nvenc", "estimated_latency_ms": 12, "concurrent_streams": 4, "power_draw_watts": 75 }. The master module's assignment algorithm is extended to a multi-objective optimization problem, considering not just completion time but also power consumption, cost per encode, or the need to reserve certain resource types for higher-priority tasks.

graph TD
    subgraph Master Module
        A[Job Announcer] --> B{Bidding Evaluator};
        B --> C{Task Scheduler};
    end
    subgraph Host Fabric
        H1[Host 1: CPU Farm] -- Bid --> B;
        H2[Host 2: GPU Array] -- Bid --> B;
        H3[Host 3: FPGA Pool] -- Bid --> B;
        H4[Host 4: ASIC-based] -- Bid --> B;
    end
    C -- Assign Job to Optimal Resource --> H3;
    B -- Cost/Power/Latency Metrics --> C;
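One way to realize the multi-objective assignment is a weighted-sum scalarization over the bid fields. The weights and dictionary keys below are illustrative assumptions, reusing the example bids from the description:

```python
# Weighted-sum scoring over heterogeneous bids; lower score is better.
# Weight values are illustrative, not taken from the patent.
def score_bid(bid: dict, w_latency=1.0, w_power=0.2, w_cost=0.5) -> float:
    """Combine latency, power draw, and per-encode cost into one objective."""
    return (w_latency * bid.get("estimated_latency_ms", float("inf"))
            + w_power * bid.get("power_draw_watts", 0.0)
            + w_cost * bid.get("cost_per_encode", 0.0))

def choose_host(bids: dict) -> str:
    """Return the host whose bid minimizes the weighted objective."""
    return min(bids, key=lambda h: score_bid(bids[h]))

bids = {
    "fpga_host": {"resource": "fpga_h265_encoder",
                  "estimated_latency_ms": 5, "power_draw_watts": 15},
    "gpu_host":  {"resource": "gpu_cuda_nvenc",
                  "estimated_latency_ms": 12, "power_draw_watts": 75},
}
assert choose_host(bids) == "fpga_host"  # score 8.0 vs 27.0
```

A production scheduler would likely use Pareto filtering or constraint solving instead of a single weighted sum, but the bid-to-score mapping is the same.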

Axis 2: Operational Parameter Expansion

Variation 2.1: Nanoscale Real-Time Sensor Data Processing

Enabling Description: The invention is applied to the real-time processing of data streams from a high-frequency sensor array, such as in a particle accelerator or a high-resolution electron microscope. The "media content" is the raw sensor data stream (terabytes/second). A "streamlet" is a microsecond-duration data segment. The system "encodes" these streamlets into multiple "bitrates," which correspond to different data resolutions or analysis products: (1) a low-bitrate "thumbnail" stream representing a down-sampled or filtered version for real-time anomaly detection, (2) a medium-bitrate stream with key statistical features extracted for immediate scientific review, and (3) a high-bitrate, losslessly compressed version for archival. The "hosts" are nodes in a high-performance computing (HPC) cluster, and their "bids" reflect the availability of specific analysis libraries and processing cores required for each type of encoding.

flowchart LR
    subgraph Data Source
        Sensor[High-Frequency Sensor Array]
    end
    subgraph Processing System
        Capture[Capture Module] --> Streamletizer[Microsecond Streamletizer];
        Streamletizer --> Master[Master Module];
        Master -- Job Offers --> Host1[HPC Node 1];
        Master -- Job Offers --> Host2[HPC Node 2];
        Host1 -- Bid --> Master;
        Host2 -- Bid --> Master;
        Master -- Job Assignment --> Host1;
        Host1 --> Output1[Low-Res Anomaly Detection Stream];
        Host1 --> Output2[Medium-Res Feature Extraction];
        Host1 --> Output3[Lossless Archival Stream];
    end
    Sensor --> Capture;
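The three "bitrate" products can be sketched for one streamlet as follows. The concrete choices (stride-16 down-sampling, mean/stdev features, zlib for the lossless tier) are illustrative stand-ins for domain-specific pipelines:

```python
import statistics
import zlib

def encode_streamlet(samples: list) -> dict:
    """Produce the three analysis products for one microsecond streamlet
    of raw sensor samples, per the tiers described above."""
    raw = ",".join(f"{s:.6f}" for s in samples).encode()
    return {
        # Low bitrate: down-sampled "thumbnail" for real-time anomaly detection.
        "thumbnail": samples[::16],
        # Medium bitrate: extracted statistical features for immediate review.
        "features": {"mean": statistics.fmean(samples),
                     "stdev": statistics.pstdev(samples)},
        # High bitrate: losslessly compressed copy for archival.
        "archive": zlib.compress(raw),
    }

out = encode_streamlet([float(i % 7) for i in range(1024)])
assert len(out["thumbnail"]) == 64
assert zlib.decompress(out["archive"]).startswith(b"0.000000")
```

In the HPC deployment each tier would typically be a separate job, so that a host can bid on only the tiers its libraries support.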

Variation 2.2: Deep Space Communications with Latency-Aware Bidding

Enabling Description: This system is adapted for communications between a ground station (Master) and multiple Mars rovers or satellites (Hosts). The "content" is a high-resolution panoramic image or scientific data set. A "streamlet" is a packetized portion of this data. The "encoding" is a combination of compression and Forward Error Correction (FEC) at various levels of redundancy ("bitrates"). A more redundant encoding has a higher effective bitrate but is more resilient to transmission errors. The "bid" from a rover/host includes its current power level, available processing time, and, crucially, its next predicted communication window with Earth, including expected duration and signal-to-noise ratio. The Master module uses this to assign encoding tasks that can be completed and transmitted during the next available, and potentially brief, communication window.

sequenceDiagram
    participant GroundStation as Master (Earth)
    participant RoverA as Host A (Mars)
    participant RoverB as Host B (Mars)

    GroundStation->>RoverA: Request Bids for Data Chunk #123
    Note right of RoverA: Considers power, CPU idle, and next comms window (in 4 hours).
    RoverA-->>GroundStation: Bid: { chunk:123, time_to_encode: 120min, power: 80%, comms_in: 4h, snr_est: 12dB }
    GroundStation->>RoverB: Request Bids for Data Chunk #123
    Note right of RoverB: Considers power, CPU busy, and next comms window (in 1 hour).
    RoverB-->>GroundStation: Bid: { chunk:123, time_to_encode: 180min, power: 95%, comms_in: 1h, snr_est: 10dB }
    GroundStation->>RoverB: Assign Chunk #123 (Higher priority due to earlier comms window)
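The latency-aware selection in the sequence above can be sketched with a simple earliest-delivery model. The model (the chunk can only leave during the next window, and only if encoding has finished by then) is an illustrative assumption; real scheduling would account for recurring windows and transmission duration:

```python
# Earliest-delivery selection under comms-window constraints.
def delivery_time_min(bid: dict) -> float:
    """Earliest time (minutes from now) the encoded chunk reaches Earth."""
    encode, window = bid["time_to_encode_min"], bid["comms_in_min"]
    # If encoding finishes before the window opens, transmit at the window;
    # otherwise the chunk goes out once encoding completes.
    return max(encode, window)

bids = {
    "rover_a": {"time_to_encode_min": 120, "comms_in_min": 240},
    "rover_b": {"time_to_encode_min": 180, "comms_in_min": 60},
}
winner = min(bids, key=lambda r: delivery_time_min(bids[r]))
assert winner == "rover_b"  # 180 min beats 240 min
```

This reproduces the outcome in the diagram: the slower encoder wins because its communication window opens much earlier.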

Axis 3: Cross-Domain Application

Variation 3.1: Distributed Pharmaceutical Compound Screening

Enabling Description: The system is applied to computational drug discovery. The "media content" is the vast chemical space of potential drug compounds. A "streamlet" is a batch of candidate molecules. The system "encodes" each batch at different "bitrates," where each bitrate represents a different type of simulation: a low-cost/low-fidelity docking simulation (low bitrate), a more intensive molecular dynamics simulation (medium bitrate), and a full quantum mechanics simulation (high bitrate). The "hosts" are research institutions or cloud-based compute clusters that "bid" on processing a batch of molecules. The bid reflects the availability of licensed simulation software (e.g., GROMACS, NAMD) and specialized hardware, allowing the master to farm out different simulation types to the most suitable providers.

graph TD
    A[Chemical Library Database] --> B(Streamlet Module <br> Batches of Molecules);
    B --> C{Master Module};
    C -- "Job: Low-Fidelity Docking" --> D1[Host 1: University Cluster];
    C -- "Job: Molecular Dynamics" --> D2[Host 2: Cloud GPU Farm];
    C -- "Job: Quantum Simulation" --> D3[Host 3: Supercomputer Center];
    D1 -- "Bid: cost_per_sim $0.1" --> C;
    D2 -- "Bid: cost_per_sim $5" --> C;
    D3 -- "Bid: cost_per_sim $100" --> C;
    C -- Assignment --> D1;
    D1 --> E[Results: Hit List];
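Routing each simulation tier to the cheapest capable provider can be sketched as below. The capability labels and prices are illustrative, mirroring the diagram above:

```python
# Route each simulation type to the cheapest host that advertises it.
def assign(job_type: str, bids: list) -> dict:
    """Pick the lowest-cost bid among hosts capable of this simulation."""
    eligible = [b for b in bids if job_type in b["capabilities"]]
    return min(eligible, key=lambda b: b["cost_per_sim"])

bids = [
    {"host": "university_cluster", "capabilities": {"docking"},
     "cost_per_sim": 0.1},
    {"host": "cloud_gpu_farm", "capabilities": {"docking", "md"},
     "cost_per_sim": 5.0},
    {"host": "supercomputer", "capabilities": {"md", "qm"},
     "cost_per_sim": 100.0},
]
assert assign("docking", bids)["host"] == "university_cluster"
assert assign("qm", bids)["host"] == "supercomputer"
```

Licensed-software availability (e.g., GROMACS, NAMD) would enter as additional entries in each host's capability set.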

Variation 3.2: Adaptive Fidelity in Federated Machine Learning

Enabling Description: The system is used to manage training rounds in a federated learning architecture. The "media content" is the global model from the central server. A "streamlet" is a specific layer or subset of the model's weights. The system "encodes" this streamlet into different "bitrates" by applying varying levels of quantization (e.g., 32-bit float, 16-bit float, 8-bit integer). The "hosts" are the end-user devices (e.g., mobile phones). Each device "bids" on training a quantized model version based on its current battery level, network connection (Wi-Fi vs. cellular), and CPU load. The master server assigns the highest fidelity (largest bitrate) training jobs to powerful, well-connected devices, and lower fidelity jobs to constrained devices, thereby maximizing participation without unduly burdening any single device.

stateDiagram-v2
    direction LR
    state "Master Server" as Master {
        [*] --> Announce_Round
        Announce_Round --> Awaiting_Bids: Distribute Global Model
        Awaiting_Bids --> Assign_Jobs: Receive Bids from Devices
        Assign_Jobs --> Aggregating: Collect Local Updates
        Aggregating --> Announce_Round: Update Global Model
    }
    state "Edge Device (Host)" as Device {
        state "Evaluate Capacity" as Eval
        state "Send Bid" as Bid
        state "Local Training" as Train
        state "Send Update" as Update

        [*] --> Eval: New Round Starts
        Eval --> Bid: {battery:90%, net:wifi, cpu:10%}
        Bid --> Train: Receive Quantized Model (e.g., FP16)
        Train --> Update: Compute Weight Deltas
        Update --> [*]
    }
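The capacity-to-fidelity mapping on the master server can be sketched as follows. The thresholds are illustrative; the variation only requires that better-resourced devices receive higher-fidelity (larger "bitrate") models:

```python
# Map a device's self-reported capacity bid to a quantization level.
def pick_precision(bid: dict) -> str:
    """Assign fp32/fp16/int8 training jobs from a device's bid fields."""
    well_connected = bid["net"] == "wifi"
    if well_connected and bid["battery"] >= 80 and bid["cpu_load"] <= 20:
        return "fp32"   # powerful, idle, well-connected device
    if well_connected and bid["battery"] >= 50:
        return "fp16"   # capable but partially loaded
    return "int8"       # constrained device still participates

assert pick_precision({"battery": 90, "net": "wifi", "cpu_load": 10}) == "fp32"
assert pick_precision({"battery": 40, "net": "cell", "cpu_load": 60}) == "int8"
```

Because every device gets some quantization level rather than being excluded, participation is maximized without overloading constrained handsets.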

Axis 4: Integration with Emerging Tech

Variation 4.1: AI-Optimized Predictive Bidding and Content-Aware Assignment

Enabling Description: Each host module incorporates a lightweight, trained neural network that predicts encoding completion time. The model's inputs are not just static system parameters (CPU, RAM) but also features extracted from the incoming raw streamlet, such as motion vectors, scene complexity, and color histograms. The host's "bid" is the output of this predictive model. The master module uses a reinforcement learning (RL) agent (e.g., a multi-armed bandit algorithm) to assign jobs. It learns over time which hosts are consistently over- or under-bidding for certain content types (e.g., Host A is fast at animation, Host B excels at live-action sports) and adjusts its assignment strategy to maximize the global throughput of the encoding farm, going beyond simply picking the lowest bidder.

flowchart TD
    subgraph Host
        A[Raw Streamlet] --> B(Feature Extractor);
        B -- motion, complexity --> C{Predictive NN};
        D[System Metrics] -- cpu, mem --> C;
        C -- Predicted Time --> E[Bid];
    end
    subgraph Master
        F[RL Agent]
    end
    E --> F;
    F -- Assigns job based on bid AND learned host performance --> G[Assignment Decision];
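An epsilon-greedy bandit that corrects for each host's learned bid bias is one concrete instance of the RL agent described above. The bias-ratio formulation and all numbers are illustrative:

```python
import random

class BiddingBandit:
    """Epsilon-greedy assignment that learns each host's actual/bid ratio."""

    def __init__(self, hosts, epsilon=0.1):
        self.epsilon = epsilon
        self.bias = {h: 1.0 for h in hosts}    # learned actual/bid ratio
        self.counts = {h: 0 for h in hosts}

    def assign(self, bids: dict) -> str:
        """Pick a host by bias-corrected bid, exploring occasionally."""
        if random.random() < self.epsilon:
            return random.choice(list(bids))
        return min(bids, key=lambda h: bids[h] * self.bias[h])

    def update(self, host: str, bid: float, actual: float):
        """Running-average update of the host's over/under-bidding ratio."""
        self.counts[host] += 1
        self.bias[host] += (actual / bid - self.bias[host]) / self.counts[host]

bandit = BiddingBandit(["host_a", "host_b"], epsilon=0.0)
bandit.update("host_a", bid=10, actual=30)   # host_a badly under-bids
# Corrected estimates: host_a 10*3.0 = 30 vs host_b 20*1.0 = 20.
assert bandit.assign({"host_a": 10, "host_b": 20}) == "host_b"
```

Content-type features (animation vs. live action) would be folded in by keeping a separate bias estimate per host per content class.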

Variation 4.2: Blockchain-Verified Encoding for Royalty Distribution

Enabling Description: This variation integrates a permissioned blockchain to create an immutable audit trail for content processing. When a host completes an encoding job, it generates a cryptographic proof record that includes the hash of the source streamlet, the hash of the output encoded streamlet, its own digital ID, and the bitrate. This record is submitted to the master, which validates it and commits it as a transaction to a blockchain. Smart contracts on this blockchain can then automatically trigger micropayments for royalties. For example, a content owner is paid for the use of the source, the encoding host is paid for its work, and a distributor is paid when the final streamlet is requested by a user, all tracked transparently and verifiably on-chain.

sequenceDiagram
    participant Master
    participant Host
    participant Blockchain
    participant User

    Master->>Host: Assign Encoding Job (Streamlet S1)
    Host->>Host: Encode S1 -> S1_encoded
    Host->>Master: Return S1_encoded + Proof(hash(S1), hash(S1_encoded), HostID)
    Master->>Blockchain: Validate and Record Transaction
    User->>Master: Request S1_encoded
    Master->>User: Serve Streamlet
    Blockchain->>Blockchain: Smart Contract Executes (Triggers Micropayments)

Axis 5: The "Inverse" or Failure Mode

Variation 5.1: Graceful Degradation via Single-Source Bypass

Enabling Description: The master module continuously monitors the health of the host farm by analyzing the rate and quality of incoming bids. If the number of active hosts drops below a critical threshold (e.g., 50%) or if the average bid time exceeds a predefined SLO (Service Level Objective), the master triggers a "fail-safe" state. In this state, it signals the upstream capture module to stop segmenting the content and instead produce a single, low-bitrate, low-quality "failsafe" stream. This stream bypasses the master-host system entirely and is written directly to the content delivery network. This ensures uninterrupted service for end-users, albeit at a reduced quality, preventing a catastrophic failure of the complex distributed encoding system from causing a complete outage. The system automatically reverts to the distributed mode once host health is restored.

stateDiagram-v2
    [*] --> Normal_Operation
    Normal_Operation --> Failsafe_Mode: Host Count < Threshold OR Avg Bid Time > SLO
    Failsafe_Mode --> Normal_Operation: Host Health Restored

    note right of Normal_Operation
        Master assigns jobs to distributed hosts for multi-bitrate encoding.
    end note
    note right of Failsafe_Mode
        Master signals Capture to produce a single, low-bitrate stream, bypassing hosts.
    end note
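A health monitor driving the two-state machine above can be sketched as follows. The 50% host threshold and the bid-time SLO check mirror the triggers in the text; the class and field names are illustrative:

```python
class FailsafeMonitor:
    """Re-evaluates normal vs. failsafe mode from host-farm health samples."""

    def __init__(self, total_hosts: int, slo_ms: float,
                 host_threshold: float = 0.5):
        self.total = total_hosts          # size of the full host farm
        self.slo_ms = slo_ms              # max acceptable average bid time
        self.threshold = host_threshold   # fraction of hosts that must be alive
        self.mode = "normal"

    def observe(self, active_hosts: int, avg_bid_ms: float) -> str:
        """Apply the trigger: host count below threshold OR bid time over SLO."""
        degraded = (active_hosts < self.threshold * self.total
                    or avg_bid_ms > self.slo_ms)
        self.mode = "failsafe" if degraded else "normal"
        return self.mode

mon = FailsafeMonitor(total_hosts=10, slo_ms=200)
assert mon.observe(active_hosts=8, avg_bid_ms=50) == "normal"
assert mon.observe(active_hosts=4, avg_bid_ms=50) == "failsafe"
assert mon.observe(active_hosts=8, avg_bid_ms=50) == "normal"
```

A production version would add hysteresis or a minimum dwell time so the system does not flap between modes near the threshold.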

Combination Prior Art Scenarios

  1. Combination with MPEG-DASH and Kubernetes: The master module is implemented as a Kubernetes Operator. When a new live stream starts, a Custom Resource Definition (CRD) for a MediaStream is created. The operator watches for this CRD and dynamically scales a deployment of containerized encoding "hosts" (e.g., using FFmpeg). The operator's control loop acts as the master, querying the Kubernetes API for node metrics (CPU/GPU utilization) which serve as the basis for a "bid". The operator then creates Kubernetes Job objects to perform the encoding on the chosen nodes. The resulting streamlets and the generated MPEG-DASH manifest (.mpd) are written to a cloud storage bucket for distribution.

  2. Combination with WebRTC and AV1: A live stream is ingested from a user's web browser using the WebRTC getUserMedia API and transmitted to the capture module via a Secure Real-time Transport Protocol (SRTP) connection. The capture module segments this stream. The master-host system, utilizing hosts equipped with hardware AV1 encoders, processes the streamlets into a set of adaptive bitrate streams using the royalty-free AV1 codec. The bids from the hosts would specifically indicate their AV1 encoding profile capabilities. The resulting AV1 streamlets are packaged for delivery via HLS.

  3. Combination with Apache Kafka for Job Queuing: The master module does not communicate directly with hosts. Instead, it acts as a producer to an Apache Kafka topic named encoding-jobs. Each message contains a source streamlet. The host modules act as consumers in a consumer group on this topic. Kafka's partitioning mechanism handles the initial load balancing. A host "bids" by sending a message to a separate bids topic with its ID and capacity. The master reads this topic to maintain a real-time state of host availability and can send high-priority jobs to specific partitions assigned to the best-bidding hosts. This decouples the system, making it more resilient and scalable.
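The Kafka decoupling in scenario 3 can be illustrated with in-memory queues standing in for the two topics, so the pattern runs without a broker; with real Kafka these would be producer/consumer clients on the `encoding-jobs` and `bids` topics:

```python
import queue

# In-memory stand-ins for the two Kafka topics (illustrative only).
encoding_jobs = queue.Queue()   # master produces, hosts consume
bids = queue.Queue()            # hosts produce, master consumes

# Master produces one job message per source streamlet.
encoding_jobs.put({"streamlet_id": "s-001", "bitrates": [500, 1500, 3000]})

# A host consumes the job, then publishes its bid on the bids topic.
job = encoding_jobs.get()
bids.put({"host": "host-1", "streamlet_id": job["streamlet_id"],
          "capacity": 4})

# Master reads the bids topic to maintain real-time host availability.
availability = {}
while not bids.empty():
    b = bids.get()
    availability[b["host"]] = b["capacity"]

assert availability == {"host-1": 4}
```

The point of the indirection is that neither side addresses the other directly: hosts can join, crash, or rebalance partitions without the master tracking connections.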
