Patent 11991234B2
Derivative works
Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.
Defensive Disclosure and Prior Art Generation for Multi-Bitrate Content Streaming Architectures
Publication Date: May 8, 2026
Subject Matter: This document discloses a series of derivative works, improvements, and alternative implementations related to the apparatus, system, and method described in U.S. Patent 11,991,234 B2. The intent of this disclosure is to place these concepts into the public domain, thereby establishing them as prior art for the purposes of patent examination.
Axis 1: Material & Component Substitution
Derivative 1.1: Specialized Hardware Acceleration for Streamlet Encoding
Enabling Description: The system's "host computing modules," originally conceived as general-purpose CPUs, are replaced with specialized hardware accelerators to perform the computationally intensive task of video encoding. Host modules are implemented as either Field-Programmable Gate Arrays (FPGAs) loaded with a specific encoding bitstream (e.g., a custom H.264 or AV1 encoder core) or Application-Specific Integrated Circuits (ASICs) designed solely for multi-bitrate video transcoding. The "muster module" communicates with these hardware hosts via a PCIe or network-based API. The "encoding job completion bid" is reformulated to be based on hardware-specific metrics. An FPGA-based host's bid includes the number of available logic slices, current die temperature, and projected power draw for the task. An ASIC-based host's bid would be based on the number of available parallel encoding pipelines. The muster module uses this data to allocate a raw streamlet to the host that can perform the multi-bitrate encoding with the lowest latency and/or power consumption.
Mermaid.js Diagram:
graph TD
    subgraph Muster Module
        A[Job Scheduler]
    end
    subgraph Host Pool
        B[FPGA Host 1]
        C[ASIC Host 1]
        D[FPGA Host 2]
    end
    A -- Assigns Job to Best Bidder --> C
    B -- "Bid (Logic Slices, Temp)" --> A
    C -- "Bid (Pipelines Free, Power)" --> A
    D -- "Bid (Logic Slices, Temp)" --> A
    RawStreamlet[Raw Video Streamlet] --> A
    C -- Encoded Set --> StreamletDB[(Streamlet Database)]
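Illustrative Code Sketch (Python): A minimal sketch of how the muster module might normalize FPGA and ASIC bids into one comparable score before assigning a raw streamlet. The field names, weights, and thermal threshold are assumptions for illustration, not part of the disclosed system.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class HardwareBid:
    """One host's bid for an encoding job. Field names are illustrative."""
    host_id: str
    kind: str                 # "fpga" or "asic"
    free_capacity: float      # fraction of logic slices or encoding pipelines available (0..1)
    die_temp_c: float         # current die temperature
    projected_power_w: float  # projected power draw for this job

def score(bid: HardwareBid, max_temp_c: float = 95.0) -> Optional[float]:
    """Lower is better. Returns None if the host cannot safely take the job."""
    if bid.free_capacity <= 0.0 or bid.die_temp_c >= max_temp_c:
        return None
    # Weighted mix of load, thermal headroom, and power; the weights are assumptions.
    return (1.0 - bid.free_capacity) * 0.5 \
        + (bid.die_temp_c / max_temp_c) * 0.3 \
        + (bid.projected_power_w / 100.0) * 0.2

def select_host(bids: List[HardwareBid]) -> Optional[str]:
    scored = [(score(b), b.host_id) for b in bids]
    scored = [(s, h) for s, h in scored if s is not None]
    return min(scored)[1] if scored else None

if __name__ == "__main__":
    bids = [
        HardwareBid("fpga-1", "fpga", free_capacity=0.4, die_temp_c=70, projected_power_w=25),
        HardwareBid("asic-1", "asic", free_capacity=0.8, die_temp_c=55, projected_power_w=12),
    ]
    print(select_host(bids))  # asic-1 wins: more free capacity, cooler, lower power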
Derivative 1.2: Decentralized P2P Job Allocation via Distributed Hash Table (DHT)
Enabling Description: The centralized "muster module" is eliminated in favor of a decentralized, peer-to-peer architecture for job allocation. The "host computing modules" form a P2P network, using a Kademlia-based Distributed Hash Table (DHT) to manage the encoding queue. When a new raw streamlet needs encoding, its descriptor is published to the DHT under a key derived from its time index. Each host periodically queries the DHT for new jobs. To implement the "bidding" process, each host also publishes its current capacity (CPU load, memory availability, supported codecs) to the DHT under its unique node ID. When a host decides to take a job, it places a "claim" on the job's DHT entry and begins processing. Other nodes see the claim and move to the next available job, preventing redundant work. This architecture provides high resilience, as the failure of any single node does not halt the system.
Mermaid.js Diagram:
sequenceDiagram
    participant Publisher
    participant DHT
    participant HostA as Host A (High Capacity)
    participant HostB as Host B (Low Capacity)
    Publisher->>DHT: PUBLISH(Job_123, RawStreamlet_Info)
    HostA->>DHT: PUT(HostA_ID, {cpu: 10, mem: 90})
    HostB->>DHT: PUT(HostB_ID, {cpu: 80, mem: 40})
    loop Job Discovery
        HostA->>DHT: GET(Job_123)
        HostB->>DHT: GET(Job_123)
    end
    HostA->>DHT: CLAIM(Job_123, HostA_ID)
    HostB->>DHT: GET(Job_123) // Sees job is claimed by HostA
    HostA->>Publisher: Fetch RawStreamlet
    HostA-->>HostA: Encode Streamlet Set
    HostA->>StreamletDB: Store Encoded Set
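Illustrative Code Sketch (Python): A minimal sketch of the claim mechanism, with an in-memory dict standing in for the Kademlia DHT's GET/PUT operations. The key scheme, claim TTL, and field names are assumptions; a real DHT offers no atomic compare-and-set, so claim races would need an additional resolution rule.

import time
from typing import Dict

# A plain dict stands in for the Kademlia DHT; reads and writes map to DHT lookups and stores.
dht: Dict[str, dict] = {}

def publish_job(job_key: str, streamlet_info: dict) -> None:
    """Publisher announces a raw streamlet under a key derived from its time index."""
    dht[job_key] = {"info": streamlet_info, "claimed_by": None, "claimed_at": None}

def try_claim(job_key: str, host_id: str, claim_ttl_s: float = 300.0) -> bool:
    """Claim a job if it is unclaimed, or if a previous claim expired (the claimant died mid-encode)."""
    entry = dht.get(job_key)
    if entry is None:
        return False
    now = time.time()
    expired = entry["claimed_at"] is not None and now - entry["claimed_at"] > claim_ttl_s
    if entry["claimed_by"] is None or expired:
        entry["claimed_by"] = host_id
        entry["claimed_at"] = now
        return True
    return False  # another host already claimed it; move on to the next job

publish_job("job:t=00123", {"cid": "raw-streamlet-123"})
print(try_claim("job:t=00123", "host-a"))  # True  -> host A encodes
print(try_claim("job:t=00123", "host-b"))  # False -> host B skips, avoiding redundant work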
Derivative 1.3: Serverless Function-Based Encoding Hosts
Enabling Description: The "host computing modules" are implemented as ephemeral, serverless functions (e.g., AWS Lambda, Azure Functions, Google Cloud Functions). The "muster module" acts as a function orchestrator. When a raw streamlet is ready for processing, the muster module invokes a new, parallel set of serverless functions—one for each target bitrate. For example, to create a set of 5 bitrates, 5 separate function instances are invoked simultaneously, each receiving the raw streamlet and a target bitrate as input. The concept of a "bid" is replaced by a pre-execution cost and time analysis based on the cloud provider's pricing model and the known complexity of the source content. The muster module's role shifts from load balancing existing servers to on-demand resource provisioning, providing immense scalability.
Mermaid.js Diagram:
flowchart TD
    A[Raw Streamlet Ready] --> B{Muster Module / Orchestrator}
    B --> C{Invoke Serverless Functions}
    C --> D["Function 1 (1080p)"]
    C --> E["Function 2 (720p)"]
    C --> F["Function 3 (480p)"]
    D --> G((S3 Bucket / Storage))
    E --> G
    F --> G
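Illustrative Code Sketch (Python): A minimal sketch of the fan-out pattern, one invocation per rung of the bitrate ladder. A thread pool and a stub invoke_encoder function stand in for asynchronous serverless invocations; the ladder values and output key layout are assumptions.

from concurrent.futures import ThreadPoolExecutor

# Stand-in for invoking one serverless function instance (e.g., an asynchronous
# cloud-function invoke); here it only returns a record describing the work.
def invoke_encoder(streamlet_id: str, target_bitrate_kbps: int) -> dict:
    return {"streamlet": streamlet_id, "bitrate_kbps": target_bitrate_kbps,
            "output_key": f"{streamlet_id}/{target_bitrate_kbps}.webm"}

# The orchestrator role of the muster module: one parallel invocation per rung
# of the bitrate ladder, all fed the same raw streamlet.
BITRATE_LADDER_KBPS = [6000, 3500, 1500, 800, 400]

def fan_out(streamlet_id: str) -> list:
    with ThreadPoolExecutor(max_workers=len(BITRATE_LADDER_KBPS)) as pool:
        futures = [pool.submit(invoke_encoder, streamlet_id, br) for br in BITRATE_LADDER_KBPS]
        return [f.result() for f in futures]

if __name__ == "__main__":
    for record in fan_out("streamlet-0042"):
        print(record)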
Axis 2: Operational Parameter Expansion
Derivative 2.1: Distributed Encoding for Cryo-Electron Microscopy (Cryo-EM) Data
Enabling Description: The patented method is applied to the processing of large-scale Cryo-EM datasets for structural biology. The "media content" is a stream of micrograph movies captured by the microscope. Each movie is a "streamlet." The different "bitrates" correspond to different levels of data processing and compression: (1) raw movie frames, (2) motion-corrected and dose-weighted frames, and (3) a highly compressed version with key particle projections identified for quick preview. The "host computing modules" are GPUs in a high-performance computing (HPC) cluster. The "muster module" assigns movie processing jobs to available GPUs, with the "bid" being determined by the GPU's available VRAM, CUDA core utilization, and temperature. This allows for near real-time feedback to the microscopist on the quality of their data acquisition.
Mermaid.js Diagram:
classDiagram
    class MusterModule {
        +assignCryoJob(job)
    }
    class GpuHost {
        -vram_available
        -cuda_utilization
        +generateBid()
        +processMovie(movie)
    }
    class CryoEmMovie {
        <<streamlet>>
        +rawData
    }
    class ProcessedDataSet {
        <<set_of_streamlets>>
        +rawFrames
        +motionCorrectedFrames
        +previewProjections
    }
    MusterModule --> "1..*" GpuHost : manages
    MusterModule --> CryoEmMovie : receives
    GpuHost --> ProcessedDataSet : produces
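Illustrative Code Sketch (Python): A minimal sketch of how a GPU host might gather the metrics named above for its bid, assuming the pynvml NVML bindings and an NVIDIA GPU are available. The returned key names are illustrative.

import pynvml  # NVIDIA Management Library bindings; requires an NVIDIA GPU and driver

def gpu_bid(gpu_index: int = 0) -> dict:
    """Collect the metrics the description uses for a Cryo-EM host's bid:
    free VRAM, current utilization, and temperature."""
    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_index)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        return {
            "vram_free_mib": mem.free // (1024 * 1024),
            "cuda_utilization_pct": util.gpu,
            "temperature_c": temp,
        }
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    print(gpu_bid())  # the muster module would rank hosts on these values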
Derivative 2.2: Geo-Distributed Encoding for Global Live Events
Enabling Description: For streaming a global live event (e.g., the Olympics), the system is deployed on an industrial scale across multiple geographic regions. A central "meta-muster" module receives the primary live feed. It then segments the feed into raw streamlets and distributes them to regional "muster modules" located in data centers in North America, Europe, and Asia. Each regional muster module manages a fleet of local "host" encoders. The "bidding" algorithm is expanded to include inter-regional data transfer costs and latency. A regional muster module may choose to have a streamlet encoded by a host in a different region if that region has significantly cheaper spot-instance computing prices, balancing encoding cost against the latency of transferring the encoded streamlet back. This creates a global, cost-aware, and latency-aware encoding grid.
Mermaid.js Diagram:
graph TD
    subgraph Global Control
        MetaMuster[Meta-Muster Module]
    end
    subgraph Region_NA
        MusterNA[NA Muster Module]
        HostNA1[Host NA-1]
        HostNA2[Host NA-2]
        MusterNA --> HostNA1 & HostNA2
    end
    subgraph Region_EU
        MusterEU[EU Muster Module]
        HostEU1[Host EU-1]
        HostEU2[Host EU-2]
        MusterEU --> HostEU1 & HostEU2
    end
    LiveFeed[Live Source Feed] --> MetaMuster
    MetaMuster -- Raw Streamlets --> MusterNA
    MetaMuster -- Raw Streamlets --> MusterEU
    MusterNA -- "Job (if cheaper)" --> HostEU1
    HostEU1 -- Encoded Streamlet --> CDN_EU[(EU CDN)]
    HostNA2 -- Encoded Streamlet --> CDN_NA[(NA CDN)]
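Illustrative Code Sketch (Python): A minimal sketch of the cost- and latency-aware allocation decision a regional muster module might make. All prices, the latency budget, and the conversion of latency into cost are assumptions for illustration.

from dataclasses import dataclass
from typing import List

@dataclass
class RegionalOffer:
    """One candidate host, possibly in another region. All figures are illustrative."""
    host_id: str
    region: str
    encode_cost_usd: float    # e.g., spot-instance price for this job
    transfer_cost_usd: float  # cost of moving the encoded streamlet back (0 if local)
    extra_latency_s: float    # added end-to-end delay from inter-region transfer

def choose_host(offers: List[RegionalOffer], max_latency_s: float,
                usd_per_second: float = 0.01) -> str:
    """Pick the cheapest offer whose added latency stays within the live-event budget.
    Latency is also priced in, so a marginally cheaper remote host does not always win."""
    feasible = [o for o in offers if o.extra_latency_s <= max_latency_s]
    if not feasible:
        raise RuntimeError("no host satisfies the latency budget")
    def total_cost(o: RegionalOffer) -> float:
        return o.encode_cost_usd + o.transfer_cost_usd + o.extra_latency_s * usd_per_second
    return min(feasible, key=total_cost).host_id

offers = [
    RegionalOffer("host-na-2", "NA", encode_cost_usd=0.040, transfer_cost_usd=0.000, extra_latency_s=0.0),
    RegionalOffer("host-eu-1", "EU", encode_cost_usd=0.015, transfer_cost_usd=0.008, extra_latency_s=1.5),
]
print(choose_host(offers, max_latency_s=4.0))  # host-eu-1: cheaper even after transfer and latency costs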
Axis 3: Cross-Domain Application
Derivative 3.1: Adaptive Fidelity Simulation for Aerospace Digital Twins
Enabling Description: The system is applied to managing real-time simulations for an aircraft's "digital twin." The "content" is a stream of live telemetry data from an operating aircraft (e.g., control surface positions, engine thrust, altitude). "Streamlets" are time-stamped packets of this data. The "set of streamlets" at varying "bitrates" are replaced by simulation results of varying fidelity. For each telemetry packet, the system runs multiple simulations: a low-fidelity linear model (fast, low-bitrate equivalent), a medium-fidelity physics-based model, and a high-fidelity computational fluid dynamics (CFD) simulation (slow, high-bitrate equivalent). The "muster module" is a simulation management server, and the "hosts" are nodes in an HPC cluster. The "bid" reflects a node's suitability for a particular simulation type (e.g., access to a GPU for CFD). This allows ground controllers to choose the required simulation accuracy based on the criticality of the flight phase, balancing response time and precision.
Mermaid.js Diagram:
stateDiagram-v2
    [*] --> Receiving_Telemetry
    Receiving_Telemetry --> Muster_Assigns_Jobs
    Muster_Assigns_Jobs --> Fork
    Fork --> Low_Fidelity_Sim: Host A (CPU)
    Fork --> Med_Fidelity_Sim: Host B (CPU)
    Fork --> High_Fidelity_Sim: Host C (GPU)
    Low_Fidelity_Sim --> Join
    Med_Fidelity_Sim --> Join
    High_Fidelity_Sim --> Join
    Join --> Results_Available
    Results_Available --> [*]
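Illustrative Code Sketch (Python): A minimal sketch of the Fork/Join pattern in the diagram, running stand-in low-, medium-, and high-fidelity models against one telemetry packet in parallel. The model formulas and packet fields are placeholders, not real flight dynamics.

from concurrent.futures import ThreadPoolExecutor

# Stand-ins for the three simulation fidelities run against one telemetry packet.
# A real deployment would dispatch these to HPC nodes; function names are illustrative.
def low_fidelity(packet: dict) -> dict:
    return {"fidelity": "low", "load_factor": 1.0 + 0.0100 * packet["elevator_deg"]}

def medium_fidelity(packet: dict) -> dict:
    return {"fidelity": "medium", "load_factor": 1.0 + 0.0120 * packet["elevator_deg"]}

def high_fidelity_cfd(packet: dict) -> dict:
    return {"fidelity": "high", "load_factor": 1.0 + 0.0118 * packet["elevator_deg"]}

def fork_join(packet: dict) -> list:
    """Fork all fidelities in parallel, join once every result is back (the diagram's Fork/Join)."""
    sims = [low_fidelity, medium_fidelity, high_fidelity_cfd]
    with ThreadPoolExecutor(max_workers=len(sims)) as pool:
        return [f.result() for f in [pool.submit(s, packet) for s in sims]]

print(fork_join({"timestamp": 1715200000.0, "elevator_deg": 2.5, "thrust_kn": 110.0}))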
Derivative 3.2: Real-Time Pest Detection in Agriculture via Drone Swarms
Enabling Description: The invention is repurposed for precision agriculture. A swarm of autonomous drones equipped with hyperspectral cameras surveys a field. The "content" is the continuous stream of hyperspectral image data. A "streamlet" is a single image tile captured by a drone. The "hosts" are the onboard embedded GPUs (e.g., NVIDIA Jetson) on each drone. A "muster module," running on a ground station, assigns image analysis tasks. The "set of streamlets" represents different analysis outputs for the same tile: (1) a low-bitrate result indicating only the presence/absence of anomalies, (2) a medium-bitrate result classifying the anomaly (e.g., nutrient deficiency vs. pest infestation), and (3) a high-bitrate result that includes a precise bounding box around each identified pest. The "bid" from a drone includes its remaining battery life, current location, and GPU load, allowing the muster module to efficiently allocate analysis tasks across the swarm in real time.
Mermaid.js Diagram:
sequenceDiagram
    participant GroundStation as Muster Module
    participant Drone1 as Host 1
    participant Drone2 as Host 2
    Drone1->>GroundStation: Bid(Battery: 80%, GPU: 20%)
    Drone2->>GroundStation: Bid(Battery: 50%, GPU: 30%)
    GroundStation->>Drone1: Assign Tile_42 Analysis Job
    Drone1->>Drone1: Capture Hyperspectral Tile_42
    Drone1->>Drone1: Perform Multi-Level Analysis
    Drone1->>GroundStation: Result Set for Tile_42 {Low, Med, High}
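Illustrative Code Sketch (Python): A minimal sketch of how the ground-station muster module might score drone bids from battery level, GPU load, and distance to the image tile. The weights and the 20% battery floor are assumptions.

import math
from typing import List, Optional, Tuple

def drone_bid_score(battery_pct: float, gpu_load_pct: float,
                    drone_pos: Tuple[float, float], tile_pos: Tuple[float, float]) -> Optional[float]:
    """Higher is better. Weights and the 20% battery floor are illustrative choices."""
    if battery_pct < 20.0:
        return None  # keep enough reserve to return to the ground station
    distance = math.dist(drone_pos, tile_pos)  # metres between drone and image tile
    return 0.5 * battery_pct + 0.3 * (100.0 - gpu_load_pct) - 0.2 * distance

def assign_tile(bids: List[dict], tile_pos: Tuple[float, float]) -> Optional[str]:
    scored = [(drone_bid_score(b["battery"], b["gpu"], b["pos"], tile_pos), b["id"]) for b in bids]
    scored = [(s, i) for s, i in scored if s is not None]
    return max(scored)[1] if scored else None

bids = [
    {"id": "drone-1", "battery": 80.0, "gpu": 20.0, "pos": (10.0, 40.0)},
    {"id": "drone-2", "battery": 50.0, "gpu": 30.0, "pos": (5.0, 42.0)},
]
print(assign_tile(bids, tile_pos=(12.0, 41.0)))  # drone-1: more battery, lower GPU load, nearby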
Axis 4: Integration with Emerging Tech
Derivative 4.1: AI-Based Predictive Encoding and Quality Optimization
Enabling Description: The system is integrated with an AI model to predictively manage encoding resources. The "muster module" includes a Long Short-Term Memory (LSTM) neural network that analyzes real-time and historical viewership data. It predicts which content is likely to be viewed and at what quality levels, based on time of day, user geography, and content type. It then proactively assigns encoding jobs for this content before it is even requested. Furthermore, the bitrate selection for each "set of streamlets" is no longer fixed. A separate computer vision model (e.g., a lightweight CNN) analyzes each raw streamlet for its visual complexity. For low-complexity scenes (e.g., talking heads), it instructs the hosts to encode at lower bitrates for the same perceived quality, while for high-action scenes, it allocates higher bitrates, thus optimizing the bitrate ladder on a per-scene basis. The "bid" now includes a host's ability to run these specific AI models.
Mermaid.js Diagram:
flowchart LR
    subgraph Muster Module
        A[Viewership LSTM] --> B{Resource Planner}
        C[Scene Complexity CNN] --> B
    end
    subgraph Host Pool
        D[Host 1]
        E[Host 2]
    end
    F[Live Data] --> A
    G[Raw Streamlet] --> C
    B -- Job + Dynamic Bitrate Ladder --> D
    D -- Bid --> B
    E -- Bid --> B
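Illustrative Code Sketch (Python): A minimal sketch of per-scene bitrate-ladder adjustment driven by a complexity score from the scene-analysis model. The scaling range is an assumption; the point is that low-complexity scenes receive a cheaper ladder and high-action scenes a richer one.

from typing import List

def per_scene_ladder(complexity: float, base_ladder_kbps: List[int]) -> List[int]:
    """Scale the bitrate ladder by a scene-complexity score in [0, 1] produced by the
    complexity model. The 0.6..1.4 scaling range is an illustrative assumption."""
    scale = 0.6 + 0.8 * max(0.0, min(1.0, complexity))
    return [int(b * scale) for b in base_ladder_kbps]

BASE_LADDER_KBPS = [6000, 3500, 1500, 800]
print(per_scene_ladder(0.1, BASE_LADDER_KBPS))  # talking heads -> cheaper ladder
print(per_scene_ladder(0.9, BASE_LADDER_KBPS))  # high-action scene -> richer ladder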
Derivative 4.2: Blockchain-Verified Content Provenance and Royalty Distribution
Enabling Description: The system is integrated with a blockchain to create an immutable and auditable record of the content encoding and distribution process. The "muster module" is a smart contract on a permissioned blockchain (e.g., Hyperledger Fabric). A content owner submits a raw streamlet's hash to the smart contract to initiate an encoding job. "Hosts" are network participants who must stake collateral to participate. They "bid" by calling a function on the smart contract. The smart contract logic, based on the bid data, assigns the job to a host. Upon completion, the host submits the hashes of the newly encoded streamlet set back to the smart contract. This transaction serves as proof of work. The smart contract can then automatically trigger royalty payments to the content owner and a service fee to the encoding host. This provides a trustless environment for a decentralized content delivery network (dCDN).
Mermaid.js Diagram:
sequenceDiagram
    actor Owner
    participant SmartContract as Muster Module
    participant Host
    participant Ledger
    Owner->>SmartContract: submitJob(rawStreamletHash)
    Host->>SmartContract: submitBid(performanceMetrics, stake)
    SmartContract->>SmartContract: selectWinner()
    SmartContract->>Host: assignJob(rawStreamletHash)
    Host-->>Host: Encode Streamlet Set
    Host->>SmartContract: completeJob(encodedHashes)
    SmartContract->>Ledger: Record Transaction (Provenance)
    SmartContract->>Owner: Transfer Royalty
    SmartContract->>Host: Transfer Fee, Return Stake
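Illustrative Code Sketch (Python): An in-memory sketch of the contract's state machine (bid, winner selection, completion, payouts). An actual deployment would be chaincode on the permissioned ledger; the method names, lowest-load selection rule, and payout figures are assumptions.

class EncodingJobContract:
    """In-memory stand-in for the smart-contract muster module."""

    def __init__(self, owner: str, raw_hash: str, royalty: float, fee: float):
        self.owner, self.raw_hash = owner, raw_hash
        self.royalty, self.fee = royalty, fee
        self.bids, self.winner, self.ledger = {}, None, []

    def submit_bid(self, host: str, load_pct: float, stake: float) -> None:
        self.bids[host] = {"load": load_pct, "stake": stake}

    def select_winner(self) -> str:
        # Lowest reported load wins; the stake is held until the job completes.
        self.winner = min(self.bids, key=lambda h: self.bids[h]["load"])
        return self.winner

    def complete_job(self, host: str, encoded_hashes: list) -> None:
        assert host == self.winner, "only the assigned host may complete the job"
        self.ledger.append({"raw": self.raw_hash, "encoded": encoded_hashes})  # provenance record
        self.ledger.append({"pay_royalty_to": self.owner, "amount": self.royalty})
        self.ledger.append({"pay_fee_to": host, "amount": self.fee,
                            "return_stake": self.bids[host]["stake"]})

c = EncodingJobContract("owner-1", "sha256:raw-streamlet-hash", royalty=0.90, fee=0.10)
c.submit_bid("host-a", load_pct=15, stake=5.0)
c.submit_bid("host-b", load_pct=60, stake=5.0)
c.select_winner()                                         # host-a
c.complete_job("host-a", ["sha256:enc-1080p", "sha256:enc-720p"])
print(c.ledger)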
Axis 5: The "Inverse" or Failure Mode
Derivative 5.1: Graceful Degradation via Distributed Job Queue
Enabling Description: This version is designed to fail safely if the central "muster module" becomes unavailable. In normal operation, the muster module pushes jobs to hosts. If hosts fail to receive a heartbeat from the muster module for a predefined period, they automatically switch to a "decentralized fallback" mode. In this mode, all incoming raw streamlets are placed into a distributed, redundant message queue (e.g., RabbitMQ cluster, Redis Pub/Sub). Each host acts as an independent worker, pulling the next available job from the queue. To avoid job collision, a locking mechanism on the queue is used. While less efficient at load balancing than the "bidding" system, this ensures the encoding pipeline remains operational, thus providing high availability. When the muster module comes back online, it signals the hosts to revert to the centrally managed mode.
Mermaid.js Diagram:
stateDiagram-v2
    state "Centralized Mode" as C
    state "Decentralized Mode" as D
    [*] --> C
    C --> D: Muster Module Heartbeat Timeout
    D --> C: Muster Module Recovery Signal
    state C {
        direction LR
        Muster --> Host1
        Muster --> Host2
        Muster: Pushes jobs based on bids
    }
    state D {
        direction LR
        Host1 --> Queue
        Host2 --> Queue
        Queue: Hosts pull jobs from shared queue
    }
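Illustrative Code Sketch (Python): A minimal sketch of the host-side mode switch driven by muster-module heartbeats. The timeout value is an assumption, and the short values in the usage lines exist only so the demonstration runs quickly.

import time

class HostModeController:
    """Tracks muster-module heartbeats and flips between centralized and decentralized modes."""

    def __init__(self, heartbeat_timeout_s: float = 10.0):
        self.timeout = heartbeat_timeout_s
        self.last_heartbeat = time.monotonic()
        self.mode = "centralized"

    def on_heartbeat(self) -> None:
        self.last_heartbeat = time.monotonic()
        if self.mode == "decentralized":
            self.mode = "centralized"    # recovery signal: revert to push-based assignment

    def current_mode(self) -> str:
        if time.monotonic() - self.last_heartbeat > self.timeout:
            self.mode = "decentralized"  # fall back to pulling jobs from the shared queue
        return self.mode

ctrl = HostModeController(heartbeat_timeout_s=0.05)
print(ctrl.current_mode())  # centralized
time.sleep(0.1)             # no heartbeat arrives
print(ctrl.current_mode())  # decentralized
ctrl.on_heartbeat()
print(ctrl.current_mode())  # centralized again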
Combination Prior Art with Open-Source Standards
Combination with Kubernetes and FFmpeg: The patented system's logic is implemented as a custom Kubernetes scheduler. The "host computing modules" are pods running the open-source FFmpeg library within Docker containers. The custom scheduler, replacing the default kube-scheduler, subscribes to the Kubernetes API server to watch for new encodingJob objects. Instead of using standard pod affinity or resource requests for scheduling, it implements the "bidding" logic. It queries the Metrics Server API to get real-time CPU and memory usage for each Node (host). It then annotates the Job object with the Node that provides the "best bid" (lowest load), ensuring the pod is scheduled there. This combines the patent's core scheduling idea with ubiquitous, open-source cloud-native infrastructure.
Combination with Apache Kafka and WebM/AV1: The architecture is built on open-source messaging and codecs. Raw streamlets are produced to an input-streamlets topic in an Apache Kafka cluster. The "host computing modules" are a Kafka Streams application. The "muster module" is a separate control process that monitors the consumer lag and throughput of each application instance via JMX metrics. Based on these performance metrics (the "bid"), the muster module uses the Kafka Admin API to rebalance the consumer group, assigning more topic partitions to the faster instances. The Kafka Streams application consumes the raw streamlets, uses an embedded libaom library to encode them into the open-source AV1 codec within a WebM container, and produces the results to an output-streamlets topic.
Combination with IPFS and Secure Reliable Transport (SRT): A decentralized live streaming system is created. A live video feed is ingested using the open-source SRT protocol for its low-latency, reliable transmission over noisy networks. An ingest server segments this SRT feed into raw "streamlets" and adds them to the InterPlanetary File System (IPFS), announcing the resulting Content Identifier (CID) on a public message bus. The "muster module" is a decentralized application (dApp) that reads these CIDs. "Hosts" are independent nodes that "bid" for jobs. The dApp assigns the job to a host, which then retrieves the raw streamlet from IPFS, encodes it into multiple bitrates, and pins the resulting set of streamlets to IPFS. The manifest file for playback is updated with the new CIDs, allowing for a fully decentralized, censorship-resistant streaming platform built on open standards.
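Illustrative Code Sketch (Python): A minimal sketch of the "best bid" node selection from the Kubernetes combination above. Node metrics are supplied as a plain dict rather than fetched from the Metrics Server, and the load weighting is an assumption; a real custom scheduler would then bind or annotate the encodingJob object with the chosen node via the Kubernetes API.

from typing import Dict, Optional

def best_bid_node(node_metrics: Dict[str, dict]) -> Optional[str]:
    """Return the node with the lowest combined CPU/memory load (the 'best bid')."""
    def load(m: dict) -> float:
        # Equal weighting of CPU and memory pressure; the weights are illustrative.
        return 0.5 * m["cpu_pct"] + 0.5 * m["mem_pct"]
    return min(node_metrics, key=lambda n: load(node_metrics[n])) if node_metrics else None

metrics = {
    "node-a": {"cpu_pct": 75.0, "mem_pct": 60.0},
    "node-b": {"cpu_pct": 30.0, "mem_pct": 45.0},
    "node-c": {"cpu_pct": 55.0, "mem_pct": 90.0},
}
print(best_bid_node(metrics))  # node-b offers the lowest load, so its pod gets the encoding job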