# Patent 11,470,138: Derivative Works

Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.
Based on my analysis of U.S. Patent 11,470,138, and in my capacity as a Senior Patent Strategist and Research Engineer, I have generated the following defensive disclosure. The purpose of this document is to publicly disclose foreseeable variations, extensions, and applications of the '138 patent's teachings, thereby placing them into the public domain and establishing them as prior art.
**Disclaimer:** The priority date for the patent family of US 11,470,138 is April 28, 2005, and the calculated expiration date is April 28, 2025. As of the current date (May 8, 2026), the patent itself has expired. This disclosure is provided for defensive purposes against any existing or future related patents in this family, or against similar inventions by third parties.
## Defensive Disclosure of Derivative Inventions
The core concepts disclosed in US 11,470,138—segmenting media into streamlets, creating multi-bitrate sets, and using a master/host architecture with a bidding mechanism for distributed encoding—can be extended in several ways. The following descriptions detail these extensions.
### **Axis 1: Material & Component Substitution**
This axis explores substituting the computational and infrastructural components of the described system.
#### **1.1. Heterogeneous Hardware Acceleration Encoding**
- Enabling Description: The "host computing modules" are not limited to general-purpose CPUs. The encoding farm is composed of a heterogeneous mix of hardware accelerators. Host modules can be comprised of Graphics Processing Units (GPUs) using CUDA or OpenCL for massively parallel H.264/H.265/AV1 encoding, Field-Programmable Gate Arrays (FPGAs) configured with specific video processing pipelines for ultra-low latency, or Application-Specific Integrated Circuits (ASICs) designed for high-density, power-efficient encoding. The "encoding job completion bid" from each host includes metadata specifying its hardware type (CPU, GPU, FPGA, ASIC), the specific codecs it can accelerate, and its current thermal and power-draw state. The master module's scheduler is specifically designed to parse these bids and match the encoding job (e.g., a request for an AV1 streamlet) to the most appropriate hardware type (an AV1-capable ASIC or GPU), rather than a less-efficient CPU-based host.
* **Mermaid Diagram:**
```mermaid
graph TD
    subgraph "Master Module"
        A[Job Queue: Streamlet_N]
        B[Scheduler]
    end
    subgraph "Host Farm"
        H1(CPU Host) -- "Bid: {type:CPU, load:75%, est_time:250ms}" --> B
        H2(GPU Host) -- "Bid: {type:GPU, load:40%, est_time:80ms}" --> B
        H3(FPGA Host) -- "Bid: {type:FPGA, latency:15ms, codec:H265}" --> B
    end
    A --> B
    B -- "Assign Job N to most efficient" --> H2
```
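As an illustration, a minimal Python sketch of how such a hardware-aware scheduler might rank bids; the `Bid` structure and `select_host` function are hypothetical names for this disclosure, not elements of the patent:

```python
from dataclasses import dataclass
from typing import Optional, List, Set

@dataclass
class Bid:
    host_id: str
    hw_type: str             # "CPU", "GPU", "FPGA", or "ASIC"
    codecs: Set[str]         # codecs this hardware can accelerate
    est_time_ms: float       # host's estimated completion time
    thermal_headroom: float  # 0.0 (saturated) .. 1.0 (cool)

def select_host(bids: List[Bid], codec: str) -> Optional[Bid]:
    """Prefer hosts that hardware-accelerate the requested codec;
    among those, pick the fastest bid with adequate thermal headroom."""
    accelerated = [b for b in bids
                   if codec in b.codecs and b.hw_type != "CPU"
                   and b.thermal_headroom > 0.2]
    candidates = accelerated or [b for b in bids if codec in b.codecs]
    return min(candidates, key=lambda b: b.est_time_ms, default=None)

bids = [
    Bid("cpu-01", "CPU", {"H264", "AV1"}, 250.0, 0.9),
    Bid("gpu-07", "GPU", {"H264", "H265", "AV1"}, 80.0, 0.6),
    Bid("fpga-02", "FPGA", {"H265"}, 15.0, 0.8),
]
print(select_host(bids, "AV1").host_id)  # -> gpu-07
```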
#### **1.2. RDMA-Based Inter-Node Communication**
* **Enabling Description:** The network communication between the master module and the host computing modules is implemented over a high-throughput, low-latency fabric like InfiniBand or RoCE (RDMA over Converged Ethernet) instead of a standard TCP/IP stack. The master module transfers raw streamlet data directly into the memory of the target host module using Remote Direct Memory Access (RDMA) write operations, bypassing the host's CPU and operating system kernel. This significantly reduces data transfer latency and CPU overhead on the hosts. The "bid" from a host can include its RDMA buffer availability and queue depth, allowing the master to select hosts that can ingest the next job with the least network overhead.
* **Mermaid Diagram:**
```mermaid
sequenceDiagram
    participant Master
    participant HostA
    participant HostB
    Master->>HostA: RDMA Write (Streamlet N)
    Note right of Master: Bypasses HostA kernel
    Master->>HostB: RDMA Write (Streamlet N+1)
    Note right of Master: Bypasses HostB kernel
    HostA-->>Master: RDMA Read (Bid with RDMA buffer status)
    HostB-->>Master: RDMA Read (Bid with RDMA buffer status)
    Master->>Master: Select best host based on RDMA stats
```
### **Axis 2: Operational Parameter Expansion**
This axis defines the technology's operation under extreme conditions or at different scales.
#### **2.1. Nanoscale Genomic Sequence Encoding**
* **Enabling Description:** The system is adapted for real-time genomic sequencing and analysis. The "media content" is a raw DNA or RNA sequence feed from a nanopore sequencer. The "streamlet module" segments this continuous data stream into chunks of a predetermined number of base pairs (e.g., 10,000 bp). The "encoding module" does not perform video compression but instead performs different forms of bioinformatics analysis at varying computational costs. For example, a "low bitrate" streamlet is a simple base-calling and quality score calculation, while a "high bitrate" streamlet involves full gene annotation, variant calling, and alignment to a reference genome. The master module assigns these analysis "encoding" jobs to a high-performance computing (HPC) cluster. A host's "bid" is based on its current queue of analysis tasks, available memory for holding the reference genome, and access speed to genomic databases.
* **Mermaid Diagram:**
```mermaid
flowchart LR
    A[Sequencer Data Stream] --> B{"Streamlet Module<br/>(10k base pairs)"}
    B --> C["Raw Streamlet<br/>(seq_1.fasta)"]
    C --> D{Master Scheduler}
    D -- "Job: Basic QC" --> E1["Host 1<br/>(Low-power CPU)"]
    D -- "Job: Full Annotation" --> E2["Host 2<br/>(GPU Bio-Cluster)"]
    E1 -- "Bid: {mem:1GB, time:1s}" --> D
    E2 -- "Bid: {mem:128GB, time:30s}" --> D
    E1 --> F1["Low-Bitrate Result<br/>(seq_1.qc)"]
    E2 --> F2["High-Bitrate Result<br/>(seq_1.vcf)"]
```
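A minimal Python sketch of the segmentation step, assuming the sequencer exposes base calls as an iterable; the `streamlets` helper is illustrative only:

```python
import random

def streamlets(base_stream, chunk_bp=10_000):
    """Segment a continuous base-call stream into fixed-size streamlets."""
    buf = []
    for base in base_stream:
        buf.append(base)
        if len(buf) == chunk_bp:
            yield "".join(buf)
            buf.clear()
    if buf:                      # flush the final partial chunk
        yield "".join(buf)

# Example: 25,000 simulated bases -> three streamlets (10k, 10k, 5k bp)
stream = (random.choice("ACGT") for _ in range(25_000))
for i, chunk in enumerate(streamlets(stream)):
    print(f"seq_{i}.fasta: {len(chunk)} bp")
```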
#### **2.2. Cryogenic Supercomputer-Based Encoding**
* **Enabling Description:** The encoding process is performed within a cryogenic computing environment to manage heat dissipation from an extremely dense array of processors. The "host modules" are superconducting processors or quantum annealing processors operating near absolute zero. The "encoding job completion bid" is critically dependent on the processor's quantum state or superconducting stability. The bid includes not just processing load but also qubit decoherence time estimates, thermal flux measurements from cryo-coolers, and the energy cost of performing the computation. The master module's algorithm is optimized not for speed alone, but for a multi-objective function that minimizes both processing time and the heat generated, thereby preserving the stability of the cryogenic environment. This is applicable to encoding streams with quantum-resistant encryption, or to physics simulations where the "video" is a visualization of the simulation data.
* **Mermaid Diagram:**
```mermaid
graph TD
    subgraph "Room Temperature"
        A[Master Module]
    end
    subgraph "Cryostat (-273°C)"
        H1(Superconducting<br/>Processor A)
        H2(Superconducting<br/>Processor B)
        S1[Thermal Sensor]
        S2[Qubit Stability Sensor]
    end
    A -- Encoding Job --> H1
    S1 -- Thermal Flux --> H1
    S2 -- Decoherence Rate --> H1
    H1 -- "Bid: {est_time: 1ns, heat_mW: 5, stable: 99.8%}" --> A
    H2 -- "Bid: {est_time: 1.2ns, heat_mW: 3, stable: 99.9%}" --> A
```
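A sketch of the multi-objective selection in Python; the `score` function and its weights are assumptions chosen for illustration, not values from the patent:

```python
def score(bid, w_time=0.5, w_heat=0.4, w_stability=0.1):
    """Lower is better: weighted blend of time, heat, and instability."""
    return (w_time * bid["est_time_ns"]
            + w_heat * bid["heat_mW"]
            + w_stability * (1.0 - bid["stable"]) * 100)

bids = [
    {"host": "sc-A", "est_time_ns": 1.0, "heat_mW": 5, "stable": 0.998},
    {"host": "sc-B", "est_time_ns": 1.2, "heat_mW": 3, "stable": 0.999},
]
best = min(bids, key=score)
print(best["host"])  # sc-B wins: slightly slower, but cooler and more stable
```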
### **Axis 3: Cross-Domain Application**
This axis describes applications of the core mechanism in unrelated industries.
#### **3.1. Aerospace: Distributed Satellite Imagery Processing**
* **Enabling Description:** A constellation of Earth observation satellites acts as a distributed processing network. A "master" satellite or ground station segments a large target area (e.g., the Amazon rainforest) into geographical tiles. Each tile is a "streamlet." The "encoding" process involves applying different analytical models to the raw sensor data for each tile: a "low bitrate" version could be simple cloud detection, "medium" could be vegetation indexing (NDVI), and "high" could be a machine-learning-based deforestation detection model. Nearby satellites in the constellation act as "hosts." A host satellite's "bid" is based on its available processing power, remaining battery life, current orientation (whether the target tile is in view), and the quality of its inter-satellite communication link. The master assigns processing jobs to the most suitable satellites to generate an analytical map in near-real-time.
* **Mermaid Diagram:**
```mermaid
graph TD
    subgraph "Ground Station / Master Satellite"
        M[Master Scheduler]
    end
    subgraph "LEO Constellation"
        Sat_A[Satellite A<br/>Processor: Idle<br/>Battery: 90%]
        Sat_B[Satellite B<br/>Processor: Busy<br/>Battery: 60%]
        Sat_C[Satellite C<br/>Processor: Idle<br/>Battery: 85%]
    end
    M -- "Job: Tile_734 Analysis" --> Sat_A
    M -- "Job: Tile_735 Analysis" --> Sat_C
    Sat_A -- "Bid: {CPU: 10%, Batt: 90%, Link: 1Gbps}" --> M
    Sat_B -- "Bid: {CPU: 95%, Batt: 60%, Link: 100Mbps}" --> M
    Sat_C -- "Bid: {CPU: 15%, Batt: 85%, Link: 1Gbps}" --> M
    Sat_A -.-> Sat_C
    Sat_A -.-> Sat_B
```
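One way the composite bid scoring might look in Python; the `satellite_score` function, its weights, and the visibility disqualification rule are illustrative assumptions:

```python
def satellite_score(bid):
    """Composite suitability: reward battery margin and idle CPU,
    weight in link quality; a tile out of view disqualifies the host."""
    if not bid["tile_in_view"]:
        return float("-inf")
    return (bid["battery_pct"] * 0.4
            + (100 - bid["cpu_pct"]) * 0.4
            + min(bid["link_mbps"], 1000) / 1000 * 100 * 0.2)

bids = [
    {"sat": "A", "cpu_pct": 10, "battery_pct": 90, "link_mbps": 1000, "tile_in_view": True},
    {"sat": "B", "cpu_pct": 95, "battery_pct": 60, "link_mbps": 100,  "tile_in_view": True},
    {"sat": "C", "cpu_pct": 15, "battery_pct": 85, "link_mbps": 1000, "tile_in_view": False},
]
print(max(bids, key=satellite_score)["sat"])  # -> A
```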
#### **3.2. Agriculture Technology: Swarm Robotics for Crop Analysis**
* **Enabling Description:** A swarm of autonomous drones is deployed over a large agricultural field. The "master" is a base station computer. The "media content" is the entire field, which the master divides into GPS-defined sectors ("streamlets"). Each drone is a "host." The "encoding" job is to fly to a sector, capture multi-spectral imagery, and process it locally to generate different "bitrates" of data: "low bitrate" is a simple hydration level, "medium bitrate" is a nitrogen deficiency map, and "high bitrate" is a machine learning model that identifies specific insect infestations. A drone's "bid" to process a sector is a function of its proximity to the sector, current battery level, onboard processing capacity, and the quality of its wireless link back to the base station. The master assigns sectors to drones to optimize for total field coverage time and data richness.
* **Mermaid Diagram:**
```mermaid
erDiagram
    DRONE ||--o{ SECTOR : "processes"
    MASTER_STATION ||--|{ SECTOR : "assigns"
    MASTER_STATION }|--|| DRONE : "receives bid from"
    DRONE {
        string DroneID
        int BatteryLevel
        string GpsPosition
        int CpuLoad
    }
    SECTOR {
        string SectorID
        string GpsCoordinates
        string Status
    }
    MASTER_STATION {
        string MasterID
        string JobQueue
    }
```
#### **3.3. Financial Services: Real-Time Risk Model Execution**
* **Enabling Description:** A financial institution uses this system for real-time portfolio risk analysis. The "media content" is the live market data feed (e.g., the OPRA options feed). The "streamlet module" divides the feed into 100-millisecond time slices. For each slice, the "encoding module" runs multiple risk models in parallel. A "low bitrate" model is a simple calculation of portfolio Delta. A "medium bitrate" model is a Value at Risk (VaR) calculation. A "high bitrate" model is a complex Monte Carlo simulation. The "hosts" are servers in a distributed computing grid. A host's "bid" reflects its current CPU load, memory availability for loading the portfolio state, and network latency to the source market data. The "master" (a risk management server) assigns the model execution jobs to the fastest available hosts to provide traders with sub-second risk updates.
* **Mermaid Diagram:**
```mermaid
sequenceDiagram
    participant MarketFeed
    participant MasterRiskServer
    participant AnalyticsHost1
    participant AnalyticsHost2
    MarketFeed->>MasterRiskServer: Market Data (t=0ms to 100ms)
    MasterRiskServer->>AnalyticsHost1: BID REQ
    MasterRiskServer->>AnalyticsHost2: BID REQ
    AnalyticsHost1-->>MasterRiskServer: BID RESP (Low Load)
    AnalyticsHost2-->>MasterRiskServer: BID RESP (High Load)
    MasterRiskServer->>AnalyticsHost1: RUN MonteCarlo(Data_t0)
    AnalyticsHost1-->>MasterRiskServer: Result_MonteCarlo
```
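A minimal Python sketch of the 100-millisecond slicing and per-tier dispatch; the tick data, model names, and `slice_feed` helper are hypothetical:

```python
from collections import defaultdict

def slice_feed(ticks, window_ms=100):
    """Group timestamped market ticks into fixed 100 ms 'streamlets'."""
    windows = defaultdict(list)
    for ts_ms, tick in ticks:
        windows[ts_ms // window_ms].append(tick)
    return [windows[k] for k in sorted(windows)]

MODEL_TIERS = {"low": "delta", "medium": "var_95", "high": "monte_carlo"}

ticks = [(3, "AAPL 191.2"), (57, "MSFT 402.1"), (130, "AAPL 191.3")]
for i, window in enumerate(slice_feed(ticks)):
    for tier, model in MODEL_TIERS.items():
        print(f"slice {i}: dispatch {model} ({tier}) on {len(window)} ticks")
```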
---
### **Axis 4: Integration with Emerging Tech**
This axis describes integrating the core patent concepts with modern technologies.
#### **4.1. AI-Driven Predictive Load Balancing**
* **Enabling Description:** The master module is enhanced with an AI-based predictive scheduling engine, replacing the simple reactive bidding system. It uses a trained machine learning model (e.g., a recurrent neural network or gradient boosting model) to predict the completion time for a given encoding job on each host. The model's features include not only the host's reported stats (the "bid") but also historical performance on similar jobs, the time of day (to account for external network traffic patterns), the complexity of the source streamlet (measured by spatial and temporal information metrics), and even the ambient temperature of the data center. The master no longer waits for bids but proactively assigns jobs to the host the model predicts will provide the optimal balance of speed, cost, and quality, even before the host becomes fully available.
* **Mermaid Diagram:**
```mermaid
graph TD
subgraph MasterModule
A[Job Queue] --> B{ML Scheduler}
C[Host Performance DB] --> B
end
subgraph Hosts
H1[Host 1]
H2[Host 2]
end
B -- "Predicts H1 is optimal" --> D[Assign Job to H1]
D --> H1
H1 -- "Performance Data" --> C
```
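A hedged sketch of the predictive scheduler using scikit-learn's `GradientBoostingRegressor` trained on synthetic data; the feature set and the `predict_best_host` helper are assumptions for illustration, not the patent's method:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Features per (job, host) pair: [host_load, streamlet_complexity,
# hour_of_day, ambient_temp_C]; target: observed completion time (ms).
rng = np.random.default_rng(0)
X = rng.uniform([0, 0, 0, 18], [1, 1, 24, 30], size=(500, 4))
y = 50 + 400 * X[:, 0] + 300 * X[:, 1] + rng.normal(0, 10, 500)

model = GradientBoostingRegressor().fit(X, y)

def predict_best_host(host_loads, complexity, hour, temp):
    """Pick the host with the lowest predicted completion time."""
    feats = [[load, complexity, hour, temp] for load in host_loads.values()]
    preds = model.predict(np.array(feats))
    return min(zip(host_loads, preds), key=lambda hp: hp[1])

hosts = {"host-1": 0.75, "host-2": 0.40}
print(predict_best_host(hosts, complexity=0.6, hour=14, temp=22))
```

In practice the training set would come from the host performance database shown in the diagram, with the model retrained as new job outcomes arrive.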
#### **4.2. IoT-Aware Edge Encoding**
* **Enabling Description:** The system is deployed on a distributed network of Internet of Things (IoT) edge devices (e.g., smart cameras, industrial sensors). These devices act as both capture points and "hosts". The "master" module runs in the cloud or on a regional gateway. An edge device captures a video "streamlet" and then submits a "bid" to encode it. The bid is enriched with IoT-specific metrics: its power status (mains or battery), local network quality (Wi-Fi, 5G, LoRaWAN), and on-device resource contention from other running applications. The master module can decide to have the device encode its own streamlet (self-hosting) if it has sufficient resources, or it can offload the raw streamlet to another, more powerful nearby edge device that submitted a better bid, optimizing for battery life and network data usage across the entire IoT deployment.
* **Mermaid Diagram:**
```mermaid
stateDiagram-v2
state "Idle" as Idle
state "Capturing" as Capturing
state "Bidding" as Bidding
state "Encoding" as Encoding
state "Offloading" as Offloading
[*] --> Idle
Idle --> Capturing: Start Event
Capturing --> Bidding: Streamlet Ready
Bidding --> Encoding: Master Assigns Self
Bidding --> Offloading: Master Assigns Peer
Encoding --> Idle: Complete
Offloading --> Idle: Complete
```
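A minimal Python sketch of the self-host-versus-offload decision; the health thresholds and field names are illustrative assumptions:

```python
def assign_encoder(capturer, peers, min_battery_pct=30, min_mbps=5):
    """Return the device that should encode the captured streamlet:
    the capturer itself if healthy, otherwise the best-bidding peer."""
    def healthy(d):
        return ((d["on_mains"] or d["battery_pct"] > min_battery_pct)
                and d["uplink_mbps"] >= min_mbps and d["cpu_pct"] < 80)

    if healthy(capturer):
        return capturer["id"]          # self-hosting: no raw transfer needed
    viable = [p for p in peers if healthy(p)]
    if not viable:
        return capturer["id"]          # degrade gracefully: encode locally
    return min(viable, key=lambda p: p["cpu_pct"])["id"]

cam = {"id": "cam-3", "on_mains": False, "battery_pct": 12,
       "uplink_mbps": 40, "cpu_pct": 20}
peers = [{"id": "gw-1", "on_mains": True, "battery_pct": 100,
          "uplink_mbps": 200, "cpu_pct": 35}]
print(assign_encoder(cam, peers))  # -> gw-1 (camera battery too low)
```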
#### **4.3. Blockchain-Verified Encoding for Royalty Reporting**
* **Enabling Description:** The system is augmented with a private blockchain for creating an immutable audit trail of the encoding process. When a host completes an encoding job for a streamlet, it generates a cryptographic proof of completion that includes a hash of the source streamlet, a hash of the output encoded streamlet, the parameters of the winning "bid," and the actual time taken. The master module validates this proof and records it as a transaction on a distributed ledger. This provides a transparent, tamper-proof record of which content was encoded, at what quality levels, and by which computational resource. This is used for automated royalty payments in a multi-publisher content system, where different publishers are billed for the computational resources used to encode their content, and royalties are paid to content owners based on the number of streamlets encoded.
* **Mermaid Diagram:**
```mermaid
flowchart TD
A[Start Encoding Job for 'Content A'] --> B{Host X Wins Bid};
B --> C[Host X Encodes Streamlet S1];
C --> D{"Generate Proof:<br/>- hash(S1_raw)<br/>- hash(S1_encoded)<br/>- bid_params<br/>- timestamp"};
D --> E[Master Validates Proof];
E --> F((Blockchain Ledger));
F -- "Transaction Recorded" --> G[Trigger Royalty Payment<br/>to Owner of 'Content A'];
```
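A minimal Python sketch of the proof record and an append-only hash chain standing in for the private blockchain; the `Ledger` class is a simplification for this disclosure, not a consensus implementation:

```python
import hashlib, json, time

def make_proof(raw: bytes, encoded: bytes, bid: dict) -> dict:
    """Build the per-streamlet encoding proof described above."""
    return {
        "hash_raw": hashlib.sha256(raw).hexdigest(),
        "hash_encoded": hashlib.sha256(encoded).hexdigest(),
        "bid_params": bid,
        "timestamp": time.time(),
    }

class Ledger:
    """Append-only hash chain: each block commits to its predecessor."""
    def __init__(self):
        self.blocks, self.prev = [], "0" * 64

    def record(self, proof: dict):
        payload = json.dumps(proof, sort_keys=True) + self.prev
        self.prev = hashlib.sha256(payload.encode()).hexdigest()
        self.blocks.append({"proof": proof, "block_hash": self.prev})

ledger = Ledger()
ledger.record(make_proof(b"raw S1", b"encoded S1", {"host": "X", "est_ms": 80}))
print(ledger.blocks[-1]["block_hash"])
```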
---
### **Axis 5: The "Inverse" or Failure Mode**
This axis describes designing the system for graceful degradation or safe failure.
#### **5.1. Graceful Degradation via "Best-Effort" Encoding Tier**
* **Enabling Description:** The system operates in a "best-effort" or "low-power" mode during periods of high load or energy constraints. In this mode, the master module alters its job assignment criteria. It will only solicit bids for and assign jobs to encode the lowest-bitrate streamlet (e.g., 240p, or an audio-only rendition). Higher-quality streamlets are temporarily skipped. If all hosts report high load (i.e., all "bids" exceed a predefined time threshold), the master module will not assign the encoding job at all. Instead, it instructs the origin server to serve a pre-generated "Sorry, this quality is temporarily unavailable" slate image or loop the previously successful streamlet. This ensures the core service remains available at a minimum quality level and prevents cascading failures from overloading the encoding farm.
* **Mermaid Diagram:**
```mermaid
graph TD
A{System Load High?} -- Yes --> B{Enter Low-Power Mode}
B --> C{Master only requests bids for 240p streamlet}
C --> D{All Bids > Threshold?}
D -- Yes --> E[Serve 'Unavailable' Slate]
D -- No --> F[Assign 240p job to best host]
A -- No --> G[Normal Operation]
```
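A sketch of the low-power assignment rule in Python, assuming a single bid-time threshold; the threshold value and field names are illustrative:

```python
LOAD_THRESHOLD_MS = 500   # bids above this mean the farm is saturated

def assign_low_power(bids, job):
    """In low-power mode, only the lowest-bitrate rendition is encoded;
    if every bid exceeds the threshold, fall back to the slate."""
    if job["bitrate"] != "lowest":
        return ("skip", None)                     # higher tiers are deferred
    viable = [b for b in bids if b["est_ms"] <= LOAD_THRESHOLD_MS]
    if not viable:
        return ("serve_slate", None)              # farm overloaded
    return ("assign", min(viable, key=lambda b: b["est_ms"])["host"])

bids = [{"host": "h1", "est_ms": 620}, {"host": "h2", "est_ms": 480}]
print(assign_low_power(bids, {"bitrate": "lowest"}))  # ('assign', 'h2')
```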
#### **5.2. Proactive Redundant Encoding for Fault Tolerance**
* **Enabling Description:** To ensure high availability for critical live events, the master module implements a policy of proactive redundancy. When a new streamlet job arrives, the master does not assign it to the single best host based on its bid. Instead, it assigns the *same encoding job* to the top *two* hosts that submitted acceptable bids. The hosts race to complete the job. The first host to return the encoded streamlet has its result written to the primary streamlet database. The result from the second host is discarded. This "inverse" of pure efficiency-based assignment protects against a single host failing or becoming unexpectedly slow during a critical encoding task. The system pays a power and resource cost for the redundant work, but gains a significant increase in resilience and a reduction in the "long tail" of high-latency streamlet delivery.
* **Mermaid Diagram:**
```mermaid
sequenceDiagram
participant Master
participant Host_A
participant Host_B
participant Database
Master->>Host_A: Encode Streamlet_X (Primary)
Master->>Host_B: Encode Streamlet_X (Redundant)
Host_A-->>Master: Result_X
Master->>Database: Write Result_X
Host_B-->>Master: Result_X
Master->>Master: Discard redundant result
```
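A minimal Python sketch of the two-host race using `concurrent.futures`; the simulated `encode` function stands in for a real host RPC:

```python
import concurrent.futures as cf
import random, time

def encode(host: str, streamlet: str) -> str:
    """Stand-in for a host's encode call; latency varies per host."""
    time.sleep(random.uniform(0.05, 0.2))
    return f"{streamlet} encoded by {host}"

def redundant_encode(top_two, streamlet):
    """Dispatch the same job to the two best bidders and keep whichever
    result arrives first; the straggler's output is discarded."""
    with cf.ThreadPoolExecutor(max_workers=2) as pool:
        futures = {pool.submit(encode, h, streamlet): h for h in top_two}
        done, pending = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
        for f in pending:
            f.cancel()   # best effort; a running straggler is simply ignored
        return next(iter(done)).result()

print(redundant_encode(["host-A", "host-B"], "Streamlet_X"))
```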
---
### **Combination Prior Art with Open-Source Standards**
#### **1. Combination with FFmpeg and Bash**
* **Enabling Description:** A content delivery system uses the master/host architecture of the '138 patent where the "hosts" are standard Linux servers running the open-source FFmpeg multimedia framework. The "master" is a central server running a scheduler script (e.g., in Bash or Python). When a new video file arrives, the master segments it into 2-second chunks. For each chunk, it polls the host servers. Each host runs a script that calculates its "bid" by checking the 1-minute load average from `/proc/loadavg` and available RAM from `free -m`. The master receives these bids and assigns the job to the host with the lowest load by issuing a remote `ssh` command. The command executes `ffmpeg` with specific parameters to create multiple output files at different bitrates (e.g., `ffmpeg -i chunk_1.ts -b:v 500k chunk_1_500k.ts -b:v 1M chunk_1_1M.ts`). This combines the patent's bidding logic with ubiquitous, open-source tools.
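A hedged Python sketch of this master loop; the bids are shown as already gathered (in practice the master would collect them over `ssh`), and host names like `encoder-1` are hypothetical:

```python
import subprocess

def local_bid() -> float:
    """A host's 'bid': its 1-minute load average from /proc/loadavg."""
    with open("/proc/loadavg") as f:
        return float(f.read().split()[0])

def dispatch(host: str, chunk: str) -> None:
    """Run the two-bitrate ffmpeg encode on the winning host over ssh."""
    cmd = (f"ffmpeg -i {chunk} "
           f"-b:v 500k {chunk.replace('.ts', '_500k.ts')} "
           f"-b:v 1M {chunk.replace('.ts', '_1M.ts')}")
    subprocess.run(["ssh", host, cmd], check=True)

# Master loop (sketch): compare each host's bid, assign to the least loaded.
bids = {"encoder-1": 0.42, "encoder-2": 1.87}
winner = min(bids, key=bids.get)
dispatch(winner, "chunk_1.ts")
```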
#### **2. Combination with Kubernetes and Prometheus**
* **Enabling Description:** The encoding system is deployed as a cloud-native application on a Kubernetes cluster. The encoding application is packaged as a Docker container. The "master module" is implemented as a Kubernetes Custom Controller. The "host modules" are pods that can be scaled horizontally. The open-source monitoring tool Prometheus scrapes metrics from all pods in real-time. The "encoding job completion bid" is not actively sent by the pods; instead, the master controller queries the Prometheus API to get the current CPU utilization, memory usage, and network I/O for each encoder pod. Based on these real-time metrics, the controller creates a Kubernetes `Job` and uses node affinity rules to schedule it onto the pod/node that is currently the most under-utilized, thereby achieving dynamic load balancing without a direct "bid" message from the hosts themselves.
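A sketch of the controller's metric query against the Prometheus HTTP API (`/api/v1/query` is the standard endpoint; the in-cluster URL, namespace, and metric selection are assumptions for illustration):

```python
import requests

PROM_URL = "http://prometheus.monitoring:9090"  # assumed in-cluster address

def pod_cpu_usage() -> dict:
    """Query Prometheus for per-pod CPU rate; this stands in for the 'bid'."""
    query = ('sum by (pod) (rate(container_cpu_usage_seconds_total'
             '{namespace="encoding"}[1m]))')
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": query})
    resp.raise_for_status()
    return {r["metric"]["pod"]: float(r["value"][1])
            for r in resp.json()["data"]["result"]}

usage = pod_cpu_usage()
target = min(usage, key=usage.get)       # least-utilized encoder pod
print(f"schedule next encoding Job near {target}")
```

The controller would then create the Kubernetes `Job` with affinity toward the node hosting that pod, which is what replaces an explicit bid message.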
#### **3. Combination with DASH.js and WebRTC Data Channels**
* **Enabling Description:** The client-side player, using the open-source DASH.js library, is modified to provide real-time network performance feedback to the content server infrastructure. In addition to standard HTTP requests for media segments, the client opens a WebRTC data channel back to a "feedback aggregator" service. The client sends frequent updates on its measured bandwidth, buffer health, and dropped frame count over this channel. This feedback data is used by the '138 patent's master encoder module as a key input for its job assignment logic. The master can prioritize encoding bitrates that are in high demand by currently active clients or deprioritize bitrates that clients are reporting they cannot sustain. This creates a feedback loop from an open-source client directly influencing the behavior of the proprietary server-side encoding farm.
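A minimal Python sketch of the feedback aggregator's reprioritization logic; the bitrate ladder and the 0.8 sustainability margin are illustrative assumptions:

```python
from collections import Counter

LADDER_KBPS = [240, 500, 1000, 2500, 5000]

def prioritize_bitrates(client_reports):
    """Rank ladder rungs by how many active clients can sustain them,
    so the master encodes in-demand renditions first."""
    demand = Counter()
    for report in client_reports:          # arrives over the data channel
        sustainable = [b for b in LADDER_KBPS
                       if b <= report["bandwidth_kbps"] * 0.8]
        if sustainable:
            demand[max(sustainable)] += 1
    return [b for b, _ in demand.most_common()]

reports = [{"bandwidth_kbps": 3200}, {"bandwidth_kbps": 3000},
           {"bandwidth_kbps": 700}]
print(prioritize_bitrates(reports))  # -> [2500, 1000, 500]
```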