Patent 11470138B2
Derivative works
Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.
Defensive Disclosure and Prior Art Derivations for U.S. Patent 11,470,138 B2
Publication Date: May 8, 2026
Disclosed by: [Your Name/Entity], acting as a Senior Patent Strategist and Research Engineer.
Subject Matter: This document discloses novel variations, extensions, and applications of the adaptive multi-bitrate streaming method described in U.S. Patent 11,470,138 B2. The intent of this publication is to place these concepts into the public domain to be considered prior art for any future patent applications.
Derivations Based on Independent Claim 1
The core method involves segmenting media, creating multi-bitrate versions of each segment ("streamlets"), and enabling a client to adaptively request these streamlets over HTTP based on network conditions. The following are derivative implementations and applications of this core method.
1. Derivations via Material & Component Substitution
1.1. FPGA-Based Real-Time Streamlet Encoder
Enabling Description: This variation replaces the general-purpose CPU-based "host computing modules" (as described in the patent's embodiment in FIG. 5a) with Field-Programmable Gate Array (FPGA) co-processors. A master control unit receives the uncompressed video data and segments it into time-indexed chunks. Each chunk is then dispatched to a dedicated FPGA-based encoding pipeline. Each FPGA is configured with multiple parallel Digital Signal Processing (DSP) blocks, with each block hard-coded to output a specific bitrate (e.g., 500kbps, 1500kbps, 4000kbps) and codec (e.g., AV1, HEVC). The parallel nature of the FPGA allows a single chip to generate a complete "set of streamlets" for a given time index in a deterministic, low-latency manner, significantly outperforming software-based encoding on general-purpose servers for live, high-density applications like multi-angle sports broadcasting. The master module's "bidding" system is replaced by a deterministic job scheduler that routes raw video frames to the next available FPGA pipeline.
Mermaid Diagram:
graph TD
    A[Live Video Feed - Uncompressed YUV] --> B{Master Control Unit}
    B --> C1[FPGA Encoder 1]
    B --> C2[FPGA Encoder 2]
    B --> C_N[FPGA Encoder N]
    subgraph FPGA Encoder
        direction LR
        D[Input: Raw Streamlet] --> E{Parallel DSP Blocks}
        E --> F1[AV1 Encoder @ 500kbps]
        E --> F2[AV1 Encoder @ 1500kbps]
        E --> F3[AV1 Encoder @ 4000kbps]
    end
    F1 --> G{Streamlet Set Assembler}
    F2 --> G
    F3 --> G
    G --> H[Streamlet Database / Origin Server]
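The deterministic scheduler that replaces the patent's "bidding" system can be sketched as a simple round-robin dispatcher. This is a minimal illustration, not the patented mechanism: the class and field names are hypothetical, and the bitrate ladder mirrors the example values above.

```python
from collections import deque

class DeterministicScheduler:
    """Round-robin dispatcher standing in for the patent's 'bidding' system:
    each raw chunk goes to the next pipeline in a fixed rotation."""

    def __init__(self, num_pipelines):
        self.rotation = deque(range(num_pipelines))

    def dispatch(self):
        pipeline = self.rotation.popleft()   # next available FPGA pipeline
        self.rotation.append(pipeline)       # requeue: deterministic rotation
        return pipeline

BITRATE_LADDER = [500, 1500, 4000]  # kbps, one hard-coded DSP block each

def encode_chunk(chunk_index, scheduler):
    """Return the streamlet set one FPGA pipeline would emit for one chunk."""
    pipeline = scheduler.dispatch()
    return [{"time_index": chunk_index, "pipeline": pipeline, "kbps": kbps}
            for kbps in BITRATE_LADDER]

sched = DeterministicScheduler(num_pipelines=3)
sets = [encode_chunk(i, sched) for i in range(6)]
```

With three pipelines, chunks 0-5 land on pipelines 0, 1, 2, 0, 1, 2, and each chunk yields a complete three-rung streamlet set.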
1.2. Neuromorphic Processor for Perceptual Quality Encoding
Enabling Description: This derivative substitutes standard encoders with a neuromorphic processor (e.g., based on Loihi or SpiNNaker architecture) that performs content-aware encoding. Instead of fixed bitrates, the encoding module analyzes the visual complexity and motion vectors of each raw streamlet using a Spiking Neural Network (SNN). The SNN determines the optimal bitrate allocation to achieve a consistent target VMAF (Video Multimethod Assessment Fusion) or other perceptual quality score. For a given time segment, it might generate a 720p streamlet at 1.2 Mbps for a low-motion scene but a 720p streamlet at 2.5 Mbps for a high-action scene, both targeting a VMAF score of 95. This creates "sets of streamlets" based on perceptual quality targets rather than rigid bitrate ladders, providing a more efficient use of bandwidth for equivalent user-perceived quality.
Mermaid Diagram:
sequenceDiagram
    participant CM as Content Module
    participant NP as Neuromorphic Processor (SNN)
    participant SD as Streamlet Database
    CM->>NP: Send Raw Streamlet (Time Index T)
    NP->>NP: Analyze Complexity & Motion
    NP-->>CM: Return Quality-Targeted Bitrate Set (e.g., VMAF 80, 90, 95)
    CM->>SD: Store Encoded Streamlet Set for Index T
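The quality-targeted encoding step can be illustrated with a binary search for the lowest bitrate meeting a VMAF target. The quality curve here is a stand-in for the SNN's learned model (purely illustrative, not a real VMAF predictor), but the search logic is the essence of the derivation: same target, different bitrates per scene complexity.

```python
import math

def predicted_vmaf(bitrate_kbps, complexity):
    """Stand-in for the SNN's quality model: score rises with bitrate and
    falls with scene complexity (illustrative saturating curve, not VMAF)."""
    return min(100.0, 100.0 * (1 - math.exp(-bitrate_kbps / (800.0 * complexity))))

def bitrate_for_target_vmaf(target, complexity, lo=100.0, hi=20000.0):
    """Binary-search the lowest bitrate whose predicted score meets the target."""
    while hi - lo > 10:
        mid = (lo + hi) / 2
        if predicted_vmaf(mid, complexity) >= target:
            hi = mid
        else:
            lo = mid
    return math.ceil(hi)

# Same target (VMAF 95), different scene complexity -> different bitrates.
low_motion  = bitrate_for_target_vmaf(95, complexity=0.5)
high_action = bitrate_for_target_vmaf(95, complexity=1.2)
```

Under this toy model the low-motion scene needs roughly half the bitrate of the high-action scene for the same perceptual target, matching the 1.2 Mbps vs 2.5 Mbps example above.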
2. Derivations via Operational Parameter Expansion
2.1. Micro-Streamlets for Ultra-Low Latency Telepresence
Enabling Description: For applications like remote surgery or industrial robotics, latency is more critical than quality. This variation reduces the streamlet duration to the sub-second scale (e.g., 100-500 ms). The "set of streamlets" for each time index includes not only different bitrates but also different levels of Forward Error Correction (FEC) and even spatial resolutions (e.g., 480p, 720p, 1080p). The client's adaptation logic is tuned to prioritize the fastest-arriving streamlet, even if it means accepting a lower-quality or lower-resolution segment, to maintain a glass-to-glass latency of under 50 ms. The system operates on a dedicated, high-frequency network (e.g., private 5G or Li-Fi) where bandwidth fluctuations are rapid but short-lived.
Mermaid Diagram:
graph TD
    subgraph Origin Server
        A[Real-Time Video Capture] --> B{"Segmenter (100ms chunks)"}
        B --> C{Parallel Encoder}
        C --> D1[Set T: 480p, Low FEC, 1Mbps]
        C --> D2[Set T: 720p, Med FEC, 3Mbps]
        C --> D3[Set T: 1080p, High FEC, 6Mbps]
    end
    subgraph "Client Device (Remote Robot)"
        E[Network Monitor] --> F{Adaptation Logic}
        F -- Chooses lowest latency --> G[HTTP GET Request]
        G --> D1
        G --> D2
        G --> D3
    end
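The latency-first adaptation rule can be sketched as follows. Variant names, resolutions, and latency estimates are hypothetical; the point is the selection order: fit the latency budget first, then maximize resolution among the variants that fit.

```python
def pick_streamlet(candidates, latency_budget_ms=50):
    """Choose the variant whose measured fetch latency fits the glass-to-glass
    budget, preferring the highest resolution among those that fit.
    `candidates` maps variant name -> (resolution_px, est_latency_ms)."""
    fitting = [(res, name) for name, (res, lat) in candidates.items()
               if lat <= latency_budget_ms]
    if not fitting:
        # Nothing fits the budget: take the fastest variant regardless of quality.
        return min(candidates, key=lambda n: candidates[n][1])
    return max(fitting)[1]

candidates = {
    "480p_lowfec":  (480, 12),
    "720p_medfec":  (720, 35),
    "1080p_hifec":  (1080, 80),   # exceeds the 50 ms budget
}
choice = pick_streamlet(candidates)
```

Here the 1080p variant is skipped despite being highest quality, because only the 480p and 720p variants meet the 50 ms budget.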
2.2. Macro-Streamlets for Deep Space Communications
Enabling Description: This derivative addresses communication with space probes where the network is characterized by extremely high latency (minutes to hours) but predictable, albeit low, bandwidth. Here, "streamlets" are expanded into multi-megabyte data chunks representing hours of scientific data (e.g., a full panoramic image from a Mars rover). The "set of streamlets" comprises versions with different levels of data compression (lossy vs. lossless) and scientific fidelity. A low-bitrate version might be a heavily compressed JPEG thumbnail, a medium version a wavelet-compressed science-grade image, and a high-bitrate version the full, uncompressed RAW sensor data. The client (a mission control system) would first request the thumbnail streamlet to preview the data. Based on its scientific value, it would then schedule a long-duration download of the medium or high-fidelity streamlet during a scheduled communication window. The adaptation is manual or semi-automated, not real-time, based on data priority and available downlink capacity.
Mermaid Diagram:
stateDiagram-v2
    [*] --> Thumbnail_Received: Rover sends low-fi streamlet
    Thumbnail_Received --> Analyzing: Mission Control analyzes thumbnail
    Analyzing --> High_Value: If data is high priority
    Analyzing --> Low_Value: If data is low priority
    High_Value --> Scheduling_Download: Schedule downlink for high-fi streamlet
    Scheduling_Download --> [*]: Download Complete
    Low_Value --> [*]: Archive Thumbnail
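The semi-automated scheduling step, where mission control fills a communication window with the highest-value hi-fi streamlets, can be sketched as a greedy priority scheduler. The priority scores, sizes, and names are invented for illustration.

```python
import heapq

def schedule_downlink(previews, window_capacity_mb):
    """Greedy sketch: after mission control scores each thumbnail, fill the
    next communication window with the highest-priority hi-fi streamlets that
    still fit. `previews` is a list of (priority, size_mb, name)."""
    heap = [(-prio, size, name) for prio, size, name in previews]
    heapq.heapify(heap)                      # max-priority first via negation
    plan, remaining = [], window_capacity_mb
    while heap:
        _, size, name = heapq.heappop(heap)
        if size <= remaining:
            plan.append(name)
            remaining -= size
    return plan

previews = [(9, 120, "pano_raw"), (5, 40, "dune_sci"), (2, 300, "sky_raw")]
plan = schedule_downlink(previews, window_capacity_mb=200)
```

With a 200 MB window, the scheduler takes the 120 MB panorama and the 40 MB science image, and defers the 300 MB low-priority capture to a later window.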
3. Derivations via Cross-Domain Application
3.1. Aerospace: Adaptive Telemetry Streaming
Enabling Description: In aviation, real-time aircraft engine and systems data is streamed to ground control for monitoring. This system applies the streamlet concept to telemetry data. The raw data stream (pressures, temperatures, RPMs, GPS) is segmented into 1-second "streamlets." A "set of streamlets" is generated for each second with varying data resolutions. The low-bitrate streamlet contains only critical go/no-go parameters. The medium-bitrate streamlet adds secondary diagnostic parameters. The high-bitrate streamlet includes the full, high-precision sensor data. An aircraft communicating over a fluctuating satellite link would adaptively switch between these streamlets. In normal conditions, it sends the full data. If the connection degrades, the client automatically drops to the critical-parameters-only streamlet, ensuring that the most vital information is always received by ground control.
Mermaid Diagram:
flowchart LR
    subgraph Aircraft
        A[Sensor Bus] --> B{Telemetry Segmenter}
        B --> C{Encoder}
        C --> D1["Critical Params (5kbps)"]
        C --> D2["Diagnostics (50kbps)"]
        C --> D3["Full Data (500kbps)"]
        D1 & D2 & D3 --> E{Adaptive Transmitter}
    end
    subgraph Ground Control
        F[Satellite Link] --> G[Receiver]
        G --> H{Data Re-assembler}
        H --> I[Monitoring Dashboard]
    end
    E -- Fluctuating Bandwidth --> F
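The tier-selection rule on the aircraft can be sketched directly from the three rungs above. The headroom factor is an assumption added for illustration; the critical-parameters floor is from the description.

```python
# Tier ladder from the description: (name, required_kbps), richest first.
TIERS = [("full_data", 500), ("diagnostics", 50), ("critical_params", 5)]

def select_tier(measured_kbps, headroom=0.8):
    """Pick the richest telemetry tier that fits within a safety margin of the
    measured satellite-link bandwidth. The critical tier is the floor: the
    go/no-go parameters are always transmitted."""
    for name, required in TIERS:
        if required <= measured_kbps * headroom:
            return name
    return "critical_params"

normal   = select_tier(1000)   # healthy link
degraded = select_tier(100)    # partial degradation
severe   = select_tier(4)      # near-outage
```

A healthy 1 Mbps link carries the full data; at 100 kbps the transmitter drops to diagnostics; near outage, only the critical parameters go out.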
3.2. AgTech: Adaptive Multispectral Data from Drones
Enabling Description: An agricultural drone flies over a field, capturing multispectral imagery to assess crop health. The connection between the drone and the ground station can be unreliable. This system segments the sensor data into "geospatial streamlets," where each streamlet corresponds to a specific 10x10 meter plot of land. For each plot, a "set" is generated: a low-bitrate NDVI (Normalized Difference Vegetation Index) value, a medium-bitrate compressed RGB+Infrared image, and a high-bitrate full 5-band raw sensor data file. The ground station client monitors the drone's telemetry link. As the drone flies, the client requests the highest quality streamlet possible for each geospatial segment. If the link weakens, it automatically switches to requesting only the low-bitrate NDVI values, ensuring a complete but low-fidelity health map is always created, rather than a high-fidelity map with large gaps.
Mermaid Diagram:
erDiagram
    DRONE ||--o{ GEOSPATIAL_STREAMLET : captures
    GEOSPATIAL_STREAMLET ||--|{ STREAMLET_SET : contains
    GEOSPATIAL_STREAMLET {
        string plotID
        string timestamp
    }
    STREAMLET_SET {
        string qualityLevel
        string dataType
        blob data
    }
3.3. Medical: Adaptive Streaming of 4D Medical Scans
Enabling Description: A radiologist remotely reviews a 4D (3D + time) cardiac MRI scan, which is several gigabytes in size. The '138 patent's method is used to stream this data. The scan is segmented into temporal streamlets (e.g., 500ms of the cardiac cycle). Each temporal streamlet is encoded into a set with varying spatial resolutions: a low-bitrate 256x256x96 voxel version, a medium 512x512x128 version, and the full high-bitrate 1024x1024x256 version. When the radiologist starts the review, the client pre-fetches the low-resolution streamlets for the entire cardiac cycle for rapid scrubbing. When they pause at a specific time index to examine a structure, the client detects the pause and automatically requests the high-resolution streamlet for that specific temporal segment, which then loads in to provide diagnostic detail. This balances the need for interactive, real-time control with the requirement for high-resolution diagnostic imagery.
Mermaid Diagram:
sequenceDiagram
    Radiologist->>Viewer: Scrubs timeline to 2.5s
    Viewer->>Server: HTTP GET /scan/T2.5_low_res.dcm
    Server-->>Viewer: Returns low-res streamlet
    Viewer->>Viewer: Display low-res 2.5s frame
    Radiologist->>Viewer: Pauses playback
    Viewer->>Server: HTTP GET /scan/T2.5_high_res.dcm
    Server-->>Viewer: Returns high-res streamlet
    Viewer->>Viewer: Re-renders frame with high-res data
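The pause-triggered resolution upgrade reduces to two client events mapped to two fetches. This sketch uses the illustrative URL scheme from the diagram; the `ScanViewer` class and the injected `fetch` callable are hypothetical stand-ins for the real viewer and HTTP stack.

```python
class ScanViewer:
    """Minimal sketch of the pause-triggered upgrade: scrubbing fetches the
    low-res temporal streamlet; pausing fetches the high-res one."""

    def __init__(self, fetch):
        self.fetch = fetch       # fetch(url) -> payload; injected for testing
        self.requests = []       # audit trail of issued GETs

    def scrub_to(self, t):
        url = f"/scan/T{t}_low_res.dcm"
        self.requests.append(url)
        return self.fetch(url)

    def pause_at(self, t):
        url = f"/scan/T{t}_high_res.dcm"
        self.requests.append(url)
        return self.fetch(url)

viewer = ScanViewer(fetch=lambda url: f"<voxels:{url}>")
viewer.scrub_to(2.5)     # interactive scrubbing: low-res only
viewer.pause_at(2.5)     # pause detected: upgrade this temporal segment
```

Only the paused temporal segment is upgraded, which is what keeps scrubbing the multi-gigabyte scan interactive.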
4. Derivations via Integration with Emerging Tech
4.1. AI-Driven Predictive Streamlet Fetching
Enabling Description: This variation enhances the client-side adaptation logic with a machine learning model (e.g., an LSTM network). The model is trained on historical network performance data (throughput, latency, jitter) and user behavior (pauses, seeks, time-of-day usage). Instead of reactively switching bitrates based on past performance, the client predicts the likely bandwidth for the next 5-10 seconds. It then proactively requests streamlets at the predicted quality level, even if current conditions are temporarily better or worse. This "predictive buffering" smooths out the user experience by avoiding jarring quality shifts caused by transient network spikes or dips and reduces the likelihood of buffer underruns by anticipating network degradation before it occurs.
Mermaid Diagram:
graph TD
    A[Real-time Network Stats] --> B(LSTM Prediction Model)
    C[User Behavior History] --> B
    B -- Predicted Bandwidth for T+5s --> D{Streamlet Selection Logic}
    D -- "Request 4Mbps streamlet" --> E[HTTP GET Request]
    E --> F[Media Player Buffer]
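The predict-then-select loop can be sketched with a deliberately simple predictor, an exponentially weighted average plus a linear trend, standing in for the LSTM (the real model, ladder rungs, smoothing constant, and safety factor are all assumptions for illustration).

```python
def predict_bandwidth(samples_kbps, horizon=5, alpha=0.3):
    """Stand-in for the LSTM: exponentially weighted average plus a linear
    trend term extrapolated `horizon` seconds ahead."""
    ewma = samples_kbps[0]
    for s in samples_kbps[1:]:
        ewma = alpha * s + (1 - alpha) * ewma
    trend = (samples_kbps[-1] - samples_kbps[0]) / max(len(samples_kbps) - 1, 1)
    return max(0.0, ewma + trend * horizon)

LADDER = [500, 1500, 4000, 8000]  # kbps rungs (illustrative)

def choose_rung(predicted_kbps, safety=0.75):
    """Request the highest rung fitting within a safety fraction of the
    *predicted* (not merely current) throughput."""
    usable = predicted_kbps * safety
    fitting = [r for r in LADDER if r <= usable]
    return max(fitting) if fitting else LADDER[0]

declining = choose_rung(predict_bandwidth([3000, 3200, 2800, 2500, 2300]))
rising    = choose_rung(predict_bandwidth([1000, 2000, 3000, 4000, 5000]))
```

On the declining trace the client downgrades to 500 kbps before a stall occurs, even though current throughput (2300 kbps) would still sustain 1500 kbps; on the rising trace it steps up ahead of the measurement.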
4.2. IoT-Informed Live Stream Encoding
Enabling Description: In a live event setting (e.g., a concert), a network of IoT sensors monitors crowd density, Wi-Fi/5G access point load, and backhaul status in different venue sections. This real-time data feeds into the master encoding module. The module dynamically adjusts the available bitrate "rungs" in the streamlet sets. If sensors report heavy congestion in Section 101, the encoder might cap the maximum available bitrate for streams delivered to that zone at 2 Mbps to ensure stability for all users. Simultaneously, for Section 205 where the network is clear, it can generate a new, higher-quality 8 Mbps 4K streamlet. This creates a geographically and network-aware encoding profile that optimizes resource allocation for the entire venue in real-time.
Mermaid Diagram:
flowchart TD
    subgraph IoT_Sensors
        A[AP Load Sensor - Sec 101]
        B[Crowd Density - Sec 203]
        C[Backhaul Monitor]
    end
    subgraph Content_Server
        D{Encoding Policy Engine}
        E[Streamlet Encoder]
    end
    A & B & C --> D
    D -- "Limit Sec 101 to 2Mbps" --> E
    D -- "Enable 8Mbps for Sec 205" --> E
    E --> F[Streamlet Sets for All Zones]
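The policy engine's ladder adjustment can be sketched as a per-zone filter over the base bitrate ladder. The `congestion` ratio, thresholds, and ladder values are assumptions introduced for this sketch, not part of the patent.

```python
BASE_LADDER = [500, 2000, 4000, 8000]  # kbps, 8000 = the 4K rung

def ladder_for_zone(sensor_report, base=BASE_LADDER):
    """Trim or open up the bitrate ladder for one venue zone from live sensor
    data. `sensor_report` carries a hypothetical `congestion` ratio in [0, 1]."""
    congestion = sensor_report["congestion"]
    if congestion > 0.8:
        return [r for r in base if r <= 2000]   # heavy load: cap at 2 Mbps
    if congestion < 0.2:
        return base                              # clear zone: full ladder incl. 8 Mbps
    return [r for r in base if r <= 4000]        # moderate load: no 4K rung

sec_101 = ladder_for_zone({"zone": "101", "congestion": 0.9})
sec_205 = ladder_for_zone({"zone": "205", "congestion": 0.1})
```

Section 101's congested access points get a ladder capped at 2 Mbps, while Section 205's clear network keeps the 8 Mbps 4K rung, the two cases named in the description.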
4.3. Blockchain-Verified Streamlet Delivery for Digital Rights Management
Enabling Description: To prevent piracy and ensure proper royalty payments in a decentralized CDN, this derivative integrates blockchain. When the content module generates a set of streamlets, it calculates the hash of each streamlet and records it on a distributed ledger (e.g., a Hyperledger Fabric chain) along with its time index and bitrate. A client's request for a streamlet is a micro-transaction on the chain. The CDN node serving the streamlet provides proof-of-delivery. This creates an immutable, auditable record of every streamlet viewed by every user. This allows for transparent, per-segment royalty calculations and prevents unauthorized servers from injecting malicious or pirated content, as the client can verify the hash of each received streamlet against the public ledger.
Mermaid Diagram:
sequenceDiagram
    participant Encoder
    participant Blockchain
    participant Client
    participant CDN_Node
    Encoder->>Blockchain: StoreHash(Streamlet_T1_480p)
    Client->>CDN_Node: GET Streamlet_T1_480p
    CDN_Node->>Client: Deliver Streamlet_T1_480p
    Client->>Client: Calculate Hash(Received_Data)
    Client->>Blockchain: VerifyHash(Streamlet_T1_480p)
    Blockchain-->>Client: Hash Match OK
    Client->>Blockchain: Record Playback Transaction
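The hash-verification core is independent of any particular chain. This sketch uses a plain dict as a stand-in for the distributed ledger (a real deployment would use Hyperledger Fabric or similar, as the description notes); the `Ledger` class is hypothetical.

```python
import hashlib

class Ledger:
    """Append-only dict standing in for the distributed ledger."""
    def __init__(self):
        self._hashes = {}

    def store(self, streamlet_id, payload):
        # Encoder side: record the SHA-256 of each generated streamlet.
        self._hashes[streamlet_id] = hashlib.sha256(payload).hexdigest()

    def verify(self, streamlet_id, payload):
        # Client side: check the received bytes against the recorded hash.
        return self._hashes.get(streamlet_id) == hashlib.sha256(payload).hexdigest()

ledger = Ledger()
original = b"\x00\x01video-bytes-T1-480p"
ledger.store("Streamlet_T1_480p", original)

ok       = ledger.verify("Streamlet_T1_480p", original)         # untampered delivery
tampered = ledger.verify("Streamlet_T1_480p", original + b"!")  # injected content
```

Any CDN node that substitutes or corrupts a streamlet fails the hash check, which is what lets clients reject injected content without trusting the serving node.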
5. Derivations via "Inverse" or Failure Mode
5.1. "Scout" Streamlet for Graceful Degradation
Enabling Description: This variation is designed for extremely low-bandwidth or unreliable networks (e.g., satellite phone, IoT networks). In addition to the standard video streamlets, the system generates a "scout" streamlet for each time index. The scout streamlet contains no video; it is a tiny data file (<1KB) containing only audio and a single, highly-compressed I-frame thumbnail for that segment. When network bandwidth drops below a critical threshold (e.g., 50 kbps), the client switches to "scout mode." It stops requesting video streamlets and only fetches the audio/thumbnail scout streamlets. This allows the user to continue following the audio content of the program, with a periodically updating static image, instead of experiencing a total failure (buffering). This provides a predictable, ultra-low-bandwidth failure state.
Mermaid Diagram:
stateDiagram-v2
    state "Full Video" as S1
    state "Scout Mode (Audio + Thumbnail)" as S2
    [*] --> S1: Start Playback
    S1 --> S2: Bandwidth < 50kbps
    S2 --> S1: Bandwidth > 100kbps
    S1 --> [*]: Stop
    S2 --> [*]: Stop
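The two-state controller above uses asymmetric thresholds (enter below 50 kbps, exit above 100 kbps), which is a hysteresis band that prevents flapping between modes. A minimal sketch, using the thresholds from the state diagram:

```python
class ScoutModeController:
    """Two-state controller with hysteresis: drop to scout mode below 50 kbps,
    return to full video only once bandwidth recovers above 100 kbps."""
    ENTER_SCOUT_KBPS = 50
    EXIT_SCOUT_KBPS = 100

    def __init__(self):
        self.mode = "full_video"

    def update(self, measured_kbps):
        if self.mode == "full_video" and measured_kbps < self.ENTER_SCOUT_KBPS:
            self.mode = "scout"
        elif self.mode == "scout" and measured_kbps > self.EXIT_SCOUT_KBPS:
            self.mode = "full_video"
        return self.mode

ctl = ScoutModeController()
trace = [ctl.update(kbps) for kbps in [200, 45, 80, 120]]
```

Note the third sample (80 kbps) keeps the client in scout mode: it is above the entry threshold but below the exit threshold, so the hysteresis band holds the state steady.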
5.2. Proactive Redundant Streamlet Transmission
Enabling Description: To combat networks with high packet loss but sufficient bandwidth, this system introduces redundancy. For every time index T, in addition to the set of different quality streamlets (Q1, Q2, Q3), the client can request a primary streamlet (e.g., Q2) and simultaneously a secondary, very low-bitrate "redundancy" streamlet (QR). The redundancy streamlet contains Forward Error Correction (FEC) data derived from the primary Q2 streamlet. If Q2 arrives with missing packets, the client uses the data from QR to reconstruct the missing information without needing to re-request the data, thus preventing a stall. The adaptation logic decides whether to request the QR streamlet based on measured packet loss rates, not just raw bandwidth.
Mermaid Diagram:
graph TD
    A{Client Adaptation Logic}
    A -- Bandwidth Check --> B[Select Primary Streamlet Q2]
    A -- Packet Loss Check --> C[Request Redundant FEC Streamlet QR?]
    B --> D[GET /stream/T_Q2.m4s]
    C -- Yes --> E[GET /stream/T_QR_fec.m4s]
    subgraph Client-Side
        F[Received Q2] & G[Received QR] --> H{Packet Reconstructor}
        H --> I[Decode & Play]
    end
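The FEC reconstruction can be illustrated with the simplest possible scheme: a single XOR parity packet over the primary streamlet's packets, which recovers any one lost packet. Production systems would use stronger codes (e.g., Reed-Solomon or RaptorQ); this sketch only demonstrates the QR reconstruction principle.

```python
def make_fec_streamlet(packets):
    """QR sketch: one XOR parity packet over the primary streamlet's packets
    (all packets assumed equal length)."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def reconstruct(received, parity):
    """Recover a single lost packet (marked None) by XOR-ing the parity with
    every packet that did arrive."""
    missing = [i for i, p in enumerate(received) if p is None]
    if len(missing) != 1:
        return received          # 0 lost: nothing to do; >1: unrecoverable here
    recovered = parity
    for p in received:
        if p is not None:
            recovered = bytes(a ^ b for a, b in zip(recovered, p))
    out = list(received)
    out[missing[0]] = recovered
    return out

packets = [b"AAAA", b"BBBB", b"CCCC"]
parity = make_fec_streamlet(packets)          # the QR streamlet payload
fixed = reconstruct([b"AAAA", None, b"CCCC"], parity)
```

The middle packet is rebuilt locally from the parity data, so the player never re-requests Q2 and playback does not stall.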
Combination Prior Art Scenarios
1. Combination with MPEG-DASH Standard (ISO/IEC 23009-1)
- Enabling Description: The method of the '138 patent is implemented using the open MPEG-DASH standard. The "streamlets" are formatted as ISO Base Media File Format (ISOBMFF) segments. The server generates a Media Presentation Description (MPD) file, an XML manifest that describes the available media content, its various representations (the different bitrate "streams"), and the temporal and byte-range location of each segment ("streamlet"). The "set of streamlets" for a given time index is described within the MPD as a single AdaptationSet containing multiple Representation elements, each with a different bandwidth attribute. The client, a standard DASH.js or Shaka Player, parses this MPD and performs the adaptive switching logic as described in the patent, but does so by requesting the segments listed in the standard-compliant manifest file. This combination is a direct implementation of the patent's method using a well-established, open standard.
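The MPD structure described above can be sketched by generating a minimal AdaptationSet. The element and attribute names (`AdaptationSet`, `Representation`, `bandwidth`, `mimeType`) follow ISO/IEC 23009-1; this fragment is deliberately stripped of the many other attributes a compliant MPD would carry.

```python
import xml.etree.ElementTree as ET

def build_adaptation_set(bandwidths_bps, mime="video/mp4"):
    """Build a minimal DASH AdaptationSet: one Representation per bitrate
    rung, mirroring the patent's 'set of streamlets' per time index."""
    aset = ET.Element("AdaptationSet", mimeType=mime)
    for i, bw in enumerate(bandwidths_bps):
        ET.SubElement(aset, "Representation", id=str(i), bandwidth=str(bw))
    return aset

# Three rungs, expressed in bits per second as the MPD requires.
aset = build_adaptation_set([500_000, 1_500_000, 4_000_000])
xml = ET.tostring(aset, encoding="unicode")
```

A DASH player iterates over exactly these Representation elements when choosing which rung to request next, which is why the standard manifest maps one-to-one onto the patent's streamlet sets.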
2. Combination with WebRTC for Peer-to-Peer (P2P) Delivery
- Enabling Description: This disclosure combines the '138 patent's method with the WebRTC protocol to create a hybrid P2P/CDN adaptive streaming network. A central server still creates the streamlets and a manifest file (e.g., an HLS playlist or DASH MPD). When a client needs a specific streamlet (e.g., segment_10_720p.m4s), it first queries a WebRTC tracker server to find peers that have recently downloaded that same streamlet. If a peer is found, the client establishes a direct WebRTC DataChannel to the peer and downloads the streamlet. If no peers are available, or if the P2P transfer is too slow, the client falls back to requesting the streamlet from the traditional CDN/web server via HTTP. The client's adaptation logic considers both CDN bandwidth and peer availability/speed when deciding which bitrate to request next, effectively using the peer network as a dynamic, distributed cache.
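The peer-first-with-CDN-fallback decision can be sketched with the tracker, DataChannel, and HTTP machinery abstracted into injected callables (all four callables and the speed threshold are stand-ins introduced for this sketch).

```python
def fetch_streamlet(name, find_peers, p2p_fetch, cdn_fetch, min_peer_kbps=500):
    """Hybrid fetch sketch: try WebRTC peers first, fall back to the CDN when
    no peer is found or every peer transfer is too slow."""
    for peer in find_peers(name):                    # tracker lookup
        data, speed_kbps = p2p_fetch(peer, name)     # DataChannel transfer
        if data is not None and speed_kbps >= min_peer_kbps:
            return data, "p2p"
    return cdn_fetch(name), "cdn"                    # HTTP fallback

# A peer exists but is too slow, so the client falls back to the CDN.
data, source = fetch_streamlet(
    "segment_10_720p.m4s",
    find_peers=lambda n: ["peer-a"],
    p2p_fetch=lambda peer, n: (b"seg", 100),   # 100 kbps: below threshold
    cdn_fetch=lambda n: b"seg",
)
```

The same segment bytes arrive either way; only the delivery path changes, which is what lets the adaptation logic treat the peer swarm as an opportunistic cache in front of the CDN.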
3. Combination with QUIC Protocol and Server Push
- Enabling Description: The streamlet fetching mechanism is implemented over the QUIC protocol instead of traditional TCP/HTTP. QUIC, which underlies HTTP/3, provides multiplexed streams over a single connection, mitigating head-of-line blocking. The client makes an initial request for the first streamlet. The server, knowing the sequential nature of the content, uses the "Server Push" feature to proactively send the next few streamlets of the same quality level to the client without waiting for an explicit request. This reduces the latency associated with multiple request-response cycles. The client's adaptive logic monitors the delivery rate of the pushed streams. If it detects congestion, it can send a CANCEL_PUSH frame to the server and issue a new GET request for a lower-quality streamlet, thus combining the low-latency benefits of server push with the adaptive capabilities of the core invention.