Patent 8145721

Derivative works

Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.


Publication Date: April 26, 2026
Title: Defensive Disclosure of Derivative Embodiments and Obvious Improvements for Systems of Progressive Multimedia File Delivery

This document is intended to enter the public domain as prior art. It discloses a series of derivative works, improvements, and alternative embodiments based on the core concepts described in U.S. Patent 8,145,721. The purpose is to render obvious any future patent claims on these or similar incremental improvements.

Core Concept Background

The foundational concept involves a client-server architecture where a multimedia file is split into a low-quality, streamable first part and a high-quality second part. These parts are transmitted separately and subsequently combined on the client device to reconstruct the original file. This disclosure expands upon that concept.


1. Derivative Implementations

1.1. Component Substitution: Scalable Codec Implementation

  • Enabling Description: This embodiment replaces the abstract "first coding" and "second coding" with a standards-based scalable codec. For video, the server utilizes a Scalable Video Coding (SVC) encoder (ITU-T H.264 Annex G) or Scalable High Efficiency Video Coding (SHVC) (ITU-T H.265 Annex F). The multimedia file is encoded once into a multi-layer bitstream. The "first part" consists of the base layer (BL) at the lowest temporal and spatial resolution. The "second part" consists of one or more enhancement layers (ELs). The server sends the BL via a first bitstream for immediate decoding and playback; the ELs are sent via a second bitstream. The client's SVC/SHVC-compliant decoder combines the decoded layers, either in real time or after the fact, to render the full-resolution, full-quality video. This method is more efficient than two separate encodings because the enhancement layers reuse information from the base layer.
  • Diagram:
    sequenceDiagram
        participant Client
        participant Server
        Client->>Server: Request Scalable Video (SVC/SHVC)
        Server-->>Client: Stream 1: Base Layer (BL)
        Server-->>Client: Stream 2: Enhancement Layer(s) (EL)
        activate Client
        Note over Client: Decode and Play BL immediately
        Note over Client: Buffer ELs
        Note over Client: Combine decoded BL + ELs for full quality
        deactivate Client
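
The base/enhancement split above can be sketched numerically. The following is a toy model only, with illustrative function names rather than real H.264/H.265 bitstream syntax: the base layer decodes on its own, and the enhancement layer carries residuals predicted from it, which is why one scalable encoding is cheaper than two independent ones.

```python
# Toy sketch of layered (SVC-style) coding. Illustrative names, not real
# Annex G/F syntax: the base layer (BL) is a decimated signal that decodes
# on its own; the enhancement layer (EL) stores only residuals against a
# prediction from the BL, reusing BL information.

def upsample(base, factor, length):
    # Nearest-neighbour prediction of the full signal from the BL.
    return [base[min(i // factor, len(base) - 1)] for i in range(length)]

def encode_layers(samples, factor=2):
    base = samples[::factor]                                   # BL
    predicted = upsample(base, factor, len(samples))
    enhancement = [s - p for s, p in zip(samples, predicted)]  # EL residuals
    return base, enhancement

def decode(base, enhancement, factor=2):
    predicted = upsample(base, factor, len(enhancement))
    return [p + e for p, e in zip(predicted, enhancement)]
```

Decoding the BL alone (zero residuals) yields the immediate low-quality rendition; adding the EL reconstructs the original exactly.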
    

1.2. Component Substitution: Asymmetric Transport Protocol

  • Enabling Description: This embodiment utilizes a different transport protocol for each bitstream, optimizing each for its distinct purpose. The "first part" (low-quality) is streamed using the Real-time Transport Protocol (RTP) over UDP to minimize latency for real-time playback, accepting potential minor packet loss. The "second part" (high-quality) is transmitted using the QUIC protocol (IETF RFC 9000), which runs over UDP while providing stream multiplexing, congestion control, and loss recovery comparable to TCP's. This ensures the larger, high-quality data is transferred reliably and efficiently, without TCP's head-of-line blocking, while the real-time stream remains unimpeded.
  • Diagram:
    graph TD
        subgraph Server
            A[Multimedia File] --> B{Encoder};
            B --> C[Part 1: Low-Q];
            B --> D[Part 2: High-Q];
        end
        subgraph Network
            E[RTP over UDP]
            F[QUIC over UDP]
        end
        subgraph Client
            G[Real-time Player]
            H[File Buffer]
            I{Decoder & Combiner}
        end
        C -- Stream 1 --> E --> G;
        D -- Stream 2 --> F --> H;
        G --> I;
        H --> I;
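
The two-path delivery can be demonstrated end to end on localhost. This is a minimal sketch under stated assumptions: a plain TCP stream stands in for QUIC (which would require a third-party library such as aioquic), since the asymmetric split itself is the point. Function names are illustrative.

```python
# Sketch of asymmetric transport: part 1 as a fire-and-forget UDP datagram
# (stand-in for RTP), part 2 over a reliable byte stream (TCP standing in
# for QUIC). The client binds both paths, then combines the results.
import socket
import threading

def serve(part1: bytes, part2: bytes, udp_port: int, tcp_port: int):
    # "Server": datagram for the real-time part, stream for the bulk part.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(part1, ("127.0.0.1", udp_port))
    udp.close()
    tcp = socket.create_connection(("127.0.0.1", tcp_port))
    tcp.sendall(part2)
    tcp.close()

def demo(part1: bytes, part2: bytes):
    # "Client": bind both receive paths first, then let the server transmit.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.bind(("127.0.0.1", 0))
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)
    sender = threading.Thread(target=serve, args=(
        part1, part2, udp.getsockname()[1], listener.getsockname()[1]))
    sender.start()
    low, _ = udp.recvfrom(65536)        # real-time path: decode immediately
    conn, _ = listener.accept()         # bulk path: reliable, ordered
    chunks = []
    while (chunk := conn.recv(4096)):
        chunks.append(chunk)
    conn.close()
    listener.close()
    udp.close()
    sender.join()
    return low, b"".join(chunks)        # client-side combination
```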
    

1.3. Operational Parameter Expansion: Volumetric Microscopy Data Delivery

  • Enabling Description: The system is applied to terabyte-scale volumetric data sets from sources like light-sheet fluorescence microscopy (LSFM). The "first part" is a decimated point cloud representation of the data, generated using an octree algorithm, resulting in a file of a few megabytes. This part is streamed to a researcher's workstation for interactive 3D navigation. The "second part" comprises the full-resolution voxel data blocks. As the researcher selects a specific region of interest in the low-resolution viewer, the client requests the corresponding high-resolution data blocks for that region from the server, which are then downloaded and combined with the base model to provide a high-fidelity view of the selected area.
  • Diagram:
    flowchart LR
        subgraph Server
            A[TB-scale Voxel Data] --> B{Octree Decimator};
            B --> C[Part 1: Low-Res Point Cloud];
            A --> D[Part 2: Full-Res Voxel Blocks];
        end
        subgraph Researcher Client
            E[3D Viewer] --> F{ROI Selection};
            F --> G[Request High-Res Blocks];
            H[Combiner]
        end
        C -- Stream 1 --> E;
        F -- Generates --> G;
        G -- Sends to --> Server;
        Server -- Sends blocks from D --> H;
        E -- Feeds into --> H;
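
The server-side decimation and the client's region-of-interest request can be sketched with a single-level grid (one depth of an octree). This is an illustrative simplification of the octree algorithm described above; cell size and point format are assumptions.

```python
# Sketch of octree-style decimation: points are binned into uniform grid
# cells (one octree level) and each occupied cell is reduced to its
# centroid, producing the megabyte-scale preview of a terabyte volume.
from collections import defaultdict

def decimate(points, cell_size):
    """Collapse (x, y, z) points to one centroid per occupied cell."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(c // cell_size) for c in p)
        cells[key].append(p)
    return [tuple(sum(axis) / len(pts) for axis in zip(*pts))
            for pts in cells.values()]

def cells_in_roi(lo, hi, cell_size):
    """Cell keys the client requests at full resolution for a selected ROI."""
    ranges = [range(int(a // cell_size), int(b // cell_size) + 1)
              for a, b in zip(lo, hi)]
    return [(x, y, z) for x in ranges[0] for y in ranges[1] for z in ranges[2]]
```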
    

1.4. Operational Parameter Expansion: Industrial Digital Twin Synchronization

  • Enabling Description: This embodiment is applied to a digital twin of a manufacturing facility. The "first part" is a continuous, low-bandwidth telemetry stream using the MQTT protocol, containing key performance indicators (KPIs) and state data for major machinery, rendered as a simplified 3D schematic. The "second part" consists of high-fidelity Computer-Aided Engineering (CAE) and physics-based simulation models. When an anomaly is detected in the real-time MQTT stream (e.g., vibration exceeds a threshold), a diagnostic request is triggered. The server then transmits the relevant high-fidelity simulation model ("second part") for the specific malfunctioning asset, which is loaded by the engineer's workstation for detailed root-cause analysis.
  • Diagram:
    stateDiagram-v2
        [*] --> Streaming_Low_Fi
        Streaming_Low_Fi: Client displays live MQTT telemetry on schematic.
        Streaming_Low_Fi --> Anomaly_Detected: Vibration > Threshold
        Anomaly_Detected --> Downloading_High_Fi: Engineer requests diagnostic model.
        Downloading_High_Fi --> Analysis: High-fidelity CAE model loaded and combined with telemetry data.
        Analysis --> Streaming_Low_Fi: Diagnostic complete.
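
The state machine above can be sketched as a small class. The threshold value, state names, and fetch_model callback are illustrative assumptions, not part of any MQTT or CAE API.

```python
# Sketch of the anomaly-triggered flow: low-fi telemetry streams
# continuously; crossing the vibration threshold triggers a one-shot
# download of the high-fidelity model for the affected asset.

class DigitalTwinMonitor:
    def __init__(self, vibration_threshold, fetch_model):
        self.threshold = vibration_threshold
        self.fetch_model = fetch_model      # downloads the "second part"
        self.state = "STREAMING_LOW_FI"
        self.loaded_models = {}

    def on_telemetry(self, asset_id, vibration):
        if self.state == "STREAMING_LOW_FI" and vibration > self.threshold:
            self.state = "DOWNLOADING_HIGH_FI"
            self.loaded_models[asset_id] = self.fetch_model(asset_id)
            self.state = "ANALYSIS"

    def diagnostic_complete(self):
        self.state = "STREAMING_LOW_FI"
```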
    

1.5. Cross-Domain Application: Aerospace In-Flight Entertainment (IFE)

  • Enabling Description: In an IFE system, bandwidth is variable and costly. The "first part" of a film, a 480p H.264-encoded version, is pre-loaded on the seatback unit's solid-state drive or streamed over the cabin Wi-Fi from the central server. The "second part," containing the delta information required to upgrade the stream to 1080p or 4K (using a scalable video codec), is downloaded opportunistically. The aircraft's network management system prioritizes the "second part" download when the aircraft is within a high-throughput Ku/Ka-band satellite spot beam or connected to a gate's ground-based Wi-Fi, minimizing satellite data costs. The user can start watching immediately, and the quality improves seamlessly once the second part is downloaded and combined.
  • Diagram:
    sequenceDiagram
        participant SeatbackUnit
        participant AircraftServer
        participant GroundLink
        SeatbackUnit->>AircraftServer: User selects movie
        AircraftServer-->>SeatbackUnit: Stream/Load Part 1 (480p)
        loop Opportunistic Download
            AircraftServer->>GroundLink: Is High-Bandwidth Link available?
            GroundLink-->>AircraftServer: Yes (Spot Beam/Gate WiFi)
            AircraftServer-->>SeatbackUnit: Download Part 2 (HD/4K Delta)
        end
        Note over SeatbackUnit: Combines Part 1 & Part 2 for HD playback
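
The opportunistic-download policy can be sketched as a gate on link state. The link-state labels and chunk size here are illustrative assumptions, not a real aircraft network API.

```python
# Sketch of the opportunistic scheduler: part 2 bytes move only while a
# high-throughput link (satellite spot beam or gate Wi-Fi) is reported.

HIGH_BANDWIDTH = {"KU_SPOT_BEAM", "KA_SPOT_BEAM", "GATE_WIFI"}

def opportunistic_download(part2_size, link_states, chunk=100):
    """Walk a sequence of link-state samples; return bytes fetched so far."""
    fetched = 0
    for state in link_states:
        if state in HIGH_BANDWIDTH and fetched < part2_size:
            fetched = min(part2_size, fetched + chunk)
    return fetched
```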
    

1.6. Cross-Domain Application: Agricultural Drone Imagery

  • Enabling Description: A fixed-wing drone captures 100GB of multispectral imagery of a farm. While in the air, its onboard 5G transmitter sends the "first part": a heavily compressed, low-resolution NDVI (Normalized Difference Vegetation Index) map. This allows a farmer on the ground to identify stress areas in near real-time. The "second part," the full-resolution, multi-band GeoTIFF data, is stored on the drone's local storage. Upon landing and connecting to a local Wi-Fi network, the drone automatically transmits this second part to a local server for detailed analysis, variable rate fertilizer prescription map generation, and archival.
  • Diagram:
    flowchart TD
        A[Drone captures 100GB multispectral image] --> B{Onboard Processing};
        B --> C[Part 1: Compressed NDVI Map];
        B --> D[Part 2: Full-Res GeoTIFF on SSD];
        C -- 5G Link --> E[Farmer's Tablet for Real-time Triage];
        D -- Wi-Fi at base --> F[Farm Server for Detailed Analysis];
        F --> G{Prescription Map Generation};
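
The onboard "first part" computation can be sketched directly, since NDVI has a fixed formula: NDVI = (NIR − Red) / (NIR + Red). Plain floats stand in for raster bands here; a real pipeline would operate on the multi-band GeoTIFF with NumPy or rasterio, and the stress threshold is an illustrative assumption.

```python
# Sketch of the real-time triage product: per-pixel NDVI from the red and
# near-infrared bands, plus a simple low-NDVI stress mask for the farmer.

def ndvi(nir_band, red_band):
    return [(n - r) / (n + r) if (n + r) else 0.0
            for n, r in zip(nir_band, red_band)]

def stress_mask(ndvi_values, threshold=0.3):
    """True where vegetation looks stressed (low NDVI)."""
    return [v < threshold for v in ndvi_values]
```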
    

1.7. Cross-Domain Application: Progressive Video Game Loading

  • Enabling Description: In a large open-world video game, the initial download and installation are split. The "first part" comprises the game engine, core logic, and low-polygon models and low-resolution (e.g., 512x512) textures for the starting zone, allowing the player to begin gameplay in minutes. The "second part" contains the high-resolution (e.g., 4K) texture packs, detailed character models, and audio for other game zones. This second part is downloaded in the background using a bandwidth-throttled downloader while the user is playing. The game engine's asset manager swaps the low-resolution assets for the high-resolution ones from the second part as they become available, storing them in a local cache.
  • Diagram:
    classDiagram
        class AssetManager {
            -lowResCache
            -highResCache
            +requestAsset(assetID)
            +streamHighRes(assetID)
        }
        class GameEngine {
            +loadLevel()
        }
        class Renderer {
            +drawObject(model, texture)
        }
        GameEngine o-- AssetManager
        GameEngine o-- Renderer
        note for AssetManager "On first request, returns low-res asset from Part 1 and triggers background download of high-res asset from Part 2."
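
The swap behaviour in the class diagram can be sketched as follows. The download queue is simulated (a real engine would drive it from a throttled background thread), and all names are illustrative.

```python
# Sketch of the AssetManager: the first request for an asset returns the
# low-res copy shipped in part 1 and queues the high-res copy from part 2;
# once downloaded, subsequent requests return the high-res asset.

class AssetManager:
    def __init__(self, low_res: dict):
        self.low_res = low_res          # shipped with part 1
        self.high_res = {}              # filled by the background download
        self.pending = []               # background download queue

    def request_asset(self, asset_id):
        if asset_id in self.high_res:
            return self.high_res[asset_id]
        if asset_id not in self.pending:
            self.pending.append(asset_id)   # throttled background fetch
        return self.low_res[asset_id]

    def on_asset_downloaded(self, asset_id, data):
        self.high_res[asset_id] = data      # engine swaps it in next frame
        if asset_id in self.pending:
            self.pending.remove(asset_id)
```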
    

1.8. Integration: AI-Driven Predictive Delivery

  • Enabling Description: A machine learning model (e.g., a recurrent neural network) on the server analyzes real-time data to optimize the split. It considers: 1) Network telemetry from the client (latency, jitter, packet loss), 2) User context (device model, screen resolution, time of day), and 3) Content characteristics (scene complexity, motion). The model dynamically adjusts the bitrate and GOP (Group of Pictures) structure of the "first part" for optimal real-time playback. It also predicts future network availability (e.g., likelihood of connecting to Wi-Fi in the next 15 minutes based on user mobility patterns) to schedule the download of the "second part" for a time of lowest cost and highest throughput.
  • Diagram:
    graph LR
        subgraph Client
            A[Device/Network Sensors] --> B[Telemetry Data]
        end
        subgraph Server
            C["ML Model (RNN)"]
            D{Dynamic Encoder}
            E{Download Scheduler}
        end
        B --> C
        C -- Optimal Encoding Params --> D
        C -- Predicted Network State --> E
        D --> F[Part 1 Stream]
        E --> G[Part 2 Download]
        F --> Client
        G --> Client
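
The decision logic can be sketched without the RNN itself: here an exponential moving average of client throughput stands in as the predictor, and the bitrate ladder, headroom factor, and Wi-Fi heuristic are all illustrative assumptions.

```python
# Sketch of the server-side scheduler: predict sustainable throughput from
# telemetry (EWMA stand-in for the RNN), pick the part-1 bitrate from a
# ladder with 20% headroom, and defer part 2 when Wi-Fi is predicted.

class PredictiveScheduler:
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.predicted_kbps = None

    def observe(self, throughput_kbps):
        if self.predicted_kbps is None:
            self.predicted_kbps = throughput_kbps
        else:
            self.predicted_kbps = (self.alpha * throughput_kbps
                                   + (1 - self.alpha) * self.predicted_kbps)

    def part1_bitrate(self):
        """Highest ladder rung the prediction can sustain with headroom."""
        ladder = [400, 800, 1500, 3000]
        fit = [r for r in ladder if r <= 0.8 * self.predicted_kbps]
        return fit[-1] if fit else ladder[0]

    def schedule_part2(self, wifi_likely_within_15min):
        return "DEFER_TO_WIFI" if wifi_likely_within_15min else "DOWNLOAD_NOW"
```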
    

1.9. Integration: Blockchain-Verified Secure Firmware Updates

  • Enabling Description: This system delivers Over-The-Air (OTA) firmware updates to an automotive Electronic Control Unit (ECU). The "first part" is a small, critical security patch that must be deployed immediately. The "second part" is a larger file containing new features and performance enhancements. The manufacturer generates hashes for both parts and registers them on a private consortium blockchain. The ECU downloads the "first part" over a cellular connection, verifies its hash against the blockchain, and applies it. It then downloads the "second part," often during a scheduled maintenance window over Wi-Fi, and again verifies its hash. The ECU's bootloader only combines and activates the full new firmware image after both parts have been successfully received and cryptographically verified against the immutable blockchain record.
  • Diagram:
    sequenceDiagram
        participant Manufacturer
        participant Blockchain
        participant ECU
        Manufacturer->>Blockchain: Register Hashes (H1, H2)
        ECU->>Manufacturer: Request Update
        Manufacturer-->>ECU: Send Part 1 (Security Patch)
        ECU->>Blockchain: Verify Hash(Part 1) == H1
        Note over ECU: If verified, apply Part 1
        Manufacturer-->>ECU: Send Part 2 (Feature Update)
        ECU->>Blockchain: Verify Hash(Part 2) == H2
        Note over ECU: If verified, combine Parts 1 & 2 for next boot
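
The verification flow can be sketched with standard-library hashing. An in-memory dict stands in for the consortium blockchain (a real ECU would query a ledger node), and the concatenating combine step is a placeholder for the bootloader's image assembly.

```python
# Sketch of the ECU-side flow: SHA-256 digests of both parts are checked
# against values previously registered on the ledger; only when both
# verify is the full image assembled for the next boot.
import hashlib

def register(ledger: dict, update_id: str, part1: bytes, part2: bytes):
    ledger[update_id] = (hashlib.sha256(part1).hexdigest(),
                         hashlib.sha256(part2).hexdigest())

def verify_and_combine(ledger, update_id, part1, part2):
    h1, h2 = ledger[update_id]
    if hashlib.sha256(part1).hexdigest() != h1:
        raise ValueError("part 1 failed verification")
    if hashlib.sha256(part2).hexdigest() != h2:
        raise ValueError("part 2 failed verification")
    return part1 + part2    # full image handed to the bootloader
```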
    

1.10. Inverse Mode: Graceful Degradation on Unreliable Networks

  • Enabling Description: The system is designed for environments with intermittent connectivity, such as maritime or tactical military networks. The "first part" is a fully self-contained, playable media file (e.g., a 360p H.264 file). The "second part" is a binary diff file created using a utility like xdelta. If the "second part" download fails to complete or its checksum verification fails, the client application is explicitly designed to halt the combination process. It presents the user with the successfully downloaded "first part" for playback and marks it as "Low Quality." It provides a UI option to re-attempt the download of only the "second part" later, thus preserving a usable asset instead of creating a corrupted file.
  • Diagram:
    stateDiagram-v2
        [*] --> Downloading
        Downloading: Get Part 1 (playable), Get Part 2 (diff)
        Downloading --> Verifying: Download complete
        Downloading --> Playable_LQ: Part 2 download fails
        Verifying --> Playable_HQ: Part 2 checksum OK, combine parts
        Verifying --> Playable_LQ: Part 2 checksum fail
        Playable_LQ --> Downloading: User re-attempts Part 2 download
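
The failure-aware combination can be sketched as a single decision function. The concatenating apply_diff is a placeholder for a real xdelta-style patcher; the quality labels mirror the UI described above.

```python
# Sketch of graceful degradation: part 1 is always playable on its own;
# the diff is applied only if its checksum matches, otherwise the client
# keeps the low-quality asset instead of producing a corrupted file.
import hashlib

def apply_diff(base: bytes, diff: bytes) -> bytes:
    return base + diff   # placeholder for an xdelta-style patcher

def finalize(part1: bytes, part2, expected_sha256: str):
    """Return (data, quality label); never discards the playable part 1."""
    if part2 is None:    # download never completed
        return part1, "Low Quality"
    if hashlib.sha256(part2).hexdigest() != expected_sha256:
        return part1, "Low Quality"         # halt combination, keep asset
    return apply_diff(part1, part2), "High Quality"
```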
    

2. Combination with Open-Source Standards

2.1. Combination with MPEG-DASH

  • Enabling Description: The system is implemented using the MPEG-DASH standard. The server generates a Media Presentation Description (MPD) manifest file that defines two distinct Periods. The first Period contains AdaptationSets for a low-bitrate, immediately playable version of the content (the "first part"). The second Period, with a start time of 0, contains a single AdaptationSet with a high-bitrate, non-segmented representation of the complete file (the "second part"). A custom DASH client is configured to: 1) Parse the MPD and begin streaming the first Period for immediate playback. 2) Simultaneously, initiate a background HTTP GET request for the entire resource described in the second Period. 3) Once the background download is complete and the first Period playback finishes, the client combines the files to create the full quality version for storage.
  • Diagram:
    erDiagram
        MPD_MANIFEST {
            string Period1_ID
            string Period2_ID
        }
        PERIOD_1 {
            string AdaptationSet_LQ_video
            string AdaptationSet_LQ_audio
        }
        PERIOD_2 {
            string AdaptationSet_HQ_complete
        }
        MPD_MANIFEST ||--o{ PERIOD_1 : "contains"
        MPD_MANIFEST ||--o{ PERIOD_2 : "contains"
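
A manifest of the described shape can be sketched with the standard library. Attribute values (duration, bandwidths, mime types) are illustrative; a production MPD would carry full DASH profiles, codecs, and segment information.

```python
# Sketch of the two-Period MPD: Period 1 holds the low-bitrate streamable
# version, Period 2 (also starting at 0) holds the single high-bitrate
# representation fetched in the background.
import xml.etree.ElementTree as ET

def build_mpd() -> str:
    mpd = ET.Element("MPD", xmlns="urn:mpeg:dash:schema:mpd:2011",
                     type="static", mediaPresentationDuration="PT120S")
    p1 = ET.SubElement(mpd, "Period", id="part1-low", start="PT0S")
    a1 = ET.SubElement(p1, "AdaptationSet", mimeType="video/mp4")
    ET.SubElement(a1, "Representation", id="lq", bandwidth="400000")
    p2 = ET.SubElement(mpd, "Period", id="part2-high", start="PT0S")
    a2 = ET.SubElement(p2, "AdaptationSet", mimeType="video/mp4")
    ET.SubElement(a2, "Representation", id="hq", bandwidth="8000000")
    return ET.tostring(mpd, encoding="unicode")
```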
    

2.2. Combination with WebRTC and File API

  • Enabling Description: A browser-based client uses WebRTC to connect to a server. The server establishes two channels. The first is a standard MediaStream which carries the low-quality, real-time audio/video "first part." The second is a RTCDataChannel, configured for reliable transmission, which is used to send the "second part" as a sequence of binary chunks. The client JavaScript receives the MediaStream and attaches it to an HTML5 <video> element for playback. Concurrently, it listens for message events on the RTCDataChannel, appending the received chunks into a Blob using the File API. Once the DataChannel signals the transfer is complete, the client can use a library like ffmpeg.wasm (WebAssembly) to combine the buffered low-quality stream and the high-quality Blob into a single MP4 file for local storage.
  • Diagram:
    sequenceDiagram
        participant BrowserClient
        participant WebRTC_Server
        BrowserClient->>WebRTC_Server: Initiate PeerConnection
        WebRTC_Server-->>BrowserClient: Establish MediaStream (Part 1)
        WebRTC_Server-->>BrowserClient: Establish DataChannel (Part 2)
        Note over BrowserClient: Play MediaStream in HTML5 video element
        loop Transfer Chunks
            WebRTC_Server-->>BrowserClient: Send chunk over DataChannel
            BrowserClient->>BrowserClient: Append chunk to Blob
        end
        Note over BrowserClient: Use ffmpeg.wasm to combine stream and blob
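
The client logic above is browser JavaScript; as a language-neutral sketch, the DataChannel reassembly step can be mirrored as follows. Class and method names are illustrative, not the WebRTC or File API.

```python
# Sketch of the DataChannel reassembly: on a reliable, ordered channel,
# binary chunks are accumulated and the result is accepted only when the
# announced total length matches, before being handed to the combiner
# (ffmpeg.wasm in the browser case).

class ChunkAssembler:
    def __init__(self, expected_length: int):
        self.expected_length = expected_length
        self.chunks = []

    def on_message(self, chunk: bytes):
        self.chunks.append(chunk)

    def finish(self) -> bytes:
        blob = b"".join(self.chunks)
        if len(blob) != self.expected_length:
            raise ValueError("transfer incomplete")
        return blob
```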
    

2.3. Combination with BitTorrent Protocol

  • Enabling Description: To distribute a large media file (e.g., a 4K movie), the system uses a hybrid approach. The server hosts the "first part" (a 720p version) on a standard HTTP CDN for fast, centralized delivery. Embedded within this file's metadata (e.g., in an MP4 udta box) is a magnet: URI for the "second part". A custom client application begins by downloading and playing the file from the CDN. While playing, it parses the metadata, extracts the magnet link, and joins a BitTorrent swarm to download the "second part" (a high-quality data file) from other peers in a decentralized manner. This reduces server load for the larger portion of the data. Once the peer-to-peer download is complete, the client combines the two parts.
  • Diagram:
    graph TD
        subgraph Centralized Delivery
            A[CDN HTTP Server] -- Part 1: 720p version with embedded magnet link --> B[Client Player];
        
        end
        subgraph Decentralized Delivery
            C{BitTorrent Swarm}
        end
        B -- Extracts magnet link & joins swarm --> C;
        C -- Part 2 (High-Q data) --> B;
        B --> D{Combiner};
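
The hand-off step can be sketched with standard-library URI parsing. The metadata dict is an illustrative stand-in for a real MP4 udta parser, and a BitTorrent engine (e.g. libtorrent) would consume the extracted info-hash to perform the actual swarm download.

```python
# Sketch of the client step: pull the magnet URI out of the part-1 file's
# metadata and extract the BitTorrent info-hash to join the swarm with.
from urllib.parse import urlparse, parse_qs

def extract_magnet(metadata: dict) -> str:
    return metadata["part2_magnet"]     # stored in the file's udta box

def info_hash(magnet_uri: str) -> str:
    qs = parse_qs(urlparse(magnet_uri).query)
    xt = qs["xt"][0]                    # e.g. "urn:btih:<40 hex digits>"
    return xt.rsplit(":", 1)[1]
```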
    
