Patent 8868772B2

Derivative works

Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.

Defensive Disclosure Document for US Patent 8,868,772 B2

Title: Methods, Systems, and Architectures for Dynamic Media Stream Adaptation and Delivery
Publication Date: May 8, 2026
Technical Field: Data Networking, Media Streaming, Content Delivery Networks (CDN), Client-Server Architecture, Real-Time Communication.

Abstract: This disclosure describes a series of derivative inventions and improvements upon the core concept of client-driven adaptive bitrate streaming. The disclosed methods expand upon the idea of segment-based media delivery by introducing alternative components, expanding operational parameters to extreme environments, applying the core logic to disparate technical domains, integrating next-generation technologies like AI and blockchain, and defining novel failure or low-power operational modes. These disclosures are intended to enter the public domain to serve as prior art for future patent applications in this field.


Analysis and Derivations Based on Independent Claim 1

Core Concept of Claim 1: A client-side media player streams video by requesting discrete segments ("streamlets") from one of several alternative video streams, each encoded at a different quality. The client monitors a performance factor (e.g., download speed, buffer health) and uses this data to decide whether the next segment requested should be from a higher-quality or lower-quality stream, using standard web servers and TCP/IP protocols.


Derivative Set 1: Material & Component Substitution

Derivative 1.1: Quantum Dot-Based Network Interface Card (NIC) for Latency Sensing

  • Enabling Description: This variation replaces the software-based performance monitoring module of the client with a specialized hardware component. The client device is equipped with a network interface card (NIC) that incorporates a quantum dot (QD) array. The QDs are tuned to exhibit specific quantum tunneling effects based on the energy levels of incoming TCP/IP packet signals. The time-of-flight and energy degradation of packets are measured with femtosecond precision by observing state changes in the QD array. This provides an ultra-low-latency, physical-layer "performance factor" that is more accurate and predictive than application-layer monitoring. The agent controller module queries this QD-NIC directly via a dedicated hardware interrupt to decide on upshifting or downshifting streamlet quality. The system no longer relies on calculating an average of receive times but instead uses a direct hardware measurement of network jitter and packet integrity.
  • Mermaid Diagram:
    graph TD
        subgraph QD-NIC Hardware
            A[Photon Input] --> B{Quantum Dot Array};
            B --> C[Tunneling Event Detector];
            C --> D[Latency & Jitter Value];
        end
        subgraph Client Software
            E[Agent Controller] -- Hardware Interrupt --> F{Query QD-NIC};
            F --> G{Receive Hardware Performance Factor};
            G --> H{Decision Engine};
            H -- Upshift/Downshift --> I[Streamlet Request];
        end
        D -- Reports Value --> G;
        I -- HTTP GET --> J[Web Server];
    
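The decision step above can be sketched in software terms. This is a minimal sketch only: the QD-NIC readout fields (`jitter_us`, `integrity`) and both thresholds are hypothetical stand-ins for whatever the hardware interrupt would actually deliver.

```python
# Sketch of the agent controller consuming a hardware performance factor.
# The QD-NIC reading fields and the numeric thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class HardwareReading:
    jitter_us: float   # measured network jitter, microseconds
    integrity: float   # fraction of packets arriving undegraded (0.0 - 1.0)

def decide_shift(reading: HardwareReading,
                 jitter_limit_us: float = 500.0,
                 integrity_floor: float = 0.98) -> str:
    """Map one raw hardware reading to a quality-shift decision."""
    if reading.jitter_us > jitter_limit_us or reading.integrity < integrity_floor:
        return "downshift"
    if reading.jitter_us < jitter_limit_us / 2 and reading.integrity > 0.995:
        return "upshift"
    return "hold"
```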

Derivative 1.2: Memristor-Based Buffer Management

  • Enabling Description: The client's staging module (buffer) is implemented using memristive memory arrays instead of conventional DRAM. Memristors, with their ability to retain a state based on the history of applied voltage, are used to create an analog representation of buffer health. The total resistance of the memristor array is inversely proportional to the amount of media data stored: as streamlets are written to the buffer, the resistance decreases; as they are read out for playback, it increases. The agent controller module uses a simple analog-to-digital converter (ADC) to read this single resistance value as its primary performance factor. A resistance value above a high threshold (a nearly empty buffer) triggers a request for lower-quality streamlets, while a value below a low threshold (a nearly full buffer) triggers an upshift. This substitutes a complex software-based buffer calculation with a more efficient, low-power hardware-based system.
  • Mermaid Diagram:
    sequenceDiagram
        participant N as Network Controller
        participant M as Memristor Buffer
        participant A as Agent Controller
        participant V as Video Player
        N->>M: Write Streamlet(t)
        A->>M: Read Resistance (Analog Value)
        M-->>A: Return Resistance
        A->>A: Compare Resistance to Thresholds
        A->>N: Request Next Streamlet (Quality_t+1)
        V->>M: Read Streamlet(t) for Playback
    
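A minimal sketch of the threshold comparison, assuming the ADC reports resistance in ohms and that resistance falls as the buffer fills; both threshold values are illustrative, not values from the disclosure.

```python
# Sketch of the agent controller's comparison against the ADC reading.
# Assumption: resistance falls as the buffer fills, so a high reading
# means a depleted buffer. Threshold values are illustrative.
def buffer_decision(resistance_ohms: float,
                    full_threshold: float = 2_000.0,
                    empty_threshold: float = 8_000.0) -> str:
    """Translate one resistance reading into a quality decision."""
    if resistance_ohms >= empty_threshold:   # buffer nearly empty
        return "downshift"
    if resistance_ohms <= full_threshold:    # buffer nearly full
        return "upshift"
    return "hold"
```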

Derivative Set 2: Operational Parameter Expansion

Derivative 2.1: Nanoscale Streaming for In-Vivo Biological Monitoring

  • Enabling Description: The adaptive streaming method is scaled down for use in a network of biological nanosensors monitoring a patient's bloodstream. The "video stream" is a real-time data feed of telemetry (e.g., blood glucose, oxygen levels, pathogen detection) from thousands of nanobots. The "content server" is a subcutaneous data hub. The "streamlets" are microsecond-long packets of telemetry data. Due to the chaotic and high-loss environment of the bloodstream, the "quality" of the stream is not video resolution but data redundancy and error-correction encoding (e.g., low quality = 8-bit data, high quality = 32-bit data with heavy FEC). The data hub constantly monitors the packet loss rate (the performance factor) from each nanobot and instructs them to increase or decrease the redundancy of their next transmission, ensuring a continuous and reliable stream of vital medical data.
  • Mermaid Diagram:
    graph LR
        subgraph Patient's Body
            A(Nanobot Cluster) -- Telemetry Packets --> B[Subcutaneous Hub];
        end
        subgraph Data Processing
            B -- Monitors Packet Loss Rate --> C{Performance Analyzer};
            C -- "High Loss?" --> D{Decision Logic};
            D -- Yes --> E[Instruct: Increase Redundancy];
            D -- No --> F[Instruct: Decrease Redundancy];
            E --> A;
            F --> A;
        end
        B --> G[External Medical Monitor];
    
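The hub's redundancy control loop can be sketched as follows; the three encoding profiles and the loss-rate thresholds are assumptions, not values from the disclosure.

```python
# Sketch of the hub-side loop: measure loss per nanobot, then step its
# encoding profile up or down. Profile names and thresholds are assumed.
PROFILES = ["8bit", "16bit", "32bit_fec"]  # least to most redundant

def next_profile(current: str, loss_rate: float,
                 raise_above: float = 0.05, lower_below: float = 0.01) -> str:
    """Return the encoding profile the nanobot should use next."""
    idx = PROFILES.index(current)
    if loss_rate > raise_above:
        idx = min(idx + 1, len(PROFILES) - 1)   # add redundancy
    elif loss_rate < lower_below:
        idx = max(idx - 1, 0)                   # reclaim bandwidth
    return PROFILES[idx]
```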

Derivative 2.2: Deep-Space Adaptive-Rate Communication

  • Enabling Description: The method is applied to communication between a Mars rover and a Deep Space Network (DSN) antenna on Earth. The "video stream" is high-resolution panoramic image data. The "streamlets" are data blocks corresponding to specific tiles of the panorama. The "quality" levels correspond to different image compression algorithms (e.g., low quality = high JPEG compression, high quality = lossless RAW). The DSN antenna monitors the signal-to-noise ratio (SNR) as its performance factor, which is affected by atmospheric interference and solar flares. Based on the real-time SNR, the DSN sends a command to the rover to adjust the compression level for the next tile to be transmitted. This ensures that in poor conditions, a usable (though lower quality) image is received, and in good conditions, bandwidth is maximized for scientific data. The round-trip time is minutes, so the decision logic is highly predictive, based on solar weather models.
  • Mermaid Diagram:
    stateDiagram-v2
        [*] --> Transmitting_Low_Quality
        Transmitting_Low_Quality --> Transmitting_High_Quality: SNR > Threshold_UP
        Transmitting_High_Quality --> Transmitting_Low_Quality: SNR < Threshold_DOWN
        Transmitting_High_Quality --> Transmitting_RAW: SNR > Threshold_MAX
        Transmitting_RAW --> Transmitting_High_Quality: SNR < Threshold_MAX_Return
    
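The state diagram above transcribes directly into code; the four SNR thresholds (in dB) are illustrative placeholders.

```python
# Direct transcription of the state diagram above. The dB values for
# the four thresholds are illustrative placeholders.
THRESH_UP, THRESH_DOWN = 10.0, 6.0
THRESH_MAX, THRESH_MAX_RETURN = 20.0, 16.0

def next_state(state: str, snr_db: float) -> str:
    """States: LOW (high compression), HIGH (light compression), RAW."""
    if state == "LOW" and snr_db > THRESH_UP:
        return "HIGH"
    if state == "HIGH":
        if snr_db < THRESH_DOWN:
            return "LOW"
        if snr_db > THRESH_MAX:
            return "RAW"
    if state == "RAW" and snr_db < THRESH_MAX_RETURN:
        return "HIGH"
    return state
```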

Derivative Set 3: Cross-Domain Application

Derivative 3.1: Aerospace - Adaptive Flight Control Data

  • Enabling Description: The system is adapted for transmitting flight control data between a remote pilot and a hypersonic drone. The "stream" is not video but a multi-channel feed of control surface telemetry (aileron position, engine thrust, etc.). The "quality" levels represent the update frequency and precision of the data (e.g., low quality = 10 Hz updates with 8-bit precision, high quality = 100 Hz updates with 32-bit precision). The ground control station monitors the round-trip time (RTT) and packet jitter of the control link. If RTT increases beyond a critical threshold, indicating a potential link failure, the system automatically "downshifts" to a lower update frequency but maintains control, preventing catastrophic failure. This ensures the drone remains controllable even with a severely degraded communication link.
  • Mermaid Diagram:
    flowchart TD
        A[Pilot Input] --> B{Ground Control};
        B -- Control Data Stream --> C(Hypersonic Drone);
        C -- Telemetry Feedback --> B;
        B -- Measures RTT/Jitter --> D{Link Quality Monitor};
        D -- Degraded? --> E{Decision Engine};
        E -- Yes --> F[Command: Downshift to 10Hz/8-bit];
        E -- No --> G[Command: Upshift to 100Hz/32-bit];
        F --> B;
        G --> B;
    
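A sketch of the ground station's profile selection, using the 10 Hz/8-bit and 100 Hz/32-bit profiles from the description; the RTT and jitter limits are assumed values.

```python
# Sketch of the ground station's link monitor. The two profiles come
# from the description; the RTT and jitter limits are assumptions.
def control_profile(rtt_ms: float, jitter_ms: float,
                    rtt_limit: float = 50.0,
                    jitter_limit: float = 10.0) -> tuple:
    """Return (update_hz, precision_bits) for the next command window."""
    if rtt_ms > rtt_limit or jitter_ms > jitter_limit:
        return (10, 8)     # degraded link: keep control, shed bandwidth
    return (100, 32)       # healthy link: full rate, full precision
```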

Derivative 3.2: AgTech - Precision Irrigation Control

  • Enabling Description: The invention is applied to a large-scale precision agriculture system. A central server manages thousands of smart irrigation valves. The "stream" is a set of control commands and sensor readings for a field sector. The "quality" relates to the granularity of control (e.g., low quality = uniform watering command for the whole sector, high quality = unique commands for each of the 100 valves in the sector based on soil moisture data). The central server monitors the response time and data integrity from the sensors in each sector over a low-power wide-area network (LPWAN). If a sector shows high packet loss, the server "downshifts" and sends a single, robust, low-bandwidth command for baseline watering. For sectors with strong connectivity, it sends high-bandwidth, differentiated commands for maximal water efficiency.
  • Mermaid Diagram:
    erDiagram
        SERVER ||--o{ SECTOR : manages
        SECTOR ||--|{ VALVE : contains
        SECTOR ||--|{ SENSOR : contains
        SERVER {
            string ServerID
        }
        SECTOR {
            string SectorID
            string ConnectivityStatus
        }
        VALVE {
            string ValveID
            string WaterFlow
        }
        SENSOR {
            string SensorID
            string MoistureLevel
        }
    
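The server's granularity decision can be sketched per sector; the loss threshold, the moisture target, and the command vocabulary are all assumptions.

```python
# Sketch of the per-sector granularity decision. Loss threshold,
# moisture target, and command strings are assumed for illustration.
def sector_commands(sector_loss: float, valve_moisture: dict,
                    loss_limit: float = 0.20, target: float = 0.30) -> dict:
    """High loss: one robust baseline command. Otherwise: per-valve commands."""
    if sector_loss > loss_limit:
        return {"*": "baseline_watering"}   # single low-bandwidth command
    return {valve_id: ("open" if moisture < target else "closed")
            for valve_id, moisture in valve_moisture.items()}
```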

Derivative 3.3: Consumer Electronics - Adaptive Haptic Feedback

  • Enabling Description: The method is used in a networked multiplayer virtual reality (VR) game to deliver haptic feedback. The "stream" is haptic data for a user's vest or gloves. The "quality" levels correspond to the complexity and resolution of the haptic effect (e.g., low quality = simple vibration, high quality = complex, location-specific force feedback). The user's VR system monitors the game server's network latency. If latency is high, the system requests low-quality haptic data to ensure the feedback, however simple, is synchronized with the on-screen action. If latency is low, it requests high-resolution haptic data for maximum immersion. This prevents the disorienting effect of delayed or out-of-sync haptic feedback.
  • Mermaid Diagram:
    sequenceDiagram
        autonumber
        participant GameServer
        participant VRSystem
        participant HapticVest
        GameServer->>VRSystem: Game State Update
        VRSystem->>GameServer: Measure Latency
        alt Low Latency
            VRSystem->>GameServer: Request High-Quality Haptics
            GameServer->>VRSystem: High-Res Haptic Data
        else High Latency
            VRSystem->>GameServer: Request Low-Quality Haptics
            GameServer->>VRSystem: Low-Res Haptic Data
        end
        VRSystem->>HapticVest: Render Haptic Effect
    
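The client-side branch in the sequence above reduces to a single comparison; the 40 ms cutoff is an assumed value.

```python
# Sketch of the VR system's request logic: synchronization beats
# fidelity. The 40 ms cutoff is an assumed value.
def haptic_request(latency_ms: float, cutoff_ms: float = 40.0) -> str:
    """Above the cutoff, ask for simple effects that stay in sync."""
    return "low_res_vibration" if latency_ms > cutoff_ms else "high_res_force"
```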

Derivative Set 4: Integration with Emerging Tech

Derivative 4.1: AI-Driven Predictive Quality Shifting

  • Enabling Description: The agent controller module is replaced with a trained neural network (specifically, a Long Short-Term Memory or LSTM model). Instead of reacting to past performance, the AI model predicts the likely network bandwidth for the next 5-10 seconds. It uses a wide range of inputs: historical performance on that network, time of day, the user's geolocation, and even the type of video content being watched (e.g., high-action scenes are more sensitive to buffering). Based on its prediction, it pre-emptively requests higher or lower-quality streamlets before the network conditions actually change. This results in a much smoother viewing experience with fewer noticeable quality shifts.
  • Mermaid Diagram:
    graph TD
        A[Historical Data] --> C{LSTM Model};
        B[Real-time Network Stats] --> C;
        D[Content Analysis] --> C;
        C -- Predicts Future Bandwidth --> E{Decision Engine};
        E --> F["Request Streamlet(t+n)"];
        F --> G[Web Server];
    
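Training and running an LSTM is out of scope for a sketch, so a trivial exponentially weighted estimator stands in for the trained model below; the point is only the pre-emptive selection step. The bitrate ladder and smoothing factor are assumptions.

```python
# Sketch of predictive selection. An exponentially weighted estimator
# stands in for the trained LSTM; ladder values and alpha are assumed.
LADDER = [(400_000, "240p"), (1_500_000, "480p"),
          (4_000_000, "720p"), (8_000_000, "1080p")]  # (min bps, rendition)

def predict_bandwidth(samples_bps: list, alpha: float = 0.6) -> float:
    """Stand-in for model inference: smooth recent throughput samples."""
    estimate = samples_bps[0]
    for sample in samples_bps[1:]:
        estimate = alpha * sample + (1 - alpha) * estimate
    return estimate

def pick_rendition(predicted_bps: float) -> str:
    """Highest rendition whose floor fits under the predicted bandwidth."""
    choice = LADDER[0][1]
    for min_bps, name in LADDER:
        if predicted_bps >= min_bps:
            choice = name
    return choice
```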

Derivative 4.2: IoT Sensor Network for Performance Factoring

  • Enabling Description: The client device no longer relies solely on its own measurements. It subscribes to a real-time data feed from a mesh network of IoT sensors in its environment (e.g., public Wi-Fi access points, cellular micro-cells, other nearby streaming devices). These sensors provide a composite, hyperlocal view of network congestion and RF interference. The agent controller's performance factor is a weighted average of its own measurements and the data from the surrounding IoT network. This allows the device to distinguish between a problem with its own connection and a wider network outage, making more intelligent shifting decisions.
  • Mermaid Diagram:
    classDiagram
        class AgentController {
            +calculatePerformanceFactor()
            +requestStreamlet()
        }
        class LocalMonitor {
            +measureThroughput()
        }
        class IoT_Data_Feed {
            +getNeighborhoodCongestion()
        }
        AgentController --> LocalMonitor : uses
        AgentController --> IoT_Data_Feed : uses
    
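The weighted blend and the local-versus-area diagnosis can be sketched as follows; the 70/30 weighting, the 0-to-1 score scale, and the diagnostic cutoffs are assumptions.

```python
# Sketch of the composite performance factor and the outage diagnosis.
# Scores are normalized 0..1 (higher = healthier); weights and cutoffs
# are assumed for illustration.
def composite_factor(local_score: float, mesh_score: float,
                     local_weight: float = 0.7) -> float:
    """Blend the device's own measurement with the IoT mesh's area view."""
    return local_weight * local_score + (1 - local_weight) * mesh_score

def diagnose(local_score: float, mesh_score: float) -> str:
    """Distinguish a local fault from a wider outage."""
    if local_score < 0.3 and mesh_score >= 0.7:
        return "local_problem"   # only this device is struggling
    if mesh_score < 0.3:
        return "area_outage"     # everyone nearby is struggling
    return "normal"
```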

Derivative 4.3: Blockchain-Verified Streamlets

  • Enabling Description: This variation is for secure, premium content delivery where authenticity is critical (e.g., legal depositions, pay-per-view events). Each streamlet, upon creation by the content server, has its hash registered on a public blockchain. When the client's network controller module receives a streamlet, it immediately calculates the hash and verifies it against the blockchain record. If the hash does not match, the streamlet is discarded as potentially tampered with, and a request is sent to a different web server. This ensures end-to-end integrity of the content, and the quality-shifting decisions are now based on a combined performance factor of both network speed and successful blockchain verification rate.
  • Mermaid Diagram:
    sequenceDiagram
        ContentServer->>Blockchain: Register Hash(Streamlet_N)
        Client->>WebServer: Request Streamlet_N
        WebServer->>Client: Deliver Streamlet_N
        Client->>Client: Calculate Hash(Received_Streamlet_N)
        Client->>Blockchain: Verify Hash
        alt Hash Matches
            Client->>Player: Stage Streamlet for Playback
        else Hash Mismatches
            Client->>OtherWebServer: Request Streamlet_N
        end
    
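The verification path can be sketched with a plain dictionary standing in for the ledger; in the described system, `register` and `verify` would query a blockchain node rather than local state.

```python
# Sketch of the client-side integrity check. A dict stands in for the
# blockchain; in the described system these would be ledger queries.
import hashlib

def register(ledger: dict, segment_id: str, payload: bytes) -> None:
    """Server side: record the streamlet's hash at creation time."""
    ledger[segment_id] = hashlib.sha256(payload).hexdigest()

def verify(ledger: dict, segment_id: str, payload: bytes) -> bool:
    """Client side: recompute the hash and compare to the recorded one."""
    return ledger.get(segment_id) == hashlib.sha256(payload).hexdigest()

def handle_streamlet(ledger: dict, segment_id: str, payload: bytes) -> str:
    """Stage on a hash match; on mismatch, discard and refetch elsewhere."""
    if verify(ledger, segment_id, payload):
        return "stage_for_playback"
    return "discard_and_refetch"
```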

Derivative Set 5: The "Inverse" or Failure Mode

Derivative 5.1: Graceful Degradation to Audio-Only Mode

  • Enabling Description: A "downshift floor" is defined. If the performance factor indicates that even the lowest-quality video streamlet cannot be delivered in time, the system does not stall in a rebuffering loop. Instead, it enters a failure mode in which it stops requesting video streamlets entirely and requests a separate, audio-only streamlet. The screen displays a static "Network Connection Unstable" image while the audio continues uninterrupted. When the performance factor recovers above the floor threshold for a sustained period, the system resumes requesting the lowest-quality video streamlets and seamlessly transitions back to video playback. This prioritizes continuity of the audio experience over a frustrating, constantly rebuffering video feed.
  • Mermaid Diagram:
    stateDiagram-v2
        state "Video Playback" as VP
        state "Audio Only" as AO
    
        [*] --> VP
        VP --> AO : Performance < Floor_Threshold
        AO --> VP : Performance > Recovery_Threshold
    
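The two-state machine with the "sustained period" rule can be sketched as hysteresis with a dwell counter; the floor, the recovery threshold, and the three-sample dwell are assumed values.

```python
# Sketch of the fallback state machine with a dwell counter enforcing
# the "sustained recovery" rule. All three parameters are assumed.
class FallbackController:
    def __init__(self, floor: float = 0.2, recovery: float = 0.5,
                 dwell: int = 3):
        self.floor, self.recovery, self.dwell = floor, recovery, dwell
        self.mode = "video"
        self._good_samples = 0

    def update(self, performance: float) -> str:
        if self.mode == "video":
            if performance < self.floor:
                self.mode, self._good_samples = "audio_only", 0
        else:
            # Require consecutive good samples before resuming video.
            if performance > self.recovery:
                self._good_samples += 1
            else:
                self._good_samples = 0
            if self._good_samples >= self.dwell:
                self.mode = "video"
        return self.mode
```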

Combination Prior Art Scenarios with Open-Source Standards

  1. Combination with MPEG-DASH (Dynamic Adaptive Streaming over HTTP): The method of US 8,868,772 is combined with the MPEG-DASH standard. The "streamlets" are defined as MPEG-DASH segments, and the different quality streams are defined as different "Representations" within the Media Presentation Description (MPD) file. The client's agent controller module is a standard DASH client which parses the MPD and uses the described performance monitoring logic to select the appropriate Representation for the next segment request. This combines the patent's core logic with a universally adopted public standard, rendering the specific combination obvious.

  2. Combination with WebRTC (Web Real-Time Communication): The adaptive streaming logic is applied not to pre-encoded files on a web server, but to a live peer-to-peer video stream using WebRTC. One peer acts as the "server," encoding the video feed at multiple resolutions in real-time. The receiving peer monitors the WebRTC DataChannel statistics (latency, packet loss) as its performance factor. Based on these stats, it sends a message back to the transmitting peer requesting it to switch to a higher or lower quality encoding for the next set of frames. This applies the patent's client-driven adaptive logic to the open standard for real-time, browser-to-browser communication.

  3. Combination with HTTP/2 or HTTP/3 (QUIC): The patent's concept of "virtual pipelining" using multiple TCP connections to overcome head-of-line blocking is combined with the native multiplexing capabilities of the HTTP/2 and HTTP/3 standards. Instead of opening multiple TCP connections, the client opens a single HTTP/2 or QUIC connection and requests multiple streamlets simultaneously over different "streams" within that single connection. The agent controller monitors the completion time of each stream to calculate its performance factor. This achieves the same goal as virtual pipelining but uses a standardized, more efficient protocol, making the combination an obvious evolution for a person skilled in the art seeking to optimize performance.
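On the client side, Combination 1 reduces to selecting a Representation from the parsed MPD. A minimal sketch, with assumed `@bandwidth` values and an assumed 80% safety margin (the real attribute parsing from an MPD is omitted):

```python
# Sketch of DASH Representation selection against a parsed MPD.
# The (id, bandwidth) pairs mimic @bandwidth attributes; values and
# the 0.8 safety margin are assumed for illustration.
REPRESENTATIONS = [
    ("rep_240p", 400_000),
    ("rep_480p", 1_500_000),
    ("rep_720p", 4_000_000),
]

def select_representation(measured_bps: float, safety: float = 0.8) -> str:
    """Pick the highest Representation fitting within a safety margin."""
    budget = measured_bps * safety
    chosen = REPRESENTATIONS[0][0]   # never go below the lowest rung
    for rep_id, bandwidth in REPRESENTATIONS:
        if bandwidth <= budget:
            chosen = rep_id
    return chosen
```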
