Patent 9112934
Derivative works
Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.
Defensive Disclosure and Prior Art Derivations for U.S. Patent 9,112,934
Publication Date: May 7, 2026
Subject: Derivative Implementations and Obvious Variations of an Apparatus and Method for Configuring On-Demand Content Delivering Overlay Networks.
Reference Patent: U.S. Patent 9,112,934 B2 ("the '934 patent")
This document details a series of derivative works, extensions, and combinations related to the core teachings of the '934 patent. The intent is to place these concepts into the public domain, thereby establishing them as prior art for any subsequent patent applications. The descriptions are enabling for a Person Having Ordinary Skill in the Art (PHOSITA).
Core Claim Reference (Independent Claims 1 and 7 of the '934 Patent)
The central mechanism involves: receiving a network configuration request and information from a service provider; managing available network resource information; and setting a plurality of network nodes to configure a specific transfer path for content delivery in a temporary content delivery overlay network.
1. Material & Component Substitution Derivatives
Derivative 1.1: FPGA-Based Network Node Configuration
- Enabling Description: This variation replaces the general-purpose CPU-based network nodes described implicitly in the '934 patent with nodes built on Field-Programmable Gate Arrays (FPGAs). The `controller` of the management apparatus, upon receiving a network configuration request, generates and transmits a specific FPGA bitstream instead of just "node setting information." This bitstream reconfigures the hardware logic of the network nodes to create a highly optimized data plane for the requested content type (e.g., live-streamed 8K video vs. VOD file distribution). The bitstream defines packet forwarding rules, multicast logic, and content caching policies directly in hardware, reducing latency below what is achievable with a software-based approach on a general-purpose processor. The `resource manager` would track not only CPU/memory but also available logic blocks and I/O pins on the FPGAs across the network.
- Mermaid Diagram:

```mermaid
sequenceDiagram
    participant SP as Service Provider
    participant MgmtApp as Management Apparatus
    participant FPGA_Node as FPGA-based CDN Node
    SP->>MgmtApp: Network Config Request (Codec=AV1, Bitrate=50Mbps)
    MgmtApp->>MgmtApp: Analyze Request & Select FPGA Nodes
    MgmtApp->>MgmtApp: Generate AV1-Optimized Bitstream
    MgmtApp->>FPGA_Node: Transmit Bitstream & Node Settings
    FPGA_Node->>FPGA_Node: Reconfigure Hardware Logic
    FPGA_Node-->>MgmtApp: Configuration Acknowledged
    MgmtApp-->>SP: Overlay Network Ready (Path: A->B->C)
```
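The resource-selection step above can be sketched in a few lines of Python. This is an illustrative model only: the node inventory, the `select_fpga_node`/`build_node_settings` names, and the bitstream filenames are assumptions for this disclosure, not details from the '934 patent.

```python
# Hypothetical sketch: a resource manager tracking free FPGA logic blocks
# and I/O pins, and a controller pairing a codec-optimized bitstream with
# a node that can host it. All names and numbers are illustrative.

BITSTREAMS = {"AV1": "av1_pipeline.bit", "H.264": "h264_pipeline.bit"}

NODES = {
    "node-a": {"free_logic_blocks": 120_000, "free_io_pins": 48},
    "node-b": {"free_logic_blocks": 30_000, "free_io_pins": 12},
}

def select_fpga_node(required_blocks, required_pins, nodes=NODES):
    """Return the first node with enough free FPGA resources, else None."""
    for name, res in nodes.items():
        if (res["free_logic_blocks"] >= required_blocks
                and res["free_io_pins"] >= required_pins):
            return name
    return None

def build_node_settings(codec, required_blocks=50_000, required_pins=16):
    """Pair a codec-specific bitstream with a node that can host it."""
    node = select_fpga_node(required_blocks, required_pins)
    if node is None:
        raise RuntimeError("no FPGA node with sufficient resources")
    return {"node": node, "bitstream": BITSTREAMS[codec]}
```

In a real deployment the returned settings would be transmitted to the node alongside the bitstream itself, as in the sequence diagram above.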
Derivative 1.2: Software-Defined Networking (SDN) with P4-Programmable Switches
- Enabling Description: This derivative substitutes the overlay network nodes with P4-programmable network switches. The `controller` acts as a central SDN controller. When a service provider sends a configuration request, the controller translates the requirements into a P4 program that defines custom packet processing pipelines for the provider's specific traffic. The program is then compiled and deployed to the data plane of the switches along the chosen transfer path. This allows fine-grained, per-provider traffic engineering, such as custom load balancing, in-network caching decisions, and real-time analytics, all performed at line rate. The `resource manager` monitors the available table entries and processing stages in the P4 switches.
- Mermaid Diagram:

```mermaid
graph TD
    A[Service Provider Request] --> B{"SDN Controller ('934 Controller)"}
    B --> C{Translate Request to P4 Program}
    C --> D[Deploy P4 Program to Switches]
    D --> E(P4 Switch 1)
    D --> F(P4 Switch 2)
    D --> G(P4 Switch 3)
    subgraph Configured Transfer Path
        E --> F
        F --> G
    end
    B --> H{Resource Manager}
    H --> I[Monitor P4 Switch Resources]
```
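The request-to-data-plane translation can be approximated as follows. A real deployment would compile an actual P4 program and install entries through a vendor toolchain; this sketch stands in for that step with plain match/action dictionaries, and every name in it is hypothetical.

```python
# Illustrative sketch only: turn a provider request into one forwarding
# table entry per switch on the configured transfer path. Each entry
# matches the provider's traffic class and forwards to the next hop.

def build_table_entries(provider_id, path, next_hops):
    """path: ordered switch IDs; next_hops: egress port per switch.
    Returns switch -> match/action entry for the transfer path."""
    entries = {}
    for switch, port in zip(path, next_hops):
        entries[switch] = {
            "match": {"provider_id": provider_id},
            "action": "forward",
            "port": port,
        }
    return entries
```

The `resource manager` would reject a path whose switches lack free table entries before these are installed.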
Derivative 1.3: Neuromorphic Processing Units for Predictive Caching
- Enabling Description: In this variation, the network nodes are augmented with neuromorphic processing units (NPUs) designed for low-power pattern matching. The `controller` configures the overlay network and also deploys a pre-trained machine learning model to the NPUs in each node. This model is trained to predict user content requests from real-time network traffic patterns and metadata. The NPU lets each node perform predictive pre-fetching and caching of content from upstream nodes with very low latency and power consumption, anticipating user demand before it occurs. The `network configuration information` from the service provider would include hints or a specific model to use for their user base.
- Mermaid Diagram:

```mermaid
classDiagram
    class NetworkNode {
        +GeneralPurposeCPU cpu
        +NeuromorphicPU npu
        +Storage cache
        +predictiveCache(metadata)
    }
    class Controller {
        +configureNetwork(request)
        +deployModel(node, model)
    }
    class ResourceMgr {
        +getAvailableNPUNodes()
    }
    Controller "1" -- "N" NetworkNode : Deploys Model to
    Controller "1" -- "1" ResourceMgr : Queries
```
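The pre-fetch decision that the NPU model feeds can be sketched as a simple policy. This is a hedged illustration: the scoring model is assumed to be supplied by the `controller`, and `plan_prefetch`, the threshold, and the score values are invented for this example.

```python
# Hypothetical sketch of a node's predictive pre-fetch step: given
# per-content request probabilities from the deployed model, pin the
# most likely items into the local cache, up to its capacity.

def plan_prefetch(scores, capacity, threshold=0.5):
    """scores: content ID -> predicted request probability (0.0-1.0).
    Returns content IDs to pre-fetch, highest probability first."""
    likely = [cid for cid, p in sorted(scores.items(),
                                       key=lambda kv: kv[1], reverse=True)
              if p >= threshold]
    return likely[:capacity]
```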
2. Operational Parameter Expansion Derivatives
Derivative 2.1: Nanoscale Network-on-Chip (NoC) Configuration
- Enabling Description: This derivative applies the '934 patent's method to configure overlay networks on a Network-on-Chip (NoC) within a many-core processor or System-on-Chip (SoC). The "service providers" are individual software processes or virtual machines running on the chip. The `management apparatus` is a hardware scheduler or hypervisor. When a process requests a high-bandwidth, low-latency communication path to other cores (e.g., for a distributed machine learning task), the scheduler acts as the `controller`. It analyzes the resource information (available router buffers, link bandwidth on the NoC) and configures a dedicated virtual channel or transfer path across the on-chip routers (the `network nodes`) for the duration of the task.
- Mermaid Diagram:

```mermaid
graph LR
    subgraph SoC
        A(Core 1 - Process A) --> R1(Router)
        B(Core 8 - Process B) --> R2(Router)
        C(Core 64 - Process C) --> R3(Router)
        R1 --> R2
        R2 --> R3
    end
    subgraph Management
        Scheduler(Hypervisor/Scheduler)
        RM(Resource Monitor) -- Reports to --> Scheduler
    end
    Scheduler -- Configures --> R1
    Scheduler -- Configures --> R2
    Scheduler -- Configures --> R3
    ProcA_Req(Process A Request) --> Scheduler
```
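The scheduler's path setup reduces to an all-or-nothing resource reservation. The sketch below is an assumption-laden model (router IDs, `reserve_path`, and virtual-channel counts are invented), mirroring the controller/resource-manager split described above.

```python
# Hypothetical sketch of on-chip path setup: reserve one virtual channel
# per router along the route, but only if every router on the route
# still has a free channel (all-or-nothing admission).

def reserve_path(route, free_vcs, needed=1):
    """route: ordered router IDs; free_vcs: router -> free virtual channels.
    Mutates free_vcs and returns True only if the whole path fits."""
    if any(free_vcs[r] < needed for r in route):
        return False  # reject: some router is out of channels
    for r in route:
        free_vcs[r] -= needed
    return True
```

Releasing the path at task completion would increment the counts again, matching the "temporary" overlay lifetime of the '934 method.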
Derivative 2.2: Interplanetary Delay-Tolerant Network (DTN) Configuration
- Enabling Description: This variation adapts the invention for configuring overlay paths in a deep-space network characterized by extreme latency and intermittent connectivity. The `network nodes` are satellites, planetary rovers, and ground stations. A `service provider` (e.g., a space mission control) submits a `network configuration request` to pre-schedule a store-and-forward data path for a large scientific data bundle. The `controller`, aware of orbital mechanics and predicted communication windows, uses the Bundle Protocol (RFC 5050) to set up a transfer path. It instructs each node on the path when to expect a bundle, where to store it temporarily, and which downstream node to forward it to during the next available contact window. The "predetermined time period" for the overlay network could be weeks or months.
- Mermaid Diagram:

```mermaid
gantt
    title Interplanetary Data Bundle Transfer Path
    dateFormat YYYY-MM-DD
    axisFormat %m-%d
    section Mars Rover -> Orbiter
    Data Capture & Store : done, r1, 2026-05-07, 2d
    Transfer Window 1 : r1_tx, 2026-05-09, 6h
    section Mars Orbiter -> Deep Space Relay
    Store & Await Relay : done, o1, 2026-05-09, 5d
    Transfer Window 2 : o1_tx, 2026-05-14, 8h
    section Deep Space Relay -> Earth Ground Station
    Store & Await Earth : done, dsn1, 2026-05-14, 10d
    Transfer to Earth : dsn1_tx, 2026-05-24, 12h
```
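The store-and-forward scheduling above can be modeled as choosing, at each hop, the first contact window that opens at or after the bundle's arrival. This is a sketch, not flight software: the window data and the `schedule_bundle` name are invented, and times are simplified to hours from mission start.

```python
# Hypothetical sketch of DTN transfer-path scheduling: the controller
# pre-computes, per hop, sorted (open_time, duration) contact windows,
# then walks the path picking the first usable window at each node.

def schedule_bundle(windows, ready_at=0.0):
    """windows: list (one entry per hop) of sorted (open, duration) tuples.
    Returns the chosen departure time at each hop, or None if the bundle
    would be stranded at some node with no remaining window."""
    t = ready_at
    plan = []
    for hop_windows in windows:
        usable = [(o, d) for o, d in hop_windows if o >= t]
        if not usable:
            return None  # stranded: no contact window after arrival
        open_t, dur = usable[0]
        plan.append(open_t)
        t = open_t + dur  # bundle reaches the next node when window closes
    return plan
```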
3. Cross-Domain Application Derivatives
Derivative 3.1: Aerospace - Dynamic Air Traffic Control Corridors
- Enabling Description: The '934 method is applied to dynamically configure safe and efficient flight corridors for unmanned aerial vehicles (UAVs). The `service provider` is a commercial drone delivery service, which submits a `network configuration request` for a fleet of drones, specifying payload, destination, and required ETA. The `management apparatus` is a centralized air traffic control system. It analyzes available airspace (resource information), weather data, and potential conflicts. The `controller` then configures a 4D (3D space + time) `transfer path` by setting waypoints and communication protocols (the `network nodes` are ground-based beacons and satellite links) for the drone fleet, creating a temporary, reserved "overlay" flight corridor that is deleted after the mission is complete.
- Mermaid Diagram:

```mermaid
flowchart TD
    UAV_Fleet[UAV Delivery Request] --> ATC{ATC Management System}
    ATC -- Analyzes --> AirspaceDB[(Airspace & Weather DB)]
    ATC -- Configures --> Corridor(Temporary 4D Corridor)
    Corridor --> N1(Beacon 1)
    Corridor --> N2(SatCom Link)
    Corridor --> N3(Beacon 2)
    N1 --> N2 --> N3
    ATC -- Deletes when complete --> Corridor
```
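The reserve/conflict-check/release lifecycle of a 4D corridor can be sketched with a minimal data model: a corridor as a set of (x, y, z, time-slot) cells. The `CorridorManager` class and the cell granularity are assumptions for illustration, not part of the '934 disclosure.

```python
# Hypothetical sketch of the ATC system's corridor bookkeeping: grant a
# 4D corridor only if none of its space-time cells is already reserved,
# and release all of its cells when the mission completes.

class CorridorManager:
    def __init__(self):
        self.reserved = {}  # (x, y, z, time_slot) -> corridor_id

    def request(self, corridor_id, cells):
        """Reserve every cell, or none of them on any conflict."""
        if any(c in self.reserved for c in cells):
            return False  # conflicts with an existing corridor
        for c in cells:
            self.reserved[c] = corridor_id
        return True

    def release(self, corridor_id):
        """Delete the temporary corridor after the mission."""
        self.reserved = {c: cid for c, cid in self.reserved.items()
                         if cid != corridor_id}
```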
Derivative 3.2: AgTech - Precision Irrigation Scheduling
- Enabling Description: This derivative applies the concept to water distribution in a large-scale smart farm. The `service provider` is a farm management software platform, which submits a `network configuration request` based on soil moisture data and crop growth models. The `management apparatus` is the farm's central irrigation controller. The `network nodes` are smart valves and pumps in the water pipeline network. The `controller` analyzes water availability (resource information) and configures a `transfer path` by opening and closing specific valves in sequence to deliver a precise amount of water to a specific field for a set duration. This creates a temporary "overlay network" for water flow on top of the physical pipe infrastructure.
- Mermaid Diagram:

```mermaid
stateDiagram-v2
    [*] --> Idle
    Idle --> Configuring: Irrigation Request Received
    Configuring --> Pumping: Path (Valves V2, V5, V9) set
    Pumping --> Idle: Timer Elapsed / Quota Met
    state Pumping {
        V2_Open --> V5_Open
        V5_Open --> V9_Open
        V9_Open --> [*]
    }
```
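The valve-sequencing logic of the state machine above can be sketched under assumed units (liters, liters per minute). Actuator calls are stubbed with a log list instead of real hardware I/O; the function and valve names are illustrative.

```python
# Hypothetical sketch of one irrigation cycle: open the valves along the
# configured transfer path in order, pump until the water quota is met,
# then close the valves in reverse order to tear down the overlay.

def run_irrigation(valve_path, quota_liters, flow_lpm, log=None):
    """Returns (minutes pumped, action log). flow_lpm: liters/minute."""
    log = [] if log is None else log
    for v in valve_path:
        log.append(f"open {v}")
    minutes = quota_liters / flow_lpm  # run time to deliver the quota
    log.append(f"pump {minutes:g} min")
    for v in reversed(valve_path):
        log.append(f"close {v}")
    return minutes, log
```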
4. Integration with Emerging Tech Derivatives
Derivative 4.1: AI-Driven Self-Optimizing Overlay Networks
- Enabling Description: This variation integrates a Reinforcement Learning (RL) agent into the `controller`. The `controller` initially sets up a transfer path based on the provider's request. It then continuously monitors real-time network telemetry (latency, packet loss, jitter) from IoT sensors embedded in the network nodes. The RL agent's state is the current network topology and performance metrics, its action space is the set of possible alternative nodes or paths, and its reward function is based on maintaining the Quality of Service (QoS) specified in the original request. If performance degrades, the RL agent autonomously re-configures parts of the transfer path in real time to optimize content delivery, without human intervention.
- Mermaid Diagram:

```mermaid
graph TD
    subgraph RL_Loop
        A[Observe Network State] --> B{RL Agent}
        B -- "Action: Re-route Path" --> C[Configure New Node]
        C --> D[Observe New State]
        D -- "Reward Signal" --> B
    end
    E(Service Provider Request) --> F{Controller}
    F -- "Initial Config" --> G(Overlay Network)
    G -- Telemetry --> A
    B -- Updates --> G
```
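A toy version of this loop is a tabular epsilon-greedy bandit over candidate paths, with reward 1 when measured latency meets the requested QoS and 0 otherwise. This is a deliberately simplified sketch, not the RL formulation a production controller would use; all names and the learning rate are assumptions.

```python
# Toy sketch of the self-optimizing re-route loop: an epsilon-greedy
# agent keeps one Q-value per candidate path and updates it from a
# binary QoS reward derived from live latency telemetry.

import random

def choose_path(q_values, epsilon, rng):
    """Pick a path: explore with probability epsilon, else exploit."""
    if rng.random() < epsilon:
        return rng.choice(list(q_values))   # explore a random path
    return max(q_values, key=q_values.get)  # exploit the best-known path

def update(q_values, path, latency_ms, qos_ms, alpha=0.5):
    """Reward 1.0 if the measured latency met the requested QoS."""
    reward = 1.0 if latency_ms <= qos_ms else 0.0
    q_values[path] += alpha * (reward - q_values[path])
    return reward
```

Run over live telemetry, the Q-values drift toward whichever path keeps meeting the SLA, so exploitation converges on the QoS-satisfying route without operator intervention.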
Derivative 4.2: Blockchain for CDN Service Auditing and Billing
- Enabling Description: The '934 system is integrated with a private blockchain for transparent auditing and settlement. When the `controller` configures an overlay network, it writes the terms of the service (provider ID, duration, bandwidth, cost) to a smart contract on the blockchain. Each `network node` on the transfer path records proof-of-delivery metrics (e.g., data volume transferred, uptime) to the same blockchain ledger. When the service duration ends, the smart contract automatically executes, calculating the final bill from the immutable node-reported metrics and transferring payment from the service provider's wallet. This eliminates billing disputes and provides a verifiable audit trail of the CDN service delivery.
- Mermaid Diagram:

```mermaid
sequenceDiagram
    autonumber
    participant SP as Service Provider
    participant Ctrl as Controller
    participant BC as Blockchain
    participant Nodes as CDN Nodes
    participant Owner as Network Owner
    SP->>Ctrl: Network Config Request
    Ctrl->>BC: Deploy Smart Contract (SLA)
    Ctrl->>Nodes: Configure Transfer Path
    Nodes->>BC: Log Proof-of-Delivery Metrics
    Note right of Nodes: (Tx Volume, Uptime)
    Ctrl->>BC: Signal Service End
    BC->>BC: Smart Contract Executes Billing
    BC->>SP: Debit Payment
    BC->>Owner: Credit Payment
```
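The settlement rule the smart contract would encode can be simulated in plain Python; no blockchain client is involved. The per-GB pricing, the pro-rata uptime credit, and all field names are illustrative assumptions, not terms from the '934 patent.

```python
# Plain-Python simulation of the smart contract's billing step: charge
# per GB transferred from the immutable node reports, and apply a
# pro-rata credit if the worst-reporting node missed the uptime SLA.

def settle(sla, node_reports):
    """sla: {'price_per_gb', 'uptime_sla_pct'};
    node_reports: per-node {'gb_transferred', 'uptime_pct'}."""
    gb = sum(r["gb_transferred"] for r in node_reports)
    uptime = min(r["uptime_pct"] for r in node_reports)
    bill = gb * sla["price_per_gb"]
    if uptime < sla["uptime_sla_pct"]:
        bill *= uptime / sla["uptime_sla_pct"]  # pro-rata SLA credit
    return round(bill, 2)
```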
5. The "Inverse" or Failure Mode Derivatives
Derivative 5.1: Graceful Degradation Overlay Path
- Enabling Description: This derivative describes a "low-power" or "best-effort" configuration mode. A service provider can issue a `network configuration request` with a "low-priority" flag. The `resource manager` prioritizes nodes that are underutilized or have lower performance characteristics (e.g., lower CPU clock speeds, slower storage). The `controller` then configures a transfer path using these non-premium resources. The resulting overlay network provides a functional but lower-QoS content delivery service at significantly reduced cost. The system is designed to fail safely: if a low-tier node fails, the controller automatically attempts to re-route to another available low-tier node, accepting a temporary service interruption as a trade-off for cost, rather than failing over to expensive high-tier resources.
- Mermaid Diagram:

```mermaid
stateDiagram-v2
    state "High QoS Path" as High
    state "Degraded QoS Path" as Low
    state "Service Interrupted" as Fail
    [*] --> High : Premium Request
    High --> Low : High-Tier Node Failure
    Low --> High : High-Tier Node Recovers
    [*] --> Low : Low-Priority Request
    Low --> Fail : No Low-Tier Nodes Available
    Fail --> Low : Low-Tier Node Becomes Available
```
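The fail-safe re-route policy is the key behavior: a low-priority path is only ever rebuilt from low-tier nodes, and the controller returns "interrupted" rather than escalating to premium resources. The tier names and node inventory below are assumptions for illustration.

```python
# Hypothetical sketch of the fail-safe re-route: replace a failed
# low-tier node with a spare low-tier node, or signal an accepted
# service interruption (None) instead of using high-tier resources.

def reroute(failed_node, current_path, inventory):
    """inventory: node -> tier ('low' or 'high').
    Returns the repaired path, or None for Service Interrupted."""
    spares = [n for n, tier in inventory.items()
              if tier == "low" and n not in current_path]
    if not spares:
        return None  # no low-tier spare: accept the interruption
    return [spares[0] if n == failed_node else n for n in current_path]
```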
Combination Prior Art Scenarios
- Combination with HTTP/3 and QUIC: The method of the '934 patent is combined with the QUIC protocol (an open IETF standard). The `network configuration information` from the service provider includes specific QUIC parameters, such as desired stream multiplexing priorities and congestion control algorithms (e.g., BBR, CUBIC). The `controller` configures the `network nodes` not just as IP-level forwarders but as QUIC-aware proxies. These nodes can terminate and re-originate QUIC connections, allowing them to make intelligent routing decisions based on application-layer metrics (like stream-specific packet loss) that become visible once the connection is terminated at the proxy, enabling more robust delivery than a simple IP-based overlay.
- Combination with Prometheus and Grafana (Open-Source Monitoring): The `resource manager` in the '934 patent is implemented by deploying Prometheus Node Exporter agents on every potential `network node`. The resource manager scrapes metrics from these agents into a centralized Prometheus time-series database. When a `network configuration request` is received, the `controller` queries this database using PromQL to find nodes that meet the specified resource requirements (e.g., `avg by (instance) (rate(node_cpu_seconds_total{mode!="idle"}[5m])) < 0.5` to select nodes averaging under 50% CPU utilization over the last five minutes). The configured overlay network's performance is then monitored via pre-built Grafana dashboards, made accessible to the service provider for the duration of their service, leveraging a full open-source stack for resource management and reporting.
- Combination with IPFS (InterPlanetary File System): The content delivery overlay network is configured to act as a private, managed swarm within the public InterPlanetary File System. The `network configuration request` specifies a set of content identifiers (CIDs) for the content to be delivered. The `controller` selects a plurality of `network nodes` and instructs them to join a private IPFS swarm and pin the specified CIDs. The configured `transfer path` is not a linear pipe but a managed mesh of IPFS nodes that can efficiently serve the content to users (who are also IPFS clients) via the protocol's content-addressed, peer-to-peer mechanisms. The overlay network is "deleted" by instructing the nodes to unpin the content and leave the private swarm.
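The Prometheus-backed node-selection step can be sketched without a live server. Instead of calling the Prometheus HTTP API (`GET /api/v1/query`), this sketch evaluates the same "CPU below threshold" predicate against an already-scraped snapshot; the node names and metric values are invented, and the helper names are illustrative.

```python
# Hypothetical sketch of the controller's node selection for the
# Prometheus combination: filter candidate nodes locally, plus a helper
# that builds the equivalent PromQL the controller would send.

def eligible_nodes(snapshot, cpu_threshold=0.5):
    """snapshot: node -> 5-minute average CPU utilization (0.0-1.0).
    Returns nodes the controller may add to the overlay path, sorted."""
    return sorted(n for n, cpu in snapshot.items() if cpu < cpu_threshold)

def promql_for(cpu_threshold=0.5):
    """The PromQL expression equivalent to the filter above."""
    return (f'avg by (instance) '
            f'(rate(node_cpu_seconds_total{{mode!="idle"}}[5m])) '
            f'< {cpu_threshold}')
```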