Defensive Disclosure and Prior Art Generation for U.S. Patent 9,313,101
Publication Date: April 26, 2026
Reference ID: DDPUB-2026-0426-ETRI1
Title: System and Method for Distributed, Time-Variant Policy Enforcement Across Heterogeneous Architectures and Emergent Technologies
This document discloses a series of derivative works, extensions, and combinations related to the core teachings of U.S. Patent 9,313,101 ("Method of controlling traffic by time-based policy"). The purpose of this disclosure is to place these extensions into the public domain, thereby establishing them as prior art for any future patent applications in this domain.
Derivatives Based on Independent Claim 1: Server-Side Time Determination
Claim 1 describes a Policy Server (PS) determining the execution time point of a policy and transmitting it to a Policy Management System (PMS) only when that time arrives. The following are derivative implementations.
1. Material & Component Substitution: Hardware-Accelerated Policy Server
- Enabling Description: The Policy Server (PS) is implemented not as a general-purpose CPU running software, but as a dedicated network appliance built around a Field-Programmable Gate Array (FPGA) or Application-Specific Integrated Circuit (ASIC). The time-determination logic ("whether an execution time point of the policy arrives") is synthesized into hardware logic blocks, providing deterministic, nanosecond-level precision for time checks, which is critical for applications such as high-frequency trading or industrial control systems. The PS uses a temperature-compensated crystal oscillator (TCXO) or an oven-controlled crystal oscillator (OCXO) as its time source for high stability, and synchronizes via a hardware implementation of the Precision Time Protocol (PTP, IEEE 1588). Upon a successful time check in the hardware logic, the policy is passed to a network interface controller (NIC) for transmission to the PMS.
- Mermaid Diagram:
```mermaid
graph TD
    subgraph PS["FPGA-based Policy Server (PS)"]
        A[PTP Hardware Clock] --> B{Time-Check Logic Block}
        C[Policy Storage on BRAM] --> B
        B -- Time Match True --> D[DMA Engine]
        D --> E[Integrated NIC]
    end
    E -- Policy Transmission --> F["Policy Management System (PMS)"]
```
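As a software counterpart to the diagram, the following is a minimal behavioral model in Python of the time-check block, of the kind one might use as a golden reference when verifying the synthesized logic. The `Policy` fields and the `check_and_release` name are illustrative, not taken from the patent.

```python
# Hypothetical behavioral reference model of the hardware time-check block.
# In the FPGA this comparison is a pipelined logic block fed by the PTP hardware
# clock; here it is modeled in software purely for illustration.
from dataclasses import dataclass

@dataclass
class Policy:
    policy_id: str
    execution_time_ns: int  # PTP timestamp (nanoseconds since epoch)
    payload: bytes

def check_and_release(policy: Policy, ptp_now_ns: int) -> bytes | None:
    """Return the policy payload for DMA/NIC transmission once its time arrives."""
    if ptp_now_ns >= policy.execution_time_ns:
        return policy.payload   # would be handed to the DMA engine, then the NIC
    return None                 # policy stays in BRAM; nothing leaves the PS

# Example: a policy scheduled 1 microsecond in the future is not released yet.
p = Policy("p-001", execution_time_ns=1_000_000_000_000, payload=b"block tcp/445")
assert check_and_release(p, ptp_now_ns=999_999_999_000) is None
assert check_and_release(p, ptp_now_ns=1_000_000_000_000) == b"block tcp/445"
```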
2. Operational Parameter Expansion: Policies in Relativistic Time-Frames
- Enabling Description: The system is applied to manage communications between a ground station (PS) and multiple low-earth orbit (LEO) or deep-space satellites (PEEs). The PS must pre-calculate the policy execution time points by factoring in relativistic effects, including time dilation due to velocity (Special Relativity) and gravitational potential (General Relativity), as well as signal propagation delay. The PS determines that a policy's execution time point has arrived on the ground when the calculated transmission time plus propagation delay will cause the policy to arrive at the satellite precisely at the satellite's relativistically-adjusted target time. This method uses the SPICE toolkit from NASA for orbital mechanics and time-frame calculations.
- Mermaid Diagram:
```mermaid
sequenceDiagram
    participant GroundStation_PS as PS (Ground Time)
    participant Satellite_PEE as PEE (Satellite Time)
    GroundStation_PS->>GroundStation_PS: 1. Calculate T_exec_sat (target exec time on PEE)
    GroundStation_PS->>GroundStation_PS: 2. Calculate T_prop (propagation delay)
    GroundStation_PS->>GroundStation_PS: 3. Calculate T_rel_delta (relativistic time offset)
    GroundStation_PS->>GroundStation_PS: 4. Determine T_tx_ground = T_exec_sat - T_prop - T_rel_delta
    loop Check Ground Time
        GroundStation_PS->>GroundStation_PS: Is current_time >= T_tx_ground?
    end
    Note right of GroundStation_PS: Execution time point arrives
    GroundStation_PS->>Satellite_PEE: 5. Transmit Policy
    Satellite_PEE-->>GroundStation_PS: (Policy arrives at T_exec_sat in PEE frame)
```
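To make the arithmetic concrete, here is a simplified numeric sketch of the ground-side calculation. The propagation delay and relativistic offset are placeholder constants; a real PS would derive them from orbital ephemerides (e.g., via the SPICE toolkit).

```python
# Simplified sketch of the ground-side transmit-time calculation. The delay and
# relativistic offset below are placeholders, not derived from real ephemerides.
T_EXEC_SAT  = 1_700_000_000.000000   # target execution time in the PEE (satellite) frame, seconds
T_PROP      = 0.120                  # one-way propagation delay in seconds (placeholder)
T_REL_DELTA = 38e-6                  # net clock offset accumulated since last sync, seconds
                                     # (GPS-class clocks drift on the order of ~38 us/day)

# The PS's "execution time point" on the ground is the instant it must transmit so the
# policy lands at the satellite at the relativistically adjusted target time.
t_tx_ground = T_EXEC_SAT - T_PROP - T_REL_DELTA

def time_point_arrived(current_ground_time: float) -> bool:
    return current_ground_time >= t_tx_ground

print(f"Transmit at ground time {t_tx_ground:.6f}")
```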
3. Cross-Domain Application: Agricultural Technology (AgTech)
- Enabling Description: A central AgTech cloud platform (PS) manages irrigation and fertilization policies for thousands of distributed farms. To conserve battery life and bandwidth for in-field IoT controllers (PEEs), the PS holds all weekly policies. It runs a daily cron job that determines which policies are active for the next 24-hour period (e.g., "Irrigate Zone A from 04:00-05:00"). Only at 00:01 local time each day does the PS transmit this subset of active policies to the regional farm hub (PMS), which then distributes them to the field controllers.
- Mermaid Diagram:
```mermaid
graph TD
    A[AgTech Cloud PS] --> B{"Time Check: T == 00:01"}
    B -- True --> C[Select Active Policies for Next 24h]
    C --> D[Transmit Policy Subset]
    D --> E[Farm Hub PMS]
    E --> F[Field Irrigation Controller PEE]
    E --> G[Fertilizer Drone PEE]
```
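A minimal sketch of the 00:01 selection job, assuming each policy record carries explicit start and end datetimes; `send_to_pms` is a stand-in for the actual transport to the farm hub.

```python
# Sketch of the daily 00:01 job on the AgTech cloud PS: select only the policies
# whose windows intersect the next 24 hours and push that subset to the farm hub (PMS).
from datetime import datetime, timedelta

weekly_policies = [
    {"id": "irrigate-zone-a", "start": datetime(2026, 4, 27, 4, 0), "end": datetime(2026, 4, 27, 5, 0)},
    {"id": "fertilize-zone-b", "start": datetime(2026, 4, 29, 6, 0), "end": datetime(2026, 4, 29, 7, 0)},
]

def select_active_policies(now: datetime) -> list[dict]:
    horizon = now + timedelta(hours=24)
    return [p for p in weekly_policies if p["start"] < horizon and p["end"] > now]

def send_to_pms(policies: list[dict]) -> None:
    # Placeholder for the transmission to the regional farm hub (PMS).
    print(f"transmitting {len(policies)} policies to PMS")

# Run by cron at 00:01 local time each day:
send_to_pms(select_active_policies(datetime(2026, 4, 27, 0, 1)))
```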
4. Integration with Emerging Tech: AI-Predicted Policy Generation
- Enabling Description: The PS integrates a machine learning model (e.g., a recurrent neural network - RNN) that continuously analyzes network traffic metadata. The model predicts future congestion hotspots or security threats. When a future event is predicted with high confidence (e.g., a DDoS attack is likely at 3:00 PM), the PS pre-generates a mitigation policy. It then sets the execution time point for that policy to T-5 minutes (2:55 PM). The PS holds this policy and only transmits it to the PMS at the designated time, allowing for proactive, automated network defense.
- Mermaid Diagram:
```mermaid
sequenceDiagram
    participant Monitor as Network Monitor
    participant AI_Model as Predictive AI
    participant PS as Policy Server
    participant PMS as Policy Management System
    Monitor->>AI_Model: Real-time traffic data
    AI_Model->>PS: Prediction: DDoS likely at 15:00
    PS->>PS: Generate mitigation policy
    PS->>PS: Set Execution Time = 14:55
    loop Until 14:55
        PS->>PS: Hold policy, check time
    end
    PS->>PMS: Transmit mitigation policy
```
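A brief sketch of the hold-and-release step, under the assumption that the prediction arrives as a single timestamp; `transmit_to_pms` is a placeholder for the real PS-to-PMS transfer.

```python
# Sketch of the hold-and-release step: given a predicted event time, the PS sets
# the policy's execution time point to T-5 minutes and only transmits it then.
import time
from datetime import datetime, timedelta

def schedule_mitigation(predicted_event: datetime, lead: timedelta = timedelta(minutes=5)) -> datetime:
    return predicted_event - lead            # execution time point held by the PS

def hold_and_transmit(execution_time: datetime, policy: dict, poll_seconds: int = 10) -> None:
    while datetime.now() < execution_time:   # policy is held on the PS until its time arrives
        time.sleep(poll_seconds)
    transmit_to_pms(policy)                  # placeholder for the actual PS -> PMS transfer

def transmit_to_pms(policy: dict) -> None:
    print(f"transmitting {policy['id']} to PMS")

# DDoS predicted at 15:00, so the policy is released at 14:55.
print(schedule_mitigation(datetime(2026, 4, 27, 15, 0)))
```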
5. Inverse/Failure Mode: "Heartbeat" Policy Transmission
- Enabling Description: The PS is designed for high-reliability networks where the health of the downstream PMS/PEE is unknown. Instead of a single transmission, the PS determines the policy's execution time window (start and end). Once the start time arrives, it begins transmitting the active policy to the PMS repeatedly at a configurable interval (e.g., every 60 seconds). This acts as both a policy enforcement mechanism and a heartbeat signal. If the PMS fails to receive the policy for N consecutive intervals, it triggers an alarm. The PS stops transmitting when the policy's end time arrives.
- Mermaid Diagram:
```mermaid
stateDiagram-v2
    [*] --> Inactive
    Inactive --> Active: Execution start time arrives
    Active --> Active: Transmit policy to PMS (every 60s)
    Active --> Inactive: Execution end time arrives
    Active --> Alarm: PMS fails to ack after N intervals
    Alarm --> Inactive: Manual Reset
```
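A compact sketch of the transmission loop described above; `transmit` is a placeholder, and the PMS-side alarm handling is intentionally out of scope.

```python
# Sketch of the heartbeat-style transmission loop: once the start time arrives the
# PS re-sends the active policy every `interval` seconds until the end time passes.
import time
from datetime import datetime

def heartbeat_transmit(policy: dict, start: datetime, end: datetime, interval: int = 60) -> None:
    while datetime.now() < start:
        time.sleep(1)              # policy inactive; wait for the window to open
    while datetime.now() < end:
        transmit(policy)           # doubles as policy enforcement and heartbeat
        time.sleep(interval)
    # Window closed: transmission stops; the PMS alarms only if it still expects beats.

def transmit(policy: dict) -> None:  # placeholder for the PS -> PMS send
    print(f"{datetime.now().isoformat()} sent {policy['id']}")

# Not invoked here, since the loop would block until the configured end time.
```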
Derivatives Based on Independent Claim 3: PMS-Side Time Determination
Claim 3 describes a PMS downloading a policy, storing it, and determining the execution time point before transmitting it to a PEE.
1. Material & Component Substitution: Distributed Consensus-Based PMS
- Enabling Description: The PMS is not a single server but a distributed cluster of nodes running a consensus algorithm like Raft or Paxos. A policy from the PS is downloaded and replicated across all nodes in the PMS cluster. The "determining whether an execution time point arrives" step requires a quorum of PMS nodes to independently agree on the current time (sourced from their local, synchronized clocks) and the validity of the policy. Only after consensus is reached does the leader node of the cluster parse and transmit the policy to the relevant PEEs. This prevents a single, compromised, or time-drifted PMS node from incorrectly activating a policy.
- Mermaid Diagram:
```mermaid
graph TD
    subgraph Cluster["Distributed PMS Cluster"]
        A[Node 1] -- "Time & Policy Hash" --> B{Raft Leader}
        C[Node 2] -- "Time & Policy Hash" --> B
        D[Node 3] -- "Time & Policy Hash" --> B
        B -- Quorum Achieved --> E["Parse & Transmit Policy"]
    end
    PS -- Download Policy --> A
    PS --> C
    PS --> D
    E --> PEE
```
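A simplified sketch of the quorum check, assuming each node reports its local clock reading and a hash of the replicated policy; the Raft machinery itself is omitted.

```python
# Sketch of the quorum check: each PMS node reports (policy hash, "time arrived?");
# the leader activates only if all nodes hold identical policy bytes and a strict
# majority agree the execution time has arrived.
import hashlib
from datetime import datetime

def node_vote(policy_bytes: bytes, exec_time: datetime, local_clock: datetime) -> tuple[str, bool]:
    return hashlib.sha256(policy_bytes).hexdigest(), local_clock >= exec_time

def quorum_reached(votes: list[tuple[str, bool]]) -> bool:
    if len({h for h, _ in votes}) != 1:       # all nodes must hold identical policy content
        return False
    in_favor = sum(1 for _, arrived in votes if arrived)
    return in_favor > len(votes) // 2         # strict majority agrees the time has arrived

policy = b'{"action": "rate-limit", "start": "2026-04-27T17:00:00"}'
exec_t = datetime(2026, 4, 27, 17, 0)
votes = [node_vote(policy, exec_t, clock) for clock in (
    datetime(2026, 4, 27, 17, 0, 1),    # node 1: time arrived
    datetime(2026, 4, 27, 17, 0, 0),    # node 2: time arrived
    datetime(2026, 4, 27, 16, 59, 30),  # node 3: clock slightly behind
)]
print(quorum_reached(votes))   # True: 2 of 3 nodes agree
```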
2. Cross-Domain Application: Smart Grid Energy Management
- Enabling Description: In an electrical smart grid, a Regional Operations Center acts as the PMS. It downloads various demand-response policies from the national grid operator (PS), such as "Reduce load by 10% from 17:00-20:00". The regional PMS stores these policies and monitors grid conditions and its local, high-precision clock. At 17:00, it determines the execution time has arrived. It then parses this high-level policy into specific commands for different types of PEEs: "Increase thermostat set-point by 2 degrees" for smart thermostats, and "Temporarily halt charging cycle" for electric vehicle charging stations.
- Mermaid Diagram:
```mermaid
sequenceDiagram
    participant GridOperator_PS as PS
    participant RegionalCenter_PMS as PMS
    participant Thermostat_PEE as PEE-1
    participant EV_Charger_PEE as PEE-2
    GridOperator_PS->>RegionalCenter_PMS: Download Policy: ReduceLoad(10%, 17:00-20:00)
    activate RegionalCenter_PMS
    loop Until 17:00
        RegionalCenter_PMS->>RegionalCenter_PMS: Check time
    end
    RegionalCenter_PMS->>RegionalCenter_PMS: Time Match! Parse Policy.
    RegionalCenter_PMS->>Thermostat_PEE: Command: SetTemp(current+2)
    RegionalCenter_PMS->>EV_Charger_PEE: Command: PauseCharging()
    deactivate RegionalCenter_PMS
```
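A small sketch of the PMS-side translation step, with an illustrative device registry and command strings that are not taken from the patent.

```python
# Sketch of the translation step: once 17:00 arrives, the high-level demand-response
# policy is parsed into device-class-specific commands and sent to each PEE.
from datetime import datetime

policy = {"action": "reduce_load", "percent": 10, "start": "17:00", "end": "20:00"}
pees = [{"id": "tstat-17", "type": "thermostat"}, {"id": "evse-04", "type": "ev_charger"}]

def translate(pee: dict) -> str:
    if pee["type"] == "thermostat":
        return "SetTemp(current+2)"   # raise the set-point to shed HVAC load
    if pee["type"] == "ev_charger":
        return "PauseCharging()"      # defer charging for the policy window
    return "NoOp()"

def on_execution_time(now: datetime) -> None:
    for pee in pees:
        print(f"{now.isoformat()} {pee['id']} <- {translate(pee)}")  # placeholder PMS -> PEE send

on_execution_time(datetime(2026, 4, 27, 17, 0))
```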
3. Integration with Emerging Tech: Blockchain-Verified Policy Auditing
- Enabling Description: The PMS includes a blockchain client. When the PMS downloads a policy from the PS, it stores the policy hash in its local database. When the PMS determines the execution time point has arrived, it performs two actions: 1) it transmits the policy to the PEE, and 2) it writes a transaction to a private, permissioned blockchain. This transaction contains the policy hash, the timestamp of the execution decision, and the ID of the target PEE. This creates an immutable, tamper-proof audit trail for compliance, verifying exactly when a policy was deemed active and to whom it was sent.
- Mermaid Diagram:
```mermaid
flowchart LR
    subgraph PMS
        A[Download Policy] --> B[Store in DB]
        B --> C{Determine Execution Time}
        C -- Yes --> D[Transmit to PEE]
        C -- Yes --> E["Create Transaction: {PolicyID, Timestamp, PEE_ID}"]
        E --> F[Write to Blockchain Ledger]
    end
    PS --> A
    D --> PEE
```
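A minimal sketch of the audit-record construction; hash chaining with `hashlib` stands in for submission to a permissioned ledger, whose client API is deployment-specific.

```python
# Sketch of the audit-record step: at the execution time point the PMS both sends the
# policy and appends a hash-chained record (policy hash, decision timestamp, target PEE).
import hashlib
import json
from datetime import datetime, timezone

ledger: list[dict] = []   # stand-in for the permissioned blockchain

def record_activation(policy_bytes: bytes, pee_id: str) -> dict:
    prev_hash = ledger[-1]["record_hash"] if ledger else "0" * 64
    record = {
        "policy_hash": hashlib.sha256(policy_bytes).hexdigest(),
        "decided_at": datetime.now(timezone.utc).isoformat(),
        "pee_id": pee_id,
        "prev_hash": prev_hash,
    }
    record["record_hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    ledger.append(record)
    return record

print(record_activation(b'{"action": "throttle"}', "pee-edge-09"))
```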
4. Operational Parameter Expansion: Geo-Temporal Policy Activation in CDNs
- Enabling Description: A Content Delivery Network (CDN) provider uses a globally distributed PMS layer. The PMS downloads a single policy from the central PS with a "local time" condition, e.g., "Serve video at a lower bitrate from 08:00 to 11:00 local time." Each PMS node, located in a different geography (e.g., Tokyo, Frankfurt, Ashburn), is responsible for determining when 08:00 arrives in its own timezone. Upon reaching this local execution time, it pushes the bitrate-limiting policy to all PEEs (caching servers) under its regional control.
- Mermaid Diagram:
```mermaid
graph TD
    subgraph Tokyo
        PMS_Tokyo --> T_Check{"Time == 08:00 JST?"}
        T_Check -- Yes --> PEE_Japan
    end
    subgraph Frankfurt
        PMS_Frankfurt --> F_Check{"Time == 08:00 CET?"}
        F_Check -- Yes --> PEE_Germany
    end
    subgraph Ashburn
        PMS_Ashburn --> A_Check{"Time == 08:00 EST?"}
        A_Check -- Yes --> PEE_Virginia
    end
    PS -- "Policy: LimitVideo(08:00-11:00 Local)" --> PMS_Tokyo
    PS --> PMS_Frankfurt
    PS --> PMS_Ashburn
```
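A short sketch of the per-region evaluation using the standard-library zoneinfo module; the region-to-timezone mapping is illustrative.

```python
# Sketch of the per-region check: each PMS node evaluates the same "local time"
# window against its own timezone (Python 3.9+ zoneinfo).
from datetime import datetime, time
from zoneinfo import ZoneInfo

WINDOW = (time(8, 0), time(11, 0))   # 08:00-11:00 local time
REGION_ZONES = {"tokyo": "Asia/Tokyo", "frankfurt": "Europe/Berlin", "ashburn": "America/New_York"}

def policy_active(region: str, now_utc: datetime) -> bool:
    local = now_utc.astimezone(ZoneInfo(REGION_ZONES[region]))
    return WINDOW[0] <= local.time() < WINDOW[1]

now = datetime(2026, 4, 27, 23, 30, tzinfo=ZoneInfo("UTC"))   # 08:30 the next day in Tokyo
for region in REGION_ZONES:
    print(region, policy_active(region, now))   # only Tokyo activates at this instant
```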
5. Inverse/Failure Mode: "Stale Policy" Graceful Degradation
- Enabling Description: The PMS is designed to maintain operation if it loses its connection to the PS. The PMS stores all downloaded policies with their last-known validity window. If the connection is lost, the PMS enters a "fail-static" mode. It continues to determine execution time points for the policies it already has stored. However, once a policy's end-time passes, it is purged and not replaced. The system's functionality gracefully degrades as policies expire, rather than failing completely. A special, permanent "fail-safe" policy (e.g., "block all new connections") is activated if the connection to the PS is down for more than a specified TTL (e.g., 24 hours).
- Mermaid Diagram:
stateDiagram-v2 state "Connected" as C state "Disconnected" as D [*] --> C C --> D : Connection to PS Lost D --> C : Connection Restored state "Normal Operation" as C_Normal C: state C_Normal { [*] --> Download Download --> Store Store --> Determine_Time Determine_Time --> Transmit_to_PEE Transmit_to_PEE --> Store } state "Fail-Static Mode" as D_FailStatic D: state D_FailStatic { [*] --> Determine_Time Determine_Time --> Transmit_to_PEE : If policy is valid & active Determine_Time --> Purge_Policy : If policy has expired } D_FailStatic --> Activate_FailSafe : TTL Exceeded
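A minimal sketch of one fail-static evaluation step, assuming simple start/end fields per policy; the TTL value and fail-safe policy name are illustrative.

```python
# Sketch of the fail-static logic: while disconnected, the PMS keeps enforcing stored
# policies, purges each one as its end time passes, and falls back to a permanent
# fail-safe policy once the outage exceeds the TTL.
from datetime import datetime, timedelta

FAILSAFE = {"id": "block-new-connections"}
TTL = timedelta(hours=24)

def fail_static_step(stored: list[dict], now: datetime, disconnected_since: datetime):
    if now - disconnected_since > TTL:
        return [FAILSAFE], [FAILSAFE]                      # outage too long: fail-safe only
    stored[:] = [p for p in stored if p["end"] > now]      # purge expired policies (not replaced)
    active = [p for p in stored if p["start"] <= now]      # local determination of execution time
    return stored, active

policies = [{"id": "qos-evening", "start": datetime(2026, 4, 27, 17, 0), "end": datetime(2026, 4, 27, 20, 0)}]
print(fail_static_step(policies, datetime(2026, 4, 27, 18, 0), datetime(2026, 4, 27, 12, 0)))
```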
Derivatives Based on Independent Claim 6: PEE-Side Time Determination
Claim 6 describes the Policy Execution Equipment (PEE) itself downloading the policy and determining the execution time point before executing it.
1. Material & Component Substitution: TPM-Secured Time Source
- Enabling Description: The PEE is a hardware device that includes a Trusted Platform Module (TPM) with a secure, monotonic counter and a hardware-backed real-time clock (RTC). Policies are downloaded from the PMS and stored in encrypted memory. For the PEE to "determine whether an execution time point arrives," it must receive a time signal from its TPM-secured RTC. The TPM signs the timestamp, preventing local software-based attacks from spoofing the time to prematurely activate or disable a policy. Execution only proceeds if the policy's time condition is met by a validly signed timestamp from the TPM.
- Mermaid Diagram:
```mermaid
flowchart LR
    subgraph PEE
        B(Encrypted Storage) -- "Decrypt w/ TPM Key" --> D{Time Check Logic}
        C[TPM] -- Signed Timestamp --> D
        D -- "Time Match & Valid Signature" --> E[Execute Policy]
    end
    A[PMS] -- Encrypted Policy --> B
```
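A conceptual sketch of gating execution on a signed timestamp; HMAC over a device key stands in for the TPM's signing operation, purely to show where verification sits relative to the time comparison.

```python
# Sketch of the gate on a signed timestamp: execution proceeds only if the time report
# verifies against the device key AND falls inside the policy window. HMAC is a stand-in
# for the TPM attestation; in reality the key never leaves the TPM.
import hashlib
import hmac

TPM_KEY = b"device-unique-key"   # illustrative; sealed inside the TPM in practice

def tpm_sign_time(timestamp_ns: int) -> tuple[int, bytes]:
    return timestamp_ns, hmac.new(TPM_KEY, str(timestamp_ns).encode(), hashlib.sha256).digest()

def may_execute(policy_start_ns: int, policy_end_ns: int, signed: tuple[int, bytes]) -> bool:
    ts, sig = signed
    expected = hmac.new(TPM_KEY, str(ts).encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):   # reject spoofed or tampered timestamps
        return False
    return policy_start_ns <= ts < policy_end_ns

signed_now = tpm_sign_time(1_000_500)
print(may_execute(1_000_000, 2_000_000, signed_now))   # True: valid signature, inside window
```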
2. Cross-Domain Application: Automotive Over-the-Air (OTA) Updates
- Enabling Description: A vehicle's Telematics Control Unit (TCU) acts as the PEE. The manufacturer (PS/PMS) pushes an OTA update package containing new software and a set of activation policies. One policy might be: "Apply powertrain update only when (vehicle_speed == 0) AND (local_time is between 02:00-04:00)." The TCU downloads this package and continuously monitors its own internal clock and vehicle CAN bus data. When it sees that the vehicle is parked and its clock is within the specified window, it self-determines that the execution time point has arrived and initiates the software flash.
- Mermaid Diagram:
```mermaid
graph TD
    subgraph B["Vehicle TCU (PEE)"]
        C[Internal RTC] --> D{"Time Check (02:00-04:00?)"}
        E[CAN Bus Sensor] -- Speed=0 --> D
        D -- All Conditions Met --> F[Initiate Powertrain Update]
    end
    A[OTA Server] -- "Update Package & Policy" --> B
```
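A small sketch of the TCU-side condition check, assuming the vehicle speed is already available from the CAN bus; the update hook is reduced to a print.

```python
# Sketch of the TCU-side self-determination: the update is flashed only when the
# CAN-reported speed is zero AND the local clock is inside the 02:00-04:00 window.
from datetime import datetime, time

WINDOW = (time(2, 0), time(4, 0))

def conditions_met(now: datetime, vehicle_speed_kph: float) -> bool:
    return vehicle_speed_kph == 0 and WINDOW[0] <= now.time() < WINDOW[1]

def maybe_apply_update(now: datetime, vehicle_speed_kph: float) -> None:
    if conditions_met(now, vehicle_speed_kph):
        print("initiating powertrain flash")   # stand-in for the actual update routine
    else:
        print("deferring update")

maybe_apply_update(datetime(2026, 4, 27, 2, 30), vehicle_speed_kph=0.0)    # flashes
maybe_apply_update(datetime(2026, 4, 27, 14, 0), vehicle_speed_kph=0.0)    # defers
```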
3. Integration with Emerging Tech: IoT Sensor-Triggered Time Windows
- Enabling Description: The PEE is an edge computing device controlling factory machinery. It downloads a policy: "Activate high-power diagnostic mode for 10 minutes." This policy is dormant until triggered. An attached IoT acoustic sensor, running a local ML model, detects an anomalous machine vibration. This sensor event triggers the start of the 10-minute time window. The PEE determines the "execution time point" has arrived upon receiving the trigger from the IoT sensor and then executes the policy, self-disabling it after the 10-minute duration has elapsed according to its internal clock. This combines an external event with an internal, time-based execution.
- Mermaid Diagram:
```mermaid
stateDiagram-v2
    [*] --> Idle
    Idle --> Executing : IoT sensor detects anomaly
    Executing --> Idle : 10-minute timer expires
    Executing : On Entry: Activate diagnostic policy
    Executing : On Exit: Deactivate diagnostic policy
```
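A compact sketch of the event-opened window, with the anomaly trigger simulated by a direct call.

```python
# Sketch of the event-opened window: the policy stays dormant until the sensor trigger
# arrives, which starts a 10-minute window tracked on the PEE's own clock.
from datetime import datetime, timedelta

class DiagnosticPolicy:
    def __init__(self, duration=timedelta(minutes=10)):
        self.duration = duration
        self.window_start = None            # dormant until triggered

    def on_anomaly(self, now: datetime) -> None:
        self.window_start = now             # execution time point arrives with the trigger

    def active(self, now: datetime) -> bool:
        if self.window_start is None:
            return False
        if now - self.window_start >= self.duration:
            self.window_start = None        # self-disable after 10 minutes
            return False
        return True

p = DiagnosticPolicy()
p.on_anomaly(datetime(2026, 4, 27, 9, 0))
print(p.active(datetime(2026, 4, 27, 9, 5)))    # True: inside the window
print(p.active(datetime(2026, 4, 27, 9, 11)))   # False: window elapsed, policy disabled
```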
4. Operational Parameter Expansion: Cryogenic Computing Environment
- Enabling Description: The PEE is a control module for a quantum computer, operating at cryogenic temperatures near absolute zero. It downloads experimental control sequences (policies) from a room-temperature PMS. Each policy is tagged with a valid execution window defined in terms of the number of clock cycles elapsed on a master cryogenic reference clock. The PEE determines the execution time point has arrived by comparing the cycle count from the reference clock to the policy's valid cycle window. This is necessary because standard RTCs do not function at such low temperatures and time must be tracked as discrete, high-frequency cycles.
- Mermaid Diagram:
```mermaid
sequenceDiagram
    participant PMS as Room-Temp PMS
    participant PEE as Cryo-Control PEE
    participant RefClock as Cryo Reference Clock
    PMS->>PEE: Download Policy (Valid: Cycles 1M to 1.1M)
    loop Every reference cycle
        RefClock->>PEE: Current Cycle Count
        PEE->>PEE: Is Count >= 1,000,000?
        alt Yes
            PEE->>PEE: Execute Policy
        end
    end
```
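A minimal sketch of the cycle-window comparison, with illustrative values.

```python
# Sketch of the cycle-count comparison: the PEE tracks elapsed cycles of the cryogenic
# reference clock and activates the control sequence only inside the policy's window.
POLICY_WINDOW = (1_000_000, 1_100_000)   # valid from cycle 1.0M up to (not including) 1.1M

def in_cycle_window(cycle_count: int, window: tuple[int, int] = POLICY_WINDOW) -> bool:
    return window[0] <= cycle_count < window[1]

for count in (999_999, 1_000_000, 1_099_999, 1_100_000):
    print(count, in_cycle_window(count))
```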
5. Inverse/Failure Mode: "Time-Drift Failsafe" Execution
- Enabling Description: The PEE is a low-cost IoT device with an imprecise internal clock. It periodically synchronizes with an NTP server. The PEE is programmed with a "maximum allowable time drift" parameter (e.g., 5 seconds). Before determining a policy's execution time, it first checks the time elapsed since its last successful NTP sync. If this duration exceeds a threshold, or if its calculated drift is greater than the parameter, it refuses to execute any time-based policies. Instead, it enters a limited-functionality mode and executes a static, pre-defined default policy until it can successfully re-synchronize its clock, preventing erratic behavior from time inaccuracy.
- Mermaid Diagram:
```mermaid
flowchart TD
    A[Start Time Check for Policy X] --> B{"Time since last NTP sync < Threshold?"}
    B -- No --> C{"Calculated Drift < Max Drift?"}
    B -- Yes --> D{"Is current time within Policy X window?"}
    C -- No --> E["Enter Failsafe Mode: Execute Default Policy"]
    C -- Yes --> D
    D -- Yes --> F[Execute Policy X]
    D -- No --> G[Continue Monitoring]
```
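A short sketch of the drift guard, assuming the PEE tracks its last sync time and an estimated drift; the thresholds are illustrative.

```python
# Sketch of the drift guard: before evaluating any policy window the PEE checks how stale
# its last NTP sync is and how much drift it estimates; if either check fails it runs only
# the static default policy.
from datetime import datetime, timedelta

MAX_SYNC_AGE = timedelta(minutes=30)
MAX_DRIFT = timedelta(seconds=5)

def choose_policy(now, last_sync, estimated_drift, policy_start, policy_end):
    if now - last_sync > MAX_SYNC_AGE or estimated_drift > MAX_DRIFT:
        return "default-failsafe-policy"   # clock not trustworthy: no time-based execution
    if policy_start <= now < policy_end:
        return "policy-x"
    return None                            # keep monitoring

print(choose_policy(
    now=datetime(2026, 4, 27, 10, 0),
    last_sync=datetime(2026, 4, 27, 9, 50),
    estimated_drift=timedelta(seconds=1),
    policy_start=datetime(2026, 4, 27, 9, 0),
    policy_end=datetime(2026, 4, 27, 17, 0),
))   # -> "policy-x"
```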
Combination Prior Art Scenarios with Open-Source Standards
- Combination with Kubernetes NetworkPolicy API: The PS-PMS-PEE architecture is mapped directly onto a Kubernetes cluster. A custom Kubernetes controller (acting as the PS) watches for `NetworkPolicy` objects that carry a new, non-standard `spec.timeCondition` field (e.g., `timeCondition: {tz: "UTC", startTime: "22:00", endTime: "06:00"}`). This controller does not apply the policy directly. Instead, a DaemonSet running on each node (acting as the PMS) receives all `NetworkPolicy` objects. The PMS on each node determines, based on the node's local clock, whether the `timeCondition` is met. If so, it translates the Kubernetes policy into rules for the underlying CNI plugin (e.g., Calico, Cilium), which acts as the PEE. A sketch of the node-local time check appears after this list.
- Combination with Prometheus and Alertmanager: An Alertmanager instance acts as the PS. A time-based policy is defined as an alerting rule in Prometheus (e.g., an alert named `TimePolicyActive` with the expression `hour() >= 9 and hour() < 17`). When this alert enters the `Firing` state, Alertmanager (PS) sends a webhook containing the policy details to a generic policy orchestrator (PMS). The PMS parses the webhook and transmits the appropriate configuration to a network device (PEE). In this model, the "determination of the execution time" is performed by the Prometheus query engine, and the "transmission" is the Alertmanager webhook action.
- Combination with IETF QUIC and BGP: A time-based policy is used to control BGP route advertisements for a QUIC-enabled service. A central BGP controller (PS) holds two potential route advertisements for a service prefix: a high-performance path and a low-cost, high-latency path. The PS determines, based on time of day, which policy is active. During peak business hours (09:00-17:00), it "transmits" the high-performance route to its BGP peers (PMS/PEE). During off-peak hours, it withdraws that route and transmits the low-cost route. This uses time-based policy logic to influence routing decisions for a specific application protocol (QUIC) at the network infrastructure layer (BGP).
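A minimal Python sketch of the node-local (PMS-style) evaluation of the hypothetical, non-standard `spec.timeCondition` field described in the Kubernetes combination above; watching the API server and programming the CNI plugin are reduced to placeholders, and only the time-window logic against the node's local clock is shown.

```python
# Sketch of a DaemonSet-style (PMS) check of the hypothetical spec.timeCondition field.
# Fetching NetworkPolicy objects and the CNI hand-off are placeholders.
from datetime import datetime, time
from zoneinfo import ZoneInfo

def _parse_hhmm(s: str) -> time:
    h, m = map(int, s.split(":"))
    return time(h, m)

def time_condition_met(cond: dict, now: datetime | None = None) -> bool:
    tz = ZoneInfo(cond.get("tz", "UTC"))
    local = (now or datetime.now(tz)).astimezone(tz)
    start, end = _parse_hhmm(cond["startTime"]), _parse_hhmm(cond["endTime"])
    t = local.time()
    if start <= end:
        return start <= t < end
    return t >= start or t < end     # window that wraps past midnight, e.g. 22:00-06:00

policy = {"metadata": {"name": "night-lockdown"},
          "spec": {"timeCondition": {"tz": "UTC", "startTime": "22:00", "endTime": "06:00"}}}

if time_condition_met(policy["spec"]["timeCondition"],
                      now=datetime(2026, 4, 27, 23, 15, tzinfo=ZoneInfo("UTC"))):
    print("apply CNI rules for", policy["metadata"]["name"])   # placeholder for the CNI hand-off
```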