Defensive Disclosure: US Patent 9,179,359
Publication Date: May 13, 2026
Subject Matter: Enhancements, derivatives, and alternative embodiments of systems and methods for differentiated network access control on a wireless end-user device, as described in US Patent 9,179,359. This document is intended to enter the public domain and serve as prior art.
Preamble
The disclosures herein relate to the field of wireless network traffic management, specifically to device-assisted services for controlling application access to network resources based on network state. The following descriptions of systems, methods, and apparatuses are provided to disclose novel and non-obvious extensions and alternatives to the core teachings of US Patent 9,179,359 ('359 patent). These disclosures are intended to be enabling for a person having ordinary skill in the art (PHOSITA).
Claim Scope Analysis
The core invention of the '359 patent covers a method (claim 1), a device (claim 14), and a computer-readable medium (claim 23) for:
- Determining a network busy state.
- Associating a network access status (e.g., allow, block, throttle) with a device application based on the busy state and a predefined policy.
- Controlling the application's network access according to the associated status.
The following disclosures expand upon each of these core concepts.
Derivative Embodiments
Axis 1: Material & Component Substitution
1.1. Policy Enforcement via Hardware Co-Processor/FPGA
- Enabling Description: The "service processor" functionality described in claims 14 and 23 is implemented not in software running on the main CPU, but within a dedicated, low-power hardware co-processor, such as a Field-Programmable Gate Array (FPGA) or an Application-Specific Integrated Circuit (ASIC). This hardware component directly interfaces with the device's network interface controller (NIC). The policy, defining the mapping between network busy states and application access statuses, is compiled into a hardware description language (e.g., Verilog or VHDL) and synthesized into the FPGA's logic gates or etched into the ASIC. The hardware processor monitors packet headers (e.g., 5-tuple of source/destination IP, port, and protocol) at line speed, matching them against application signatures. It concurrently receives a network busy state signal (e.g., a simple integer value from 0-255) from the baseband processor. Based on this value, it consults its hard-wired Finite State Machine (FSM) to either pass, drop, or shape the traffic for that packet's associated application, offloading this task entirely from the main CPU. This reduces latency from milliseconds (in a software implementation) to nanoseconds and significantly lowers power consumption.
- Mermaid Diagram:
```mermaid
graph TD
    A[Baseband Processor] -- "Network Busy State (NBS)" --> C{Policy Engine FPGA}
    B[Application CPU] -- IP Packets --> D[Network Interface Controller]
    D -- Raw Packet Stream --> C
    C -- "Classified & Controlled Traffic" --> E[RF Transceiver]
    subgraph OnChip["On-Chip"]
        C
        D
    end
```
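Before synthesis to Verilog/VHDL, the FSM's behavior can be modeled in software. A minimal Python sketch of the lookup the hardware would perform per packet, with a hypothetical policy table and a signature map keyed on destination port for brevity:

```python
# Behavioral model (hypothetical) of the FPGA policy engine's FSM:
# a network-busy-state byte (0-255) from the baseband processor selects
# an action for each application class matched by a 5-tuple signature.

PASS, SHAPE, DROP = "pass", "shape", "drop"

# Hypothetical policy table: (nbs_low, nbs_high) -> {app_class: action}
POLICY_FSM = [
    ((0, 63),    {"interactive": PASS, "background": PASS}),
    ((64, 191),  {"interactive": PASS, "background": SHAPE}),
    ((192, 255), {"interactive": SHAPE, "background": DROP}),
]

def classify(five_tuple, signatures):
    """Match a packet's 5-tuple against application signatures."""
    return signatures.get(five_tuple[3], "background")  # keyed on dst port here

def decide(nbs, app_class):
    """Look up the action for the current network busy state."""
    for (lo, hi), actions in POLICY_FSM:
        if lo <= nbs <= hi:
            return actions[app_class]
    return DROP  # fail closed

signatures = {443: "interactive"}  # hypothetical signature table
pkt = ("10.0.0.2", "198.51.100.7", 51514, 443, "tcp")
print(decide(200, classify(pkt, signatures)))  # -> shape
```

In the hardware embodiment this table is hard-wired, so the decision is a constant-time lookup with no CPU involvement.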
1.2. Policy Execution in a Trusted Execution Environment (TEE)
- Enabling Description: To ensure policy integrity and prevent tampering by the end-user or malicious applications, the entire service processor agent is instantiated within a hardware-isolated Trusted Execution Environment (TEE), such as ARM TrustZone or Intel SGX. The policy rules are encrypted and signed by the network operator. Upon boot, a "Secure World" OS loads the service processor and its encrypted policy. The "Normal World" OS, where user applications run, can only communicate with the service processor via a secure monitor call (SMC). When an application attempts to open a network socket, the call is trapped and forwarded to the TEE. The service processor inside the TEE inspects the request, checks the network busy state (which is also securely passed from the baseband modem), and enforces the policy. The key and policy store are inaccessible from the Normal World, making the system robust against reverse-engineering or modification.
- Mermaid Diagram:
```mermaid
sequenceDiagram
    participant App as App (Normal World)
    participant Kernel as Kernel (Normal World)
    participant Monitor as TEE Monitor
    participant SP as Service Processor (Secure World)
    participant Modem
    App->>Kernel: socket.connect()
    Kernel->>Monitor: SMC: Request_Network_Access(AppID)
    Monitor->>SP: Forward Request(AppID)
    Modem-->>SP: Secure_Channel: Report_Busy_State(value)
    SP->>SP: Evaluate_Policy(AppID, Busy_State)
    SP-->>Monitor: Return_Decision(ALLOW/DENY)
    Monitor-->>Kernel: Resume with Status
    Kernel-->>App: Connection Allowed/Refused
```
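The policy evaluation performed inside the Secure World can be sketched as a pure function of the app identifier and the securely reported busy state. All names and the per-app "ceiling" encoding below are illustrative, not taken from any TEE API:

```python
# Hypothetical sketch of the decision the Secure World service processor
# makes when an SMC-trapped socket request arrives.

ALLOW, DENY = "ALLOW", "DENY"

# Operator-signed policy (shown decrypted): per-app busy-state ceilings.
# busy_state 0 = idle; higher values = more congested.
policy = {"video_app": 2, "telemetry": 0, "default": 1}

def request_network_access(app_id, busy_state):
    """Return ALLOW if the reported busy state is within the app's ceiling."""
    ceiling = policy.get(app_id, policy["default"])
    return ALLOW if busy_state <= ceiling else DENY

print(request_network_access("telemetry", 1))  # -> DENY
print(request_network_access("video_app", 1))  # -> ALLOW
```

Because the policy dictionary lives only in the Secure World, the Normal World sees just the ALLOW/DENY result, never the rules themselves.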
Axis 2: Operational Parameter Expansion
2.1. High-Frequency Trading (HFT) Latency Jitter Management
- Enabling Description: The system is applied to an HFT device operating on a private 5G/6G network where "network busy state" is defined not by bandwidth saturation but by latency jitter in the air interface, measured in microseconds. The service processor continuously monitors the round-trip time (RTT) and jitter of a dedicated control channel to the mobile edge compute (MEC) server. The policy defines multiple jitter thresholds (e.g., <10µs, 10-50µs, >50µs). If jitter exceeds 10µs, a "Degraded" access status is triggered, causing the service processor to immediately suspend all non-essential network traffic, including OS telemetry, analytics reporting, and secondary market data feeds. This frees up MAC layer scheduling resources to prioritize the single, critical trading application's traffic, ensuring its latency remains within the sub-millisecond execution window. The policy is dynamically updated based on the VIX (Volatility Index), tightening jitter thresholds during high market volatility.
- Mermaid Diagram:
```mermaid
stateDiagram-v2
    state "Low Jitter (<10µs)" as Low
    state "Medium Jitter (10-50µs)" as Medium
    state "High Jitter (>50µs)" as High
    [*] --> Low: Initialize
    Low --> Medium: Jitter increases
    Medium --> High: Jitter increases
    Medium --> Low: Jitter decreases
    High --> Medium: Jitter decreases
    Low: All Apps - Full Access
    Medium: Trading App - Full Access
    Medium: Market Data Feeds - Throttled
    Medium: OS Telemetry - Blocked
    High: Trading App - Priority Access
    High: All Other Apps - Blocked
```
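The threshold logic, including the VIX-driven tightening, can be sketched compactly. The 0.5 scaling factor and the VIX trigger level of 30 are illustrative assumptions:

```python
# Illustrative sketch: map measured air-interface jitter (µs) to the
# access statuses in the policy, tightening thresholds as volatility rises.

def jitter_state(jitter_us, vix=15.0):
    """Thresholds shrink when the VIX is high (hypothetical 0.5x scaling)."""
    scale = 0.5 if vix >= 30.0 else 1.0
    if jitter_us < 10 * scale:
        return "LOW"
    if jitter_us <= 50 * scale:
        return "MEDIUM"
    return "HIGH"

STATUS = {
    "LOW":    {"trading": "full", "market_data": "full", "telemetry": "full"},
    "MEDIUM": {"trading": "full", "market_data": "throttled", "telemetry": "blocked"},
    "HIGH":   {"trading": "priority", "market_data": "blocked", "telemetry": "blocked"},
}

print(jitter_state(30))            # -> MEDIUM
print(jitter_state(30, vix=35.0))  # -> HIGH (tightened thresholds)
```

Note that the same measured jitter maps to a stricter state under high volatility, which is the intended effect of the dynamic policy update.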
2.2. UUV Acoustic Mesh Network Coordination
- Enabling Description: The invention is applied to a swarm of Unmanned Underwater Vehicles (UUVs) communicating via a shared, low-bandwidth (e.g., 9600 bps) acoustic modem network. In this context, the "wireless end-user device" is each UUV. The "network busy state" is a composite score derived from the acoustic channel's Bit Error Rate (BER), ambient noise level, and the number of active transmitting nodes (packet collision probability). The service processor on each UUV prioritizes traffic as follows: P0 (Emergency/Collision Avoidance), P1 (Command & Control/Telemetry), P2 (Collaborative Sonar Data Exchange), P3 (Bulk Science Data Upload). If the busy state score crosses a threshold, the service processor automatically downgrades the access status for lower-priority applications. For example, it will buffer P3 data locally and cease transmission, while throttling the transmission rate of P2 packets to reduce channel occupancy, ensuring P0 and P1 messages have a clear channel.
- Mermaid Diagram:
```mermaid
flowchart TD
    subgraph UUV_Node
        A[Acoustic Channel Monitor] --> B{Network Busy State?}
        B -- High BER/Collision --> C["Policy: HIGH_CONGESTION"]
        B -- Low BER/Collision --> D["Policy: NORMAL"]
        C --> E{Control App Traffic}
        D --> E
        E --> F["P0: Emergency - Unrestricted"]
        E --> G["P1: C&C - Unrestricted"]
        E --> H["P2: Sonar - Throttled"]
        E --> I["P3: Science Data - Buffered/Blocked"]
    end
```
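The composite busy-state score and the resulting per-priority statuses can be sketched as follows. The weights, normalization constants, and the 0.6 threshold are assumptions chosen for illustration, not values from the disclosure:

```python
# Sketch (assumed weights) of the composite busy-state score and the
# resulting per-priority access statuses on a UUV node.

def busy_score(ber, noise_db, active_nodes):
    """Weighted composite in [0, 1]; weights and caps are illustrative."""
    return min(1.0, 0.5 * min(ber / 1e-3, 1.0)
                  + 0.3 * min(noise_db / 120.0, 1.0)
                  + 0.2 * min(active_nodes / 10.0, 1.0))

def access_statuses(score, threshold=0.6):
    if score < threshold:
        return {"P0": "open", "P1": "open", "P2": "open", "P3": "open"}
    # Above threshold: keep the channel clear for P0/P1.
    return {"P0": "open", "P1": "open", "P2": "throttled", "P3": "buffered"}

s = busy_score(ber=5e-4, noise_db=90, active_nodes=8)
print(round(s, 3), access_statuses(s))  # 0.635 -> P2 throttled, P3 buffered
```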
Axis 3: Cross-Domain Application
3.1. Aerospace: LEO Satellite Bandwidth Allocation
- Enabling Description: Each satellite in a Low Earth Orbit (LEO) constellation acts as a "wireless end-user device" managing multiple data streams over shared inter-satellite laser links and ground station downlinks. The "service processor" is the satellite's onboard router. The "network busy state" is determined by the current buffer occupancy of the laser link transceivers and the scheduled ground station contact window. A "policy" prioritizes data types: 1) Satellite Health & Telemetry, 2) High-Value Customer Data (e.g., military communication), 3) Consumer Broadband Backhaul, 4) Earth Observation Imagery. When a high-priority tactical data burst is routed through the satellite (e.g., from another satellite), the service processor assigns a "deprioritized" status to consumer backhaul and a "hold" status to imagery data, clearing the laser link buffers for the critical traffic. Once the high-priority traffic has passed, the processor restores normal access status to the other services.
- Mermaid Diagram:
```mermaid
graph TD
    subgraph LEO_Satellite
        Telemetry["Health & Telemetry"] --> Router
        Tactical[Tactical Comms] --> Router
        Broadband[Consumer Broadband] --> Router
        Imaging[Earth Observation] --> Router
        Router{Service Processor} -- Policy Logic --> Laser_Link[Inter-Satellite Laser Link]
        Router -- Policy Logic --> Downlink[Ground Station Downlink]
        StateMonitor[Link Buffer Monitor] -->|Busy State| Router
        style Telemetry fill:#c9ffc9
        style Tactical fill:#ffb3b3
        style Broadband fill:#d1d1ff
        style Imaging fill:#ffffcc
    end
```
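The pre-emption rule can be sketched in a few lines. The 0.8 buffer-occupancy trigger is an assumed value; the priority ordering and statuses follow the policy in the text:

```python
# Sketch of the onboard router's priority pre-emption: transmit-buffer
# occupancy determines the busy state, and a tactical burst demotes
# the lower traffic classes. Trigger value (0.8) is illustrative.

PRIORITY = ["telemetry", "tactical", "broadband", "imagery"]  # high -> low

def link_statuses(buffer_occupancy, tactical_burst_active):
    """buffer_occupancy in [0, 1] for the laser-link transmit queue."""
    if tactical_burst_active or buffer_occupancy > 0.8:
        return {"telemetry": "normal", "tactical": "priority",
                "broadband": "deprioritized", "imagery": "hold"}
    return {p: "normal" for p in PRIORITY}

print(link_statuses(0.5, tactical_burst_active=True)["imagery"])  # -> hold
```

Once the burst has passed and occupancy falls, a fresh call returns every class to "normal", matching the restore step described above.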
3.2. AgTech: Smart Irrigation Network Prioritization
- Enabling Description: A farm's wireless mesh network, comprising thousands of soil moisture sensors, weather stations, and automated irrigation valve controllers, is managed by a central gateway ("wireless end-user device"). The "network busy state" is determined by the collision rate in the LoRaWAN/802.11s mesh. The service processor in the gateway runs a policy engine where applications are defined by data type. During normal operation, all data types (sensor readings, valve status reports) are given equal access. However, if the weather station detects a critical event (e.g., sudden frost warning, high wind speed), it sends a P0 priority message. The gateway's service processor immediately assigns a "Restricted" access status to all routine soil moisture reporting "applications" and a "Blocked" status to firmware update downloads. This ensures the command-and-control traffic to activate anti-frost sprinklers or close valves is propagated through the network with minimum delay and maximum reliability.
- Mermaid Diagram:
```mermaid
sequenceDiagram
    participant WeatherStation
    participant Gateway
    participant SoilSensor
    participant IrrigationValve
    WeatherStation->>Gateway: High-Priority Alert (Frost Warning)
    Gateway->>Gateway: Determine Network Busy State: CRITICAL_EVENT
    Gateway->>Gateway: Apply Frost Policy
    Gateway-->>SoilSensor: Set Access Status: RESTRICTED
    Gateway-->>IrrigationValve: Set Access Status: PRIORITY_COMMAND
    Gateway->>IrrigationValve: Send Command: ACTIVATE_SPRINKLERS
    SoilSensor--xGateway: Data Upload Throttled/Delayed
```
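The gateway's policy switch can be sketched as a single dispatch on event and busy state. Event names and status labels below are illustrative stand-ins for the policy described above:

```python
# Sketch of the gateway policy engine's reaction to a P0 weather alert.
# Event names and statuses are illustrative.

def apply_event_policy(busy_state, event=None):
    if event in ("frost_warning", "high_wind"):
        return {"soil_moisture": "RESTRICTED",
                "firmware_update": "BLOCKED",
                "valve_control": "PRIORITY_COMMAND"}
    if busy_state == "HIGH_COLLISION":  # mesh collision rate over threshold
        return {"soil_moisture": "throttled",
                "firmware_update": "blocked",
                "valve_control": "normal"}
    return {"soil_moisture": "normal",
            "firmware_update": "normal",
            "valve_control": "normal"}

print(apply_event_policy("NORMAL", "frost_warning")["valve_control"])
# -> PRIORITY_COMMAND
```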
Axis 4: Integration with Emerging Tech
4.1. AI-Driven Predictive Network Access Management
- Enabling Description: The service processor on the device incorporates a lightweight, on-device Recurrent Neural Network (RNN) model. This model is trained to predict future network congestion and application demand based on a time-series analysis of historical data, including: time of day, device location (via GPS), cell tower ID, currently running foreground application, and historical bandwidth usage patterns. For example, the model learns that at 5:00 PM when the user's device connects to their home Wi-Fi, they typically launch a video streaming app, causing a spike in demand. At 4:59 PM, the AI-powered service processor proactively assigns a "pre-emptive throttle" status to background applications (e.g., cloud photo uploads, app store updates) before the user launches the video app, thereby reserving network capacity and ensuring a smooth streaming experience from the first second. The model is periodically retrained and updated by a central server.
- Mermaid Diagram:
```mermaid
flowchart LR
    subgraph Device
        A["Sensor Data (Time, Location, Cell ID)"] --> B[RNN Model]
        C[App Usage History] --> B
        D[Network State History] --> B
        B -- "Prediction: High Congestion @ T+1min" --> E{Policy Engine}
        E -- Proactive Policy --> F[Traffic Controller]
        G[App Traffic] --> F
        F -- Controlled Traffic --> H[Radio]
    end
    subgraph Cloud
        I[Central Model Training Server] -.-> B
    end
```
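The prediction-then-pre-emption loop can be illustrated with a deliberately simplified stand-in for the RNN: a time-bucketed frequency table of past app launches. This is not the model the embodiment describes, only a sketch of the control flow around it:

```python
# Simplified stand-in for the on-device RNN: a time-bucketed frequency
# table of past launches triggers a pre-emptive throttle shortly before
# a predicted demand spike. All names are illustrative.

from collections import Counter, defaultdict

class DemandPredictor:
    def __init__(self):
        self.launches = defaultdict(Counter)  # hour -> app launch counts

    def observe(self, hour, app):
        self.launches[hour][app] += 1

    def predict(self, hour, min_count=3):
        """Most frequent app for the coming hour, if seen often enough."""
        if not self.launches[hour]:
            return None
        app, count = self.launches[hour].most_common(1)[0]
        return app if count >= min_count else None

p = DemandPredictor()
for _ in range(5):
    p.observe(17, "video_streaming")   # the learned 5 PM habit

# At 4:59 PM the policy engine acts on the prediction for hour 17:
background_status = "pre-emptive throttle" if p.predict(17) else "normal"
print(p.predict(17), background_status)
```

An RNN would replace `predict()` with a learned sequence model over the same features (time, location, cell ID, usage history); the surrounding pre-emptive policy step is unchanged.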
4.2. Blockchain-Based Service Level Agreement (SLA) Enforcement
- Enabling Description: The system uses a private blockchain to create an immutable and verifiable log of all network management actions. The user's Service Level Agreement (SLA) is encoded as a smart contract on the blockchain. The service processor on the device and the network's Policy and Charging Rules Function (PCRF) are both nodes on this blockchain. When the service processor throttles an application due to network congestion (a state verified and published by the PCRF), it writes a transaction to the blockchain containing a cryptographic hash of the action details (App ID, timestamp, throttling factor, network state proof). The smart contract automatically validates this action against the SLA rules. This provides a transparent, auditable trail for billing and disputes. A user could, for example, have a "premium data" allowance which, when used, prevents throttling; the smart contract would automatically reject any throttling transaction from the service processor while the user's premium data balance is positive.
- Mermaid Diagram:
```mermaid
graph TD
    subgraph Device
        SP[Service Processor]
        SP -- 1. Detects Congestion --> SP
        SP -- 2. Throttles App_X --> SP
        SP -- 3. Creates Throttle Transaction --> BC_Node_Device[Blockchain Node]
    end
    subgraph Network
        PCRF["Policy/Charging Rules Function"]
        PCRF -- Publishes Network State --> BC_Node_Network[Blockchain Node]
    end
    subgraph Blockchain
        BC_Node_Device --> SmartContract{SLA Smart Contract}
        BC_Node_Network --> SmartContract
        SmartContract -- Validates against SLA rules --> Ledger[Append to Ledger]
    end
    User[User Portal] -->|Reads| Ledger
```
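The contract's validation rule, including the premium-data carve-out, can be sketched off-chain. SHA-256 hashing stands in for the on-chain transaction format; field names are illustrative:

```python
# Sketch of the SLA smart contract's validation rule: a throttle
# transaction is rejected while the premium data balance is positive.

import hashlib
import json

def make_tx(app_id, timestamp, factor, network_state):
    """Build a throttle transaction with a hash of the action details."""
    body = {"app": app_id, "ts": timestamp,
            "factor": factor, "state": network_state}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def validate(tx, sla):
    """Contract logic: throttling needs congestion and no premium balance."""
    if sla["premium_bytes_remaining"] > 0:
        return "REJECTED"
    if tx["state"] != "congested":
        return "REJECTED"  # throttling only justified under verified congestion
    return "APPENDED"

tx = make_tx("App_X", 1770000000, 0.25, "congested")
print(validate(tx, {"premium_bytes_remaining": 10**9}))  # -> REJECTED
print(validate(tx, {"premium_bytes_remaining": 0}))      # -> APPENDED
```

In the full embodiment the `state` field would carry the PCRF's signed network-state proof rather than a bare string.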
Axis 5: The "Inverse" or Failure Mode
5.1. Graceful Degradation with Interactive User Override
- Enabling Description: Instead of silently throttling or blocking applications, this embodiment focuses on user-centric graceful degradation. When the service processor determines the network is busy and a policy dictates throttling a user-interactive application (e.g., a social media feed), it does not immediately block the data. Instead, it instructs the application (via a local API) to request lower-quality content (e.g., lower-resolution images, shorter video clips). Simultaneously, it triggers a non-intrusive UI notification (e.g., a small banner) stating, "Network is busy. Showing lite content. [Tap for full quality]". If the user taps the override, the service processor assigns that specific application a temporary "priority" access status for a limited duration (e.g., 5 minutes), allowing full-quality access while potentially throttling other background tasks more aggressively to compensate. This user feedback is logged and can be used to personalize the policy over time.
- Mermaid Diagram:
```mermaid
stateDiagram-v2
    [*] --> Normal_Access
    Normal_Access: App requests full-quality content
    Normal_Access --> Degraded_Mode: Network becomes busy
    Degraded_Mode: Processor signals app to request lite content
    Degraded_Mode: Display 'Tap for full quality' UI
    Degraded_Mode --> Priority_Override: User taps UI
    Degraded_Mode --> Normal_Access: Network congestion eases
    Priority_Override: App gets full quality for 5 min
    Priority_Override --> Degraded_Mode: Timer expires
    Priority_Override --> Normal_Access: Network congestion eases
```
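The override state machine can be sketched directly from the diagram. The 300-second window, class name, and method names are illustrative:

```python
# Sketch of the override state machine: degraded mode offers lite
# content; a user tap grants a timed priority window (300 s assumed).

import time

class AccessState:
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.state = "NORMAL"
        self.override_until = 0.0

    def on_network_busy(self):
        if self.state == "NORMAL":
            self.state = "DEGRADED"      # app told to request lite content

    def on_user_tap(self, window_s=300):
        if self.state == "DEGRADED":
            self.state = "PRIORITY_OVERRIDE"
            self.override_until = self.clock() + window_s

    def tick(self):
        """Called periodically; expires the override window."""
        if self.state == "PRIORITY_OVERRIDE" and self.clock() >= self.override_until:
            self.state = "DEGRADED"      # timer expired, back to lite content
        return self.state

sm = AccessState()
sm.on_network_busy()
sm.on_user_tap()
print(sm.tick())  # -> PRIORITY_OVERRIDE
```

Injecting the clock makes the timeout behavior testable without waiting out the window, and the tap handler is where per-user override frequency could be logged to personalize the policy.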
Combination with Open-Source Standards
Differentiated Control via D-Bus and systemd: On a Linux-based device (e.g., Android, automotive Linux), the service processor is implemented as a privileged system daemon. It uses D-Bus, an open-source Inter-Process Communication (IPC) system, to receive network access requests from applications. Applications are categorized by their systemd user slice (e.g., app-background.slice, app-interactive.slice), and the service processor applies broad policies based on these slices. For example, when network congestion is high, it instructs the kernel's firewall (nftables) to severely rate-limit all traffic originating from the app-background.slice cgroup, providing an OS-integrated mechanism for enforcing the access status.
Policy Enforcement using eBPF: The control logic is implemented as an extended Berkeley Packet Filter (eBPF) program loaded into the kernel. The user-space service processor agent monitors the network state and application status. When a policy change is needed, it updates a set of "maps" (key-value stores) in the kernel. The eBPF program, attached to the network interface's traffic control (TC) ingress/egress hooks, reads these maps for every packet. It can then make instantaneous decisions to drop, re-route, or re-classify packet QoS bits based on the application's current access status stored in the map, all without context switching out of the kernel, offering extremely high performance.
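The split between slow-path policy updates and fast-path per-packet lookups can be illustrated with a pure-Python simulation of the shared map. This is not real eBPF code, only a model of the map semantics the in-kernel program would rely on:

```python
# Pure-Python simulation (not real eBPF) of the map-driven scheme: the
# user-space agent writes per-app access statuses into a shared map;
# the per-packet path only does a lookup, mirroring what the TC-hook
# program would do in-kernel.

ACCESS_MAP = {}  # stands in for an eBPF hash map keyed by app/cgroup id

def agent_update(app_id, status):
    """User-space agent (slow path): push a policy change into the map."""
    ACCESS_MAP[app_id] = status

def on_packet(app_id):
    """Per-packet hook (fast path): constant-time lookup, default-allow."""
    status = ACCESS_MAP.get(app_id, "allow")
    if status == "block":
        return "DROP"
    if status == "throttle":
        return "RECLASSIFY_QOS"
    return "PASS"

agent_update("app-background.slice", "throttle")
print(on_packet("app-background.slice"), on_packet("browser"))
# -> RECLASSIFY_QOS PASS
```

The key property being modeled is that policy changes never touch the per-packet path: the hook only reads, so enforcement latency stays flat no matter how often the agent rewrites the policy.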
Network State Determination via Prometheus Metrics: In an enterprise or private 5G setting, the network infrastructure (gNodeB, UPF) is configured to export its performance metrics (e.g., Physical Resource Block (PRB) utilization, RTT, packet drop rate) in the Prometheus open-source monitoring format. The service processor on the device periodically scrapes this metrics endpoint (or receives metrics pushed from a Prometheus Alertmanager instance) to determine the "network busy state". This replaces proprietary or heuristic-based detection with a standardized, data-rich source of truth about the real-time condition of the network infrastructure.
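Deriving the busy state from a scrape can be sketched as follows. A sample payload in the Prometheus text exposition format is shown inline instead of an HTTP fetch; the metric names (`gnb_prb_utilization_ratio`, `gnb_packet_drop_ratio`) and the 0.8 threshold are assumptions, not taken from any standard exporter:

```python
# Sketch: parse a scraped Prometheus exposition payload (sample shown
# inline) and derive a busy state from PRB utilization.

SAMPLE = """\
# HELP gnb_prb_utilization_ratio Downlink PRB utilization
# TYPE gnb_prb_utilization_ratio gauge
gnb_prb_utilization_ratio 0.87
gnb_packet_drop_ratio 0.002
"""

def parse_metrics(text):
    """Minimal parser for unlabeled gauge lines in the text format."""
    metrics = {}
    for line in text.splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip HELP/TYPE comments and blanks
        name, value = line.rsplit(None, 1)
        metrics[name] = float(value)
    return metrics

def busy_state(metrics, prb_threshold=0.8):
    return ("BUSY" if metrics.get("gnb_prb_utilization_ratio", 0.0)
            >= prb_threshold else "NOT_BUSY")

print(busy_state(parse_metrics(SAMPLE)))  # -> BUSY
```

A production parser would also handle labeled series and histogram types, but the point stands: the busy-state input becomes a standardized, inspectable metric rather than a device-local heuristic.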