Patent US5768528
Derivative works
Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.
Defensive Disclosure and Technical Variations of U.S. Patent No. 5,768,528
Publication Date: May 6, 2026
Subject: Technical Disclosures Related to Client-Server Information Delivery Systems
Purpose: This document discloses a series of technical variations, enhancements, and alternative implementations related to the system and methods described in U.S. Patent No. 5,768,528, titled "Client-server system for delivery of online information." The purpose of this disclosure is to place these concepts into the public domain, thereby establishing them as prior art for any future patent applications.
Section 1: Derivatives of the Core Server/Client Data Transfer Method (Claims 1, 14, 38)
Axis 1: Component Substitution
Derivative 1.1: Quantum-Resistant File Identification and Verification
- Enabling Description: The Cyclic Redundancy Check (CRC) file identification code is replaced with a cryptographic hash derived from a Quantum Key Distribution (QKD) protocol. A server and subscriber client first establish a shared secret key using a QKD channel (e.g., BB84 protocol). For file transfer verification, particularly for resuming interrupted downloads as in Claim 38, the identification code sent by the subscriber is not a simple CRC of the partial file. Instead, it is a SHA-3 hash of the partial file's content, which is then XORed with a one-time pad segment from the pre-shared quantum key. The server performs the same operation on its version of the file segment. A match provides quantum-resistant assurance that the partial file is both intact and authentic, preventing spoofing.
- Diagram:
```mermaid
sequenceDiagram
    participant Client
    participant Server
    participant QKD_Channel
    Client->>QKD_Channel: Establish Shared Secret Key
    Server->>QKD_Channel: Establish Shared Secret Key
    Note over Client, Server: Pre-download key exchange complete
    Client->>Server: Request File Download
    Server-->>Client: Transmit File (interrupted)
    Client->>Client: 1. Calculate SHA-3 hash of partial file
    Client->>Client: 2. XOR hash with QKD one-time-pad
    Client->>Server: Send(FileSize, Quantum-Verified-Hash)
    Server->>Server: 1. Get corresponding file segment
    Server->>Server: 2. Calculate SHA-3 hash of segment
    Server->>Server: 3. XOR hash with QKD one-time-pad
    Server->>Server: Compare Hashes
    alt Hashes Match
        Server-->>Client: Transmit remaining portion of file
    else Hashes Do Not Match
        Server-->>Client: Request full re-transmission
    end
```
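The masking and comparison steps above can be sketched in a few lines of Python. This is a minimal illustration, not part of the patent: `masked_hash` and `verify_partial` are hypothetical names, and the one-time-pad segment is passed in as a plain byte string standing in for key material that a real system would obtain from the QKD channel.

```python
import hashlib

def masked_hash(data: bytes, otp_segment: bytes) -> bytes:
    """SHA-3 hash of the (partial) file, XORed with a one-time-pad
    segment drawn from the pre-shared quantum key."""
    digest = hashlib.sha3_256(data).digest()
    if len(otp_segment) < len(digest):
        raise ValueError("one-time pad segment too short")
    return bytes(d ^ p for d, p in zip(digest, otp_segment))

def verify_partial(server_segment: bytes, client_token: bytes,
                   otp_segment: bytes) -> bool:
    """Server side: recompute the masked hash over its own copy of the
    segment and compare it with the token the client sent."""
    return masked_hash(server_segment, otp_segment) == client_token
```

On a match the server resumes the transfer; on a mismatch it requests a full re-transmission, exactly as in the diagram.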
Derivative 1.2: Neuromorphic Predictive Scheduling
- Enabling Description: The static "schedule of events" file is replaced by a dynamic, predictive scheduling system managed by a neuromorphic processing unit (NPU) on the server. The NPU implements a spiking neural network that continuously processes real-time data streams, including the subscriber's historical access patterns, current network latency metrics for the subscriber's ISP, time of day, and the publisher's content release cadence. The NPU predicts optimal, personalized download windows for each subscriber to maximize transfer success and minimize network cost. The schedule is no longer a fixed file but a probabilistic model that pushes a "next optimal time" to the client after each successful or failed connection.
- Diagram:
```mermaid
flowchart TD
    subgraph Server
        A[Real-time Data Streams<br/>- User History<br/>- Network Latency<br/>- Publisher Cadence] --> B{"Neuromorphic Processor (NPU)"}
        B -- Predicts --> C(Probabilistic Schedule Model)
        C -- Pushes --> D[Next Optimal Download Time]
    end
    subgraph Client
        E[Client Scheduler] -- Receives --> D
        E -- Triggers at Optimal Time --> F(Initiate Connection)
    end
    Server -- Transmits Data --> Client
    F --> G{Download Success?}
    G -- Yes/No --> H(Send Result to Server)
    H --> A
```
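The spiking neural network itself is beyond a short sketch, but the decision it feeds can be illustrated with a simple stand-in scorer: rank candidate windows by historical success rate minus a latency penalty. The `WindowStats` fields and the linear score are illustrative assumptions, not the NPU's actual model.

```python
from dataclasses import dataclass

@dataclass
class WindowStats:
    hour: int            # candidate start hour (0-23)
    past_success: float  # fraction of past transfers in this hour that completed
    latency_ms: float    # recent median latency to the subscriber's ISP

def next_optimal_hour(candidates: list[WindowStats]) -> int:
    """Pick the candidate hour maximizing a toy score: reward
    historical success, penalize current latency."""
    def score(w: WindowStats) -> float:
        return w.past_success - w.latency_ms / 1000.0
    return max(candidates, key=score).hour
```

The server would push the winning hour to the client as the "next optimal time" after each connection attempt.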
Axis 2: Operational Parameter Expansion
Derivative 2.1: Interplanetary Data Synchronization Protocol
- Enabling Description: The system is adapted for synchronizing mission-critical data (e.g., habitat telemetry, geological surveys) between a Mars habitat (subscriber) and Earth-based Mission Control (server). The protocol explicitly accounts for communication windows and extreme latency (4 to 24 minutes one-way). The "schedule of events" file becomes a "communications window manifest" based on orbital mechanics. The error-recovery mechanism of Claim 38 is modified to handle multi-day interruptions. The client stores a journal of received and verified data blocks. Upon re-establishing a connection, the client transmits a bitfield representing the journal of successfully received blocks, allowing the server to calculate the exact set of missing blocks and transmit them without requiring a CRC check of a contiguous partial file. Forward Error Correction (FEC) using LDPC codes is applied to all transmissions to mitigate data corruption from cosmic radiation.
- Diagram:
```mermaid
sequenceDiagram
    participant Mars_Client
    participant Earth_Server
    Note over Mars_Client, Earth_Server: Latency: 4-24 mins each way
    Mars_Client->>Earth_Server: Request Data Sync (during comms window)
    Earth_Server-->>Mars_Client: Transmit File Segments (with FEC)
    Note right of Mars_Client: Connection Lost (e.g., dust storm, planet rotation)
    Mars_Client->>Mars_Client: Store received/verified segments in journal
    loop Next Comms Window (Hours/Days Later)
        Mars_Client->>Mars_Client: Generate bitfield from journal
        Mars_Client->>Earth_Server: Request Resume, send(FileID, Block_Bitfield)
    end
    Earth_Server->>Earth_Server: Compare bitfield to original file map
    Earth_Server-->>Mars_Client: Transmit only missing segments
```
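The bitfield exchange is straightforward to make concrete. The sketch below assumes an MSB-first packing (one bit per block), which is an illustrative choice; the function names are hypothetical.

```python
def encode_journal(received: set[int], total_blocks: int) -> bytes:
    """Client side: pack the set of verified block indices into a
    bitfield (MSB-first, one bit per block)."""
    field = bytearray((total_blocks + 7) // 8)
    for i in received:
        byte, bit = divmod(i, 8)
        field[byte] |= 0x80 >> bit
    return bytes(field)

def missing_blocks(bitfield: bytes, total_blocks: int) -> list[int]:
    """Server side: decode the journal bitfield and return the
    indices of blocks still to be transmitted."""
    missing = []
    for i in range(total_blocks):
        byte, bit = divmod(i, 8)
        if not bitfield[byte] & (0x80 >> bit):
            missing.append(i)
    return missing
```

Because the server derives the missing set directly from the bitfield, a multi-day gap costs only one round trip, with no need for a contiguous partial file to checksum.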
Axis 3: Cross-Domain Application
Derivative 3.1: Agricultural Technology (AgTech) Drone Fleet Management
- Enabling Description: A central farm server pushes mission-critical data files (e.g., multispectral imagery analysis, variable-rate fertilization maps, firmware updates) to a fleet of autonomous agricultural drones. The drones, acting as subscribers, connect to the server via a mesh Wi-Fi network when they return to their charging pads. The connection is often intermittent. The drone client transmits a list of its current file versions (filename, size, SHA-256 hash). The server compares this to the master repository and transmits only the delta (new or updated files). The error-resume protocol is used for large map files to ensure that a drone does not depart for a mission with a corrupted or incomplete instruction set.
- Diagram:
```mermaid
graph TD
    subgraph Farm_Control_Hub
        Server[Central Server]
        Repo[Mission File Repository]
        Server --- Repo
    end
    subgraph Drone_Fleet
        Drone1[Drone A<br/>- Client Software]
        Drone2[Drone B<br/>- Client Software]
        Drone3[Drone C<br/>- Client Software]
    end
    Pad1[Charging Pad 1] -- Wi-Fi Mesh --> Server
    Pad2[Charging Pad 2] -- Wi-Fi Mesh --> Server
    Pad3[Charging Pad 3] -- Wi-Fi Mesh --> Server
    Drone1 -- Lands on --> Pad1
    Drone2 -- Lands on --> Pad2
    Drone3 -- Lands on --> Pad3
    Pad1 -- Connection established --> ClientA_Request{Drone A requests sync}
    ClientA_Request -- "filename, size, hash" --> Server
    Server -- "Sends delta files/patches" --> ClientA_Request
```
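The server-side delta computation described above (compare the drone's reported (filename, size, SHA-256) triples against the master repository) can be sketched as follows; the repository is modeled here as an in-memory dict for illustration only.

```python
import hashlib

def file_fingerprint(name: str, data: bytes) -> tuple:
    """(filename, size, SHA-256 hex digest) as reported by a drone."""
    return (name, len(data), hashlib.sha256(data).hexdigest())

def compute_delta(drone_manifest: list, repo: dict) -> list:
    """Server side: names of files the drone is missing or holds in a
    stale/corrupt version, i.e. the only files worth transmitting."""
    have = {name: (size, digest) for name, size, digest in drone_manifest}
    delta = []
    for name, data in repo.items():
        _, size, digest = file_fingerprint(name, data)
        if have.get(name) != (size, digest):
            delta.append(name)
    return delta
```

Only the files in the returned delta are pushed over the intermittent mesh link; large ones still go through the resumable-download protocol.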
Axis 4: Integration with Emerging Tech
Derivative 4.1: AI-Driven Predictive Content Delivery
- Enabling Description: An AI/ML model on the server analyzes each subscriber's profile, content consumption history, and real-world event triggers (via news APIs) to predict which files the subscriber will need. The system pre-emptively pushes these predicted files to a local cache on the subscriber's device during off-peak hours. The scheduled download event is then transformed from a full download into a lightweight "manifest verification" event. The client simply reports the hashes of the files in its predictive cache, and the server sends a small confirmation message or a patch for any files that were updated since being cached. This dramatically reduces perceived download times.
- Diagram:
```mermaid
flowchart LR
    subgraph Server
        A[AI/ML Engine] --> B{Predicts User Needs}
        C[Publisher Content] --> A
        D[Network & User Data] --> A
        B --> E[Push Predictive Cache]
    end
    subgraph Subscriber_Device
        F[Local Predictive Cache]
        G[Client Agent]
    end
    E -- Off-peak hours --> F
    G -- Scheduled "download" --> H((Verify Cache Manifest))
    H -- "List of cached file hashes" --> Server
    Server -- "Confirmation or small patch" --> H
    I[User] --> J{Requests Content}
    J -- "Instant load" --> F
```
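The lightweight "manifest verification" event reduces to comparing client-reported hashes against the live versions. A minimal sketch, with hypothetical function names and an in-memory dict standing in for the publisher's store:

```python
import hashlib

def client_report(cache: dict) -> dict:
    """Client side: SHA-256 hex digest of every file in the
    predictive cache, keyed by filename."""
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in cache.items()}

def manifest_check(cached_hashes: dict, current: dict) -> dict:
    """Server side: answer 'ok' if the client's cached copy still
    matches the live version, else 'patch' (file changed since it
    was pushed to the cache)."""
    reply = {}
    for name, client_hash in cached_hashes.items():
        live = hashlib.sha256(current[name]).hexdigest()
        reply[name] = "ok" if live == client_hash else "patch"
    return reply
```

Files flagged "patch" receive only a small update at the scheduled event, which is what makes the user-perceived download near-instant.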
Derivative 4.2: Blockchain-Verified File Provenance and Integrity
- Enabling Description: The system is integrated with a permissioned blockchain to provide immutable proof of file origin and integrity. When a publisher uploads a file to the server, the server calculates the file's hash (e.g., IPFS CID) and registers it in a smart contract on the blockchain, creating a permanent record of the file's content, publisher, and timestamp. When the subscriber client receives a file, it re-calculates the hash and queries the smart contract to verify it matches the publisher's registered version. For resumed downloads, the CRC is supplemented with a Merkle proof; the client provides the root hash of the partial file's Merkle tree, which the server can efficiently verify against the full file's Merkle tree.
- Diagram:
```mermaid
sequenceDiagram
    participant Publisher
    participant Server
    participant Blockchain
    participant Subscriber
    Publisher->>Server: Upload File
    Server->>Server: Calculate File Hash (CID)
    Server->>Blockchain: Register(File_CID, Publisher_ID) via Smart Contract
    Subscriber->>Server: Request File
    Server-->>Subscriber: Transmit File
    Subscriber->>Subscriber: Calculate received file's hash
    Subscriber->>Blockchain: Verify(File_CID)
    alt Verification OK
        Subscriber->>Subscriber: Install File
    else Verification Fails
        Subscriber->>Subscriber: Discard File, Report Error
    end
```
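The Merkle-based resume check can be made concrete with a toy block size. The client sends the root of a Merkle tree built over its partial file's blocks; the server recomputes the root over the same prefix of its own copy. Block size, helper names, and the odd-node promotion rule below are illustrative assumptions.

```python
import hashlib

BLOCK = 4  # toy block size for illustration

def _h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(blocks: list) -> bytes:
    """Root of a binary Merkle tree; an unpaired node is promoted
    unchanged to the next level."""
    level = [_h(b) for b in blocks]
    while len(level) > 1:
        nxt = [_h(level[i] + level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0]

def verify_prefix(full_file: bytes, n_blocks: int, client_root: bytes) -> bool:
    """Server side: recompute the Merkle root over the first n_blocks
    of its copy and compare with the root the client sent."""
    blocks = [full_file[i:i + BLOCK] for i in range(0, len(full_file), BLOCK)]
    return merkle_root(blocks[:n_blocks]) == client_root
```

A production system would align the block size with the transfer chunk size and could additionally return Merkle proofs for individual blocks.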
Axis 5: The "Inverse" or Failure Mode
Derivative 5.1: Graceful Degradation for Metered/Unstable Networks
- Enabling Description: The subscriber client actively monitors network status (e.g., using the Network Information API) and device power level. If it detects a metered connection (non-Wi-Fi), low signal strength (<2 bars), or low battery (<20%), it enters a "graceful degradation" mode. It sends a special flag in its information request to the server. The server responds by sending a low-fidelity version of the publication: text-only files, images replaced with low-resolution placeholders (e.g., 10 KB LQIPs), and video/audio files replaced with metadata-only stubs. The download schedule is also automatically deferred until a stable, unmetered network is available.
- Diagram:
```mermaid
stateDiagram-v2
    [*] --> Stable_Network
    Stable_Network: Full content downloads
    Stable_Network --> Low_Power: Battery < 20%
    Stable_Network --> Unstable_Network: Low signal or metered network detected
    Low_Power --> Stable_Network: Device charging
    Unstable_Network --> Stable_Network: Stable WiFi detected
    state Degraded_Mode {
        direction LR
        Low_Power
        Unstable_Network
        [*] --> Request_Low_Fi: Client sends 'degraded' flag to server
        Request_Low_Fi --> Receive_Low_Fi: Server sends text-only, LQIPs
        Receive_Low_Fi --> Defer_Schedule: Postpone next check
    }
```
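The client-side trigger policy stated above (any one of a metered link, fewer than 2 bars, or battery under 20% forces degraded mode) is simple enough to state as code; the function name and return values are illustrative.

```python
def delivery_mode(metered: bool, signal_bars: int, battery_pct: int) -> str:
    """Client-side policy: any single trigger condition is enough
    to request the low-fidelity publication from the server."""
    if metered or signal_bars < 2 or battery_pct < 20:
        return "degraded"  # flag set: text-only files, LQIP images, stubs
    return "full"
```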
Section 2: Derivatives of the User Interface (Claim 32)
Derivative 6.1: Augmented Reality (AR) Contextual Channel Display
- Enabling Description: The user interface is implemented as an augmented reality overlay on smart glasses. The "channel selection menu" is context-aware and triggered by the user's gaze. For example, looking at a stock market terminal brings up the "Bloomberg" channel. The "scrolling ticker" is not fixed to the screen but is world-locked, appearing as a persistent holographic display in the user's environment. The publisher's logo is a 3D object that anchors the ticker in place. The user changes channels through gesture control (e.g., a swiping motion) or voice commands.
- Diagram:
```mermaid
flowchart TD
    A[User wearing AR Glasses] -- Gazes at --> B(Object of Interest<br/>e.g., a Tesla car)
    C[AR System] -- Recognizes object --> D{Trigger Contextual Channel}
    D -- "Automotive News Channel" --> E[Display 3D Logo & Ticker]
    E -- World-locked in user's view --> F((Holographic Ticker scrolls news about Tesla))
    A -- Performs swipe gesture --> G{Change Channel}
    G -- "Financial News" --> H((Ticker content switches to TSLA stock price))
```
Section 3: Combination with Open-Source Standards
Combination 1: Delivery System Over the Tor Network
- Enabling Description: The entire client-server communication protocol is tunneled through The Onion Router (Tor) open-source network to provide privacy and anonymity for subscribers. The server hosts its service as a Tor onion service, and the subscriber client is configured to route all its traffic through a local Tor proxy. The scheduled connection and error-resume protocol function as described, but the underlying TCP/IP connection is anonymized. This is applicable for publishers and subscribers in environments with heavy censorship or surveillance, ensuring that both the content being delivered and the identity of the subscriber are protected.
- Diagram:
```mermaid
graph TD
    subgraph Subscriber
        A[Client Application] -- traffic --> B(Local Tor Proxy)
    end
    subgraph Internet
        C(Tor Entry Node)
        D(...)
        E(Tor Exit Node)
        B --> C --> D --> E
    end
    subgraph Server_Infrastructure
        F[Server Onion Service]
    end
    E --> F
```
Combination 2: Using WebSockets for Real-Time Ticker Updates
- Enabling Description: While the main file downloads operate on the scheduled request-response model, the scrolling ticker display (Claim 32) is powered by a persistent WebSocket connection (RFC 6455). After the initial content download, the client opens a WebSocket connection to a specific ticker service endpoint on the server. The server can then push real-time, low-latency updates (e.g., stock price changes, sports scores) to the ticker without waiting for the next scheduled download event. This creates a hybrid system combining scheduled bulk downloads with a real-time stream for immediate updates, using an open web standard.
- Diagram:
```mermaid
sequenceDiagram
    participant Client
    participant Server
    Client->>Server: Initiate scheduled download (HTTP)
    Server-->>Client: Respond with publication files
    Client->>Server: Open WebSocket connection for Ticker
    Server-->>Client: Connection established
    loop Real-time
        Server-->>Client: Push Ticker Update (JSON payload)
        Client->>Client: Update scrolling ticker UI
    end
```
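The client-side handling of one pushed frame can be sketched as below. The message shape (a JSON object with "type", "symbol", and "value" fields) is an assumption for illustration; a real client would receive these payloads over a WebSocket library connection rather than as function arguments.

```python
import json

def apply_ticker_update(state: dict, payload: str) -> dict:
    """Merge one pushed WebSocket frame (a JSON object) into the
    ticker's symbol -> display-value state. Messages of other types
    (e.g., heartbeats) are ignored."""
    msg = json.loads(payload)
    if msg.get("type") == "ticker":
        state[msg["symbol"]] = msg["value"]
    return state
```

The scrolling ticker UI then re-renders from `state` on each update, independent of the bulk-download schedule.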
Combination 3: Integration with ActivityPub Federated Protocol
- Enabling Description: The system is decentralized using the W3C ActivityPub standard. Publishers run their own ActivityPub-compatible servers (e.g., Mastodon, PeerTube instances). A subscriber's client is an ActivityPub client that "follows" various publisher actors. The "server" component of the patent becomes a personalized caching and delivery agent that pulls content from the federated network on behalf of the user. This agent subscribes to the user's followed publishers, aggregates the content, and then uses the scheduled, resumable download protocol to push a personalized "digest" to the user's end device. This combines the open, decentralized social networking standard with the patent's efficient and reliable client-side delivery mechanism.
- Diagram:
```mermaid
graph LR
    P1["Publisher 1<br/>(ActivityPub Server)"]
    P2["Publisher 2<br/>(ActivityPub Server)"]
    P3["Publisher 3<br/>(ActivityPub Server)"]
    subgraph "User's Ecosystem"
        Agent[Personal Caching & Delivery Agent]
        Client[End-User Client Device]
        Agent -- Follows --> P1
        Agent -- Follows --> P2
        Agent -- Follows --> P3
        Agent -- Scheduled/resumable push --> Client
    end
    P1 -- Pushes content --> Agent
    P2 -- Pushes content --> Agent
    P3 -- Pushes content --> Agent
```