Patent 8352584
Derivative works
Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.
Defensive Disclosure and Prior Art Generation for U.S. Patent 8,352,584
Publication Date: May 1, 2026
Subject: Derivatives and extensions of concepts for hosting multiple, customized, and isolated computing clusters as described in U.S. Patent 8,352,584. This document is intended to enter the public domain to serve as prior art.
Derivatives Based on Core Architectural Claims (Claims 1 & 10)
Axis 1: Material & Component Substitution
1.1. Hosted Clusters Isolated by Optical Switching Fabric
- Enabling Description: A system for hosting multiple client clusters where the private cluster networks and gateways are replaced by a reconfigurable optical switching fabric. Each cluster's nodes are connected to the fabric. A central controller, upon a client's request for a cluster, configures the optical cross-connects (OXCs) and MEMS-based switches to create dedicated, isolated lightpaths between the nodes of a specific cluster. A separate, designated lightpath serves as the "gateway" connection, linking one node of the cluster to the hosting provider's private electronic network for monitoring and management. This architecture provides complete Layer 1 isolation between clusters, eliminating electronic crosstalk and contention, and offers latency measured in nanoseconds. The configuration of a first HPC cluster for a financial client would involve creating lightpaths optimized for the lowest possible latency, while a second cluster for a VFX rendering client would have its lightpaths configured for maximum bandwidth.
- Mermaid.js Diagram:
  graph TD
    subgraph Hosting Facility
      subgraph Client_A_Cluster [First HPC Cluster]
        NodeA1[Node] --- NodeA2[Node]
        NodeA2 --- NodeA3[Node]
      end
      subgraph Client_B_Cluster [Second HPC Cluster]
        NodeB1[Node] --- NodeB2[Node]
      end
      OSF[Optical Switching Fabric]
      subgraph Provider_Network [Private Company Network]
        Monitoring[Monitoring System 220]
        MgmtGateway[Management Gateway]
      end
      NodeA1 -- lightpath A --> OSF
      NodeA2 -- lightpath A --> OSF
      NodeA3 -- lightpath A --> OSF
      NodeB1 -- lightpath B --> OSF
      NodeB2 -- lightpath B --> OSF
      OSF -- lightpath A (Gateway) --> MgmtGateway
      OSF -- lightpath B (Gateway) --> MgmtGateway
    end
    ClientA[Client A System 208] --> PublicNet[Public Network 204]
    ClientB[Client B System 208] --> PublicNet
    PublicNet --> Firewall[Firewall/Auth 210]
    Firewall --> Provider_Network
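- Illustrative Sketch: The lightpath provisioning described above can be sketched as follows. The fabric controller object and its connect_ports call are hypothetical stand-ins for a vendor OXC/MEMS API; only the structure (a full mesh of intra-cluster lightpaths plus one designated gateway lightpath) follows the description.

```python
# Hypothetical fabric API: `fabric.connect_ports()` stands in for any vendor
# OXC/MEMS controller; no real product interface is implied.
from dataclasses import dataclass, field

@dataclass
class Lightpath:
    endpoints: tuple          # (port_a, port_b) on the optical fabric
    purpose: str              # "intra-cluster" or "gateway"

@dataclass
class ClusterLightpaths:
    cluster_id: str
    paths: list = field(default_factory=list)

def provision_cluster(fabric, cluster_id, node_ports, mgmt_port, profile):
    """Create dedicated, Layer 1 isolated lightpaths for one hosted cluster.

    node_ports -- fabric ports of the cluster's nodes
    mgmt_port  -- port facing the provider's private network (gateway lightpath)
    profile    -- e.g. {"optimize": "latency"} or {"optimize": "bandwidth"}
    """
    plan = ClusterLightpaths(cluster_id)
    # Full mesh of intra-cluster lightpaths.
    for i, a in enumerate(node_ports):
        for b in node_ports[i + 1:]:
            fabric.connect_ports(a, b, **profile)        # hypothetical call
            plan.paths.append(Lightpath((a, b), "intra-cluster"))
    # One designated gateway lightpath to the provider's private network.
    head = node_ports[0]
    fabric.connect_ports(head, mgmt_port, **profile)      # hypothetical call
    plan.paths.append(Lightpath((head, mgmt_port), "gateway"))
    return plan
```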
1.2. FPGA-Based Network Isolation Gateways
- Enabling Description: The gateway devices (240, 241, 242) are implemented not as general-purpose routers but as specialized network interface cards (NICs) or appliances based on Field-Programmable Gate Arrays (FPGAs). For each client cluster, a specific bitstream is loaded onto the FPGA. This bitstream implements stateful packet inspection, traffic shaping, and access control lists directly in hardware logic. This allows for network isolation rules to be enforced at multi-gigabit line rates with deterministic, microsecond-level latency. A first cluster for a real-time data processing task could have its FPGA gateway programmed with rules to prioritize specific UDP streams, while a second cluster for batch analytics could have its gateway programmed to enforce strict bandwidth caps and perform protocol filtering.
- Mermaid.js Diagram:
  sequenceDiagram
    participant Client
    participant PublicNet
    participant Firewall
    participant CompanyNet
    participant FPGAGateway as FPGA Gateway (240)
    participant Cluster as Custom HPC Cluster (250)
    Client->>PublicNet: Access Request
    PublicNet->>Firewall: Forward Request
    Firewall->>CompanyNet: Authenticated Request
    CompanyNet->>FPGAGateway: Route to Cluster 250
    FPGAGateway->>Cluster: Forward Packet (Line-rate check)
    Note over FPGAGateway: Bitstream enforces isolation and traffic shaping rules in hardware logic.
    Cluster->>FPGAGateway: Response
    FPGAGateway->>CompanyNet: Forward Response
    CompanyNet->>Firewall: Route to Client
    Firewall->>PublicNet: Forward Response
    PublicNet->>Client: Deliver Response
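- Illustrative Sketch: A minimal sketch of the per-cluster match-action rule tables that a build pipeline could compile into the gateway bitstream. Field names, port numbers, and rate limits are illustrative; the first_match function only models the hardware lookup in software.

```python
# Per-cluster isolation/traffic-shaping rule tables (illustrative values only).
REALTIME_CLUSTER_RULES = [
    # Prioritize specific UDP streams for the real-time data processing cluster.
    {"match": {"proto": "udp", "dst_port": 5001}, "action": "forward", "priority": 7},
    {"match": {"proto": "udp", "dst_port": 5002}, "action": "forward", "priority": 7},
    # Default: anything not whitelisted is dropped in hardware.
    {"match": {}, "action": "drop", "priority": 0},
]

BATCH_CLUSTER_RULES = [
    # Batch analytics cluster: protocol filtering plus a bandwidth cap.
    {"match": {"proto": "tcp", "dst_port": 22},  "action": "forward", "priority": 3},
    {"match": {"proto": "tcp", "dst_port": 443}, "action": "rate_limit",
     "rate_mbps": 2000, "priority": 2},
    {"match": {}, "action": "drop", "priority": 0},
]

def first_match(rules, packet):
    """Software model of the hardware match-action lookup (highest priority wins)."""
    for rule in sorted(rules, key=lambda r: -r["priority"]):
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule
    return None

# Example: a UDP packet to port 5001 hits the high-priority forward rule.
print(first_match(REALTIME_CLUSTER_RULES, {"proto": "udp", "dst_port": 5001}))
```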
1.3. Software-Defined Perimeter for Client Access
- Enabling Description: The centralized firewall and authentication mechanism (210) is replaced with a Software-Defined Perimeter (SDP), also known as a "zero trust" architecture. There is no single entry point. Instead, an SDP Controller is linked to an identity provider. When a client attempts to connect, their device and identity are first authenticated by the Controller. Upon successful authentication, the Controller dynamically instructs a specific SDP Gateway, co-located with the client's assigned cluster, to create a temporary, encrypted TLS tunnel directly to the client's machine. This creates a secure, individualized network segment of one, preventing any lateral network movement. The configuration differs in that a client for a high-security cluster may require multi-factor authentication and device posture checking before the tunnel is established, while a client for a development cluster may only require a username/password.
- Mermaid.js Diagram:
  graph TD
    subgraph SDP_Architecture
      ClientA[Client A] --> SDPController[SDP Controller]
      SDPController -- Authenticate & Authorize --> ClientA
      SDPController -- Push Rules --> SDP_Gateway_A[SDP Gateway A]
      ClientA -- mTLS Tunnel --> SDP_Gateway_A
      SDP_Gateway_A --- ClusterA[HPC Cluster A]
      ClusterA --- PrivateNet[Private Company Network]
      ClientB[Client B] --> SDPController
      SDPController -- Authenticate & Authorize --> ClientB
      SDPController -- Push Rules --> SDP_Gateway_B[SDP Gateway B]
      ClientB -- mTLS Tunnel --> SDP_Gateway_B
      SDP_Gateway_B --- ClusterB[HPC Cluster B]
      ClusterB --- PrivateNet
      PrivateNet --- MonitoringSystem[Monitoring System]
    end
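- Illustrative Sketch: A minimal sketch of the SDP controller's decision flow. The idp (identity provider) and gateway objects and their methods are hypothetical; the per-cluster policy table reflects the MFA and device-posture distinction described above.

```python
# Hypothetical clients: `idp` (identity provider) and `gateways[...]` (SDP gateways).
import secrets
import time

POLICIES = {
    "high-security-cluster": {"mfa": True,  "posture_check": True},
    "dev-cluster":           {"mfa": False, "posture_check": False},
}

def authorize_and_connect(idp, gateways, client_id, credentials, cluster_name):
    policy = POLICIES[cluster_name]
    # 1. Authenticate identity; apply MFA / device posture checks when the policy demands them.
    if not idp.verify(client_id, credentials):                    # hypothetical call
        raise PermissionError("authentication failed")
    if policy["mfa"] and not idp.verify_mfa(client_id):           # hypothetical call
        raise PermissionError("MFA required")
    if policy["posture_check"] and not idp.device_posture_ok(client_id):
        raise PermissionError("device posture check failed")
    # 2. Instruct the gateway co-located with the assigned cluster to open a
    #    short-lived mTLS tunnel scoped to exactly this client ("segment of one").
    token = secrets.token_urlsafe(32)
    gateways[cluster_name].open_tunnel(client_id=client_id,       # hypothetical call
                                       token=token,
                                       expires_at=time.time() + 3600)
    return token
```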
1.4. Disaggregated Persistent Memory as Shared Storage
- Enabling Description: The shared data storage within a cluster is implemented using disaggregated persistent memory modules (e.g., Intel Optane or other Storage Class Memory) connected over a high-speed, low-latency fabric like Compute Express Link (CXL) or Gen-Z. Instead of a dedicated storage node (320), pools of persistent memory are directly accessible by all processing nodes in the cluster via the fabric. A first cluster can be configured with memory-mapped access to this persistent memory pool, treating it like extended DRAM for in-memory database tasks. A second cluster, for the same client, can be configured to access a different portion of the same physical memory pool as a block device, providing an ultra-fast scratch space for checkpointing large simulations. The configuration difference lies in the software-defined access mode (memory vs. block) and the quality-of-service policies applied by the fabric manager.
- Mermaid.js Diagram:
  graph TD
    subgraph Custom_Cluster_350
      Node1[Processing Node]
      Node2[Processing Node]
      Node3[Processing Node]
      CXL_Fabric[CXL Fabric Switch]
      Node1 -- CXL Link --> CXL_Fabric
      Node2 -- CXL Link --> CXL_Fabric
      Node3 -- CXL Link --> CXL_Fabric
    end
    subgraph Disaggregated_P-Mem_Pool
      PMem1[Persistent Memory Module]
      PMem2[Persistent Memory Module]
    end
    CXL_Fabric -- CXL Link --> PMem1
    CXL_Fabric -- CXL Link --> PMem2
    Gateway[Gateway 240] -- Connects one node to --> PrivateNet[Private Network 230]
    Node1 --- Gateway
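- Illustrative Sketch: The software-defined access-mode and quality-of-service distinction can be expressed as a fabric-manager allocation record, sketched below. Field names and values are illustrative; the fabric-manager API itself is assumed.

```python
# Illustrative allocation records a fabric manager might hold for the pool.
from dataclasses import dataclass

@dataclass
class PMemAllocation:
    cluster_id: str
    capacity_gib: int
    access_mode: str      # "memory" (load/store, mapped like extended DRAM)
                          # or "block" (exposed as an ultra-fast block device)
    qos_class: str        # bandwidth/latency class enforced by the fabric manager

# First cluster: memory-mapped pool for in-memory database workloads.
alloc_a = PMemAllocation("cluster-a", capacity_gib=4096,
                         access_mode="memory", qos_class="low-latency")

# Second cluster (same client): block-mode scratch space for checkpointing simulations.
alloc_b = PMemAllocation("cluster-b", capacity_gib=8192,
                         access_mode="block", qos_class="high-bandwidth")
```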
Axis 2: Operational Parameter Expansion
2.1. Hosted Cryogenic and Ambient Temperature Clusters
- Enabling Description: A hosting system offers different clusters operating at extreme temperature differentials. The first cluster is a standard HPC cluster using CMOS-based processors operating at ambient data center temperatures (e.g., 20°C). The second cluster comprises computing elements based on superconducting logic (e.g., circuits with Josephson junctions) that require cryogenic cooling to near absolute zero (< 4 Kelvin). Both clusters are connected to the same private company network via their respective gateways for monitoring and job submission. The configuration for the second cluster is vastly different, involving specialized hardware, cryogenic cooling infrastructure, and different software compilers, and is intended for quantum simulation or other tasks that benefit from superconducting electronics. The monitoring system is adapted to track both standard hardware metrics and cryogenic-specific parameters like liquid helium levels and operating temperatures.
- Mermaid.js Diagram:
  stateDiagram-v2
    direction LR
    state "Client Task Definition" as Task
    Task --> Ambient_Config
    Task --> Cryo_Config
    state "Ambient HPC Cluster (20°C)" as Ambient {
      direction LR
      [*] --> Processing
      Processing --> Completed
    }
    state "Cryogenic HPC Cluster (<4K)" as Cryo {
      direction LR
      [*] --> Superconducting_Processing
      Superconducting_Processing --> Completed
    }
    Ambient_Config --> Ambient
    Cryo_Config --> Cryo
    state "Monitoring System" as Monitor
    Monitor: Tracks CPU temp, fan speed
    Monitor: Tracks Helium levels, Kelvin temp
    Ambient --> Monitor
    Cryo --> Monitor
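- Illustrative Sketch: A sketch of how the adapted monitoring system could evaluate both ambient and cryogenic telemetry against per-cluster thresholds. Metric names and limits are illustrative, not prescribed by the disclosure.

```python
# Illustrative per-cluster metric specifications and threshold checks.
AMBIENT_METRICS = {
    "cpu_temp_c":    {"warn_above": 85},
    "fan_speed_rpm": {"warn_below": 500},
}

CRYO_METRICS = {
    "stage_temp_k":            {"warn_above": 4.0},   # superconducting logic needs < 4 K
    "liquid_helium_level_pct": {"warn_below": 20},
}

def evaluate(metrics_spec, sample):
    """Return the list of alerts for one telemetry sample."""
    alerts = []
    for name, limits in metrics_spec.items():
        value = sample.get(name)
        if value is None:
            continue
        if "warn_above" in limits and value > limits["warn_above"]:
            alerts.append(f"{name}={value} above {limits['warn_above']}")
        if "warn_below" in limits and value < limits["warn_below"]:
            alerts.append(f"{name}={value} below {limits['warn_below']}")
    return alerts

# Example: a warming cryostat triggers a temperature alert but no helium alert.
print(evaluate(CRYO_METRICS, {"stage_temp_k": 4.6, "liquid_helium_level_pct": 35}))
```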
2.2. Globally Distributed, Logically Centralized Hosted Clusters
- Enabling Description: The system is expanded to a global scale where the "private company network" is a secure, high-speed global WAN. A first cluster is physically deployed in a data center in a specific legal jurisdiction (e.g., Germany) to satisfy a client's data residency requirements under GDPR. A second cluster for the same client is deployed in a U.S. data center to be closer to their American end-users for latency-sensitive processing. Both clusters are managed and monitored by a single, logically centralized monitoring system. The gateways (240, 241) connect their respective local cluster networks to the global WAN. This configuration is customized based on geopolitical and network latency parameters, not just hardware specifications.
- Mermaid.js Diagram:
  graph TD
    subgraph EU_Datacenter
      Cluster_A[First HPC Cluster - GDPR Compliant]
      Gateway_A[Gateway 240]
      Cluster_A -- private net --> Gateway_A
    end
    subgraph US_Datacenter
      Cluster_B[Second HPC Cluster - Low Latency]
      Gateway_B[Gateway 241]
      Cluster_B -- private net --> Gateway_B
    end
    subgraph Global_WAN [Private Company Network 230]
      Monitoring[Centralized Monitoring System 220]
    end
    Gateway_A -- WAN Link --> Global_WAN
    Gateway_B -- WAN Link --> Global_WAN
    Client[Client System 208] --> PublicInternet[Public Internet]
    PublicInternet --> Firewall[Firewall 210]
    Firewall --> Global_WAN
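- Illustrative Sketch: A sketch of the placement decision, assuming an illustrative site catalogue; residency constraints are applied first, then candidate sites are ranked by latency to the client's end-users, as described above.

```python
# Illustrative site catalogue; region codes and round-trip times are made up.
SITES = [
    {"region": "eu-de",   "jurisdiction": "EU/GDPR", "rtt_ms": {"eu": 8,  "us": 95}},
    {"region": "us-east", "jurisdiction": "US",      "rtt_ms": {"eu": 95, "us": 9}},
]

def place_cluster(requirements):
    """Choose a site by data-residency constraint first, then by latency to end users."""
    candidates = [s for s in SITES
                  if not requirements.get("residency")
                  or s["jurisdiction"] == requirements["residency"]]
    users = requirements.get("end_users", "eu")
    return min(candidates, key=lambda s: s["rtt_ms"][users])

# First cluster: must stay under GDPR jurisdiction.
print(place_cluster({"residency": "EU/GDPR"}))
# Second cluster: latency-sensitive for U.S. end-users, no residency constraint.
print(place_cluster({"end_users": "us"}))
```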
Axis 3: Cross-Domain Application
3.1. Aerospace: Unified Digital Twin Simulation
- Enabling Description: An aerospace company leases two distinct, isolated clusters for creating a comprehensive "digital twin" of a new aircraft. The first cluster is configured with high-frequency CPUs, large memory per node, and a high-speed parallel file system; it is used to run complex Computational Fluid Dynamics (CFD) and structural mechanics simulations on the airframe. The second cluster is configured with a real-time operating system, lower-power processors, and specialized I/O cards (e.g., MIL-STD-1553) to perform hardware-in-the-loop (HIL) simulation of the aircraft's avionics and control software. A firewall and gateway system ensures that the aerodynamics team can only access the CFD cluster, while the avionics team can only access the HIL cluster, preventing cross-contamination of simulation environments while allowing a central project manager to monitor both.
- Mermaid.js Diagram:
  flowchart LR
    subgraph Hosted_Service
      subgraph CFD_Environment
        ClusterA[Cluster A: High-Mem, Fast I/O]
      end
      subgraph HIL_Environment
        ClusterB[Cluster B: RTOS, Specialized I/O]
      end
      ClusterA -- isolated by Gateway A --> PrivateNet[Mgmt Network]
      ClusterB -- isolated by Gateway B --> PrivateNet
    end
    Aero_Team[Aerodynamics Team] --> Auth[Firewall]
    Avionics_Team[Avionics Team] --> Auth
    Auth -- access grant --> Aero_Team -- only to --> CFD_Environment
    Auth -- access grant --> Avionics_Team -- only to --> HIL_Environment
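- Illustrative Sketch: The team-to-cluster access control enforced by the firewall and gateways can be sketched as a simple policy table; group and cluster names are illustrative.

```python
# Illustrative per-team access policy checked by the firewall/gateway layer.
ACCESS_POLICY = {
    "aerodynamics-team": {"cfd-cluster"},                  # CFD / structural mechanics only
    "avionics-team":     {"hil-cluster"},                  # hardware-in-the-loop only
    "project-manager":   {"cfd-cluster", "hil-cluster"},   # monitoring across both
}

def is_allowed(user_group: str, target_cluster: str) -> bool:
    return target_cluster in ACCESS_POLICY.get(user_group, set())

assert is_allowed("aerodynamics-team", "cfd-cluster")
assert not is_allowed("aerodynamics-team", "hil-cluster")
```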
3.2. AgTech: Genomic Selection and Climate Forecasting
- Enabling Description: An agricultural technology company uses the hosting service to accelerate crop development. The first cluster is configured as a high-throughput computing (HTC) system with vast amounts of attached storage. It is used to perform genomic sequencing analysis and identify desirable genetic markers from thousands of plant samples. The second cluster is a classic HPC system with a low-latency interconnect (e.g., InfiniBand) used to run complex, long-range weather and climate models to predict growing conditions in target regions. The configurations are starkly different: one is optimized for I/O-bound, embarrassingly parallel tasks, while the other is for tightly-coupled, communication-intensive simulations. The system isolates the work of the geneticists from the climatologists.
- Mermaid.js Diagram:
  erDiagram
    CLIENT {
      string Name
    }
    CLUSTER {
      string Type
      string Configuration
    }
    TASK {
      string Name
      string Requirements
    }
    CLIENT ||--o{ TASK : "defines"
    TASK ||--|{ CLUSTER : "requires"
    CLIENT {
      Name "AgTech Corp"
    }
    TASK {
      Name "Genomic Analysis"
      Requirements "High I/O, Parallel"
    }
    TASK {
      Name "Climate Modeling"
      Requirements "Low Latency Interconnect"
    }
    CLUSTER {
      Type "First Cluster (HTC)"
      Configuration "Large Storage, HTC Sched."
    }
    CLUSTER {
      Type "Second Cluster (HPC)"
      Configuration "InfiniBand, MPI"
    }
Axis 4: Integration with Emerging Tech
4.1. AI-Driven Dynamic Cluster Reconfiguration
- Enabling Description: The hosting system incorporates an AI-based resource manager. A client submits a job not with a pre-defined cluster configuration, but with a high-level description of their task, its dataset, and performance goals (e.g., "Train ResNet-50 model on ImageNet dataset, minimize time-to-solution"). A reinforcement learning agent, pre-trained on performance data from thousands of previous jobs, selects the optimal hardware configuration from a heterogeneous pool of resources (CPUs, GPUs, TPUs, various network fabrics). It provisions a temporary, custom cluster for the duration of the job. A first cluster for one client might be dynamically configured with GPUs and NVLink, while a second cluster for another client's data analytics task might be configured with high-memory CPU nodes and a Spark environment. The AI acts as the "configuration engine" described in the patent.
- Mermaid.js Diagram:
  sequenceDiagram
    participant Client
    participant AI_Manager as AI Resource Manager
    participant ResourcePool as Heterogeneous Hardware Pool
    participant Provisioner
    participant Cluster
    Client->>AI_Manager: Submit Task (e.g., 'Train Model')
    AI_Manager->>ResourcePool: Query available resources
    ResourcePool-->>AI_Manager: Return inventory (GPUs, CPUs, etc.)
    AI_Manager->>AI_Manager: RL Agent selects optimal configuration
    AI_Manager->>Provisioner: Instruct to build Cluster with config X
    Provisioner->>ResourcePool: Allocate specific nodes/links
    Provisioner->>Cluster: Configure software and network
    Provisioner-->>AI_Manager: Cluster Ready
    AI_Manager-->>Client: Provide Cluster endpoint
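- Illustrative Sketch: A toy sketch of the configuration engine. The reinforcement-learning policy is stood in for by a simple scoring heuristic, and the hardware catalogue is illustrative; only the overall flow (task description in, cluster template out) follows the description.

```python
# Illustrative heterogeneous resource catalogue.
CATALOGUE = [
    {"name": "gpu-nvlink",     "accel": "gpu", "interconnect": "nvlink",     "mem_gib": 512},
    {"name": "cpu-highmem",    "accel": None,  "interconnect": "ethernet",   "mem_gib": 2048},
    {"name": "cpu-infiniband", "accel": None,  "interconnect": "infiniband", "mem_gib": 256},
]

def select_configuration(task):
    """Pick a cluster template for a high-level task description.

    In the full system this decision is made by a reinforcement-learning agent
    trained on historical job telemetry; here it is a placeholder heuristic.
    """
    def score(cfg):
        s = 0
        if task.get("workload") == "dnn-training" and cfg["accel"] == "gpu":
            s += 10
        if task.get("workload") == "analytics" and cfg["mem_gib"] >= 1024:
            s += 10
        if task.get("tightly_coupled") and cfg["interconnect"] in ("nvlink", "infiniband"):
            s += 5
        return s
    return max(CATALOGUE, key=score)

# First cluster: model training selects the GPU/NVLink template;
# an analytics task would instead select the high-memory CPU template.
print(select_configuration({"workload": "dnn-training", "tightly_coupled": True}))
```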
4.2. Blockchain-Secured Cluster Provenance and Audit
- Enabling Description: The hosting system integrates a private, permissioned blockchain (e.g., Hyperledger Fabric) to provide an immutable audit trail for each cluster, targeted at clients in regulated industries like finance or healthcare. When a first cluster is provisioned, a transaction is written to the blockchain ledger detailing the exact hardware components (by serial number), the software image hashes, the network configuration, and the client's request ID. All subsequent administrative actions (e.g., patching a kernel, replacing a failed node, a client accessing data) are recorded as new transactions, digitally signed by the entity performing the action. This provides the client with a verifiable, tamper-proof log of their cluster's entire lifecycle, which can be used to satisfy regulatory compliance audits.
- Mermaid.js Diagram:
  flowchart TD
    A[Client Requests Cluster] --> B{Provision Cluster}
    B --> C[Generate Genesis Block]
    C -- Contains --> D["Hardware IDs\nSoftware Hashes\nClient ID"]
    D --> E{Add Block to Ledger}
    E --> F[Cluster is Active]
    F --> G{Admin Action: Patch OS}
    G --> H[Create New Transaction]
    H -- Signed by Admin --> I["Action: Patch\nImage Hash: 0xabc...\nTimestamp"]
    I --> J{Add Block to Ledger}
    J --> K[Client Action: Run Job]
    K --> L[Create New Transaction]
    L -- Signed by Client --> M["Action: Run Job\nJobID: 123\nTimestamp"]
    M --> N{Add Block to Ledger}
    subgraph Private_Blockchain
      direction LR
      E -- links to --> J
      J -- links to --> N
    end
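- Illustrative Sketch: A minimal hash-chained ledger sketch showing the transaction structure; a production deployment would use a permissioned blockchain such as Hyperledger Fabric, and the payload fields and signer names here are illustrative.

```python
# Minimal hash-chained audit log; stands in for a permissioned blockchain.
import hashlib
import json
import time

def add_block(ledger, payload, signed_by):
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    block = {
        "timestamp": time.time(),
        "signed_by": signed_by,
        "payload": payload,        # e.g. hardware serials, image hashes, job IDs
        "prev_hash": prev_hash,
    }
    # Hash covers the block contents plus the previous block's hash (the chain link).
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    ledger.append(block)
    return block

ledger = []
add_block(ledger, {"action": "provision", "hardware": ["SN123", "SN124"],
                   "image_sha256": "abc..."}, signed_by="provisioner")
add_block(ledger, {"action": "patch_kernel", "image_sha256": "def..."},
          signed_by="admin-7")
add_block(ledger, {"action": "run_job", "job_id": 123}, signed_by="client-42")
```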
Axis 5: The "Inverse" or Failure Mode
5.1. Quarantinable Gateway for Security Incident Response
- Enabling Description: The system is designed for high-security operations where cluster isolation must be guaranteed even during a security breach. The monitoring system (220) is integrated with an intrusion detection system (IDS). If the IDS detects anomalous activity within the first cluster (e.g., traffic patterns indicative of malware), it triggers an alert. The central management system automatically reconfigures the cluster's associated gateway (240). The gateway's primary function is inverted: instead of forwarding traffic, it drops all connections to the public and private company networks and redirects all outbound traffic from the cluster to a dedicated, isolated forensic analysis environment (a "honeynet"). This "quarantine mode" ensures the compromised cluster cannot attack other clusters while preserving its state for investigation.
- Mermaid.js Diagram:
  stateDiagram-v2
    [*] --> Normal_Operation
    Normal_Operation: Gateway routes traffic to Private/Public nets.
    Quarantined: Gateway drops external traffic, redirects internal to forensics.
    Normal_Operation --> Quarantined: IDS detects threat
    Quarantined --> Normal_Operation: Security team clears incident
    Quarantined --> Decommissioned: Cluster is terminated after analysis
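- Illustrative Sketch: A sketch of the quarantine trigger. The gateway control calls (block_external, set_default_route) and the severity threshold are assumptions used only to illustrate the inversion of the gateway's role.

```python
# Hypothetical gateway control API; no real product interface is implied.
from enum import Enum

class GatewayMode(Enum):
    NORMAL = "normal"
    QUARANTINED = "quarantined"

def handle_ids_alert(gateway, alert):
    """Invert the gateway's role when the IDS flags the attached cluster."""
    if alert["severity"] < 7:                       # illustrative threshold
        return GatewayMode.NORMAL
    gateway.block_external()                         # hypothetical: drop public/company network links
    gateway.set_default_route("forensic-honeynet")   # hypothetical: redirect all outbound traffic
    return GatewayMode.QUARANTINED
```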
Combination Prior Art with Open-Source Standards
C.1. Combination with OpenStack and Neutron
- Enabling Description: A method for hosting customized computing clusters is implemented using the open-source cloud platform OpenStack. Each client is mapped to a unique OpenStack "tenant" (or "project"). The first cluster for a first client is a collection of "Nova" virtual machine instances provisioned within the first tenant. The second cluster for a second client is a separate collection of Nova VMs in a second tenant. The cluster network isolation and gateway functionality are achieved entirely through "Neutron," OpenStack's networking component. A dedicated virtual L2 network is created for each tenant's cluster. A Neutron "virtual router" is attached to each tenant's network, acting as the gateway (240, 241). This virtual router handles traffic between the cluster's private network and the shared private company network (the OpenStack provider network). The firewall (210) is implemented using Neutron's Security Groups and Floating IP addresses, which control access from the public internet to specific instances within each tenant's cluster. The monitoring system (220) is implemented using OpenStack's "Ceilometer" and "Monasca" projects to collect metrics from all tenant resources.
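- Illustrative Sketch: A sketch of the per-tenant provisioning flow using the openstacksdk Python client. Resource names, the CIDR, and the omission of error handling are simplifications; the exact call signatures should be checked against the deployed SDK version.

```python
# Sketch only: per-tenant network, router (gateway), and Nova instances.
import openstack

def provision_tenant_cluster(cloud_name, tenant, node_count, flavor, image, provider_net_id):
    conn = openstack.connect(cloud=cloud_name)
    # Dedicated L2 network and subnet for this tenant's cluster (the private cluster network).
    net = conn.network.create_network(name=f"{tenant}-cluster-net")
    subnet = conn.network.create_subnet(name=f"{tenant}-cluster-subnet",
                                        network_id=net.id,
                                        ip_version=4,
                                        cidr="10.10.0.0/24")
    # Neutron virtual router acts as the gateway (240/241) to the provider network.
    router = conn.network.create_router(name=f"{tenant}-gateway",
                                        external_gateway_info={"network_id": provider_net_id})
    conn.network.add_interface_to_router(router, subnet_id=subnet.id)
    # Nova instances form the hosted cluster inside the tenant.
    servers = [conn.compute.create_server(name=f"{tenant}-node-{i}",
                                          flavor_id=flavor,
                                          image_id=image,
                                          networks=[{"uuid": net.id}])
               for i in range(node_count)]
    return net, router, servers
```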
C.2. Combination with Kubernetes and Cilium/eBPF
- Enabling Description: A system for hosting customized container-based clusters using Kubernetes. The provider manages a large, multi-tenant physical infrastructure running Kubernetes. Each client's "cluster" is a dedicated Kubernetes "namespace." To achieve the network isolation claimed in the patent, the system utilizes the open-source Cilium CNI plugin, which leverages eBPF in the Linux kernel. Instead of physical gateways, Cilium network policies are created on a per-namespace basis. These policies explicitly whitelist allowed traffic flows (e.g., within the namespace) and deny all other traffic by default, including any attempt to communicate with pods in another client's namespace. The "gateway" function is provided by a dedicated Ingress controller (e.g., NGINX Ingress) deployed within each namespace, which is the only component exposed to the provider's private network for routing external client traffic. The firewall function is handled by the same Ingress controller, which can enforce authentication and TLS termination. This provides high-performance, kernel-level isolation between different client clusters running on the same underlying nodes.
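- Illustrative Sketch: A sketch of a default-deny, namespace-scoped CiliumNetworkPolicy applied with the official kubernetes Python client. The policy name and namespace handling are illustrative, and the selector semantics assume Cilium's default behavior of scoping endpoint selectors in namespaced policies to that namespace.

```python
# Sketch: per-namespace isolation via a CiliumNetworkPolicy custom resource.
from kubernetes import client, config

def apply_namespace_isolation(namespace: str):
    policy = {
        "apiVersion": "cilium.io/v2",
        "kind": "CiliumNetworkPolicy",
        "metadata": {"name": "isolate-tenant", "namespace": namespace},
        "spec": {
            "endpointSelector": {},                 # all pods in this client's namespace
            "ingress": [{"fromEndpoints": [{}]}],   # allow traffic from within the namespace only
            "egress":  [{"toEndpoints":   [{}]}],   # allow traffic to pods within the namespace only
        },
    }
    config.load_kube_config()
    api = client.CustomObjectsApi()
    api.create_namespaced_custom_object(group="cilium.io", version="v2",
                                        namespace=namespace,
                                        plural="ciliumnetworkpolicies",
                                        body=policy)
```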
C.3. Combination with Slurm Workload Manager and OpenFlow
- Enabling Description: A system for hosting bare-metal HPC clusters where the physical network is a software-defined fabric controlled by an OpenFlow controller. The open-source Slurm Workload Manager is used to manage compute resources. When a client requests a first cluster, they submit a Slurm job that reserves a set of physical nodes. Upon allocation, the Slurm controller communicates with the OpenFlow controller. The OpenFlow controller then programs the physical switches in the fabric to create a virtual, isolated Layer 2 network connecting only the nodes allocated to that job. A single node in the allocation is designated as the "head node" and the OpenFlow controller installs specific flow rules that allow only this node to communicate with the private company network for management and monitoring, effectively making it the gateway. A separate job from a second client would result in the OpenFlow controller creating a completely separate set of flow rules for its allocated nodes, ensuring the two clusters are isolated at the network hardware level.
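- Illustrative Sketch: A sketch of a Slurm prolog-style hook that asks the SDN controller to fence a job's allocation into its own L2 segment. The controller URL and JSON payload shape are assumptions, not a real API; only the use of SLURM_JOB_NODELIST and "scontrol show hostnames" reflects standard Slurm behavior.

```python
# Sketch: on job allocation, request isolation flow rules from the SDN controller.
import os
import subprocess
import requests

CONTROLLER_URL = "http://sdn-controller.mgmt:8181/isolation"   # hypothetical endpoint

def isolate_allocation():
    job_id = os.environ["SLURM_JOB_ID"]
    # Expand the compact hostlist (e.g. "node[01-04]") with Slurm's own tool.
    nodes = subprocess.run(["scontrol", "show", "hostnames",
                            os.environ["SLURM_JOB_NODELIST"]],
                           capture_output=True, text=True, check=True).stdout.split()
    head_node = nodes[0]   # designated gateway: the only node allowed to reach the company network
    payload = {
        "job_id": job_id,
        "members": nodes,          # flow rules: members may talk only to each other
        "gateway": head_node,      # plus one rule letting the head node reach management/monitoring
    }
    requests.post(CONTROLLER_URL, json=payload, timeout=10).raise_for_status()

if __name__ == "__main__":
    isolate_allocation()
```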