Patent 11907553

Derivative works

Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.

Defensive Disclosure for U.S. Patent No. 11,907,553

Publication Date: May 9, 2026
Subject: Methods and Systems for Policy-Driven, Distributed, and Environmentally-Adaptive Data Storage

This document discloses novel variations and applications of the technology described in U.S. Patent No. 11,907,553. The purpose of this disclosure is to establish prior art against subsequent patent claims directed to obvious extensions or applications of the foundational concepts. The core concept involves a storage device with a controller that manages data placement, retention, and access based on a configurable policy, including the ability to store metadata in a remote location.


Derivative Variations on Core Claims

1. Material & Component Substitution

  • 1.1. Non-Volatile Memory Express (NVMe) over Fabrics with In-Situ Cryogenic Memory

    • Enabling Description: This variation replaces the conventional solid-state or magnetic media with cryogenic random-access memory (CRAM) modules, which retain their state at near-zero power when held at temperatures below 77 Kelvin. The device controller is a System-on-a-Chip (SoC) with an integrated NVMe-oF (NVMe over Fabrics) controller, communicating directly over a RoCE v2 (RDMA over Converged Ethernet) network. The "storage device policy" includes thermal management directives, which dictate data placement based on the thermal stability of specific CRAM modules and the anticipated power cycling of the cryogenic cooling system. Storage information, including cryo-stability logs and cell-level temperature mappings, is transmitted via NVMe-MI (Management Interface) to a remote policy server. A minimal sketch of this thermal placement logic appears at the end of this section.

    • Mermaid Diagram:

      graph TD
          A[User Device] -- NVMe-oF Write Request --> B(Storage Device);
          B --> C{Device Controller SoC};
          C -- Policy Fetch --> D[Remote Policy Server];
          D -- Thermal & Placement Policy --> C;
          C -- Write Command --> E[Cryogenic RAM Array];
          C -- NVMe-MI Telemetry --> D;
          subgraph Storage Device
              C
              E
          end
      
  • 1.2. Ferroelectric RAM (FeRAM) with Optical Interconnects

    • Enabling Description: The storage media is composed of Ferroelectric RAM (FeRAM), chosen for its low power consumption and high radiation tolerance. The internal device controller and the primary interface connector (125) are replaced with integrated silicon photonics components. Data and policy instructions are transmitted via optical signals, significantly reducing electromagnetic interference (EMI) and increasing bandwidth. The storage policy dictates not only the logical block mapping but also the specific polarization levels within the FeRAM cells, allowing multi-level cell (MLC) behavior to be dynamically configured for either high-speed/low-retention (cache) or low-speed/high-retention (archive) operation on a per-object basis. Metadata, including ferroelectric domain state maps, is transmitted optically to a remote management node.

    • Mermaid Diagram:

      sequenceDiagram
          participant Host
          participant PhotonicController
          participant FeRAM
          participant RemotePolicyStore
          Host->>PhotonicController: Optical Write Request + Policy Hint
          PhotonicController->>RemotePolicyStore: Request Policy for Data Class
          RemotePolicyStore-->>PhotonicController: Return Polarization Level & Location
          PhotonicController->>FeRAM: Modulate FeRAM domains
          PhotonicController->>RemotePolicyStore: Store Domain State Map (Metadata)
          PhotonicController-->>Host: Acknowledge Write
      
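The thermal-placement directive described in variation 1.1 can be summarized in pseudocode. The following Python sketch is illustrative only: the class names, policy fields, and selection rule are assumptions made for exposition, not part of the '553 specification.

    # Minimal sketch of thermal-policy-driven placement (variation 1.1).
    # All names (CramModule, ThermalPolicy, select_module) are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class CramModule:
        module_id: int
        temperature_k: float     # current module temperature, in Kelvin
        stability_score: float   # higher = more thermally stable

    @dataclass
    class ThermalPolicy:
        max_temperature_k: float  # reject modules hotter than this
        min_stability: float      # reject modules less stable than this

    def select_module(modules, policy):
        """Pick the most thermally stable module that satisfies the policy."""
        eligible = [m for m in modules
                    if m.temperature_k <= policy.max_temperature_k
                    and m.stability_score >= policy.min_stability]
        if not eligible:
            raise RuntimeError("no CRAM module satisfies the thermal policy")
        return max(eligible, key=lambda m: m.stability_score)

    # Example: place a write under the 77 K ceiling from the description above.
    policy = ThermalPolicy(max_temperature_k=77.0, min_stability=0.9)
    modules = [CramModule(0, 76.1, 0.95), CramModule(1, 78.3, 0.99)]
    target = select_module(modules, policy)  # -> module 0 (module 1 is too warm)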

2. Operational Parameter Expansion

  • 2.1. Nano-Scale DNA-based Archival Storage with Error Correction Policy

    • Enabling Description: This variation adapts the policy-based control for DNA data storage. The "storage device" is a microfluidic synthesis and sequencing chip. The device controller, upon receiving a data object, consults a remote "synthesis policy server." The policy dictates the DNA encoding scheme (e.g., base-4 direct, or more complex codon-based schemes), the level of redundancy, and the type of error-correcting codes (ECC) to be synthesized into the DNA strands. For highly critical data (a "heroic" policy), the controller synthesizes multiple, geographically dispersed copies and uses a layered ECC. For transient data (an "ephemeral" policy), it uses a single-strand, low-ECC scheme. The storage information, a digital twin of the DNA library's structure and checksums, is stored remotely on a conventional solid-state drive. Delete requests are handled by policy not through physical destruction but by simply "forgetting" the primer sequence required to retrieve and sequence the specific data-encoding DNA strands.

    • Mermaid Diagram:

      graph TD
          subgraph DataCenter
              A[User Application] -- "Store(data, policy='archive')" --> B[Device Controller];
              B -- Get Synthesis Rules --> C[Remote Policy Server];
          end
          subgraph MicrofluidicDevice
              B -- Synthesize DNA --> D{DNA Synthesis Module};
              D -- Physical DNA Strands --> E[Storage Medium];
          end
          subgraph MetadataStore
              B -- "Record(primer_key, ecc_scheme)" --> F[Remote SSD];
          end
      
  • 2.2. High-Temperature Geothermal-Powered Edge Storage Node

    • Enabling Description: The device is designed for extreme environments, such as down-hole drilling or geothermal vent monitoring, operating at ambient temperatures exceeding 300°C. The storage medium is a specialized phase-change memory (PCM) or silicon-carbide (SiC) based non-volatile memory, and the controller is a rad-hardened FPGA. The storage policy is dynamically adjusted based on real-time temperature and pressure sensor readings. For example, in a high-temperature state, the policy might dictate writing data with wider margins and higher-voltage cell programming to ensure data retention, sacrificing speed and density. When temperatures are lower, the policy switches to a high-density, high-performance mode. The storage metadata, including environmental logs and the applied policy state for each write operation, is transmitted via a low-bandwidth, high-reliability acoustic modem to a surface-level remote server. A minimal sketch of this temperature-driven policy switch appears at the end of this section.

    • Mermaid Diagram:

      stateDiagram-v2
          [*] --> LowTemp
          LowTemp: Policy = High-Density
          LowTemp --> HighTemp: Temp > 300°C
          HighTemp: Policy = High-Durability
          HighTemp --> LowTemp: Temp <= 300°C
          state LowTemp {
              direction LR
              WriteRequestLT: Write Request
              ApplyPolicyLT: Apply Policy
              WriteToPCMLT: Write to PCM
              LogMetadataLT: Log Metadata
              [*] --> WriteRequestLT
              WriteRequestLT --> ApplyPolicyLT: Read Policy from Controller
              ApplyPolicyLT --> WriteToPCMLT: Use high-density parameters
              WriteToPCMLT --> LogMetadataLT: Transmit to remote server
              LogMetadataLT --> [*]
          }
          state HighTemp {
              direction LR
              WriteRequestHT: Write Request
              ApplyPolicyHT: Apply Policy
              WriteToPCMHT: Write to PCM
              LogMetadataHT: Log Metadata
              [*] --> WriteRequestHT
              WriteRequestHT --> ApplyPolicyHT: Read Policy from Controller
              ApplyPolicyHT --> WriteToPCMHT: Use high-voltage parameters
              WriteToPCMHT --> LogMetadataHT: Transmit to remote server
              LogMetadataHT --> [*]
          }
      
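The temperature threshold that drives variation 2.2's policy switch reduces to a simple rule. The Python sketch below is illustrative; the threshold, voltages, and field names are assumptions, not values taken from the '553 specification.

    # Minimal sketch of the environment-adaptive write policy (variation 2.2).
    # Parameter values are hypothetical assumptions.
    from dataclasses import dataclass

    @dataclass
    class WriteParams:
        mode: str
        program_voltage: float   # cell programming voltage, in volts
        bits_per_cell: int       # density vs. durability trade-off

    HIGH_TEMP_THRESHOLD_C = 300.0

    def params_for_temperature(ambient_c):
        """Select write parameters from the current ambient temperature."""
        if ambient_c > HIGH_TEMP_THRESHOLD_C:
            # High-durability mode: wider margins, higher-voltage programming.
            return WriteParams("high-durability", program_voltage=3.6, bits_per_cell=1)
        # High-density mode: standard voltage, multi-bit cells.
        return WriteParams("high-density", program_voltage=2.8, bits_per_cell=3)

    def write_block(data, ambient_c):
        """Write a block and return the metadata record for the remote server."""
        params = params_for_temperature(ambient_c)
        # ... program the PCM/SiC array using `params` here ...
        return {"policy_state": params.mode, "ambient_c": ambient_c, "bytes": len(data)}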

3. Cross-Domain Application

  • 3.1. Aerospace: Black Box Flight Data Recorder

    • Enabling Description: In an avionics context, the storage device is a crash-survivable memory unit. The device controller receives a "flight phase policy" from the Flight Management System (FMS). During "takeoff" or "landing" phases, the policy mandates maximum data redundancy and write-protection, storing sensor data (attitude, speed, control inputs) with high-density error correction. During "cruise" phase, the policy allows for lower-redundancy recording of non-critical data like in-flight entertainment system logs. A "delete" request for any flight-critical data is permanently refused by the policy. The storage information (metadata index) is continuously streamed via a satellite uplink to a remote ground-based server, ensuring data recoverability even if the physical recorder is destroyed.

    • Mermaid Diagram:

      sequenceDiagram
          participant FMS as Flight Mgmt System
          participant FDR as Flight Data Recorder
          participant GroundServer as Remote Server
          loop Flight
              FMS->>FDR: Update Flight Phase Policy (e.g., 'Takeoff')
              FDR->>FDR: Store Sensor Data per Policy (high redundancy)
              FDR->>GroundServer: Stream Metadata Index via Satellite
          end
      
  • 3.2. AgTech: Smart Irrigation and Soil Sensor Logging

    • Enabling Description: The storage device is embedded in a field-deployed IoT sensor hub for precision agriculture. The device controller receives a "crop cycle policy" from a central farm management server. The policy dictates data sampling rates and storage priorities based on the crop's growth stage. For example, during germination, moisture sensor data is stored at high frequency with write-protection (element e). During the fallow season, the policy allows for lower-frequency logging and overwriting of old data. The storage metadata (linking sensor IDs, timestamps, and GPS coordinates to data blocks) is transmitted nightly via a LoRaWAN gateway to a cloud-based agricultural analytics platform, which acts as the remote location (element f). This keeps the on-device storage footprint minimal.

    • Mermaid Diagram:

      graph TD
          A[Farm Mgmt Server] -- Crop Cycle Policy --> B{IoT Sensor Hub};
          subgraph Field
              C[Soil Sensor] -- data --> B;
              B -- store per policy --> D[Onboard Flash Memory];
          end
          B -- "Transmit Metadata (nightly)" --> E((Cloud Analytics Platform));
      
  • 3.3. Consumer Electronics: Wearable Health Monitor

    • Enabling Description: The storage device is within a medical-grade wearable (e.g., a smartwatch). The controller receives a "user state policy" from a companion smartphone app. If the policy is 'Normal Activity', the device stores heart rate and SpO2 data at low resolution. If the app detects a 911 call or the user manually triggers an 'Emergency' mode, the policy shifts to 'High-Fidelity', recording a full EKG trace and raw accelerometer data in a write-once, non-deletable format. All storage metadata, including cryptographic signatures and timestamps, is immediately transmitted via Bluetooth LE to the smartphone, which then relays it to a secure cloud server for access by emergency responders, making the phone and cloud the "remote location." A minimal sketch of this mode switch appears at the end of this section.

    • Mermaid Diagram:

      stateDiagram-v2
          state "Normal Activity" as Normal {
            description "Store low-res data"
          }
          state "Emergency" as Emergency {
            description "Store high-fi, non-deletable EKG data"
          }
          [*] --> Normal
          Normal --> Emergency: User Trigger or 911 Call
          Emergency --> Normal: Reset by Authorized App
      
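The mode switch of variation 3.3 amounts to a small state machine over the record store. The Python sketch below is illustrative; the class, method, and field names are assumptions made for exposition.

    # Minimal sketch of the user-state policy switch (variation 3.3).
    # WearableStore and its interface are hypothetical.
    import hashlib
    import time

    class WearableStore:
        def __init__(self):
            self.mode = "normal"   # 'normal' or 'emergency'
            self.records = []      # (timestamp, payload, deletable)

        def set_mode(self, mode):
            self.mode = mode       # driven by the companion app's policy

        def record(self, payload: bytes):
            # Emergency-mode records are write-once and non-deletable.
            deletable = self.mode == "normal"
            ts = time.time()
            self.records.append((ts, payload, deletable))
            # This metadata (signature + timestamp) is what gets relayed
            # to the phone and cloud, i.e. the "remote location."
            return {"sig": hashlib.sha256(payload).hexdigest(),
                    "ts": ts, "mode": self.mode}

        def delete(self, index):
            if not self.records[index][2]:
                raise PermissionError("policy: emergency records are non-deletable")
            del self.records[index]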

4. Integration with Emerging Tech

  • 4.1. AI-Driven Predictive Data Placement

    • Enabling Description: The device controller integrates a lightweight, on-chip neural processing unit (NPU). The storage device policy is no longer a static set of rules but a trained machine learning model. The NPU analyzes incoming write request patterns (e.g., data size, frequency, source application) in real-time. Based on this analysis, it predicts the data's "access temperature" (how often it will be read) and "lifespan." It then dynamically places hot, short-lived data on high-speed SLC NAND flash and cold, long-term data on dense QLC NAND flash, all within the same physical device. The storage information, including the AI's placement decision and confidence score, is recorded in a remote MLOps monitoring server for model retraining.

    • Mermaid Diagram:

      flowchart LR
          subgraph Storage Drive
              A[Write Request] --> B{NPU/ML Model};
              B -- Predicts 'Hot' --> C[Place on SLC NAND];
              B -- Predicts 'Cold' --> D[Place on QLC NAND];
              C --> E{Record Metadata};
              D --> E;
          end
          E -- "Send {ObjectID, Location, AI_Confidence}" --> F[(Remote MLOps Server)];
      
  • 4.2. IoT-Informed Wear-Leveling and Data Refresh

    • Enabling Description: The storage device is part of an array in a data center. Each drive is equipped with environmental sensors (temperature, vibration, humidity) that stream data to a central IoT platform. The storage device policy is received from this platform. The policy for a specific drive is adjusted based on its real-time operating conditions. For example, a drive experiencing higher-than-average temperatures may have its policy updated to reduce write amplification and proactively refresh data in blocks nearing their retention limit, even if not explicitly requested. This prevents data degradation due to environmental stress. The decision to refresh and the corresponding metadata update are logged in a remote, centralized "digital twin" of the storage array.

    • Mermaid Diagram:

      graph TD
          A[IoT Sensors on Drive] -- Temp, Vibration --> B((Central IoT Platform));
          B -- Generates/Updates --> C(Dynamic Storage Policy);
          C -- Pushes to --> D{Device Controller};
          subgraph Drive
              D -- Applies Policy --> E[NAND Flash];
              D -- Manages --> F(Wear-Leveling & Refresh);
          end
          D -- Logs Actions --> G[(Remote Digital Twin)];
      
  • 4.3. Blockchain-Verified Data Immutability and Custody

    • Enabling Description: This variation uses a blockchain to guarantee the integrity of write-protected data. When a storage request is marked with an "immutable" policy, the device controller calculates a cryptographic hash (e.g., SHA-256) of the content. After writing the content to the physical media, the controller stores the standard metadata (content identifier, physical location) on a remote server. Crucially, it then creates a transaction on a private or consortium blockchain containing the content hash, a timestamp, and the unique ID of the storage device. Any subsequent request to delete this content is refused by the controller's policy. The validity and timestamp of the data can be independently verified by querying the blockchain, providing an immutable, auditable chain of custody. A minimal sketch of this write-and-refuse flow appears at the end of this section.

    • Mermaid Diagram:

      sequenceDiagram
          participant User
          participant StorageDevice
          participant RemoteMetadataDB
          participant Blockchain
      
          User->>StorageDevice: Write(Content, Policy:Immutable)
          StorageDevice->>StorageDevice: Hash(Content) -> contentHash
          StorageDevice->>StorageDevice: Store Content on Media
          StorageDevice->>RemoteMetadataDB: Store(ContentID, Location)
          StorageDevice->>Blockchain: CreateTransaction(contentHash, timestamp, deviceID)
          User->>StorageDevice: Delete(ContentID)
          StorageDevice->>StorageDevice: Check Policy for ContentID
          Note right of StorageDevice: Policy is Immutable
          StorageDevice-->>User: Error: Deletion Refused
      
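Variation 4.3's write-and-refuse flow can be expressed compactly. In the Python sketch below, store_on_media, metadata_db, and chain are hypothetical stand-ins for the device's placement logic, the remote metadata store, and the blockchain client; none of them name real APIs.

    # Minimal sketch of the immutable write path (variation 4.3).
    # store_on_media, metadata_db, and chain are hypothetical interfaces.
    import hashlib
    import time

    IMMUTABLE_IDS = set()   # content IDs whose deletion the policy refuses

    def store_on_media(content: bytes) -> int:
        return 0   # stub: device-internal physical placement

    def write_immutable(content: bytes, device_id: str, metadata_db, chain):
        content_hash = hashlib.sha256(content).hexdigest()
        location = store_on_media(content)
        content_id = metadata_db.store(content_hash, location)  # remote metadata
        chain.submit({"hash": content_hash,      # independent, auditable record
                      "timestamp": time.time(),
                      "device_id": device_id})
        IMMUTABLE_IDS.add(content_id)
        return content_id

    def delete(content_id):
        if content_id in IMMUTABLE_IDS:
            raise PermissionError("policy: deletion refused for immutable content")
        # ... normal deletion path for non-immutable content ...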

5. "Inverse" or Failure Mode

  • 5.1. Failsafe Read-Only Mode with Remote Key Invalidation

    • Enabling Description: The device is designed for high-security environments. The storage information required to decrypt and locate data blocks is stored on a remote "key device." The device controller periodically "heartbeats" this key device. If the heartbeat fails for a configurable duration (e.g., the device is physically removed from the secure network), the storage policy instructs the controller to enter a "failsafe" mode. In this mode, all write and delete operations are rejected. The device becomes purely read-only for any data that was already cached in its local memory. Furthermore, the remote key device, upon detecting the loss of heartbeat, can invalidate the keys associated with that specific drive, rendering the encrypted data on the physical media permanently unrecoverable, even if the drive's local cache is later compromised. A minimal sketch of this heartbeat-driven failsafe appears after the diagram below.

    • Mermaid Diagram:

      stateDiagram-v2
          [*] --> Connected
          Connected: R/W Enabled
          Connected --> Failsafe_ReadOnly: Heartbeat Timeout
          Failsafe_ReadOnly: Writes Disabled
      
          state Connected {
              direction LR
              ControllerC: Controller
              KeyDeviceC: KeyDevice
              ControllerC --> KeyDeviceC: Heartbeat
              KeyDeviceC --> ControllerC: ACK + Keys
          }
          state Failsafe_ReadOnly {
              ControllerF: Controller
              KeyDeviceF: KeyDevice
              ControllerF --> KeyDeviceF: Heartbeat (fails)
              KeyDeviceF --> KeyDeviceF: Invalidate Keys for Device
          }
      
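The failsafe transition of variation 5.1 is a timeout check against the last acknowledged heartbeat. The Python sketch below is illustrative; the 30-second timeout and the interface names are assumptions.

    # Minimal sketch of the heartbeat-driven failsafe (variation 5.1).
    # The timeout value is a hypothetical configurable parameter.
    import time

    HEARTBEAT_TIMEOUT_S = 30.0

    class FailsafeController:
        def __init__(self):
            self.last_ack = time.monotonic()
            self.failsafe = False

        def on_heartbeat_ack(self):
            """Called when the remote key device acknowledges a heartbeat."""
            self.last_ack = time.monotonic()

        def _check(self):
            if time.monotonic() - self.last_ack > HEARTBEAT_TIMEOUT_S:
                self.failsafe = True   # one-way transition to read-only mode

        def write(self, block: bytes):
            self._check()
            if self.failsafe:
                raise PermissionError("failsafe: writes disabled (heartbeat lost)")
            # ... normal write path ...

        def read(self, block_id):
            # Reads of locally cached data remain allowed in failsafe mode.
            self._check()
            # ... return cached block ...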

Combination Prior Art Scenarios

  • Scenario 1: Integration with Ceph Object Storage

    • Description: The policy-based storage device of '553 is integrated as an Object Storage Daemon (OSD) backend in a Ceph cluster. The Ceph CRUSH map algorithm determines which physical device should store an object. The '553 device receives the object and a "storage class" policy from the Ceph OSD software (e.g., 'hot-replicated', 'cold-erasure-coded', 'archival-immutable'). The device's internal controller, instead of the Ceph software, manages the physical block placement, wear-leveling, and write-protection according to the received policy. The object's location metadata (the "storage information") is stored not in the device's local memory but in the OSD's BlueStore metadata database (a RocksDB instance on the host), which acts as the "remote location." This offloads fine-grained media management from the Ceph software to the intelligent drive itself.
  • Scenario 2: Integration with Kubernetes and Container Storage Interface (CSI)

    • Description: The storage device acts as a storage backend for a Kubernetes cluster, exposed via a custom Container Storage Interface (CSI) driver. When a developer provisions a Persistent Volume (PV), they can specify a StorageClass with custom parameters (e.g., retention: "7-years", performance: "high_iops"). The CSI driver translates these parameters into a "storage device policy" and sends it to the '553 device's controller when creating the volume. The device then autonomously enforces this policy. For instance, a retention parameter would cause the device to refuse TRIM or UNMAP commands (delete requests) for that volume. The storage information mapping the PV to the internal content identifiers is stored remotely in the Kubernetes etcd key-value store, managed by the CSI driver. A minimal sketch of this parameter translation appears after the scenarios below.
  • Scenario 3: Integration with Apache Parquet for Analytical Workloads

    • Description: The storage device is used in a data lakehouse environment. A query engine like Apache Spark or Trino writes data in the Parquet format. The application passes a policy hint along with the data. The '553 device controller is "Parquet-aware." The policy instructs it to physically co-locate specific column chunks from the Parquet file that are frequently queried together, even if they are logically separate in the file. This optimizes for I/O patterns common in analytical queries. The storage information, containing the custom map of Parquet row groups to physical media locations, is stored in a remote metadata catalog service like AWS Glue or Hive Metastore, allowing query planners to be aware of the custom data layout on the physical device.

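Scenario 2's translation from StorageClass parameters to a device policy could look like the following Python sketch. The parameter keys mirror the example in the scenario above; the policy schema is a hypothetical assumption, not a real CSI or device interface.

    # Minimal sketch of Scenario 2: mapping Kubernetes StorageClass parameters
    # onto a '553-style device policy. The policy schema is hypothetical.
    def storage_class_to_policy(params: dict) -> dict:
        policy = {"refuse_trim": False, "performance": "standard"}
        if "retention" in params:
            # A retention window means TRIM/UNMAP (delete) must be refused.
            policy["refuse_trim"] = True
            policy["retention"] = params["retention"]   # e.g. "7-years"
        if params.get("performance") == "high_iops":
            policy["performance"] = "high_iops"
        return policy

    # Example: the StorageClass parameters quoted in the scenario above.
    print(storage_class_to_policy({"retention": "7-years", "performance": "high_iops"}))
    # -> {'refuse_trim': True, 'performance': 'high_iops', 'retention': '7-years'}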