Derivative works

Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.
Defensive Disclosure for U.S. Patent 12,265,715

Publication Title: Methods and Architectures for Dynamic, Policy-Driven Data Storage at the Device Level

Publication Date: May 9, 2026

Abstract: This publication discloses a series of technical implementations that extend the concept of configurable, policy-based storage device behavior. The disclosed methods, apparatuses, and software architectures describe advanced techniques for embedding intelligent decision-making directly into storage device controllers. These include adaptations for novel memory technologies, operation in extreme environments, applications in non-traditional domains (automotive, agriculture, medical), integration with emerging technologies such as Artificial Intelligence (AI), Internet of Things (IoT), and blockchain, and the implementation of fail-safe and security-oriented operational modes. These disclosures are intended to enter the public domain to serve as prior art for subsequent patent applications in this field.


Derivatives of Core Claims (Claims 1, 8, 15)

The following disclosures elaborate on derivative inventions based on the core concept of a storage device with a configurable controller that uses a policy to select storage locations.

Axis 1: Material & Component Substitution

1.1. Phase-Change Memory (PCM) with Thermal-Aware Policy Engine

  • Enabling Description: A storage device is constructed using Phase-Change Memory (PCM) or Resistive RAM (ReRAM) as the non-volatile storage medium. The device controller's firmware includes a policy engine specifically adapted for the write endurance and thermal characteristics of PCM. The policy considers the "write temperature" of adjacent memory cells. When a storage request is received, the policy engine consults a real-time thermal map of the PCM array, which is populated by data from on-chip thermal sensors. To prevent thermal crosstalk and premature cell degradation, the policy selects a storage location in a cooler region of the die or enforces a write-throttling delay to allow a recently written adjacent region to dissipate heat. The storage information recorded by the controller includes not just the logical-to-physical block address (LBA-to-PBA) mapping but also the timestamp and temperature at the time of the write, to be used by adaptive garbage collection and wear-leveling algorithms.
  • Mermaid Diagram:
    graph TD
        A[Receive Write Request + Data] --> B{Retrieve Storage Policy};
        B --> C{Query On-Chip Thermal Sensor Array};
        C --> D[Generate Real-Time Thermal Map of PCM];
        D --> E{Policy Engine: Analyze Data Type & Thermal Map};
        E --> F[Select Coolest Physical Block with Sufficient Endurance];
        F --> G[Write Data to Selected PCM Block];
        G --> H[Record LBA, PBA, Timestamp, Write-Temp];
        H --> I[Acknowledge Write Completion];
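
  • Illustrative Code Sketch: A minimal Python sketch of the coolest-block selection step described above. The 4x4 block grid, heating increment, and endurance floor are illustrative assumptions, not elements of the disclosure.

    # Hypothetical sketch of a thermal-aware placement policy for a PCM array.
    # The 4x4 thermal map, endurance counters, and thresholds are illustrative.
    import time

    THERMAL_MAP = {(r, c): 25.0 for r in range(4) for c in range(4)}  # degrees C
    ENDURANCE = {(r, c): 10_000 for r in range(4) for c in range(4)}  # writes left

    def select_block(min_endurance=100):
        """Pick the coolest block that still has sufficient write endurance."""
        candidates = [b for b, e in ENDURANCE.items() if e >= min_endurance]
        return min(candidates, key=lambda b: THERMAL_MAP[b])

    def write(lba, data):
        block = select_block()
        ENDURANCE[block] -= 1
        THERMAL_MAP[block] += 1.5          # model local heating from the write
        # Storage information: LBA-to-PBA mapping plus timestamp and write-temp,
        # as the description requires for later wear-leveling decisions.
        return {"lba": lba, "pba": block, "ts": time.time(),
                "write_temp": THERMAL_MAP[block]}

    print(write(0x1000, b"payload"))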
    

1.2. DNA-Based Archival Storage with a Bio-Chemical Policy Controller

  • Enabling Description: The storage device is an integrated DNA synthesis and sequencing system. The "storage medium" comprises a set of reservoirs containing synthesized DNA oligonucleotides. The "device controller" is a microfluidics controller coupled with a control processor. The storage policy dictates the encoding redundancy and the error-correction scheme (e.g., Fountain codes) applied to the binary data before it is converted into a DNA nucleotide sequence (A, T, C, G). A "high-reliability" policy directs the system to encode data with higher redundancy and synthesize multiple physical copies, which are then distributed across separate, temperature-controlled reservoirs. A "high-density" policy directs the system to use minimal redundancy for storage in a single, concentrated solution. The storage information recorded is a catalog that maps a unique object identifier to the specific reservoir(s) and the DNA sequence primers required for retrieval via Polymerase Chain Reaction (PCR).
  • Mermaid Diagram:
    sequenceDiagram
        participant UserDevice
        participant MController as Microfluidics Controller
        participant Synthesizer as DNA Synthesizer
        participant StorageReservoir
        UserDevice->>MController: Store Data + Policy (e.g., High-Reliability)
        MController->>MController: Apply Policy: High Redundancy ECC
        MController->>Synthesizer: Synthesize DNA sequence for Data
        Synthesizer-->>MController: DNA strands created
        MController->>StorageReservoir: Deposit DNA into designated reservoir
        MController->>UserDevice: Return Object_ID and Primer_Info
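
  • Illustrative Code Sketch: A hedged Python sketch of policy-selected redundancy. The naive 2-bits-per-nucleotide mapping, policy table, and reservoir names are stand-ins; a real encoder would apply the Fountain-code ECC named above.

    # Illustrative only: each byte maps to four nucleotides; the policy chooses
    # how many physical copies are synthesized and where they are deposited.
    NUCLEOTIDES = "ACGT"

    POLICIES = {
        "high-reliability": {"copies": 3, "reservoirs": ["R1", "R2", "R3"]},
        "high-density":     {"copies": 1, "reservoirs": ["R1"]},
    }

    def encode(data: bytes) -> str:
        seq = []
        for byte in data:
            for shift in (6, 4, 2, 0):                 # 4 nucleotides per byte
                seq.append(NUCLEOTIDES[(byte >> shift) & 0b11])
        return "".join(seq)

    def store(object_id: str, data: bytes, policy_name: str) -> dict:
        policy = POLICIES[policy_name]
        sequence = encode(data)
        # Catalog entry: object identifier -> reservoirs and retrieval primers.
        return {"object_id": object_id,
                "reservoirs": policy["reservoirs"][:policy["copies"]],
                "primer": sequence[:20]}               # stand-in for PCR primers

    print(store("obj-1", b"Hi", "high-reliability"))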
    

Axis 2: Operational Parameter Expansion

2.1. Cryogenic Superconducting Memory Controller

  • Enabling Description: The storage device is designed to operate in a cryogenic environment (e.g., below 77 Kelvin) and utilizes superconducting memory elements like Josephson junctions. The device controller is implemented using Single Flux Quantum (SFQ) logic circuits, enabling clock speeds in the hundreds of gigahertz. The storage policy is optimized for this extreme environment. For instance, a "quantum-safe" policy instructs the controller to use a specific portion of the memory array that is physically shielded from external magnetic fields and to store the data using quantum error correction codes. The recorded storage information includes not only the physical address but also the quantum state parameters used during encoding, which are essential for an accurate readout process.
  • Mermaid Diagram:
    stateDiagram-v2
        [*] --> Idle
        Idle --> ReceivingRequest: Storage Request
        ReceivingRequest --> PolicyLookup: Request Received
        PolicyLookup --> QuantumSafeWrite: Policy == 'Q-Safe'
        PolicyLookup --> StandardWrite: Policy == 'Standard'
        QuantumSafeWrite --> WriteToShieldedArray: Select Shielded Location
        StandardWrite --> WriteToGeneralArray: Select Standard Location
        WriteToShieldedArray --> Logging: Write Complete
        WriteToGeneralArray --> Logging: Write Complete
        Logging --> Idle: Record Location & Quantum Params
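
  • Illustrative Code Sketch: A schematic Python reduction of the 'Q-Safe' dispatch. The region boundaries and the recorded encoding parameters are invented for illustration; SFQ-level control logic is far outside this sketch's scope.

    # Hedged sketch: a 'Q-Safe' policy routes writes to a magnetically shielded
    # region and records the encoding parameters needed for accurate readout.
    SHIELDED_REGION = range(0, 1024)       # physically shielded addresses
    GENERAL_REGION = range(1024, 65536)

    def write(data, policy):
        region = SHIELDED_REGION if policy == "Q-Safe" else GENERAL_REGION
        address = region.start             # a real allocator would search
        log = {"pba": address, "policy": policy}
        if policy == "Q-Safe":
            # Illustrative quantum-state parameters recorded with the write.
            log["encoding_params"] = {"qec_code": "surface", "distance": 3}
        return log

    print(write(b"\x01", "Q-Safe"))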
    

2.2. High-G/High-Vibration Solid-State Recorder for Aerospace & Defense

  • Enabling Description: The device is a solid-state drive (SSD) environmentally hardened for aerospace applications, capable of withstanding extreme g-forces (>100 G) and intense vibrational stress. The storage device policy is dynamic and state-aware, receiving real-time inputs from an integrated MEMS accelerometer and gyroscope. A "high-g event" policy is triggered when acceleration exceeds a configurable threshold. This policy automatically switches the device into a "safe write" mode, wherein the controller triples data redundancy by writing each data block to three physically distinct NAND chips. It also increases the strength of the ECC algorithm and may reduce the write speed to guarantee data integrity during the high-stress event. The storage information log includes the g-force and vibration profile captured at the moment of the write, enabling post-mission analysis of data integrity and device health.
  • Mermaid Diagram:
    graph TD
        A[Receive Write Request] --> B{Query Accelerometer};
        B -- G-Force < Threshold --> C[Standard Policy];
        B -- G-Force > Threshold --> D[High-G Event Policy];
        C --> E["Write Data (1x Redundancy)"];
        D --> F["Write Data (3x Redundancy) to separate chips"];
        E --> G["Record LBA->PBA"];
        F --> H["Record LBA->(PBA1, PBA2, PBA3) & G-Force Data"];
        G --> I[Complete];
        H --> I;
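
  • Illustrative Code Sketch: A minimal Python sketch of the threshold-triggered triple-redundancy write. The mocked accelerometer read and chip names are assumptions; the 100 G threshold comes from the description.

    # Sketch of the high-g policy switch: mirror each block to three distinct
    # NAND chips and log the g-force for post-mission integrity analysis.
    G_THRESHOLD = 100.0
    CHIPS = ["nand0", "nand1", "nand2"]

    def read_accelerometer() -> float:
        return 132.0                  # mocked MEMS sample for illustration

    def write(lba, data):
        g_force = read_accelerometer()
        if g_force > G_THRESHOLD:
            pbas = [(chip, lba) for chip in CHIPS]   # "safe write" mode
            return {"lba": lba, "pbas": pbas, "g_force": g_force,
                    "ecc": "strong", "mode": "high-g"}
        return {"lba": lba, "pbas": [(CHIPS[0], lba)], "mode": "standard"}

    print(write(42, b"telemetry"))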
    

Axis 3: Cross-Domain Application

3.1. Automotive: Context-Aware Event Data Recorder (EDR)

  • Enabling Description: The storage device is integrated into a vehicle's EDR ("black box") system, with the device controller connected to the vehicle's CAN bus. The storage device policy is context-aware and multi-modal. During normal operation, it employs a "Volatile" policy, using a low-endurance, high-speed partition for ephemeral data like infotainment settings. Upon receiving a trigger from the airbag control unit or collision sensors, the policy immediately switches to a "Critical Event" mode. This policy directs the controller to write the last 30 seconds of high-fidelity sensor data (vehicle speed, braking input, steering angle, g-forces) from a circular buffer into a write-once, read-many (WORM), high-endurance, physically protected section of memory. The storage metadata for this event is encrypted with a key held by a trusted authority, and its location is written to a separate, easily accessible index to facilitate authorized post-crash data retrieval.
  • Mermaid Diagram:
    graph TD
        %% Automotive EDR Storage Policy
        A[Monitor CAN Bus for events] --> B{Crash Sensor Signal?};
        B -- Yes --> C[Switch to 'Critical Event' Policy];
        C --> D[Capture 30s Sensor Data Buffer];
        D --> E[Write Buffer to Secure WORM Partition];
        E --> F[Encrypt Storage Location Metadata];
        F --> G[Log Encrypted Pointer to Index];
        B -- No --> H[Use 'Normal Operation' Policy];
        H --> I[Process standard read/write requests];
        I --> A;
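
  • Illustrative Code Sketch: A Python sketch of the circular-buffer freeze on a crash trigger. The buffer depth (one sample per second), field names, and index entry format are illustrative; metadata encryption is omitted here.

    # The last 30 seconds of sensor data live in a ring buffer; a crash signal
    # freezes the window into the protected WORM partition and logs a pointer.
    from collections import deque

    RING = deque(maxlen=30)           # last 30 seconds at 1 sample/second
    WORM_PARTITION = []               # write-once region, modeled as a list

    def on_can_frame(sample: dict, crash_signal: bool):
        RING.append(sample)
        if crash_signal:
            snapshot = list(RING)
            WORM_PARTITION.append(snapshot)
            # Real device: encrypt this pointer with the trusted-authority key.
            return {"event_id": len(WORM_PARTITION) - 1,
                    "location": "worm:%d" % (len(WORM_PARTITION) - 1)}
        return None

    for t in range(35):
        on_can_frame({"t": t, "speed_kph": 88 - t}, crash_signal=False)
    print(on_can_frame({"t": 35, "speed_kph": 0}, crash_signal=True))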
    

3.2. Agricultural Technology: Adaptive Soil Sensor Data Aggregator

  • Enabling Description: The storage device is a low-power, ruggedized unit deployed in an agricultural field, serving as a data logger for a network of wireless soil sensors (e.g., LoRaWAN). The device controller's storage policy is time- and condition-based. The default "routine" policy dictates that sensor readings are aggregated and compressed before being written once per hour to conserve power and storage. If the policy engine detects a parameter crossing a critical threshold (e.g., soil moisture below a pre-set value), it triggers a "High-Alert" policy. This policy increases the data sampling rate to once per minute, stores the raw, uncompressed data from the relevant sensor, and flags the storage location with a high-priority metadata tag for immediate cloud synchronization on the next network connection cycle.
  • Mermaid Diagram:
    graph TD
        A[Start Hourly Timer] --> B[Aggregate Sensor Data];
        B --> C{Sensor Value > Threshold?};
        C -- No --> D[Apply 'Routine' Policy];
        D --> E[Compress & Write Aggregated Data];
        E --> A;
        C -- Yes --> F[Apply 'High-Alert' Policy];
        F --> G[Write Raw, High-Frequency Data];
        G --> H[Flag Data as High-Priority];
        H --> A;
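
  • Illustrative Code Sketch: A Python sketch of the condition-based policy switch. The moisture threshold and sampling intervals are assumed values consistent with the description.

    # Routine policy: hourly compressed aggregates. High-alert policy: raw
    # 1-minute samples flagged for priority cloud synchronization.
    MOISTURE_THRESHOLD = 0.15         # volumetric water content, assumed

    def choose_policy(reading: float) -> dict:
        if reading < MOISTURE_THRESHOLD:
            return {"name": "high-alert", "interval_s": 60,
                    "compress": False, "priority_flag": True}
        return {"name": "routine", "interval_s": 3600,
                "compress": True, "priority_flag": False}

    print(choose_policy(0.12))   # -> high-alert: raw 1-minute samples
    print(choose_policy(0.30))   # -> routine: hourly compressed aggregates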
    

3.3. Medical Technology: Secure Implantable Device Logger

  • Enabling Description: The technology is embodied in a miniaturized, ultra-low-power storage device integrated into an implantable medical device, such as a next-generation pacemaker or continuous glucose monitor. The device controller is an Application-Specific Integrated Circuit (ASIC). The storage policy is architected for extreme reliability and data privacy, partitioning the storage media into a "Clinical" region and a "Research" region. All patient-identifiable data and critical event logs (e.g., arrhythmia detections) are directed by policy to the "Clinical" partition, which mandates high-redundancy encoding, hardware-level AES-256 encryption, and a write-once, read-many (WORM) attribute. In contrast, anonymized, low-frequency telemetry data for research is written to the "Research" partition under a different policy that prioritizes storage density and power efficiency over redundancy. Access to the clinical partition's data map requires a two-factor authentication key from a physician's external reader device.
  • Mermaid Diagram:
    classDiagram
      class DeviceController {
        -currentPolicy: StoragePolicy
        +receiveData(data, type)
        +selectPartition(type)
        +applyPolicy(data)
        +writeToMedia(location, data)
      }
      StoragePolicy <|-- ClinicalPolicy
      StoragePolicy <|-- ResearchPolicy
      class StoragePolicy {
          <<interface>>
          +getStorageLocation()
          +getEncryptionMethod()
          +getRedundancyLevel()
      }
      class ClinicalPolicy{
          +partition: "Clinical_WORM"
          +encryption: "AES-256 Hardware"
          +redundancy: "Triple"
      }
      class ResearchPolicy{
          +partition: "Research_RW"
          +encryption: "None"
          +redundancy: "Single"
      }
      DeviceController --> StoragePolicy : uses
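
  • Illustrative Code Sketch: A Python rendering of the class diagram above. The field values follow the diagram; the routing rule for data types is an illustrative assumption.

    # Patient-identifiable data and critical events go to the WORM clinical
    # partition; anonymized telemetry goes to the research partition.
    from dataclasses import dataclass

    @dataclass
    class StoragePolicy:
        partition: str
        encryption: str
        redundancy: str

    CLINICAL = StoragePolicy("Clinical_WORM", "AES-256 Hardware", "Triple")
    RESEARCH = StoragePolicy("Research_RW", "None", "Single")

    def select_policy(data_type: str) -> StoragePolicy:
        return CLINICAL if data_type in ("clinical", "event") else RESEARCH

    print(select_policy("event"))
    print(select_policy("telemetry"))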
    

Axis 4: Integration with Emerging Tech

4.1. AI/ML-Driven Predictive Wear-Leveling

  • Enabling Description: The device controller integrates a lightweight, on-device machine learning (ML) model (e.g., a quantized neural network) trained to predict I/O patterns. The storage device policy is a dynamic engine driven by this model. The ML model analyzes incoming write requests, considering features like data size, frequency, and logical block address (LBA) locality to predict whether data blocks are likely to become "hot" (frequently rewritten) or "cold" (archival). The policy then proactively directs data predicted to be "hot" to high-endurance SLC (Single-Level Cell) NAND flash, while data predicted to be "cold" is placed in lower-endurance, high-density QLC (Quad-Level Cell) regions. The resulting physical placement and subsequent access patterns are logged and used as a feedback loop to periodically retrain and improve the on-device ML model's accuracy.
  • Mermaid Diagram:
    graph TD
        subgraph Device Controller
            A[Receive Write Request] --> B[Extract Features: Size, LBA, Frequency];
            B --> C["ML Inference Engine: Predict Data Temperature (Hot/Cold)"];
            C -- Hot --> D[Policy: Select SLC Partition];
            C -- Cold --> E[Policy: Select QLC Partition];
            D --> F[Write to High-Endurance Media];
            E --> G[Write to High-Density Media];
            F --> H{Log Write Metadata};
            G --> H;
            H --> I[Update ML Model with Ground Truth];
        end
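
  • Illustrative Code Sketch: A stand-in for the on-device model: a tiny hand-weighted logistic scorer over the named features. The weights and cutoff are illustrative, not trained; any quantized network producing a hot/cold decision would serve the same role.

    # Hot data -> high-endurance SLC; cold data -> high-density QLC. The
    # placement outcome would later be logged as ground truth for retraining.
    import math

    WEIGHTS = {"bias": -1.0, "small_size": 1.2, "recent_rewrites": 2.0}

    def predict_hot(size_bytes: int, rewrites_last_hour: int) -> bool:
        z = (WEIGHTS["bias"]
             + WEIGHTS["small_size"] * (size_bytes < 4096)
             + WEIGHTS["recent_rewrites"] * min(rewrites_last_hour, 5) / 5)
        return 1 / (1 + math.exp(-z)) > 0.5

    def place(lba, size_bytes, rewrites_last_hour):
        tier = "SLC" if predict_hot(size_bytes, rewrites_last_hour) else "QLC"
        return {"lba": lba, "tier": tier}

    print(place(7, 512, 4))      # small, frequently rewritten -> SLC
    print(place(8, 1 << 20, 0))  # large, archival -> QLC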
    

4.2. IoT-Aware Environmental Policy Adaptation

  • Enabling Description: The storage device is a component of an Internet of Things (IoT) edge gateway, equipped with on-board sensors for ambient temperature, humidity, and vibration. The device controller subscribes to an MQTT message broker to receive environmental data from other IoT devices in its local network. The storage policy dynamically adjusts data protection schemes based on this real-time, aggregated environmental data. For example, if the operating temperature exceeds a safety threshold, the policy can automatically switch from a standard striping configuration (RAID 0) across multiple memory chips to a fully mirrored configuration (RAID 1) to protect against heat-induced component failure. If high vibration levels are detected, the policy can increase the strength of Error-Correcting Code (ECC) and enable a write-verify mode for all incoming data. The storage information log for each write operation includes a snapshot of the environmental data at the time of the write.
  • Mermaid Diagram:
    sequenceDiagram
        participant Sensor as IoT Sensor
        participant Broker as MQTT Broker
        participant Controller as Device Controller
        participant Media as Storage Media
        participant User

        loop Real-time Monitoring
            Sensor->>Broker: Publish(topic="env/temp", payload="45C")
            Broker->>Controller: Notify(topic="env/temp", payload="45C")
        end

        Controller->>Controller: Policy Check: Temp > 40C? -> True
        Controller->>Controller: Activate 'High-Temp' Policy (Mirroring)

        User->>Controller: Write Data
        Controller->>Media: Write Data to Chip A
        Controller->>Media: Write Data to Chip B (Mirror)
        Controller->>Controller: Log Write + Temp=45C
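
  • Illustrative Code Sketch: A Python sketch of the environment-driven protection switch, with MQTT delivery abstracted away. Topic names and thresholds are assumptions.

    # The controller reacts to broker messages by escalating its protection
    # scheme: mirroring against heat, stronger ECC plus write-verify against
    # vibration. The limits below are illustrative.
    TEMP_LIMIT_C = 40.0
    VIBRATION_LIMIT_G = 2.0

    state = {"layout": "RAID0", "ecc": "standard", "write_verify": False}

    def on_env_message(topic: str, value: float):
        """Called for each message the controller receives from the broker."""
        if topic == "env/temp" and value > TEMP_LIMIT_C:
            state["layout"] = "RAID1"      # mirror against heat-induced failure
        if topic == "env/vibration" and value > VIBRATION_LIMIT_G:
            state["ecc"] = "strong"
            state["write_verify"] = True

    on_env_message("env/temp", 45.0)
    print(state)   # layout switched to RAID1 under the 'High-Temp' policy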
    

4.3. Blockchain-Based Data Provenance Log

  • Enabling Description: The storage device is designed for applications requiring a verifiable and immutable audit trail, such as legal evidence management or pharmaceutical supply chain tracking. The device controller incorporates a lightweight blockchain client. When a write request is received, the controller stores the data on its media according to the active storage policy. Simultaneously, it generates a cryptographic hash of the data, creates a transaction containing this hash, a timestamp, an object identifier, and the user's digital signature, and broadcasts this transaction to a permissioned blockchain network. The "storage information" recorded locally on the device's memory includes the blockchain transaction ID and the block number where the transaction was confirmed. To verify data integrity, a user requests the object; the controller retrieves the data, re-calculates its hash, and uses the stored transaction ID to fetch the original hash from the immutable blockchain ledger for comparison.
  • Mermaid Diagram:
    graph TD
        A["Receive Write Request (Data + Signature)"] --> B[Policy Engine: Select Storage Location];
        B --> C[Store Data on Media];
        C --> D["Calculate Hash(Data)"];
        D --> E["Create Blockchain Transaction: {Hash, Timestamp, ID, Signature}"];
        E --> F[Broadcast Transaction to P2P Network];
        F --> G[Receive Transaction Confirmation];
        G --> H["Record Storage Info: {LBA, PBA, Blockchain TxID}"];
        H --> I[Acknowledge Write to User];
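
  • Illustrative Code Sketch: A Python sketch of the hash-anchored provenance path. hashlib is a real standard-library module; the ledger, broadcast, and confirmation steps are simulated with a dictionary.

    # Store data locally, anchor its hash in a (mocked) ledger, and verify by
    # recomputing the hash and comparing against the anchored value.
    import hashlib, time

    LEDGER = {}      # tx_id -> transaction, standing in for the blockchain
    MEDIA = {}       # object_id -> (data, storage_info)

    def store(object_id: str, data: bytes, signature: str):
        digest = hashlib.sha256(data).hexdigest()
        tx = {"hash": digest, "ts": time.time(),
              "object_id": object_id, "signature": signature}
        tx_id = hashlib.sha256(repr(tx).encode()).hexdigest()[:16]
        LEDGER[tx_id] = tx                         # "confirmed" transaction
        MEDIA[object_id] = (data, {"tx_id": tx_id})
        return tx_id

    def verify(object_id: str) -> bool:
        data, info = MEDIA[object_id]
        ledger_hash = LEDGER[info["tx_id"]]["hash"]
        return hashlib.sha256(data).hexdigest() == ledger_hash

    store("evidence-17", b"chain of custody record", "sig:alice")
    print(verify("evidence-17"))   # True while media and ledger hash agree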
    

Axis 5: The "Inverse" or Failure Mode

5.1. Graceful Degradation and Read-Only Safe Mode

  • Enabling Description: The device controller actively monitors the health of the storage media using metrics such as SSD spare block count, write amplification factor, and HDD S.M.A.R.T. attributes. The storage policy defines multiple degradation thresholds. When a "Warning" threshold is crossed (e.g., spare blocks fall below 5%), the policy disables non-essential, high-wear background operations like garbage collection and forces all new data to be written sequentially to a pre-allocated journal area. This minimizes random writes and preserves remaining endurance. If a "Critical" threshold is crossed (e.g., spare blocks fall below 1%), the policy engages a hardware-enforced read-only mode. All subsequent write requests are rejected with a specific error code ("LOCKED_FOR_RECOVERY"), while all existing data remains fully accessible for retrieval, preventing further media degradation and catastrophic data loss.
  • Mermaid Diagram:
    stateDiagram-v2
        state "Normal Operation" as Normal
        state "Degraded Mode" as Degraded
        state "Read-Only Lock" as ReadOnly
    
        [*] --> Normal
        Normal --> Degraded: Health Metric < Warning_Threshold
        Normal --> ReadOnly: Health Metric < Critical_Threshold
        Degraded --> ReadOnly: Health Metric < Critical_Threshold
    
        Normal: Full R/W, All Background Ops Enabled
        Degraded: Writes redirected to Journal, High-Wear Ops Disabled
        ReadOnly: All Writes Rejected, Reads Permitted for Data Evacuation
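
  • Illustrative Code Sketch: A Python reduction of the degradation state machine. The 5% and 1% spare-block thresholds and the LOCKED_FOR_RECOVERY error code come from the description; the transition logic is simplified.

    WARNING, CRITICAL = 0.05, 0.01    # spare-block ratios from the description

    def mode_for(spare_ratio: float) -> str:
        if spare_ratio < CRITICAL:
            return "read-only"        # writes rejected, reads still served
        if spare_ratio < WARNING:
            return "degraded"         # journal-only writes, GC disabled
        return "normal"

    def handle_write(spare_ratio: float):
        mode = mode_for(spare_ratio)
        if mode == "read-only":
            return {"status": "error", "code": "LOCKED_FOR_RECOVERY"}
        return {"status": "ok", "mode": mode}

    print(handle_write(0.20))   # normal operation
    print(handle_write(0.03))   # degraded: sequential journal writes only
    print(handle_write(0.005))  # rejected: device locked for data evacuation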
    

5.2. Failsafe Data Purge Policy

  • Enabling Description: This device is designed for high-security applications where data remanence is a critical risk. It features a physically isolated memory region containing a "sanitization" policy. This policy can be invoked by a mutually authenticated software command, a physical tamper-detection switch, or a "dead man's switch" that triggers if a periodic cryptographic heartbeat signal from a host is not received within a specified window. Upon activation, the policy directs the device controller to immediately cease normal operations and perform an irreversible data purge. For SSDs, the policy triggers the ATA "SECURE ERASE" command on all NAND blocks and can subsequently overwrite the entire media with a random data pattern according to NIST SP 800-88 guidelines. The policy itself may be the last item erased to prevent analysis of the device's capabilities.
  • Mermaid Diagram:
    graph TD
        %% Failsafe Purge Policy
        A[Monitor for Trigger] --> B{Tamper Switch, Lost Heartbeat, or Secure Erase Command?};
        B -- No --> C[Continue Normal Operation];
        C --> A;
        B -- Yes --> D[Trigger Sanitization];
        D --> E[Disable Host I/O Interface];
        E --> F[Execute ATA SECURE ERASE on all blocks];
        F --> G["Overwrite Media with Random Data (3 passes)"];
        G --> H[Verify Overwrite];
        H --> I["Self-destruct Controller Firmware (optional)"];
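
  • Illustrative Code Sketch: A Python sketch of trigger evaluation and the ordered purge sequence. The heartbeat window is an assumed value, and the erase steps are returned as a log rather than performed.

    # Any of the three triggers named in the description starts sanitization.
    import time

    HEARTBEAT_WINDOW_S = 300.0        # assumed dead man's switch window

    def should_purge(tamper: bool, last_heartbeat: float,
                     erase_command: bool, now: float) -> bool:
        heartbeat_lost = (now - last_heartbeat) > HEARTBEAT_WINDOW_S
        return tamper or heartbeat_lost or erase_command

    def sanitize():
        # Ordered per the description: isolate, erase, overwrite, verify.
        return ["disable host I/O",
                "ATA SECURE ERASE all blocks",
                "3-pass random overwrite (NIST SP 800-88)",
                "verify overwrite",
                "erase policy region last"]

    now = time.time()
    if should_purge(False, now - 400.0, False, now):   # heartbeat expired
        print(sanitize())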
    

Combination with Open-Source Standards

1. NVMe over Fabrics (NVMe-oF) with Policy Extensions

  • Enabling Description: The device firmware is designed for an NVMe-oF SSD. The standard NVMe command set, governed by NVM Express, Inc., is extended with vendor-specific commands for policy management. A host system uses a standard NVMe-oF driver to communicate with the drive over an Ethernet or Fibre Channel network. To configure the drive, the host sends a "Set Feature - Storage Policy" command (using a vendor-unique identifier) containing the policy rules in a structured format like JSON. Subsequent standard NVMe "Write" or "Write Uncorrectable" commands are then processed by the device controller according to this pre-loaded policy. The standard "Get Log Page" command is extended with a custom log identifier to retrieve a history of policy decisions and storage metadata, allowing for management and monitoring through existing, open-standard toolchains.
  • Mermaid Diagram:
    sequenceDiagram
        participant Host
        participant Drive as Drive Controller
        Host->>Drive: NVMe Admin Command: Set Feature (Policy_JSON)
        Drive->>Drive: Store Policy in On-Controller Memory
        Drive-->>Host: Command Completion
        Host->>Drive: NVMe I/O Command: Write(LBA, Data)
        Note over Drive: Execute Policy Logic
        Note over Drive: Select Physical Block based on Policy
        Drive->>Drive: Write Data to Flash
        Drive-->>Host: Command Completion
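
  • Illustrative Code Sketch: A speculative Python sketch of packing the vendor-unique policy command. Opcode 0x09 is the standard NVMe Set Features admin opcode; the 0xC0 feature identifier and the simplified header layout are assumptions, not the NVMe wire format.

    # Pack a JSON policy into a toy command structure: the header is a
    # simplified stand-in for the real 64-byte NVMe submission queue entry.
    import json, struct

    def build_set_policy_command(policy: dict) -> tuple[bytes, bytes]:
        payload = json.dumps(policy).encode()
        opcode, feature_id = 0x09, 0xC0    # feature ID assumed vendor-unique
        header = struct.pack("<BBI", opcode, feature_id, len(payload))
        return header, payload

    header, payload = build_set_policy_command(
        {"rule": "hot-data", "target": "SLC", "redundancy": 2})
    print(header.hex(), payload.decode())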
    

2. Ceph OSD with WebAssembly (WASM) Policy Modules

  • Enabling Description: The storage device operates as a highly specialized Object Storage Daemon (OSD) within a Ceph distributed storage cluster. The device controller runs a minimal Linux kernel and a modified Ceph OSD process. The core "application" is a WebAssembly (WASM) runtime environment integrated into the OSD. The "storage device policy" is a user-submitted WASM module, which can be deployed via Ceph's management tools. When Ceph's CRUSH algorithm directs a data placement group to this device, the OSD invokes the loaded WASM module instead of performing a simple write. The sandboxed WASM code contains the logic to inspect the object's metadata and apply a specific physical placement strategy (e.g., density, error correction, media tier) on the device's local media. This architecture combines the distributed, open-source Ceph framework with the patent's device-level, user-defined policy execution.
  • Mermaid Diagram:
    graph TD
        subgraph Ceph Cluster
            A[Client] -- OSD Map --> B[Monitor]
            B -- CRUSH Map --> A
        end
        subgraph Intelligent_OSD_Drive
            D[Ceph OSD Process] --> E{WASM Runtime};
            E -- Executes --> F[User-Supplied Policy.wasm];
            F -- Placement Decision --> G[Media Abstraction Layer];
            G -- Write/Read --> H[Physical Storage Media];
        end
        A -- Write Object --> D
    

3. RISC-V SoC Controller with a Trusted Execution Environment (TEE)

  • Enabling Description: The device controller is implemented as a System-on-a-Chip (SoC) using the open-source RISC-V instruction set architecture. The policy execution "application" runs within a hardware-isolated Trusted Execution Environment (TEE), such as Keystone or a similar open-source framework. This ensures that the policy engine and its decisions are protected from tampering, even from other software running on the controller's main processor. The "storage device policy" is securely provisioned to the TEE using a standardized, open API, such as a profile defined by the Storage Networking Industry Association (SNIA), over an attested, encrypted channel. This design leverages open hardware (RISC-V) and open security standards to create a verifiable and secure implementation of the policy-based storage controller.
  • Mermaid Diagram:
    flowchart TD
        subgraph Host_System
            A[Management_Console] -->|SNIA Policy API over TLS| B[Device_Driver]
        end
        subgraph SoC ["RISC-V SoC"]
            C[Network Interface]
            D["Normal World<br>(e.g., Linux)"]
            subgraph TEE ["Secure World (TEE)"]
                F[Policy Engine Applet]
                G[Cryptographic Keys]
            end
            H[Storage Media I/F]
        end
        subgraph Storage_Media
            I[NAND Flash / Magnetic Disk]
        end

        B --> C
        C --> D
        C -->|Secure Channel via Attestation| TEE
        TEE --> H
        H <--> I
    
