Patent 10776023

Derivative works

Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.


Defensive Disclosure: Configurable Policy-Based Storage Device Behavior

Publication Date: May 9, 2026
Relevant Technology Field: Data Storage, Embedded Systems, System-on-Chip (SoC) Architecture, Information Lifecycle Management (ILM).

This document discloses derivative inventions and enhancements to the concepts described in U.S. Patent 10,776,023. The purpose of this disclosure is to place these concepts in the public domain, thereby establishing them as prior art for any future patent applications.


Derivative Set 1: Material & Component Substitution

1.1. Phase-Change Memory (PCM) with Thermal-Aware Policy Engine

  • Enabling Description: This embodiment replaces the NAND flash or magnetic media of the original patent with Phase-Change Memory (PCM). The device controller's policy engine is specifically adapted to manage the unique properties of PCM. A "write policy" for PCM would not control recording density in a magnetic sense, but would instead modulate the amplitude and duration of the heating pulse used to set the phase (amorphous or crystalline) of the PCM cells. A high-reliability policy would use a longer, more precise pulse to ensure a stable phase change, at the cost of slower write speeds and higher power consumption. A high-speed policy would use a shorter pulse, accepting a slightly higher bit-error rate that is compensated for by a more robust Error Correction Code (ECC) algorithm, also selected by the policy. The policy engine interfaces with on-chip temperature sensors. If a write-intensive operation causes a localized temperature increase, the policy can dynamically switch to a lower-power write mode or throttle requests to prevent thermal crosstalk between adjacent PCM cells, thus preserving data integrity.
  • Mermaid Diagram:
    graph TD
        A[Storage Request] --> B{Policy Engine};
        B -- Policy: High-Reliability --> C[PCM Controller: Long-Pulse Write];
        B -- Policy: High-Speed --> D[PCM Controller: Short-Pulse Write + Strong ECC];
        E[Thermal Sensor] -- "Temp > Threshold" --> B;
        B -- Thermal Throttling --> F[PCM Controller: Low-Power Pulse / Queue Writes];
        C --> G[PCM Media];
        D --> G;
        F --> G;
    
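The pulse-modulation and thermal-throttling logic above can be sketched as follows (a minimal Python model; the pulse durations, ECC sizes, policy names, and 70 °C threshold are illustrative assumptions, not device specifications):

```python
from dataclasses import dataclass

@dataclass
class WriteMode:
    pulse_ns: int          # heating-pulse duration (illustrative)
    ecc_parity_bytes: int  # ECC strength paired with the pulse

# Hypothetical policy table; real PCM pulse timing is device-specific.
POLICIES = {
    "high_reliability": WriteMode(pulse_ns=500, ecc_parity_bytes=8),
    "high_speed":       WriteMode(pulse_ns=120, ecc_parity_bytes=32),
    "low_power":        WriteMode(pulse_ns=300, ecc_parity_bytes=16),
}

THERMAL_LIMIT_C = 70.0  # assumed throttle threshold

def select_write_mode(policy: str, die_temp_c: float) -> WriteMode:
    """Pick pulse/ECC settings; the on-chip thermal sensor overrides
    the active policy with the low-power mode when the die runs hot."""
    if die_temp_c > THERMAL_LIMIT_C:
        return POLICIES["low_power"]
    return POLICIES[policy]
```

The thermal override taking precedence over the host-selected policy mirrors the diagram: the sensor edge feeds back into the policy engine before any pulse is issued.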

1.2. Ferroelectric RAM (FeRAM) with Endurance-Balancing Policy

  • Enabling Description: This variation utilizes Ferroelectric RAM (FeRAM) as the storage media, prized for its low power consumption and high write endurance. The device controller's policy engine implements an "endurance-balancing" algorithm. Storage information for each data object includes not only its location but also a "write-volatility" attribute provided by the host (e.g., 'temporary', 'long-term-archive', 'frequently-updated'). The layout library within the policy engine uses this attribute to segregate data. Frequently-updated data is written to a dedicated, high-endurance FeRAM partition, while archival data is placed in a separate partition. The policy engine periodically remaps the logical-to-physical addresses for the high-traffic partition to ensure write operations are evenly distributed, a form of advanced wear-leveling. The "refuse delete" instruction could be implemented by setting a permanent polarization state on a block of FeRAM cells that cannot be reversed by standard write commands, creating a hardware-level WORM (Write-Once-Read-Many) capability.
  • Mermaid Diagram:
    sequenceDiagram
        participant Host
        participant DeviceController
        participant PolicyEngine
        participant FeRAM
        Host->>+DeviceController: Write(Data, {volatility: 'high'})
        DeviceController->>+PolicyEngine: AnalyzeRequest(Data, Metadata)
        PolicyEngine->>-DeviceController: Instruct: Use High-Endurance Zone
        DeviceController->>+FeRAM: Write to PhysicalAddr_A
        FeRAM-->>-DeviceController: Ack
        DeviceController-->>-Host: Write OK, ContentID: 123
        Host->>+DeviceController: Write(Data, {volatility: 'archive'})
        DeviceController->>+PolicyEngine: AnalyzeRequest(Data, Metadata)
        PolicyEngine->>-DeviceController: Instruct: Use Archive Zone
        DeviceController->>+FeRAM: Write to PhysicalAddr_B
        FeRAM-->>-DeviceController: Ack
        DeviceController-->>-Host: Write OK, ContentID: 456
    
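A minimal sketch of the volatility-based zoning and periodic remapping described above (the two-zone split and the round-robin remap are simplifications for illustration, not the patent's algorithm):

```python
def choose_partition(volatility: str) -> str:
    """Map the host-supplied write-volatility attribute to a media zone."""
    return {
        "frequently-updated": "high_endurance",
        "temporary":          "high_endurance",
        "long-term-archive":  "archive",
    }.get(volatility, "archive")

class EnduranceBalancer:
    """Periodically rotate the logical-to-physical map of the
    high-traffic zone so writes spread evenly across its blocks."""
    def __init__(self, n_blocks: int):
        self.n_blocks = n_blocks
        self.offset = 0

    def physical(self, logical: int) -> int:
        return (logical + self.offset) % self.n_blocks

    def remap(self) -> None:
        # Invoked by the policy engine on its maintenance schedule.
        self.offset = (self.offset + 1) % self.n_blocks
```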

1.3. Neuromorphic Controller for Predictive Data Placement

  • Enabling Description: The general-purpose "device controller" is substituted with a neuromorphic processor core co-located with a traditional CPU. This neuromorphic core runs a Spiking Neural Network (SNN) that is trained to recognize complex I/O patterns in real-time. The storage device policy is no longer a static set of rules but a goal-oriented directive (e.g., "minimize 99th percentile read latency" or "maximize device lifespan"). The SNN observes sequences of reads and writes, cluster sizes, and inter-command delays. It predicts which data blocks are likely to be accessed together in the near future. The layout library then uses these predictions to physically co-locate related data blocks on the storage media, even if they were written at different times. For a hard disk, this minimizes actuator arm movement. For an SSD, this ensures that data for a predicted workload is placed in the same erase block to minimize read-disturb and write-amplification. The SNN's predictive model is the "storage information" and is continuously updated.
  • Mermaid Diagram:
    flowchart LR
        subgraph Device Controller
            A[I/O Command Queue] --> B(Neuromorphic Core - SNN);
            B --Predicts Access Pattern--> C(Policy Engine);
            C --Generates Placement Policy--> D[Layout Library];
            D --Physical Address--> E[Media Interface];
        end
        E --> F[Storage Media];
        A --Data--> E;
    
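The predictive co-location idea can be illustrated without an actual SNN; the sketch below substitutes a simple successor-frequency model for the trained network, standing in for the continuously updated "storage information":

```python
from collections import defaultdict

class AccessPredictor:
    """Stand-in for the SNN: counts which block tends to follow which,
    so the layout library can physically co-locate likely successors."""
    def __init__(self):
        self.follows = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, block: int) -> None:
        if self.prev is not None:
            self.follows[self.prev][block] += 1
        self.prev = block

    def likely_next(self, block: int):
        succ = self.follows.get(block)
        if not succ:
            return None
        return max(succ, key=succ.get)
```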

Derivative Set 2: Operational Parameter Expansion

2.1. Policy-Based Storage for Cryogenic Quantum Computing

  • Enabling Description: This application applies the invention to a storage device operating within a dilution refrigerator at milli-Kelvin temperatures, providing data storage for a quantum computer. The storage media consists of superconducting memory elements. The "storage device policy" is critical for minimizing heat generation, which is a primary source of quantum decoherence. The policy engine, operating at a warmer stage of the refrigerator, receives a "computation schedule" from the quantum computer's control system. The policy dictates that data read/write operations are only performed during specific, non-critical phases of the quantum computation to avoid RF interference. For critical qubit state data, the policy enforces a "triplicate redundant storage" mode across physically separated memory chips to mitigate loss from cosmic ray strikes or localized heating events. Storage information is stored remotely outside the cryogenic environment.
  • Mermaid Diagram:
    stateDiagram-v2
        [*] --> Idle
        Idle --> Receiving_Policy: Quantum Control System
        Receiving_Policy --> Idle: Policy Updated
    
        state "Quantum Computation Active" as Active {
            state "Qubit Measurement" as Measure
            state "Gate Operation" as Gate
            
            [*] --> Gate
            Gate --> Measure: Read Qubit State
            Measure --> Gate: Apply Correction
        }
    
        Idle --> Active: Start Computation
        Active --> Idle: End Computation
    
        state "Storage Operation" as Storage {
            policy_check: Policy allows access?
            read_op: Read from Superconducting RAM
            write_op: Write to Superconducting RAM
            [*] --> policy_check
            policy_check --> read_op: read permitted
            policy_check --> write_op: write permitted
        }
    
        Idle --> Storage: Host I/O Request
        Storage --> Idle: Operation Complete
        
        note right of Active
          During this state, policy
          disallows any storage I/O
          to prevent decoherence.
        end note
    
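A minimal model of the schedule-gated I/O and triplicate-redundancy policies described above (the (start, end) schedule format and the dict-backed memory chips are illustrative):

```python
def io_allowed(t: float, computation_windows: list) -> bool:
    """Storage I/O is permitted only outside the computation windows
    published by the quantum computer's control system."""
    return not any(start <= t < end for start, end in computation_windows)

def write_triplicate(chips: list, key: str, qubit_state: bytes) -> None:
    """Triplicate-redundant storage of critical data across three
    physically separated memory chips (modeled here as dicts)."""
    for chip in chips[:3]:
        chip[key] = qubit_state
```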

2.2. Planetary-Scale Inter-Satellite Storage Network

  • Enabling Description: A constellation of satellites (e.g., in Earth orbit, lunar orbit, and on Mars) forms a single, logical, distributed storage device. Each satellite contains a '023-style storage device. The "storage device policy" is location- and link-aware. A policy manager on Earth transmits policies that account for orbital mechanics and communication windows. When a satellite in low Earth orbit collects high-resolution imagery, its local policy dictates "local-first, high-density storage." As the satellite's orbit approaches a ground station, the policy dynamically shifts to "prepare for downlink," re-ordering data in the buffer for high-speed transmission. If the data is critical and the satellite will soon be out of contact, the policy instructs the device controller to transmit a copy to a nearby satellite in the constellation, which acts as a "remote location" for redundant storage information and content. The "refuse delete" command is used to protect raw scientific data until it has been confirmed as received by at least two ground stations.
  • Mermaid Diagram:
    graph TD
        subgraph Satellite_A
            A1[Device Controller]
            A2[Storage Media]
            A1 -- policy: local_store --> A2
        end
        subgraph Satellite_B
            B1[Device Controller]
            B2[Storage Media]
        end
        subgraph GroundStation
            GS[Policy Manager]
        end
    
        GS -- "Update Policy (Orbital Position)" --> A1
        A1 -- policy: replicate_critical_data --> B1
        B1 -- store_copy --> B2
        A1 -- policy: downlink_data --> GS
    
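The link-aware policy switch can be sketched as a pure function of orbital context (thresholds such as the 5-minute downlink window are assumed values for illustration):

```python
def satellite_policy(minutes_to_ground_contact: float,
                     data_is_critical: bool,
                     neighbor_in_range: bool) -> list:
    """Return the actions the link-aware policy would take."""
    actions = []
    if minutes_to_ground_contact <= 5:
        actions.append("prepare_for_downlink")   # reorder transmit buffer
    else:
        actions.append("local_store_high_density")
    if data_is_critical and minutes_to_ground_contact > 30 and neighbor_in_range:
        actions.append("replicate_to_neighbor")  # neighbor = remote location
    return actions
```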

Derivative Set 3: Cross-Domain Application

3.1. Automotive: Black Box with Dynamic Event-Triggered Policies

  • Enabling Description: The invention is embodied in the central data recorder ("black box") of an autonomous vehicle. In normal operation, the device controller uses a "looping-cache" policy, storing high-bandwidth sensor data (LIDAR, camera feeds) with low retention, constantly overwriting the oldest data. This storage information is kept locally. However, the controller is also connected to the vehicle's CAN bus. If it detects an event from the inertial measurement unit (IMU) indicating a crash (e.g., deceleration > 5G), the policy immediately changes to "event-lockdown." The last 30 seconds of sensor data are flagged as non-deletable ("refuse delete"). Furthermore, a secondary policy triggers, storing a compressed summary of the event (location, speed, G-force data) with a cryptographic signature to a separate, physically hardened section of the memory. This "storage information" and the summary data are also transmitted via a cellular link to a remote server ("remote location") managed by the manufacturer or insurer.
  • Mermaid Diagram:
    stateDiagram-v2
        state "Normal Operation" as Normal
        state "Event Lockdown" as Lockdown
    
        [*] --> Normal
        Normal --> Normal: Write Sensor Data (Overwrite Old)
        Normal --> Lockdown: IMU Event (G-force > 5g)
        
        Lockdown --> [*]: Power Off
    
        state Lockdown {
            direction LR
            [*] --> Mark_Immutable
            Mark_Immutable --> Store_Summary: Compress & Sign Event Data
            Store_Summary --> Transmit_Remote: Send Summary to Cloud
            Transmit_Remote --> [*]
        }
    
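A compact model of the looping-cache policy and the event-lockdown transition (the 5 G trigger comes from the description above; a deque stands in for the circular sensor buffer):

```python
from collections import deque

class BlackBoxRecorder:
    """Looping cache that freezes ('refuse delete') on a crash event."""
    def __init__(self, capacity: int):
        self.ring = deque(maxlen=capacity)  # oldest frames overwritten
        self.locked = False

    def record(self, frame) -> None:
        if not self.locked:
            self.ring.append(frame)

    def on_imu_event(self, g_force: float) -> None:
        if g_force > 5.0:
            self.locked = True  # event-lockdown policy engages

    def frames(self) -> list:
        return list(self.ring)
```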

3.2. AgTech: Smart Implement with Soil-Condition-Based Policies

  • Enabling Description: A smart agricultural seed drill uses a policy-driven storage device. The device is connected to real-time sensors measuring soil moisture, pH, and nitrogen levels. The "storage device policy" is downloaded from a central farm management system based on the specific field's prescription map. As the drill moves across the field, the device controller receives sensor data. The policy contains rules such as: "IF soil_moisture < 20% THEN set_data_priority=HIGH and store_geotagged_data_redundantly." This ensures that data from problem areas is preserved with higher fidelity. The layout policy also adapts; for uniform field sections, it uses a highly compressed format to save space. For highly variable sections, it stores raw sensor readings. The storage information, including the precise GPS coordinates linked to each data point, is periodically synced to a "remote location" (the farm's cloud database) via a low-power LoRaWAN or satellite uplink.
  • Mermaid Diagram:
    flowchart TD
        A[GPS + Soil Sensors] --> B{Device Controller};
        C[Farm Cloud Server] -- Prescription Map / Policy --> B;
        
        subgraph "Policy Execution"
            B -- Sensor Readings --> P{Policy Engine};
            P -- "Moisture < 20%?" --> R1[Rule 1: High Reliability];
            P -- "pH > 7.5?" --> R2[Rule 2: Store Raw Data];
            P -- "Default" --> R3[Rule 3: Compressed Storage];
        end
    
        R1 --> S[Store on Media];
        R2 --> S;
        R3 --> S;
        S -- Storage Info --> B;
        B -- Sync Storage Info --> C;
    
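The prescription-map rules can be expressed as a first-match rule table (threshold values and field names are illustrative, not taken from any real prescription format):

```python
def storage_rule(reading: dict) -> dict:
    """First-match evaluation of the policy rules from the field's
    prescription map; the first matching rule wins."""
    if reading.get("soil_moisture", 100.0) < 20.0:
        return {"priority": "HIGH", "redundant": True, "format": "raw"}
    if reading.get("ph", 7.0) > 7.5:
        return {"priority": "NORMAL", "redundant": False, "format": "raw"}
    return {"priority": "NORMAL", "redundant": False, "format": "compressed"}
```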

3.3. Consumer Electronics: Wearable Health Monitor with Privacy Policies

  • Enabling Description: A smartwatch or health tracker incorporates a storage device with a user-configurable privacy policy. The user, via a smartphone app, defines the policy. Options could include: "Store heart rate data locally only," "Anonymize and upload activity data," or "In case of fall detection, make location and vital signs data available to emergency contacts." The device controller receives this policy. When it logs a heart rate measurement, it checks the policy. If the policy is "local only," the storage information is recorded to internal memory and marked as non-exportable. If a fall is detected, the policy instructs the controller to retrieve the latest vital signs and location data, package it, and transmit it, overriding the normal privacy restrictions. The "remote location" for storage can be a user's personal cloud account, and the policy can dictate that the storage information (metadata) sent to this remote location is encrypted with a key held only on the user's phone, preventing the cloud provider from analyzing the raw data.
  • Mermaid Diagram:
    sequenceDiagram
        autonumber
        participant App
        participant DeviceController
        participant StorageMedia
        participant EmergencyContact
        
        App->>DeviceController: Set Policy (privacy_mode='local', emergency_unlock=true)
        loop Normal Operation
            DeviceController->>StorageMedia: Store Vitals (Heart Rate, etc.)
        end
        
        DeviceController->>DeviceController: Event: Fall Detected!
        DeviceController->>DeviceController: Policy Check: emergency_unlock is true
        DeviceController->>StorageMedia: Retrieve Last 60s Vitals + GPS
        DeviceController->>EmergencyContact: Transmit Emergency Data Packet
    
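The privacy check with emergency override reduces to a small predicate (policy keys such as emergency_unlock are hypothetical names used for illustration):

```python
def export_allowed(record_type: str, policy: dict, fall_detected: bool) -> bool:
    """Data marked 'local' never leaves the device unless the
    user-enabled emergency override fires on fall detection."""
    if fall_detected and policy.get("emergency_unlock", False):
        return True
    return policy.get(record_type, "local") != "local"
```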

Derivative Set 4: Integration with Emerging Tech

4.1. AI-Optimized Wear Leveling and Data Freshening

  • Enabling Description: The device controller integrates a TinyML model trained to predict data "temperature" (access frequency) and media degradation. The "storage device policy" is not a fixed set of rules but a target goal, e.g., "Achieve 5-year lifespan under 90th percentile enterprise workload." The AI model continuously monitors Logical Block Address (LBA) access patterns and internal media health metrics (e.g., NAND block erase counts, SSD temperature). It predicts which data blocks will become "hot" (frequently written) and preemptively moves them to fresh, low-wear blocks. Conversely, it identifies "cold" data that has not been accessed for a long time and schedules a "data freshening" operation, where the data is read and re-written to mitigate charge leakage or magnetic bit decay. This entire process is autonomous within the drive, using the AI model to dynamically generate and execute the optimal layout and maintenance policy to meet the high-level goal.
  • Mermaid Diagram:
    graph LR
        subgraph "AI-Driven Controller"
            A[I/O Stream] --> B[Pattern Recognition ML Model];
            C[Media Health Sensors] --> B;
            B -- "Predicts Hot/Cold Data" --> D{Policy Generator};
            D -- "Target: 5-Yr Lifespan" --> E{Policy};
            E -- "Wear-Leveling & Freshening Ops" --> F[Flash Translation Layer];
        end
        F <--> G[NAND Flash Media];
    
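A simplified stand-in for the TinyML planner: classify hot LBAs for migration and stale LBAs for freshening from plain counters (thresholds are assumed values, and a real model would predict future access rather than count past writes):

```python
def plan_maintenance(write_counts: dict, last_access: dict, now: float,
                     hot_threshold: int = 100,
                     stale_seconds: float = 30 * 86400):
    """Return (hot LBAs to migrate to fresh low-wear blocks,
    cold LBAs to re-write so charge leakage is mitigated)."""
    hot = sorted(lba for lba, n in write_counts.items() if n >= hot_threshold)
    cold = sorted(lba for lba, t in last_access.items()
                  if now - t > stale_seconds)
    return hot, cold
```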

4.2. IoT-Aware Environmental Adaptation

  • Enabling Description: The storage device is designed for edge IoT deployments and includes an integrated environmental sensor suite (temperature, humidity, vibration, power quality). The device controller's policy library includes multiple modes optimized for different conditions. A remote IoT management platform (like AWS IoT Core or Azure IoT Hub) can push a new policy to the device. For example, a device on a vibrating industrial robot might receive a policy that increases the robustness of its ECC and physically duplicates critical configuration data. If the power quality sensor detects unstable voltage, the controller can activate a "safe-write" policy that uses more power to verify every write operation and journals all metadata to a separate, power-fail-safe memory region (e.g., MRAM) before committing it to the primary media. This ensures data integrity even in harsh and unpredictable physical environments.
  • Mermaid Diagram:
    graph TD
        subgraph "IoT Device"
            A[Vibration Sensor] --> C;
            B[Power Sensor] --> C;
            C[Device Controller] -- reads --> D[Storage Media];
            C -- writes --> D;
        end
        subgraph "Cloud Management"
            E[IoT Platform]
        end
        E -- "Push Policy: HighVibration" --> C;
        C -- "Current Policy: HighVibration" --> F{"Execute Write with Extra ECC & Verification"};
        F --> D;
    
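The safe-write behavior can be sketched as journal-then-commit-then-verify (dicts model the power-fail-safe MRAM journal and the primary media):

```python
def safe_write(media: dict, journal: dict, addr: int, data: bytes,
               power_unstable: bool) -> bool:
    """Under the 'safe-write' policy, metadata is journaled to the
    fail-safe region before the commit, and every write is verified
    by read-back."""
    if power_unstable:
        journal[addr] = len(data)   # journal metadata before committing
    media[addr] = data              # commit to primary media
    return media.get(addr) == data  # read-back verification
```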

Derivative Set 5: The "Inverse" or Failure Mode

5.1. Policy-Driven Graceful Degradation and Forensics

  • Enabling Description: The device controller actively monitors media health (e.g., reallocated sector count, wear-leveling delta). The storage policy includes a set of degradation thresholds. When a threshold is crossed (e.g., >5% of blocks are reallocated), the controller autonomously triggers a "degraded mode" policy. This policy might: 1) Mark the device as "read-only" to prevent further wear and data loss. 2) Reduce the reported capacity to the host, taking the weakest parts of the media offline. 3) Increase the strength of the ECC applied to all reads to maximize the chance of recovering data from failing cells. Critically, before entering a read-only state, the controller stores a final "state-of-health" log, including all SMART data and a map of bad blocks, in a reserved, immutable area. This log, protected by a "refuse delete" policy, can be retrieved for forensic analysis to understand the cause of the failure.
  • Mermaid Diagram:
    stateDiagram-v2
        [*] --> Healthy: Power On
        Healthy: Normal R/W Operations
        Healthy --> Degraded_Mode: Wear > Threshold 1
        Degraded_Mode: Reduced Capacity, Stronger ECC
        Degraded_Mode --> Read_Only_Mode: Wear > Threshold 2
        Read_Only_Mode: Writes disabled, Final Log Stored
        Read_Only_Mode --> Failed: Critical Error
        Failed --> [*]
    
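The degradation thresholds map directly to a small classifier; the 5% figure comes from the description above, while the read-only threshold is an assumed value:

```python
def health_state(reallocated_pct: float,
                 degraded_threshold: float = 5.0,
                 read_only_threshold: float = 15.0) -> str:
    """Map the reallocated-block percentage to the device's
    degradation state (thresholds checked from worst to best)."""
    if reallocated_pct > read_only_threshold:
        return "read_only"
    if reallocated_pct > degraded_threshold:
        return "degraded"
    return "healthy"
```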

Combination Prior Art Scenarios

  1. Integration with Ceph via CRUSH Ruleset Extension: A storage policy as defined in the '023 patent is mapped to a CRUSH device class assigned to each policy-capable OSD. A CRUSH rule then restricts placement to drives carrying that class, for example:

    rule replicated_worm {
        id 1;
        type replicated;
        min_size 1;
        max_size 10;
        step take default class drive_policy_worm;
        step chooseleaf firstn 0 type host;
        step emit;
    }

    A Ceph client requesting WORM storage would use this rule. The Ceph OSD daemon, running on the storage node, does not simply write to a generic block device: it first issues a command to the '023-enabled drive to activate its internal, hardware-enforced "WORM" policy for the specified logical blocks before writing the object data. This moves policy enforcement from the software OSD layer into the device firmware, providing a more robust and secure implementation.

  2. Integration with Kubernetes via Container Storage Interface (CSI): A CSI driver is developed for the '023-enabled storage device. The StorageClass object in Kubernetes is extended with a parameters field for policy definition. A DevOps engineer can define a class like this:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: high-reliability-db
    provisioner: csi.gaea.com
    parameters:
      read_retry: "heroic"
      write_policy: "replicated_nvram"
      defect_avoidance: "aggressive"
    

    When a PersistentVolumeClaim requests this StorageClass, the CSI driver communicates with the device controller via a proprietary vendor command or a standardized protocol like NVMe-MI. It passes these parameters, which the controller uses to load the corresponding algorithms from its firmware libraries (as described in FIG. 7A/7B of the '023 patent), thus configuring the physical volume for the specific needs of the database application running in the pod.

  3. Integration with Apache Zookeeper for Distributed Policy Management: A cluster of '023-enabled storage devices uses Zookeeper as the "remote location" for storing both policies and storage information (metadata). Each device controller runs a lightweight Zookeeper client. The canonical storage policies are stored in a ZNode, e.g., /storage_policies/active_policy. Each device controller places a watch on this ZNode. When an administrator updates the policy, all devices are notified and atomically switch to the new policy. Furthermore, when a device writes an object, it stores the content ID and its physical location information in an ephemeral ZNode (e.g., /object_locations/object-xyz-123). This provides a fault-tolerant, distributed, and consistent metadata map that is managed by the storage devices themselves, reducing the need for a traditional centralized file system metadata server. If a device fails, its ephemeral nodes disappear, signaling to the cluster that its objects are offline.
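The ephemeral-node pattern above can be modeled in memory; a real deployment would use a ZooKeeper client library, but this sketch captures the failure-signaling semantics: closing a device's session removes all of its location entries, marking those objects offline.

```python
class MiniRegistry:
    """In-memory stand-in for the ZooKeeper object-location map:
    each device controller registers locations as ephemeral entries
    bound to its session."""
    def __init__(self):
        self.ephemeral = {}  # session_id -> {path: value}

    def create_ephemeral(self, session: str, path: str, value: str) -> None:
        self.ephemeral.setdefault(session, {})[path] = value

    def get(self, path: str):
        for entries in self.ephemeral.values():
            if path in entries:
                return entries[path]
        return None  # node gone => object offline

    def close_session(self, session: str) -> None:
        # Session loss (device failure) removes all its ephemeral nodes.
        self.ephemeral.pop(session, None)
```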

Generated 5/9/2026, 6:49:29 PM