Patent 10979693

Derivative works

Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.


Defensive Disclosure and Prior Art Derivations Based on U.S. Patent 10,979,693

Publication Date: May 1, 2026
Subject: Methods and Systems for Video Stabilization and Transformation in Imaging Applications.
Disclaimer: This document is a defensive publication intended to place the described concepts into the public domain, thereby establishing prior art for the purposes of patent law.

This disclosure details several derivative implementations, applications, and variations of the core methods described in U.S. Patent 10,979,693. The purpose is to elaborate on foreseeable and obvious extensions of the patented technology to prevent future claims on these incremental improvements.


Derivative Variations of Independent Claim 1

Independent Claim 1 describes a method for mapping and filtering stereoscopic data by comparing a reference frame to preceding and succeeding frames, characterizing motion, and applying a three-matrix operation (based on motion, camera intrinsic projection, and its inverse) to modify the reference frame.
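
As a concrete illustration, the following minimal sketch applies the three-matrix product H = K · M · K⁻¹ to a reference frame using NumPy and OpenCV's Python bindings. The pure-rotation motion model, intrinsic values, and function names are illustrative assumptions, not the patent's reference implementation.

    # Sketch: warp the reference frame by H = K · M · K^-1, where K is the
    # intrinsic projection matrix and M is the filtered motion (here a
    # pure in-plane rotation). Values and names are illustrative.
    import numpy as np
    import cv2

    def stabilize_reference_frame(ref_frame, motion_matrix, K):
        K_inv = np.linalg.inv(K)
        H = K @ motion_matrix @ K_inv          # the three-matrix operation
        h, w = ref_frame.shape[:2]
        return cv2.warpPerspective(ref_frame, H, (w, h))

    # Example: undo a small roll about the optical axis.
    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])
    theta = np.deg2rad(-0.5)                   # corrective rotation angle
    M = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0, 0.0, 1.0]])
    frame = np.zeros((720, 1280, 3), np.uint8) # stand-in reference frame
    stabilized = stabilize_reference_frame(frame, M, K)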

Axis 1: Material & Component Substitution

Derivative 1.1: Solid-State Beam-Steering Stabilization

  • Enabling Description: The stereoscopic camera apparatus is constructed not with traditional mechanical lens assemblies but with solid-state phased arrays, such as those used in LiDAR systems (e.g., MEMS mirrors or optical phased arrays). Instead of physically capturing frames with unwanted motion and correcting them in software, the motion characterized from the preceding/succeeding frame analysis is used to generate a corrective signal fed to the beam-steering arrays. This signal adjusts the optical path before the light reaches the sensor, producing an optically pre-stabilized image. The matrix calculations (motion, projection, inverse projection) are performed in real time by a dedicated ASIC, and the output is a set of voltage adjustments for the phased array that cancels the detected motion with sub-frame latency. This method replaces post-processing stabilization with pre-capture optical stabilization using the same core motion-characterization logic. An angle-to-voltage mapping sketch follows the diagram.
  • Mermaid Diagram:
    graph TD
        A[Video Stream Input] --> B{Frame Buffer};
        B --> C[Identify Frames: Prev, Ref, Next];
        C --> D{Motion Characterization Engine};
        D --> E{Matrix Calculation ASIC};
        E -- Motion Matrix --> F[Calculate Corrective Voltages];
        E -- Projection Matrix --> F;
        F --> G[MEMS Phased Array];
        H[Optical Path] --> G;
        G --> I[Image Sensor];
        I --> J[Pre-Stabilized Frame Output];
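
The following sketch illustrates how the characterized pixel shift could be mapped to mirror steering angles and drive voltages. The small-angle model and the linear volts-per-radian gain are assumptions for illustration only.

    # Sketch: convert an image-plane shift (pixels) into MEMS mirror
    # steering angles and drive voltages. FOCAL_PX and VOLTS_PER_RAD
    # are assumed, illustrative constants.
    import math

    FOCAL_PX = 1000.0        # focal length in pixels (from intrinsics)
    VOLTS_PER_RAD = 250.0    # assumed linear actuator gain

    def corrective_voltages(shift_x_px, shift_y_px):
        # A shift of d pixels at focal length f corresponds to an angular
        # deviation of atan(d / f); steer the mirror by the opposite angle.
        theta_x = -math.atan2(shift_x_px, FOCAL_PX)
        theta_y = -math.atan2(shift_y_px, FOCAL_PX)
        return theta_x * VOLTS_PER_RAD, theta_y * VOLTS_PER_RAD

    vx, vy = corrective_voltages(3.2, -1.5)  # shift from frame analysis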
    

Derivative 1.2: FPGA-Based In-Sensor Matrix Computation

  • Enabling Description: This variation integrates the stabilization processing directly onto the image sensor package. A Field-Programmable Gate Array (FPGA) is co-located with the CMOS sensor die. As the sensor reads out pixel data for a reference frame, pixel data from the preceding and succeeding frames (held in a small, high-speed SRAM buffer on the same package) is processed in parallel by the FPGA, which is configured with dedicated hardware logic blocks to perform the specific floating-point matrix multiplications required by the claim. This architecture eliminates the need for a separate GPU/CPU and the associated data-bus latency. The output from the sensor package is the already-modified, stabilized video frame, significantly reducing power consumption and system complexity. A dataflow sketch of the three-frame pipeline follows the diagram.
  • Mermaid Diagram:
    sequenceDiagram
        participant CMOS as CMOS Sensor
        participant SRAM as On-Package SRAM
        participant FPGA as On-Package FPGA
        participant SystemBus as System Bus
    
        CMOS->>SRAM: Stream Pixel Data (Frames N-1, N, N+1)
        FPGA->>SRAM: Read Frames N-1, N+1
        FPGA->>FPGA: Characterize Motion
        FPGA->>SRAM: Read Reference Frame N
        FPGA->>FPGA: Apply 3-Matrix Operation
        FPGA->>SystemBus: Output Modified Frame N
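
The sketch below models the dataflow only: a three-slot buffer stands in for the on-package SRAM, and injected callables stand in for the FPGA's motion-characterization and matrix logic blocks. All names are illustrative.

    # Dataflow sketch of the in-sensor pipeline: frames N-1, N, N+1 are
    # held in a three-slot buffer; each readout triggers motion
    # characterization on the bracketing frames and the three-matrix
    # transform on the reference frame.
    from collections import deque

    class InSensorStabilizer:
        def __init__(self, characterize_motion, apply_three_matrix):
            self.buffer = deque(maxlen=3)     # models the on-package SRAM
            self.characterize_motion = characterize_motion
            self.apply_three_matrix = apply_three_matrix

        def push(self, frame):
            """Feed one sensor readout; emit a stabilized frame when ready."""
            self.buffer.append(frame)
            if len(self.buffer) < 3:
                return None                   # buffer still filling
            prev, ref, nxt = self.buffer      # frames N-1, N, N+1
            motion = self.characterize_motion(prev, nxt)
            return self.apply_three_matrix(ref, motion)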
    

Axis 2: Operational Parameter Expansion

Derivative 2.1: Cryogenic Infrared Imaging Stabilization

  • Enabling Description: The method is applied to a stereoscopic camera system operating in the long-wave infrared (LWIR) spectrum (8-15 μm) for astronomical observation. The entire camera assembly, including HgCdTe (mercury cadmium telluride) sensors, is cooled to cryogenic temperatures (below 77 K) to reduce thermal noise. In this regime, the stabilization algorithm corrects for micro-vibrations from the cryogenic cooling system and atmospheric thermal distortion. The "first motion" is characterized by analyzing the frame-to-frame shimmer of background celestial sources. The intrinsic camera parameters (focal length) are dynamically adjusted for temperature-induced contraction of the lens housing, using readings from a series of thermal sensors; a sketch of this compensation follows the diagram.
  • Mermaid Diagram:
    flowchart LR
        subgraph Cryo-Chamber
            A[LWIR Sensor 1]
            B[LWIR Sensor 2]
            C[Thermal Sensors]
        end
        A --> Stream1;
        B --> Stream2;
        Stream1 --> D{Stabilization Processor};
        Stream2 --> D;
        C --> E{Focal Length Adjustment};
        E --> D;
        D -- Stabilized Frame --> Output;
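
A minimal sketch of the temperature-compensated focal length, assuming a linear thermal-contraction model; the reference temperature, expansion coefficient, and linearity are illustrative assumptions.

    # Sketch: update the intrinsic focal length from housing temperature.
    F_REF_PX = 2048.0   # calibrated focal length at T_REF, in pixels
    T_REF_K = 293.0     # calibration temperature (K)
    ALPHA = 2.3e-5      # assumed effective expansion coefficient (1/K)

    def focal_length_at(temperature_k):
        # Contraction of the lens housing changes the effective focal
        # length approximately linearly over the operating range.
        return F_REF_PX * (1.0 + ALPHA * (temperature_k - T_REF_K))

    f_cryo = focal_length_at(77.0)   # ~2037.8 px at LN2 temperature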
    

Derivative 2.2: High-Pressure Hydrothermal Vent Imaging

  • Enabling Description: A stereoscopic camera is deployed on a Remotely Operated Vehicle (ROV) at pressures exceeding 300 atmospheres to study deep-sea hydrothermal vents. The stabilization method counteracts the violent, turbulent flow of superheated water, which causes both physical ROV motion and severe optical distortion. The "first motion" is a composite vector calculated from the physical motion (via an inertial navigation system) and the optical flow of suspended particulates in the water column. The projection matrix is dynamically updated to account for pressure- and temperature-induced changes in the refractive index of the water between the sapphire lens ports and the subject. A sketch of the motion fusion and refraction correction follows the diagram.
  • Mermaid Diagram:
    stateDiagram-v2
        [*] --> Capturing
        Capturing --> Processing : New Frame Acquired
        Processing --> Capturing : Frame Stabilized
    
        state Processing {
            [*] --> GetINSData
            GetINSData --> OpticalFlowAnalysis
            OpticalFlowAnalysis --> CalculateCompositeMotion
            CalculateCompositeMotion --> UpdateRefractionIndex
            UpdateRefractionIndex --> CalculateMatrices
            CalculateMatrices --> ApplyTransform
            ApplyTransform --> [*]
        }
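
The sketch below shows one plausible form of the composite-motion fusion and the refraction-aware focal update; the fixed fusion weights and the first-order index model are illustrative assumptions.

    # Sketch: fuse INS-derived motion with particulate optical flow into
    # the composite "first motion", and scale the focal length by the
    # water's refractive index (flat-port, first-order model).
    import numpy as np

    W_INS, W_FLOW = 0.7, 0.3   # assumed complementary fusion weights

    def composite_motion(ins_motion_px, mean_flow_px):
        return (W_INS * np.asarray(ins_motion_px)
                + W_FLOW * np.asarray(mean_flow_px))

    def effective_focal(f_air_px, n_water):
        # A flat port magnifies apparent angles by roughly the index n,
        # which itself varies with pressure and temperature.
        return f_air_px * n_water

    motion = composite_motion([2.1, -0.4], [1.8, -0.2])
    f_eff = effective_focal(1400.0, 1.34)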
    

Axis 3: Cross-Domain Application

Derivative 3.1: Aerospace - Debris Collision Avoidance

  • Enabling Description: A satellite equipped with a stereoscopic sensor array uses the claimed method to stabilize imagery for the detection of small, fast-moving orbital debris. The "reference frame" is compared against preceding/succeeding frames to filter out the satellite's own rotation and vibration; the remaining motion vectors belong to external objects. The stabilization yields a stable background starfield against which the parallax-induced motion of nearby debris can be accurately measured, enabling precise trajectory prediction for collision-avoidance maneuvers. A sketch of the self-motion filtering step follows the diagram.
  • Mermaid Diagram:
    graph TD
        A[Stereo Camera Feed] --> B{Stabilization Module};
        C[Satellite IMU Data] --> B;
        B --> D{Filter Self-Motion};
        D --> E{Identify External Motion Vectors};
        E --> F[Debris Trajectory Prediction];
        F --> G[Collision Alert & Maneuver Plan];
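
A minimal sketch of that filtering step: the flow predicted from the satellite's own motion is subtracted from the observed flow, and residuals above a noise floor are flagged as external objects. The threshold and array shapes are illustrative assumptions.

    # Sketch: residual flow after ego-motion removal marks debris candidates.
    import numpy as np

    NOISE_FLOOR_PX = 0.5   # assumed residual-magnitude threshold

    def external_motion_mask(observed_flow, ego_flow):
        """Inputs: (H, W, 2) per-pixel flow fields, in pixels."""
        residual = observed_flow - ego_flow
        magnitude = np.linalg.norm(residual, axis=-1)
        return magnitude > NOISE_FLOOR_PX   # True where motion is external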
    

Derivative 3.2: AgTech - Robotic Pollination

  • Enabling Description: A robotic arm equipped with a compact stereoscopic camera performs autonomous pollination in a greenhouse. The robot moves from flower to flower, but wind from ventilation systems and minor vibrations cause the arm to shake. The stabilization method is applied to the camera feed to provide a stable image of the flower's stigma and anthers, allowing a machine vision system to guide the robotic pollinator with sub-millimeter accuracy by compensating for unwanted relative motion between the camera and the flower.
  • Mermaid Diagram:
    sequenceDiagram
        participant RobotArm as Robotic Arm
        participant Camera as Stereo Camera
        participant Processor as Stabilization Processor
        participant VisionAI as Machine Vision AI
    
        RobotArm->>Camera: Move towards flower
        Camera->>Processor: Provide video stream (shaky)
        Processor->>Processor: Apply 3-matrix stabilization
        Processor->>VisionAI: Provide stable video stream
        VisionAI->>RobotArm: Send precise pollinator adjustments
    

Derivative 3.3: Consumer Electronics - Live Sports Augmented Reality

  • Enabling Description: A user wearing AR glasses at a live basketball game views the court through a forward-facing stereoscopic camera. The claimed method stabilizes the view of the court, removing the shakiness caused by the user's head movements. The stabilized video feed then serves as a clean canvas on which to overlay AR content, such as player stats, shot trajectories, or advertisements, that appears locked to the court rather than shaking with the user's view. The motion characterization differentiates between intentional head turns (to look at a different player) and unintentional jitter; a sketch of this separation follows the diagram.
  • Mermaid Diagram:
    flowchart TD
        A[AR Glasses Camera Feed] --> B{Stabilization Engine};
        B -- Stabilized Video --> C{AR Compositor};
        D[Real-time Game Stats] --> C;
        C --> E[Display to User];
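
One way to separate intentional head turns from jitter is a low-pass/high-pass split, sketched below with an exponential moving average; the smoothing factor is an illustrative assumption.

    # Sketch: an EMA tracks slow, intentional motion; the residual is
    # treated as jitter and is the only component that gets corrected.
    class JitterSeparator:
        def __init__(self, alpha=0.1):
            self.alpha = alpha    # small alpha = slower "intent" tracker
            self.intent = 0.0

        def correction_for(self, raw_motion):
            self.intent += self.alpha * (raw_motion - self.intent)
            jitter = raw_motion - self.intent
            return -jitter        # cancel only the high-frequency part

    sep = JitterSeparator()
    for sample in [0.0, 0.3, -0.2, 5.0, 5.1]:  # last two: a real head turn
        correction = sep.correction_for(sample)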
    

Axis 4: Integration with Emerging Tech

Derivative 4.1: AI-Driven Predictive Stabilization

  • Enabling Description: The system is integrated with a Long Short-Term Memory (LSTM) neural network. The LSTM is trained on historical motion data from the camera's IMU and on the motion vectors generated by the stabilization algorithm itself. During operation, the LSTM predicts the likely motion of the camera several frames into the future, and the stabilization algorithm uses this predicted motion to calculate the required transformation matrices before the corresponding frames are captured. This predictive approach turns stabilization from a post-capture reaction into a near-instantaneous correction, which is critical for real-time AR/VR applications. A minimal predictor sketch follows the diagram.
  • Mermaid Diagram:
    graph TD
        subgraph Real-Time Loop
            A[IMU & Video Data] --> B{LSTM Predictor};
            B -- Predicted Motion Vector --> C{Matrix Calculator};
            D[Live Video Frame] --> E{Frame Modifier};
            C -- Transformation Matrix --> E;
            E --> F[Stabilized Output];
        end
        A --> G[Training Data];
        F --> G;
        G --> H((Train LSTM Model));
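
A minimal PyTorch sketch of the predictor: an LSTM consumes a window of past motion samples and predicts the next one, which would feed the matrix calculator ahead of capture. Layer sizes, window length, and the 6-DoF representation are illustrative assumptions.

    # Sketch: LSTM-based next-motion prediction.
    import torch
    import torch.nn as nn

    class MotionLSTM(nn.Module):
        def __init__(self, dof=6, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(input_size=dof, hidden_size=hidden,
                                batch_first=True)
            self.head = nn.Linear(hidden, dof)

        def forward(self, history):           # (batch, window, dof)
            out, _ = self.lstm(history)
            return self.head(out[:, -1])      # predicted next motion

    model = MotionLSTM()
    history = torch.randn(1, 32, 6)           # last 32 IMU/flow samples
    predicted = model(history)                # motion for the next frame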
    

Derivative 4.2: Blockchain-Verified Calibration Data

  • Enabling Description: The intrinsic camera parameters (focal length, principal point, lens distortion models) used to calculate the second matrix are highly sensitive and critical for accurate stabilization. In this variation, each stereoscopic camera's unique calibration profile is generated at the factory, and its hash is stored on a public blockchain (e.g., Ethereum) linked to the device's serial number. When a video is processed, the stabilization software retrieves the calibration data and verifies its integrity by comparing its hash against the one stored on the blockchain. This prevents tampering with calibration files and ensures that stabilization is always performed with the authentic, manufacturer-certified parameters, providing a verifiable chain of custody for forensic or professional video applications. A verification sketch follows the diagram.
  • Mermaid Diagram:
    sequenceDiagram
        participant Factory as Factory Calibration
        participant Blockchain as Immutable Ledger
        participant Camera as Stereoscopic Camera
        participant Player as Playback Device
    
        Factory->>Blockchain: Store Hash(Calibration_Data) for Serial_XYZ
        Factory->>Camera: Load Calibration_Data
        Camera->>Player: Send Video + Serial_XYZ
        Player->>Blockchain: Retrieve Hash for Serial_XYZ
        Player->>Camera: Request Calibration_Data
        Player->>Player: Verify Hash(Calibration_Data) matches
        Note right of Player: If valid, proceed with stabilization
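
The verification step reduces to comparing a SHA-256 digest of the calibration blob against the ledger-anchored value, sketched below; the in-memory ledger stands in for the on-chain lookup and is an illustrative assumption.

    # Sketch: verify calibration integrity before stabilizing.
    import hashlib

    # Stand-in for the blockchain query keyed by device serial number.
    LEDGER = {"Serial_XYZ": hashlib.sha256(b"factory-calibration-blob").hexdigest()}

    def calibration_is_authentic(serial_number, calibration_blob):
        expected = LEDGER[serial_number]
        actual = hashlib.sha256(calibration_blob).hexdigest()
        return actual == expected   # stabilize only when hashes match

    assert calibration_is_authentic("Serial_XYZ", b"factory-calibration-blob")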
    

Axis 5: The "Inverse" or Failure Mode

Derivative 5.1: Graceful Degradation Stabilization

  • Enabling Description: The system is designed for a low-power device, such as a body camera, and continuously monitors the magnitude of the characterized motion vectors. When motion is below a "low" threshold (e.g., simple walking), the full three-matrix operation is applied for maximum quality. If motion exceeds a "high" threshold (e.g., running or a physical struggle), the system enters a power-saving mode: it deactivates the complex projection/inverse-projection calculations and applies a simplified 2D affine transformation based only on the filtered motion matrix. This provides a "good enough" level of stabilization that maintains visual context at a fraction of the computational cost, preserving battery life during high-action events. The system reverts to full quality once motion subsides; a sketch of the two-threshold switch follows the diagram.
  • Mermaid Diagram:
    stateDiagram-v2
        FullQuality : Apply full 3-matrix stabilization
        LowPower : Apply simplified 2D motion transform
    
        [*] --> FullQuality : System On
        FullQuality --> LowPower : Motion > High_Threshold
        LowPower --> FullQuality : Motion < Low_Threshold
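
The two-threshold switch with hysteresis is sketched below; the threshold values and the transform callables are illustrative assumptions.

    # Sketch: switch between full 3-matrix stabilization and a cheap 2D
    # transform based on motion magnitude, with hysteresis.
    HIGH_T, LOW_T = 8.0, 3.0   # assumed thresholds (pixels/frame)

    class ModeSwitcher:
        def __init__(self, full_3matrix, affine_2d):
            self.full_3matrix = full_3matrix
            self.affine_2d = affine_2d
            self.low_power = False

        def stabilize(self, frame, motion_mag):
            if self.low_power and motion_mag < LOW_T:
                self.low_power = False        # motion subsided
            elif not self.low_power and motion_mag > HIGH_T:
                self.low_power = True         # preserve battery
            mode = self.affine_2d if self.low_power else self.full_3matrix
            return mode(frame)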
    

Combination Prior Art Scenarios

Combination 1: Integration with OpenCV

  • Description: The patented method is implemented using the open-source OpenCV library. The initial motion characterization step ("comparing the first and second set of frames") is achieved with OpenCV's cv::calcOpticalFlowFarneback function, which generates a dense optical flow field between the bracketing frames; the average of this flow field provides the initial motion vector. The camera intrinsic parameters (focal length, principal point) are stored in a cv::Mat object, as generated by OpenCV's cv::calibrateCamera function. The matrix operations themselves are performed with standard OpenCV matrix multiplication (the cv::Mat operator*, or cv::gemm; note that cv::Mat::mul is element-wise) and inversion (cv::Mat::inv). This combination renders the claimed method an obvious application of standard, well-documented computer vision tools to the known problem of stabilization; a sketch using the equivalent Python bindings follows.
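
A minimal sketch of this combination using OpenCV's Python bindings (the same functions the C++ names above refer to); the translation-only motion model and the half-motion correction are illustrative assumptions.

    # Sketch: Farneback flow between bracketing frames -> mean motion ->
    # three-matrix warp of the reference frame.
    import numpy as np
    import cv2

    def stabilize(prev_gray, ref_bgr, next_gray, K):
        # Args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
        flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        dx, dy = flow[..., 0].mean(), flow[..., 1].mean()
        fx, fy = K[0, 0], K[1, 1]
        # Motion matrix in normalized coordinates, so that K @ M @ inv(K)
        # reproduces the corrective pixel shift.
        M = np.array([[1.0, 0.0, -dx / (2.0 * fx)],
                      [0.0, 1.0, -dy / (2.0 * fy)],
                      [0.0, 0.0, 1.0]])
        H = K @ M @ np.linalg.inv(K)
        h, w = ref_bgr.shape[:2]
        return cv2.warpPerspective(ref_bgr, H, (w, h))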

Combination 2: Implementation as an FFmpeg Filter

  • Description: The entire stabilization process is encapsulated as a video filter within the open-source FFmpeg framework. A new filter, named stereostabilize, is created and invoked from the command line as -vf stereostabilize. The filter maintains a buffer of three frames: it treats each incoming frame as the reference, pulls the previous and next frames from its buffer, and performs the calculations. The camera parameters (focal length, etc.) are passed as filter arguments (e.g., stereostabilize=focal_x=1024:focal_y=1022). This implementation places the patented method directly into the toolkit of any video professional or developer using this ubiquitous open-source software, making it a standard, obvious-to-try technique for stereoscopic video stabilization; an invocation sketch follows.
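
For illustration, the hypothetical filter could be driven from a script as sketched below; the stereostabilize filter and its options come from this disclosure and do not exist in stock FFmpeg.

    # Sketch: invoking the hypothetical stereostabilize filter.
    import subprocess

    cmd = [
        "ffmpeg", "-i", "input_stereo.mp4",
        "-vf", "stereostabilize=focal_x=1024:focal_y=1022",
        "output_stabilized.mp4",
    ]
    subprocess.run(cmd, check=True)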

Combination 3: Metadata Extension for the OpenXR Standard

  • Description: A new extension, XR_KHR_camera_motion_metadata, is proposed for the open OpenXR standard. This extension defines a standardized format for embedding the necessary stabilization data within a stereoscopic video stream, including fields for per-frame motion vectors (as calculated by an IMU or optical flow) and the camera's intrinsic projection matrix. An OpenXR-compliant runtime on a playback device can read this metadata stream alongside the video stream and use it to perform the stabilization method described in Claim 5 (applying the projection, its inverse, and the motion matrix) in real time. This combination makes the playback-side stabilization method an integral and obvious feature of any VR system that adheres to this open standard, rather than a standalone invention; a sketch of a plausible per-frame record follows.
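
One plausible per-frame record for such an extension is sketched below; the field layout and serialization are illustrative assumptions, and the extension itself is hypothetical.

    # Sketch: per-frame stabilization metadata carried alongside the
    # video stream for playback-side processing.
    import json
    from dataclasses import dataclass, asdict
    from typing import List

    @dataclass
    class CameraMotionMetadata:
        frame_index: int
        motion_vector: List[float]            # per-frame motion (e.g., 6-DoF)
        intrinsic_matrix: List[List[float]]   # row-major 3x3 projection K

        def serialize(self):
            # Placeholder serialization; a ratified extension would
            # define a binary layout in the OpenXR registry.
            return json.dumps(asdict(self)).encode("utf-8")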
