Patent 12231703
Derivative works
Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.
As a Senior Patent Strategist and Research Engineer, I have analyzed U.S. Patent 12,231,703. The following defensive disclosure document details a series of derivative works and improvements built upon the core claims of the patent. The purpose of this document is to place these concepts into the public domain, thereby establishing them as prior art against future patent applications for similar, incremental inventions.
Defensive Disclosure: Derivative Works for Content Delivery Pipeline Management
Reference Patent: US 12,231,703 B2
Publication Date: May 1, 2026
I. Material & Component Substitution
Derivative 1.1: Hardware-Accelerated Pipeline Preservation with Heterogeneous Decoders
Enabling Description: This variation extends the pipeline preservation logic to a System-on-a-Chip (SoC) environment with multiple, heterogeneous hardware decoders. The digital receiver's `player` component (ref. US 12,231,703, FIG. 2, 202) maintains a real-time registry of available hardware decoder blocks (e.g., a dedicated H.264 block, a more power-efficient HEVC block, and a versatile AV1 block). When a content switch is requested, the system's `player` consults the prefetched metadata for the new content's codec. Instead of checking only for a codec match with the current decoder, it checks whether any available, idle hardware decoder block matches. If the current HEVC decoder is in use but an idle AV1 decoder is available and the new stream is AV1, the system preserves the `source element` and `demux` while re-routing the video elementary stream to the idle AV1 hardware block. This avoids the latency of releasing the first decoder and allows instantaneous switching between content streams of different, but hardware-supported, codecs. The audio decoder pipeline is preserved independently, based on audio codec compatibility (e.g., AAC, Dolby Digital).
Diagram:
```mermaid
graph TD
    subgraph "Original Pipeline (HEVC)"
        A[Source Element] --> B[Demux]
        B --> C{Video ES}
        B --> D{Audio ES}
        C --> E[Hardware HEVC Decoder]
        D --> F[Audio Decoder]
    end
    subgraph "New Pipeline (AV1)"
        A_preserved["Source Element (Preserved)"] --> B_preserved["Demux (Preserved)"]
        B_preserved --> C_new{"Video ES (AV1)"}
        B_preserved --> D_preserved{"Audio ES (AAC)"}
        C_new --> G["Hardware AV1 Decoder (Idle)"]
        D_preserved --> F_preserved["Audio Decoder (Preserved)"]
    end
    H{Player Logic} -- Request Switch --> I{Check Prefetched Metadata}
    I -- Codec=AV1 --> J{Query Decoder Registry}
    J -- AV1 block idle --> K[Re-route to AV1 Decoder]
    E -- Released --> J
```
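The registry lookup described above can be sketched as follows. This is a minimal illustration of the decision logic only; the `DecoderRegistry` and `DecoderBlock` names and fields are hypothetical, not taken from the patent.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class DecoderBlock:
    codec: str       # e.g. "h264", "hevc", "av1"
    busy: bool = False

@dataclass
class DecoderRegistry:
    """Hypothetical real-time registry of SoC hardware decoder blocks."""
    blocks: list[DecoderBlock] = field(default_factory=list)

    def find_idle(self, codec: str) -> DecoderBlock | None:
        # Any idle block matching the new stream's codec qualifies,
        # not just the decoder currently in use.
        return next((b for b in self.blocks if b.codec == codec and not b.busy), None)

def plan_switch(registry: DecoderRegistry, new_codec: str) -> str:
    block = registry.find_idle(new_codec)
    if block is not None:
        # Preserve source element + demux; re-route only the video ES.
        block.busy = True
        return f"preserve pipeline, re-route video ES to idle {block.codec} block"
    return "rebuild pipeline (no compatible idle hardware decoder)"

registry = DecoderRegistry([DecoderBlock("hevc", busy=True), DecoderBlock("av1")])
print(plan_switch(registry, "av1"))   # idle AV1 block -> preserve and re-route
print(plan_switch(registry, "vp9"))   # no matching block -> full rebuild
```

Marking the matched block busy at decision time models the claim that the old decoder need not be released before the switch completes.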
Derivative 1.2: Pipeline Preservation using FPGA-based Reconfigurable Logic
Enabling Description: This embodiment replaces dedicated hardware decoders with a Field-Programmable Gate Array (FPGA). The `player` component loads a specific decoder bitstream (e.g., for VP9) onto the FPGA to instantiate the initial playback pipeline. Upon a content switch to a different codec (e.g., HEVC), the system initiates a partial reconfiguration of the FPGA. While the `source element` and `demux` (running on a CPU) are preserved, the FPGA logic block corresponding to the VP9 decoder is dynamically swapped with a pre-compiled HEVC decoder bitstream. This allows pipeline "preservation" even with incompatible codecs by reconfiguring the hardware itself, which is significantly faster than tearing down the entire software pipeline and releasing OS-level resources. The process relies on a library of decoder bitstreams stored locally.
Diagram:
```mermaid
sequenceDiagram
    participant User
    participant Player
    participant CPU
    participant FPGA
    User->>Player: Selects new content (HEVC)
    Player->>CPU: Preserve Source & Demux
    Player->>FPGA: Initiate Partial Reconfiguration
    FPGA-->>Player: Acknowledge
    Note over FPGA: Unloads VP9 Bitstream
    Note over FPGA: Loads HEVC Bitstream
    Player->>CPU: Route new HEVC stream to FPGA
    CPU->>FPGA: Pushes HEVC elementary stream
    FPGA->>Player: Decoded frames available
```
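The switch sequence above can be expressed as a short planning routine. This is a sketch of the control flow only; the bitstream library contents and step strings are illustrative.

```python
# Hypothetical local library of pre-compiled decoder bitstreams.
BITSTREAM_LIBRARY = {"vp9": "vp9_dec.bit", "hevc": "hevc_dec.bit", "av1": "av1_dec.bit"}

def switch_codec(loaded_codec: str, new_codec: str) -> list:
    """Steps the player takes on a codec change: CPU-side source element
    and demux are preserved; only the FPGA decoder region is partially
    reconfigured from the local bitstream library."""
    if new_codec == loaded_codec:
        return ["preserve full pipeline"]
    if new_codec not in BITSTREAM_LIBRARY:
        # No local bitstream: fall back to a full software rebuild.
        return ["full software pipeline rebuild"]
    return [
        "preserve source element + demux (CPU)",
        f"partial-reconfigure FPGA: unload {BITSTREAM_LIBRARY[loaded_codec]}",
        f"partial-reconfigure FPGA: load {BITSTREAM_LIBRARY[new_codec]}",
        "route new elementary stream to FPGA",
    ]

for step in switch_codec("vp9", "hevc"):
    print(step)
```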
II. Operational Parameter Expansion
Derivative 2.1: Pipeline Preservation in High-Latency Satellite Networks
Enabling Description: This derivative applies the invention to a digital receiver operating over a geostationary satellite link, characterized by high latency (~600 ms RTT) and variable bandwidth due to weather. The `source element` (ref. US 12,231,703, FIG. 2, 212) is enhanced with a predictive buffering algorithm that uses a larger forward buffer (e.g., 60-90 seconds). When a content switch occurs, preserving the `source element` maintains not only the bitrate heuristic but also the entire buffered data state. If the user switches to a new piece of content and then quickly switches back, the system can resume playback from the preserved buffer without re-requesting segments from the server, effectively masking the high network latency for the return action. The decision to preserve the pipeline is weighted by the size of the forward buffer; if the buffer is nearly full, the system prioritizes preservation to save bandwidth.
Diagram:
```mermaid
stateDiagram-v2
    [*] --> Playback_ContentA
    Playback_ContentA: Bitrate = 2Mbps, Buffer = 85s
    Playback_ContentA --> Playback_ContentB: User selects new content
    note right of Playback_ContentB
        Preserve Pipeline & Buffer for Content A.
        Start new stream for Content B.
    end note
    Playback_ContentB: Bitrate = 2Mbps (maintained)
    Playback_ContentB --> Playback_ContentA: User switches back within 30s
    note right of Playback_ContentA
        Instantly resume from preserved buffer.
        No data requested over satellite link.
    end note
```
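The buffer-weighted decision described above can be sketched as a single predicate. The 80% fill threshold is illustrative, not from the patent.

```python
def should_preserve(buffer_seconds: float, buffer_capacity: float,
                    codecs_match: bool, fill_threshold: float = 0.8) -> bool:
    """Decide whether to preserve the source element (and its forward
    buffer) on a content switch over a high-latency satellite link.

    fill_threshold is an illustrative tuning value: a nearly full buffer
    represents satellite bandwidth already spent, so the decision is
    weighted toward preservation as the buffer fills."""
    if not codecs_match:
        return False
    return buffer_seconds / buffer_capacity >= fill_threshold

print(should_preserve(85, 90, codecs_match=True))   # ~94% full -> preserve
print(should_preserve(20, 90, codecs_match=True))   # ~22% full -> rebuild
```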
Derivative 2.2: Massively-Scaled Pipeline Management in Cloud Transcoding
Enabling Description: The core mechanism is applied at industrial scale within a cloud-based video transcoding service. A server handling thousands of live streams (e.g., for a streaming event) must generate multiple adaptive bitrate (ABR) renditions for each. When a source stream's parameters change (e.g., resolution switches from 720p to 1080p), instead of tearing down and rebuilding the entire transcoding graph for all ABR renditions, the system preserves compatible pipeline elements. The `demux` and `audio transcoder` (if the audio format is unchanged) are preserved for all renditions. Only the video scaler and video encoder components are re-initialized with the new parameters. This preservation of a "master" pipeline segment that feeds multiple child pipelines drastically reduces resource churn and processing gaps across the entire bitrate ladder.
Diagram:
```mermaid
graph TD
    subgraph "Original State (720p Input)"
        Input[Live 720p Stream] --> Demux
        Demux --> AudioTranscoder
        Demux --> VideoPath_720p[Scaler + Encoder]
        Demux --> VideoPath_480p[Scaler + Encoder]
        Demux --> VideoPath_360p[Scaler + Encoder]
    end
    subgraph "New State (1080p Input)"
        NewInput[Live 1080p Stream] --> PreservedDemux["Demux (Preserved)"]
        PreservedDemux --> PreservedAudio["AudioTranscoder (Preserved)"]
        PreservedDemux --> NewVideoPath_1080p[New Scaler + Encoder]
        PreservedDemux --> NewVideoPath_720p[New Scaler + Encoder]
        PreservedDemux --> NewVideoPath_480p[New Scaler + Encoder]
    end
    Logic{Orchestrator} -- Input Change Detected --> PreserveAndRebuild
```
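The master/child split can be sketched as follows: only the video paths are re-instantiated when the input changes. The `TranscodeGraph` structure and instance-ID strings are illustrative stand-ins for real transcoder handles.

```python
from dataclasses import dataclass

@dataclass
class TranscodeGraph:
    demux: str             # shared "master" segment
    audio_transcoder: str  # shared while the audio format is unchanged
    video_paths: dict      # rendition -> scaler+encoder instance id

def on_input_change(graph, new_resolution, ladder, audio_changed=False):
    """Preserve the master segment (demux, plus the audio transcoder when
    the audio format held); re-initialize every video scaler+encoder path
    for the new bitrate ladder."""
    return TranscodeGraph(
        demux=graph.demux,                                  # preserved
        audio_transcoder=(f"audio@{new_resolution}" if audio_changed
                          else graph.audio_transcoder),     # usually preserved
        video_paths={r: f"scaler+encoder[{r}]@{new_resolution}" for r in ladder},
    )

g = TranscodeGraph("dmx#1", "aac#1", {"720p": "ve1", "480p": "ve2", "360p": "ve3"})
g2 = on_input_change(g, "1080p", ladder=["1080p", "720p", "480p"])
print(g2.demux, g2.audio_transcoder)  # both carried over unchanged
```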
III. Cross-Domain Application
Derivative 3.1: Aerospace - Real-Time Sensor Fusion Pipeline for UAVs
Enabling Description: An unmanned aerial vehicle (UAV) uses multiple sensors for autonomous navigation (e.g., 4K EO camera, thermal IR, LiDAR). Each sensor has a dedicated processing pipeline (noise reduction, stabilization, object detection). To save computational resources, only one sensor pipeline is fully active at a time. When the navigation logic switches from EO to IR due to low light, it preserves the common pipeline stages, such as the data ingestion buffer, the geometric transformation module (for aligning sensor data to a common map), and the control signal output module. Only the sensor-specific front-end processing modules are swapped. This reduces switch latency from hundreds of milliseconds to under 10 milliseconds, which is critical for maintaining stable flight during sensor transitions.
Diagram:
```mermaid
flowchart LR
    subgraph "EO Pipeline"
        A[EO Sensor] --> B[Noise Reduction]
        B --> C[Geo-Transform]
        C --> D[Object Detection]
        D --> E[Flight Control]
    end
    subgraph IR_Pipeline["IR Pipeline"]
        F[IR Sensor] --> G[IR Gain Control]
        G --> C_p["Geo-Transform (Preserved)"]
        C_p --> D_p["Object Detection (Preserved)"]
        D_p --> E_p["Flight Control (Preserved)"]
    end
    Nav[Navigation Logic] -- Low Light --> Switch
    Switch -- "Preserve C, D, E" --> IR_Pipeline
```
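The sensor swap reduces to replacing front-end stages while carrying the common stages over by reference. The stage names below mirror the diagram; the list-based pipeline model is a deliberate simplification.

```python
# Stages shared by every sensor pipeline (per the description above).
COMMON_STAGES = {"ingest_buffer", "geo_transform", "object_detection", "flight_control"}

def switch_sensor(pipeline: list, new_frontend: list) -> list:
    """Swap only the sensor-specific front-end stages; every stage in
    COMMON_STAGES is carried over (preserved), keeping the switch far
    cheaper than a full pipeline rebuild."""
    preserved = [s for s in pipeline if s in COMMON_STAGES]
    return new_frontend + preserved

eo = ["eo_sensor", "noise_reduction", "geo_transform", "object_detection", "flight_control"]
ir = switch_sensor(eo, ["ir_sensor", "ir_gain_control"])
print(ir)
```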
Derivative 3.2: AgTech - Dynamic Task Pipeline on Autonomous Tractors
Enabling Description: An autonomous tractor performs multiple tasks in a field, such as soil nutrient analysis, weed detection, and targeted pesticide spraying. Each task requires a different sensor-to-actuator pipeline. When transitioning from weed detection (using multispectral cameras) to soil analysis (using ground-penetrating sensors), the core `Data Aggregator` and `GPS/Positioning` elements of the pipeline are preserved. The system releases the machine vision processing module and instantiates a soil composition analysis module, but the underlying data structures and the connection to the tractor's CAN bus controller are maintained. This allows the tractor to switch tasks without rebooting its control system, saving time and energy.
Diagram:
```mermaid
classDiagram
    class TractorControlSystem {
        +Pipeline currentPipeline
        +switchTask(newTask)
    }
    class Pipeline {
        +DataSource dataSource
        +Processor processor
        +ActuatorController actuator
    }
    class GpsDataSource
    class VisionProcessor
    class SoilProcessor
    class SprayerActuator
    TractorControlSystem "1" -- "1" Pipeline
    Pipeline "1" *-- "1" GpsDataSource
    Pipeline "1" *-- "1" VisionProcessor
    Pipeline "1" *-- "1" SoilProcessor
    Pipeline "1" *-- "1" SprayerActuator
    note for TractorControlSystem "switchTask preserves GpsDataSource and re-instantiates Processor"
```
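The class diagram's `switchTask` contract can be mirrored in a few lines. The string placeholders stand in for real sensor and actuator handles.

```python
class Pipeline:
    def __init__(self, data_source, processor, actuator):
        self.data_source = data_source  # e.g. GPS/positioning feed
        self.processor = processor      # task-specific module
        self.actuator = actuator        # CAN-bus-backed controller

class TractorControlSystem:
    def __init__(self, pipeline):
        self.current_pipeline = pipeline

    def switch_task(self, new_processor):
        """Preserve the data source and actuator (and, implicitly, the
        CAN bus connection); only the task-specific processor changes."""
        old = self.current_pipeline
        self.current_pipeline = Pipeline(old.data_source, new_processor, old.actuator)

tcs = TractorControlSystem(Pipeline("gps#1", "vision_processor", "sprayer#1"))
gps_before = tcs.current_pipeline.data_source
tcs.switch_task("soil_processor")
print(tcs.current_pipeline.data_source is gps_before)  # same object: preserved
```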
Derivative 3.3: Telemedicine - Multi-Feed Management in Robotic Surgery
Enabling Description: During a remote surgical procedure, the surgeon's console displays video from an endoscope, an external overhead camera, and real-time ultrasound imaging. Each feed has a processing pipeline for tasks like latency compensation, artifact removal, and data overlay. When the surgeon switches the main view from the endoscope to the ultrasound, the system preserves the `Network Transport` element (maintaining the WebRTC connection and its quality-of-service parameters) and the `Display Rendering` engine. The system releases the endoscope-specific image processing resources and allocates resources for the ultrasound feed, directing its output to the preserved renderer. This minimizes the "black screen" time and the cognitive load on the surgeon during a critical procedure.
Diagram:
```mermaid
sequenceDiagram
    participant Surgeon
    participant Console
    participant Network
    participant Robot
    Surgeon->>Console: Switch view to Ultrasound
    Console->>Console: Preserve Network & Renderer
    Console->>Robot: Request Ultrasound Stream
    Robot-->>Console: Ultrasound Data
    Console->>Console: Instantiate US Processor
    Console->>Console: Route data to Renderer
```
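The console-side switch reduces to carrying the transport and renderer handles over and re-instantiating only the feed processor. The dictionary model and handle strings below are illustrative.

```python
def switch_feed(console: dict, new_feed: str) -> dict:
    """Console-side view switch: the network transport (with its QoS
    settings) and the display renderer are carried over; only the
    feed-specific processor is released and re-instantiated."""
    return {
        "transport": console["transport"],    # preserved WebRTC connection
        "renderer": console["renderer"],      # preserved display engine
        "processor": f"{new_feed}_processor", # re-allocated per feed
    }

console = {"transport": "webrtc#1(qos=surgical)", "renderer": "gl#1",
           "processor": "endoscope_processor"}
after = switch_feed(console, "ultrasound")
print(after["transport"] == console["transport"], after["processor"])
```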
IV. Integration with Emerging Tech
Derivative 4.1: AI-Driven Predictive Pipeline Pre-warming
Enabling Description: This system uses a recurrent neural network (RNN) trained on user navigation history to predict, with high probability (e.g., >95%), the next piece of content a user is likely to select. Instead of waiting for a selection, the system proactively instantiates a "shadow" pipeline for the predicted content. It uses the principles of the '703 patent to preserve components from the current pipeline to build this shadow pipeline efficiently. When the user makes the predicted selection, the system performs an instantaneous switch by promoting the shadow pipeline to the active one. The AI model also determines the optimal initial bitrate for the shadow pipeline based on a time-series forecast of network bandwidth, rather than simply reusing the last known bitrate.
Diagram:
```mermaid
graph TD
    A[User Navigation Events] --> B[RNN Model]
    B -- Prediction: Content_X --> C[Orchestrator]
    D[Current Pipeline] --> C
    C --> E{Build Shadow Pipeline for X}
    E -- Reuse Components --> D
    F[User] -- Selects Content_X --> G{Instant Switch}
    E --> G
```
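The orchestrator's pre-warming gate can be sketched as below. `predictor` and `build_shadow` are stand-ins for the RNN and the component-reusing pipeline builder; the toy predictor exists only to make the sketch runnable.

```python
def prewarm(history, predictor, build_shadow, threshold=0.95):
    """If the model's top prediction clears the confidence threshold,
    build a shadow pipeline for it ahead of the user's selection;
    otherwise do nothing and wait for an explicit selection."""
    content, prob = predictor(history)
    return build_shadow(content) if prob >= threshold else None

def toy_predictor(history):
    # Stand-in for the RNN: guess that the most recent item repeats.
    return history[-1], 0.97

shadow = prewarm(["ep1", "ep2", "ep3"], toy_predictor, lambda c: f"shadow:{c}")
print(shadow)  # a shadow pipeline handle for the predicted content
```

In a full implementation, `build_shadow` would also consult the bandwidth forecast to pick the shadow pipeline's initial bitrate, per the description above.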
Derivative 4.2: IoT-Informed Pipeline Preservation Logic
Enabling Description: The decision logic for pipeline preservation is augmented with real-time data from IoT sensors on the client device. A `Device State Monitor` service collects data on CPU temperature, battery charge level, and memory pressure. The `player` component queries this service before deciding on preservation. For example, if the metadata indicates that the new and old content use the same resource-intensive software video decoder, but the CPU temperature is above a critical threshold (e.g., 85°C), the preservation logic is overridden. The system deconstructs the pipeline and builds a new one using a less intensive hardware decoder or a lower-resolution stream, prioritizing device stability and battery life over switch speed.
Diagram:
```mermaid
flowchart TD
    A[User requests new content] --> B{Check content type & codec}
    B -- Compatible --> C{Query Device State Monitor}
    subgraph "IoT Sensors"
        T[CPU Temp] --> DSM[Device State Monitor]
        M[Memory Pressure] --> DSM
        BATT[Battery Level] --> DSM
    end
    C -- State Data --> D{Preservation Logic}
    D -- "CPU > 85°C?" --> E{Yes}
    D -- "CPU > 85°C?" --> F{No}
    E --> G[Override: Deconstruct & build low-power pipeline]
    F --> H[Proceed: Preserve pipeline]
```
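The override rule can be captured in one function; the 85°C threshold comes from the example above, and the return strings are illustrative labels for the three outcomes.

```python
def preservation_decision(codec_match: bool, cpu_temp_c: float,
                          temp_limit_c: float = 85.0) -> str:
    """Device-state override: even when codecs match, a CPU above the
    critical temperature forces a rebuild onto a less intensive decoder,
    prioritizing stability over switch speed."""
    if not codec_match:
        return "rebuild"
    if cpu_temp_c > temp_limit_c:
        return "override: rebuild with low-power decoder"
    return "preserve"

print(preservation_decision(True, 72.0))   # normal case: preserve
print(preservation_decision(True, 91.5))   # hot CPU: override
```

Battery level and memory pressure would feed the same predicate as additional override conditions.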
V. The "Inverse" or Failure Mode
Derivative 5.1: Graceful Degradation Pipeline
Enabling Description: This derivative describes a "Safe Mode" for the playback pipeline. If the system detects a non-fatal error during playback (e.g., a series of corrupted video frames, or a decoder process that crashes repeatedly), it triggers a graceful degradation. Instead of halting, the system intentionally preserves only the most basic components of the pipeline: the `source element` and `demux`. It releases the potentially faulty hardware/software decoders and constructs a new pipeline backend using a "failsafe" profile: a low-resolution video stream and a universally compatible, low-CPU software video decoder (e.g., a simple MPEG-4 Part 2 decoder). This ensures that playback continues, albeit at significantly reduced quality, allowing the user to keep watching while the system logs the error in the background. The bitrate is also reset to the lowest available rendition to ensure stability.
Diagram:
```mermaid
stateDiagram-v2
    state "Full Quality" as Full
    state "Degraded Mode" as Degraded
    [*] --> Full
    Full: Playing 1080p HEVC with Hardware Decoder
    Full --> Degraded: on DecoderError
    note right of Degraded
        Preserve Source & Demux.
        Reset to 360p stream.
        Instantiate Failsafe SW Decoder.
    end note
    Degraded: Playing 360p MPEG4 with Software Decoder
    Degraded --> Full: on UserReboot or ErrorClear
```
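The degradation transition can be sketched as a pipeline rewrite; the failsafe profile values and the three-error tolerance are illustrative assumptions, not from the patent.

```python
# Illustrative failsafe profile (per the description above).
FAILSAFE = {"resolution": "360p", "codec": "mpeg4-part2", "decoder": "software"}

def on_decoder_error(pipeline: dict, error_count: int, max_errors: int = 3) -> dict:
    """After repeated non-fatal decoder errors, keep only source + demux
    and rebuild the backend from the failsafe profile, resetting the
    bitrate to the lowest available rendition."""
    if error_count < max_errors:
        return pipeline                    # tolerate transient errors
    return {
        "source": pipeline["source"],      # preserved
        "demux": pipeline["demux"],        # preserved
        **FAILSAFE,                        # failsafe backend
        "bitrate": "lowest",
    }

full = {"source": "src#1", "demux": "dmx#1", "resolution": "1080p",
        "codec": "hevc", "decoder": "hardware", "bitrate": "8Mbps"}
print(on_decoder_error(full, error_count=3)["decoder"])  # software failsafe
```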
VI. Combination Prior Art with Open-Source Standards
Scenario 6.1: Integration with GStreamer Multimedia Framework
- Enabling Description: The patented method is implemented as a new GStreamer plugin library, `gst-pipeline-preserver.so`. This library provides a new element, `pipemanager`, which can be placed in a GStreamer pipeline. When an application needs to switch content, it sends a "prepare-switch" event to the `pipemanager` containing the URI and prefetched metadata of the new stream. The `pipemanager` introspects the current running pipeline, identifies compatible downstream elements (e.g., `avdec_h264`, `audioconvert`), and locks them. It then sends a flush event to clear their internal state. When the application sets the pipeline to the `PLAYING` state with the new URI, the `pipemanager` allows data to flow through the preserved elements, having avoided the cost of destroying and recreating them.
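The element-selection step of the hypothetical `pipemanager` can be sketched without GStreamer itself. The compatibility table below is a simplification: real GStreamer introspection would inspect pad caps, and `pipemanager` is a proposed element, not part of any released plugin set.

```python
# Maps known element names to the codec they serve; "*" means
# codec-agnostic (always preservable), per the description above.
PRESERVABLE = {"avdec_h264": "h264", "audioconvert": "*"}

def plan_preservation(current_elements: list, new_codec: str) -> list:
    """Return the elements the pipemanager would lock and flush rather
    than destroy when handling a prepare-switch event."""
    keep = []
    for el in current_elements:
        served = PRESERVABLE.get(el)
        if served == "*" or served == new_codec:
            keep.append(el)
    return keep

print(plan_preservation(
    ["souphttpsrc", "qtdemux", "avdec_h264", "audioconvert"], "h264"))
```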
Scenario 6.2: Integration with WebRTC Standard
- Enabling Description: In a browser implementing the WebRTC standard, the pipeline preservation logic is applied to `RTCPeerConnection`. When a track is replaced using `RTCRtpSender.replaceTrack()`, for example to switch from a 1080p camera to a 1080p screen share, the browser's media engine checks whether the codec and resolution are identical. If so, it preserves the underlying video encoding pipeline (e.g., the configured VP9 or AV1 encoder instance) and the RTP packetization components. It simply flushes the encoder's state and begins feeding it frames from the new media track. This avoids renegotiating the connection or re-allocating the computationally expensive encoder, resulting in a nearly instantaneous track switch for remote peers.
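The fast-path check described above can be modeled independently of any browser. The dictionaries stand in for track metadata the media engine would hold internally; this is a logic sketch, not browser code.

```python
def replace_track_fast_path(old_track: dict, new_track: dict) -> str:
    """Model of the browser-side check: identical codec and resolution
    let the media engine keep the encoder instance and RTP packetizer,
    flushing encoder state instead of renegotiating the connection."""
    same = (old_track["codec"] == new_track["codec"]
            and old_track["resolution"] == new_track["resolution"])
    if same:
        return "flush encoder state; reuse encoder + RTP packetizer"
    return "tear down encoder; renegotiate"

camera = {"codec": "vp9", "resolution": "1080p"}
screen = {"codec": "vp9", "resolution": "1080p"}
print(replace_track_fast_path(camera, screen))  # fast path taken
```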
Scenario 6.3: Integration with FFmpeg Libraries
- Enabling Description: The logic is integrated into FFmpeg's `libavformat` and `libavcodec` libraries. A new set of functions, such as `avformat_preserve_context()` and `avcodec_flush_and_reuse()`, is introduced. An application playing content can call `avformat_preserve_context()` on its `AVFormatContext` before closing it. This caches the bitrate heuristics and pointers to the underlying `AVCodecContext`. When opening a new stream, the application can pass this cached object to `avformat_open_input()`. The function then attempts to reuse the existing codec context by calling `avcodec_flush_and_reuse()`, which is faster than `avcodec_close()` followed by `avcodec_open2()`. This allows any media player built on FFmpeg to leverage pipeline preservation with minimal code changes.
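The intended call sequence can be mocked in Python. Note that `avformat_preserve_context()` and `avcodec_flush_and_reuse()` are this disclosure's proposed additions, not existing FFmpeg API; the dictionaries stand in for `AVFormatContext`/`AVCodecContext`.

```python
from __future__ import annotations

class CachedContext:
    """Mock of the object avformat_preserve_context() would return."""
    def __init__(self, bitrate_heuristics, codec_ctx):
        self.bitrate_heuristics = bitrate_heuristics
        self.codec_ctx = codec_ctx

def avformat_preserve_context(fmt_ctx: dict) -> CachedContext:
    # Cache bitrate heuristics and a pointer to the codec context.
    return CachedContext(fmt_ctx["bitrate_heuristics"], fmt_ctx["codec_ctx"])

def avformat_open_input(url: str, cached: CachedContext | None,
                        new_codec: str) -> dict:
    """Reuse the cached codec context when the codec matches (the
    avcodec_flush_and_reuse() path); otherwise open a fresh one (the
    avcodec_open2() path)."""
    if cached and cached.codec_ctx["codec"] == new_codec:
        codec_ctx = cached.codec_ctx          # flush-and-reuse
    else:
        codec_ctx = {"codec": new_codec}      # fresh open
    return {"url": url, "codec_ctx": codec_ctx,
            "bitrate_heuristics": cached.bitrate_heuristics if cached else {}}

old = {"bitrate_heuristics": {"last_kbps": 4500}, "codec_ctx": {"codec": "h264"}}
cached = avformat_preserve_context(old)
new = avformat_open_input("https://example.com/next.mp4", cached, "h264")
print(new["codec_ctx"] is old["codec_ctx"])  # same object: context reused
```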