Patent 6185590
Derivative works
Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.
Defensive Disclosure and Prior Art Publication
Title: Methods and Architectures for Abstraction, Distribution, and Interaction with Heterogeneous Software Modules
Publication Date: May 11, 2026
Abstract: This document discloses a series of technical implementations, variations, and applications of a component-based software architecture. The core concept involves abstracting the unique Application Programming Interfaces (APIs) of diverse software modules (termed "engines") into standardized, manageable components. These components are designed to operate in both standalone and distributed computing environments, including client-server, intranet, and internet contexts. The disclosures herein expand upon this concept by introducing alternative components, applications in extreme operational parameters, cross-domain implementations, integration with modern technologies like AI, IoT, and blockchain, and fail-safe operational modes. The intent is to place these derivative concepts into the public domain to serve as prior art for future patent applications in this domain.
Derivative Works Based on Core Architecture (Ref: Claims 1, 11, 18)
This section describes variations on the three-layer architecture (Engine Management, Engine Configuration, Engine Function) for abstracting a native API into a standardized component.
1. Material & Component Substitution
1.1. WebAssembly (WASM) as the Engine Container
- Enabling Description: Instead of relying on platform-specific Dynamic-Link Libraries (DLLs), the core technology "engine" is compiled to a WebAssembly (WASM) module. The Engine Management Layer (Layer 1) is implemented as a WASM runtime host (e.g., Wasmer or Wasmtime) embedded within the component. The
`LoadLibrary()` function call is replaced by a `wasm_instance_new()` call, which instantiates the WASM module in a sandboxed memory space. The Engine Management Layer's function mapping is achieved by introspecting the WASM module's exported functions. This approach provides cross-platform compatibility (Windows, macOS, Linux, web browsers) and enhanced security due to the WASM sandbox, preventing the engine from causing system-level faults. The Engine Configuration Layer (Layer 2) communicates with the WASM instance by writing to or reading from its linear memory, which is pre-configured to represent the engine's settings structure.
- Mermaid Diagram:
```mermaid
graph TD
    A[Object Manager] --> B
    subgraph B ["WASM-based Engine Component"]
        C["Engine Functions (WASM Exports)"]
        D["Engine Configuration (WASM Linear Memory)"]
        E["Engine Management (WASM Runtime)"]
    end
    E -- Instantiates --> F[Engine.wasm Module]
    E -- Maps Exports --> C
    D -- Configures --> F
    A -- Invokes --> C
```
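The export-mapping step of Layer 1 can be sketched in Python with the runtime and module stubbed out. A real host would embed Wasmtime or Wasmer and introspect actual WASM exports; `StubWasmModule`, `EngineManagement`, and their methods are hypothetical names used only for illustration.

```python
# Sketch of the Engine Management Layer for a WASM-hosted engine.
# The module and its exports are stubbed with plain Python objects.

class StubWasmModule:
    """Stands in for an instantiated WASM module and its exports."""
    def __init__(self):
        self._exports = {
            "ocr_recognize": lambda text: text.upper(),  # fake engine function
            "get_version": lambda: "1.0.0",
        }
    def exports(self):
        return self._exports

class EngineManagement:
    """Layer 1: instantiates the module and maps its exported functions."""
    def __init__(self):
        self._function_table = {}
    def load_module(self, module):
        # Introspect exports instead of resolving symbols in a DLL.
        self._function_table = dict(module.exports())
    def invoke(self, name, *args):
        if name not in self._function_table:
            raise KeyError(f"engine does not export '{name}'")
        return self._function_table[name](*args)

mgmt = EngineManagement()
mgmt.load_module(StubWasmModule())
result = mgmt.invoke("ocr_recognize", "hello")  # "HELLO"
```

The same function table could be populated from `GetProcAddress` lookups in the DLL case, which is what makes the layer a uniform abstraction point.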
1.2. GraphQL as the Standardized Interface Definition
- Enabling Description: The "substantially consistent interface" is formally defined using a GraphQL schema. The Engine Function Layer (Layer 3) is implemented as a GraphQL server that resolves queries and mutations. An
`ExecuteFunction` call is replaced by a GraphQL mutation, e.g., `mutation { ocrImage(source: "base64_string") { text, confidence } }`. The Engine Configuration Layer (Layer 2) exposes engine settings via GraphQL queries, e.g., `query { ocrSettings { language, dpi } }`, and allows modification via mutations. This provides a strongly typed, self-documenting, and network-efficient interface, allowing clients (Object Managers) to request only the specific data they need, reducing bandwidth and processing overhead compared to generic COM or RPC calls.
- Mermaid Diagram:
```mermaid
sequenceDiagram
    participant Client as Object Manager
    participant Component as Engine Component (GraphQL Server)
    participant Engine as Native "C" API Engine
    Client->>+Component: GraphQL Mutation: ocrImage(...)
    Component->>Component: Parse Query, Resolve ocrImage
    Component->>+Engine: Call native_ocr_function(params)
    Engine-->>-Component: Return OCR Result
    Component->>Component: Format Result as JSON
    Component-->>-Client: JSON Payload { data: { ocrImage: { ... } } }
```
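The resolver dispatch at the heart of Layer 3 can be sketched without a real GraphQL server library (production code would use something like graphene or ariadne). The resolver map, the `resolve` helper, and the `native_ocr` stub are all hypothetical:

```python
# Sketch of Layer 3 as a GraphQL-style resolver map: each query or
# mutation field name maps to a function that wraps the native engine.

def native_ocr(source):
    # Stand-in for the wrapped native "C" OCR function.
    return {"text": f"decoded:{source}", "confidence": 0.97}

RESOLVERS = {
    "ocrImage": lambda args: native_ocr(args["source"]),
    "ocrSettings": lambda args: {"language": "en-US", "dpi": 300},
}

def resolve(field, args=None):
    """Dispatch one field to its resolver and wrap the GraphQL envelope."""
    return {"data": {field: RESOLVERS[field](args or {})}}

payload = resolve("ocrImage", {"source": "base64_string"})
```

A real server would additionally validate the request against the schema, which is what gives the interface its self-documenting property.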
2. Operational Parameter Expansion
2.1. Real-Time, Deterministic Engine Management for Avionics
- Enabling Description: The architecture is implemented within a Real-Time Operating System (RTOS) like VxWorks or PikeOS, targeting safety-critical applications (e.g., flight control systems). The Engine Management Layer (Layer 1) is modified to provide deterministic guarantees for engine loading and function invocation. Dynamic memory allocation is replaced with pre-allocated memory pools to avoid non-deterministic heap fragmentation. Function pointers are resolved at system startup rather than on-demand to eliminate lookup latency. The
`ActivateEngine()` function includes a deadline parameter, and if the engine's DLL or shared object cannot be loaded and initialized within the specified microsecond timeframe, a hard real-time fault is triggered. This ensures that a backup or redundant engine can be activated without compromising system stability.
- Mermaid Diagram:
```mermaid
stateDiagram-v2
    [*] --> Inactive
    Inactive --> Loading: ActivateEngine(deadline)
    Loading --> Active: [Load & Init OK] / Start Timer
    Loading --> Fault: [Deadline Exceeded]
    Active --> Inactive: DeactivateEngine()
    Active --> Executing: InvokeFunction()
    Executing --> Active: [Execution Complete]
    Fault --> [*]
```
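The deadline check can be approximated on a general-purpose OS with a monotonic clock; a true RTOS implementation would rely on pre-allocated pools and scheduler guarantees instead. The names `activate_engine` and `DeadlineExceeded` are hypothetical:

```python
# Sketch of ActivateEngine() with a hard deadline, approximated with
# time.monotonic(). Only the deadline semantics are illustrated.
import time

class DeadlineExceeded(RuntimeError):
    """Raised when engine initialization misses its real-time deadline."""

def activate_engine(init_fn, deadline_s):
    start = time.monotonic()
    handle = init_fn()                      # load + initialize the engine
    elapsed = time.monotonic() - start
    if elapsed > deadline_s:
        raise DeadlineExceeded(f"init took {elapsed:.6f}s > {deadline_s}s")
    return handle

# Fast engine initializes well inside its deadline.
fast = activate_engine(lambda: "engine-handle", deadline_s=1.0)

# Slow engine misses a 10 ms deadline and triggers the fault path,
# where a supervisor would activate a redundant engine instead.
fault = False
try:
    activate_engine(lambda: time.sleep(0.05) or "slow", deadline_s=0.01)
except DeadlineExceeded:
    fault = True
```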
2.2. Massive Parallelism for GPU-based Computational Engines
- Enabling Description: The architecture is adapted to manage CUDA or OpenCL "engines" for massively parallel computation. The Engine Component resides on a server with multiple GPUs. The Engine Management Layer (Layer 1) manages GPU contexts and memory, loading and compiling PTX or SPIR-V kernels. The Engine Configuration Layer (Layer 2) allows the Object Manager to specify parameters like grid size, block size, and shared memory allocation. The Engine Function Layer (Layer 3) translates standardized function calls (e.g.,
`ProcessMatrix`) into a sequence of `cudaMemcpy` (host to device), kernel launch, and `cudaMemcpy` (device to host) operations. The Object Manager can queue multiple jobs, and the Engine Component manages a command queue for each available GPU, distributing the workload for maximum throughput.
- Mermaid Diagram:
```mermaid
graph LR
    subgraph Client
        OM(Object Manager)
    end
    subgraph Server
        EC[Engine Component]
        subgraph GPU1
            K1(Kernel A)
            M1(GPU Memory)
        end
        subgraph GPU2
            K2(Kernel B)
            M2(GPU Memory)
        end
    end
    OM -- "Request(Data, Kernel_A)" --> EC
    OM -- "Request(Data, Kernel_B)" --> EC
    EC -- Manages Queue --> GPU1
    EC -- Manages Queue --> GPU2
    EC -- "1. cudaMemcpy H-to-D" --> M1
    EC -- "2. Launch" --> K1
    EC -- "3. cudaMemcpy D-to-H" --> M1
    EC -- "1. cudaMemcpy H-to-D" --> M2
    EC -- "2. Launch" --> K2
    EC -- "3. cudaMemcpy D-to-H" --> M2
```
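The per-GPU queueing logic can be sketched with a round-robin scheduler; the actual copy/launch/copy sequence is replaced by a stub that records assigned jobs. `EngineComponent`, `submit`, and `drain` are hypothetical names:

```python
# Sketch of per-GPU command queues in the Engine Component. Each queue
# entry would normally trigger cudaMemcpy H->D, a kernel launch, and
# cudaMemcpy D->H; here each "GPU" just records its assigned jobs.
from collections import deque

class EngineComponent:
    def __init__(self, gpu_count):
        self.queues = [deque() for _ in range(gpu_count)]
        self._next = 0
    def submit(self, job):
        # Round-robin distribution across available GPUs.
        self.queues[self._next].append(job)
        self._next = (self._next + 1) % len(self.queues)
    def drain(self, gpu_index):
        # Stand-in for executing the queued copy/launch/copy sequences.
        done = list(self.queues[gpu_index])
        self.queues[gpu_index].clear()
        return done

ec = EngineComponent(gpu_count=2)
for job in ["A", "B", "C", "D"]:
    ec.submit(job)
```

Round-robin is the simplest policy; a production component would weight by queue depth or estimated kernel runtime.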
3. Cross-Domain Application
3.1. Aerospace: Modular Flight Management Systems (FMS)
- Enabling Description: In an FMS, different avionics subsystems (e.g., GPS, Inertial Reference System, VOR/DME receivers) are treated as "engines," each with a proprietary data protocol and API. An Engine Component is created for each subsystem. The Engine Management Layer handles the initialization and health checks for the hardware interface (e.g., ARINC 429 bus). The Engine Configuration Layer standardizes settings, such as navigation database selection or sensor calibration offsets. The Engine Function Layer provides a uniform API, like
`GetCurrentPosition()` or `GetGroundSpeed()`, to the Object Manager (the primary flight computer). This allows for "plug-and-play" replacement of avionics boxes from different manufacturers without rewriting the core FMS logic.
- Mermaid Diagram:
```mermaid
flowchart TD
    FMC["Flight Management Computer<br/>(Object Manager)"]
    subgraph GPSC [GPS Component]
        L3_GPS["Standard Func: GetPosition()"]
        L2_GPS["Config: SetDatum()"]
        L1_GPS["Mgmt: Init_ARINC429()"]
    end
    subgraph IRSC [IRS Component]
        L3_IRS["Standard Func: GetAttitude()"]
        L2_IRS["Config: Align(lat, lon)"]
        L1_IRS["Mgmt: HealthCheck()"]
    end
    GPS["GPS Receiver<br/>(Engine)"]
    IRS["Inertial Reference System<br/>(Engine)"]
    FMC --> L3_GPS
    FMC --> L3_IRS
    L1_GPS --> GPS
    L1_IRS --> IRS
```
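The plug-and-play property comes from every subsystem adapter honoring the same standardized interface. A minimal sketch, with ARINC 429 decoding stubbed and all class names (`NavEngineComponent`, `GpsComponent`, `IrsComponent`) hypothetical:

```python
# Sketch of manufacturer-agnostic navigation sources behind one
# standardized interface, consumed by Object Manager logic that never
# sees the underlying protocol.
from abc import ABC, abstractmethod

class NavEngineComponent(ABC):
    @abstractmethod
    def get_current_position(self):
        """Return (latitude, longitude) in degrees."""

class GpsComponent(NavEngineComponent):
    def get_current_position(self):
        # Would decode ARINC 429 words from the GPS receiver here.
        return (47.45, -122.31)

class IrsComponent(NavEngineComponent):
    def get_current_position(self):
        # Would integrate inertial reference data here.
        return (47.46, -122.30)

def blended_position(sources):
    """Flight computer logic: average positions from any conforming engine."""
    lats, lons = zip(*(s.get_current_position() for s in sources))
    return (sum(lats) / len(lats), sum(lons) / len(lons))

pos = blended_position([GpsComponent(), IrsComponent()])
```

Swapping in a different manufacturer's receiver only requires a new subclass, not changes to `blended_position`.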
3.2. AgTech: Unified Farm Sensor Data Aggregation
- Enabling Description: A farm management platform (the Object Manager) needs to integrate data from disparate IoT sensors: soil moisture probes (Modbus), weather stations (proprietary HTTP API), and drone imagery analysis services (REST API). Each sensor type is an "engine." An Engine Component is deployed for each, typically on an edge gateway device. The Engine Management Layer handles the specific communication protocol (TCP, serial, HTTP). The Engine Configuration Layer normalizes sensor data, converting different units (e.g., Celsius to Fahrenheit) and data formats (e.g., XML to a standard JSON object). The Engine Function Layer provides consistent calls like
`GetSoilMoisture(field_id)` or `GetLatestNDVIMap(field_id)`, which the Object Manager can use to build a unified dashboard for the farmer.
- Mermaid Diagram:
```mermaid
graph TD
    Dashboard["Farm Dashboard<br/>(Object Manager)"]
    subgraph EdgeGateway
        subgraph SoilSensor_Component
            L3_Soil["Func: GetMoisture()"]
            L2_Soil["Config: SetPollingInterval()"]
            L1_Soil["Mgmt: ConnectModbus()"]
        end
        subgraph WeatherStation_Component
            L3_Weather["Func: GetTemperature()"]
            L2_Weather["Config: SetAPIKey()"]
            L1_Weather["Mgmt: EstablishHTTP()"]
        end
    end
    Soil["Soil Probe (Engine)"]
    Weather["Weather Station (Engine)"]
    Dashboard -- Over Network --> L3_Soil
    Dashboard -- Over Network --> L3_Weather
    L1_Soil --> Soil
    L1_Weather --> Weather
```
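The normalization role of Layer 2 can be sketched as plain conversion functions; the raw payload shapes (`temp_c`, `tempF`, etc.) are invented for illustration and do not correspond to any particular sensor vendor:

```python
# Sketch of the Engine Configuration Layer normalizing heterogeneous
# sensor readings (units and formats) into one common record shape.

def normalize_soil_reading(raw_modbus):
    # raw_modbus: e.g. {"temp_c": 21.5, "moisture_pct": 34}
    return {
        "temperature_f": raw_modbus["temp_c"] * 9 / 5 + 32,  # C -> F
        "soil_moisture_pct": raw_modbus["moisture_pct"],
        "source": "modbus_probe",
    }

def normalize_weather_reading(raw_station):
    # raw_station: pretend this was parsed from the station's XML payload
    return {
        "temperature_f": float(raw_station["tempF"]),
        "source": "weather_station",
    }

record = normalize_soil_reading({"temp_c": 25.0, "moisture_pct": 34})
```

Because both normalizers emit the same field names, the dashboard can merge readings from any sensor family without special cases.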
3.3. Consumer Electronics: Smart Home Hub Interoperability
- Enabling Description: A universal smart home hub (Object Manager) uses the architecture to control devices from different ecosystems (e.g., Philips Hue, Google Nest, Apple HomeKit). Each device API (Zigbee, Matter, proprietary cloud API) is an "engine." The Engine Component for a Philips Hue bridge, for example, would have its Engine Management Layer handle network discovery and authentication with the bridge. The Engine Configuration Layer would abstract device-specific settings like color temperature ranges or supported effects. The Engine Function Layer provides standardized commands like
`SetLightState({id: "...", on: true, brightness: 80})`, allowing the hub's user interface and automation rules to treat all lights, regardless of manufacturer, in an identical manner.
- Mermaid Diagram:
```mermaid
classDiagram
    ObjectManager "1" -- "N" EngineComponent
    ObjectManager : +setDeviceState(id, state)
    EngineComponent <|-- HueComponent
    EngineComponent <|-- NestComponent
    EngineComponent : +execute(standardCommand)
    HueComponent : +execute(standardCommand)
    NestComponent : +execute(standardCommand)
    HueComponent ..|> HueBridgeAPI
    NestComponent ..|> NestCloudAPI
    class HueBridgeAPI {
        +setLight(id, payload)
    }
    class NestCloudAPI {
        +setThermostat(id, temp)
    }
```
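The command-translation step can be sketched as vendor-specific executors behind one dispatch table. The payload shapes below are invented for illustration (the real Hue API, for instance, uses a 0-254 brightness scale, but the exact field names here are assumptions):

```python
# Sketch of vendor components translating one standardized command
# into ecosystem-specific payloads.

def hue_execute(cmd):
    # Hypothetical Hue-style payload; brightness rescaled to 0-254.
    return {"on": cmd["on"], "bri": round(cmd["brightness"] * 254 / 100)}

def generic_cloud_execute(cmd):
    # Hypothetical cloud vendor expecting percentages directly.
    return {"power": "ON" if cmd["on"] else "OFF",
            "level_pct": cmd["brightness"]}

COMPONENTS = {"hue": hue_execute, "cloud": generic_cloud_execute}

def set_light_state(vendor, cmd):
    """Object Manager entry point: same command shape for any vendor."""
    return COMPONENTS[vendor](cmd)

cmd = {"id": "lamp-1", "on": True, "brightness": 80}
hue_payload = set_light_state("hue", cmd)
```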
4. Integration with Emerging Tech
4.1. AI-Driven Automatic Component Generation
- Enabling Description: The "component factory" concept is fully automated using a large language model (LLM) or a specialized code-generation AI. The factory takes a C-header file (
`.h`), API documentation, and sample code as input. The AI first performs semantic analysis to identify functions, data structures, and their purposes. It then generates the C++ source code for the three-layer Engine Component. The Engine Management Layer code (e.g., loading DLLs, using `GetProcAddress`) is generated by identifying function signatures. The Engine Configuration Layer is created by mapping API constants and settings structures to a table-driven implementation. The Engine Function Layer maps the high-level standardized calls to the wrapped C functions. The process concludes with automated compilation and testing of the newly generated component against the sample code.
- Mermaid Diagram:
```mermaid
graph TD
    subgraph Factory ["AI Component Factory"]
        A[Input: Header Files, Docs] --> B(Semantic Analysis)
        B --> C{Identify Functions & Settings}
        C --> D[Generate Engine Management Layer]
        C --> E[Generate Engine Config Layer]
        C --> F[Generate Engine Function Layer]
        G[Combine & Compile] --> H(Output: Engine Component DLL)
    end
    D -- uses --> Win32_API
    F -- maps to --> Original_Engine_API
```
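The factory's first step, identifying function signatures, can be sketched with a regular expression over simple one-line C prototypes. A production factory would use a real parser such as libclang; the header text and regex below are illustrative only:

```python
# Sketch of signature extraction from a C header, the input to the
# code-generation steps. Handles only simple one-line prototypes.
import re

HEADER = """
int ocr_init(const char *config_path);
int ocr_recognize(const unsigned char *image, int len, char *out);
void ocr_shutdown(void);
"""

PROTO = re.compile(r"^\s*([A-Za-z_][\w\s\*]*?)\s+(\w+)\s*\(([^)]*)\)\s*;",
                   re.MULTILINE)

def extract_signatures(header_text):
    """Return (return_type, name, params) tuples for each prototype."""
    return [(ret.strip(), name, params.strip())
            for ret, name, params in PROTO.findall(header_text)]

sigs = extract_signatures(HEADER)
```

From these tuples a generator could emit the `GetProcAddress` lookups for Layer 1 and the wrapper stubs for Layer 3.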
4.2. IoT-Based Dynamic Engine Instantiation
- Enabling Description: The distributed architecture is integrated with an IoT platform. The Object Manager runs in the cloud, while Engine Components are deployable to edge devices or other cloud servers. A network of IoT sensors provides real-time data (e.g., temperature, vibration, location). The Object Manager uses this data to make intelligent decisions about which engines to activate. For example, if a machine's vibration sensor (IoT device) exceeds a threshold, the Object Manager automatically instantiates a "Vibration Analysis Engine Component" on a nearby edge server, passes it the sensor data stream, and triggers an alert if the analysis predicts a failure. The engine is then unloaded when the sensor data returns to normal, conserving resources.
- Mermaid Diagram:
```mermaid
sequenceDiagram
    participant IoT_Sensor
    participant ObjectManager
    participant Server
    participant EngineComponent
    IoT_Sensor->>ObjectManager: Telemetry (Vibration=High)
    ObjectManager->>ObjectManager: Rule Triggered: Analyze Vibration
    ObjectManager->>Server: InstantiateEngine("VibrationAnalyzer")
    Server->>EngineComponent: new()
    EngineComponent-->>Server: Ready
    Server-->>ObjectManager: Handle to Component
    ObjectManager->>EngineComponent: ProcessStream(Telemetry)
    EngineComponent-->>ObjectManager: Result (Predicted Failure)
    ObjectManager->>Server: DeactivateEngine("VibrationAnalyzer")
```
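The telemetry-driven lifecycle in the sequence above can be sketched with a stubbed edge server; the threshold value and the class names are assumptions made for illustration:

```python
# Sketch of telemetry-driven engine lifecycle: the Object Manager spins
# an analysis engine up on the edge server when vibration is high and
# unloads it when readings return to normal.

class EdgeServer:
    def __init__(self):
        self.active = {}
    def instantiate(self, engine_name):
        self.active[engine_name] = True      # would load the component here
    def deactivate(self, engine_name):
        self.active.pop(engine_name, None)   # would unload it here

class ObjectManager:
    VIBRATION_THRESHOLD = 0.8                # hypothetical rule threshold
    def __init__(self, server):
        self.server = server
    def on_telemetry(self, vibration):
        name = "VibrationAnalyzer"
        if vibration > self.VIBRATION_THRESHOLD:
            if name not in self.server.active:
                self.server.instantiate(name)
        else:
            self.server.deactivate(name)

srv = EdgeServer()
om = ObjectManager(srv)
om.on_telemetry(0.95)   # high vibration -> engine instantiated
high_state = "VibrationAnalyzer" in srv.active
om.on_telemetry(0.2)    # back to normal -> engine unloaded
low_state = "VibrationAnalyzer" in srv.active
```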
4.3. Blockchain for Auditable Engine Usage (SaaS Metering)
- Enabling Description: The architecture is used to provide Software-as-a-Service (SaaS) access to high-value proprietary engines. Every call from an Object Manager to a remote Engine Component's Function Layer is recorded as a transaction on a private blockchain (e.g., Hyperledger Fabric). The transaction record includes a hash of the input data, the function called, the engine version, a timestamp, and the client's identity. This creates an immutable, tamper-proof audit trail of engine usage. The Engine Configuration Layer reads smart contracts to determine if a client has sufficient payment/credits to execute a function. This provides a transparent and verifiable system for pay-per-use billing and for proving compliance in regulated industries where data processing steps must be logged.
- Mermaid Diagram:
```mermaid
flowchart LR
    Client["Client App<br/>(Object Manager)"] -- "1. API Call" --> Server
    subgraph Server
        EC[Engine Component] --> EM[Engine Module]
        EC -- "2. Create Transaction" --> BC(Blockchain Ledger)
        BC -- "3. Validate & Record" --> BC
    end
    Server -- "4. Result" --> Client
    Auditor -- Query --> BC
```
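The tamper-evidence property of the audit trail can be sketched as a single-node hash-linked log, standing in for a real private blockchain such as Hyperledger Fabric. The record fields follow the description above; the `UsageLedger` class and its methods are hypothetical:

```python
# Sketch of the usage audit trail as a hash-chained log: each record
# commits to its predecessor, so any after-the-fact edit is detectable.
import hashlib
import json
import time

class UsageLedger:
    def __init__(self):
        self.blocks = []
    def record_call(self, client_id, function, engine_version, input_bytes):
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = {
            "client": client_id,
            "function": function,
            "engine_version": engine_version,
            "input_hash": hashlib.sha256(input_bytes).hexdigest(),
            "timestamp": time.time(),
            "prev": prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.blocks.append({"body": body, "hash": digest})
    def verify(self):
        """Recompute every hash; any tampering breaks the chain."""
        prev = "0" * 64
        for block in self.blocks:
            if block["body"]["prev"] != prev:
                return False
            expected = hashlib.sha256(
                json.dumps(block["body"], sort_keys=True).encode()).hexdigest()
            if block["hash"] != expected:
                return False
            prev = block["hash"]
        return True

ledger = UsageLedger()
ledger.record_call("client-42", "ocrImage", "2.1", b"scan bytes")
ledger.record_call("client-42", "ocrImage", "2.1", b"more bytes")
```

A real deployment replicates this log across nodes with consensus; the hashing scheme alone already gives an auditor the integrity check.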
5. The "Inverse" or Failure Mode
5.1. Graceful Degradation with a "Proxy Engine"
- Enabling Description: The distributed system is designed for high availability in unreliable networks. When a client's Object Manager attempts to call a function on a remote Engine Component and the network connection fails, the DCOM/RPC call times out. Instead of returning an error, the Object Manager's wrapper for that engine transparently re-routes the call to a local, lightweight "proxy engine." This proxy provides limited functionality. For example, a full-featured remote OCR engine might be replaced by a local Tesseract-based proxy that is less accurate but provides an immediate, albeit lower-quality, result. The user interface can indicate that the system is in a "degraded" mode. When the network is restored, the Object Manager automatically flushes any queued requests to the primary remote engine.
- Mermaid Diagram:
```mermaid
stateDiagram-v2
    state "Connected" as S1
    state "Degraded" as S2
    [*] --> S1
    S1: Calls go to Remote Engine
    S2: Calls go to Local Proxy Engine
    S1 --> S2: Network Failure
    S2 --> S1: Network Restored
```
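The transparent fallback and replay behavior can be sketched with a wrapper that catches the failed remote call, serves a proxy result, and queues the request for later. The remote and proxy engines are stubbed; `EngineWrapper` and `NetworkDown` are hypothetical names:

```python
# Sketch of the Object Manager's wrapper: on network failure it falls
# back to a local proxy engine and queues the request for replay.

class NetworkDown(ConnectionError):
    pass

class EngineWrapper:
    def __init__(self, remote_fn, proxy_fn):
        self.remote_fn = remote_fn
        self.proxy_fn = proxy_fn
        self.pending = []           # requests to replay once reconnected
        self.degraded = False
    def call(self, request):
        try:
            result = self.remote_fn(request)
            self.degraded = False
            return result
        except NetworkDown:
            self.degraded = True
            self.pending.append(request)
            return self.proxy_fn(request)   # lower-quality local result
    def flush(self):
        """Replay queued requests against the restored remote engine."""
        replayed = [self.remote_fn(r) for r in self.pending]
        self.pending.clear()
        return replayed

def unreachable_remote(request):
    raise NetworkDown()

wrapper = EngineWrapper(unreachable_remote, lambda r: f"proxy:{r}")
degraded_result = wrapper.call("page1.tif")     # served by local proxy
wrapper.remote_fn = lambda r: f"remote:{r}"     # network restored
replayed = wrapper.flush()
```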
5.2. "Read-Only" Configuration Mode
- Enabling Description: The Engine Configuration Layer (Layer 2) implements a security model with a "read-only" mode. In this mode,
`get_setting` calls function normally, but any attempt to call a `set_setting` function is rejected with a "Permission Denied" error. This is useful for creating user roles: an "Administrator" user's Object Manager can operate in read/write mode to configure engines, while a "Standard User" can only view the current settings. The mode is established by the Object Manager during the initial connection to the Engine Component, passing a security token that the Engine Management Layer validates to set the internal state of the configuration layer.
- Mermaid Diagram:
```mermaid
sequenceDiagram
    actor User
    User->>ObjectManager: Login(credentials)
    ObjectManager->>AuthService: Authenticate(credentials)
    AuthService-->>ObjectManager: Return Token (Role: 'ReadOnly')
    ObjectManager->>EngineComponent: Connect(Token)
    EngineComponent->>EngineComponent: Set Internal State to ReadOnly
    User->>ObjectManager: Attempt setSetting("dpi", 300)
    ObjectManager->>EngineComponent: setSetting("dpi", 300)
    EngineComponent-->>ObjectManager: Error: Permission Denied
```
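The mode-fixing-at-connect behavior can be sketched directly; token validation is stubbed to a role check, and `ConfigLayer`, `connect`, and the role strings are assumptions for illustration:

```python
# Sketch of Layer 2 enforcing a read-only mode that is fixed at connect
# time from the role carried in the client's security token.

class ConfigLayer:
    def __init__(self, settings, read_only):
        self._settings = dict(settings)
        self._read_only = read_only
    def get_setting(self, key):
        return self._settings[key]
    def set_setting(self, key, value):
        if self._read_only:
            raise PermissionError("Permission Denied: read-only session")
        self._settings[key] = value

def connect(token):
    """Engine Management validates the token and fixes the config mode."""
    return ConfigLayer({"dpi": 300, "language": "en-US"},
                       read_only=(token.get("role") != "Administrator"))

admin = connect({"role": "Administrator"})
admin.set_setting("dpi", 600)          # allowed in read/write mode

viewer = connect({"role": "ReadOnly"})
denied = False
try:
    viewer.set_setting("dpi", 600)     # rejected in read-only mode
except PermissionError:
    denied = True
```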
Combination Prior Art Scenarios with Open-Source Standards
- Engine-as-a-Filesystem with FUSE: The core architecture is combined with the Filesystem in Userspace (FUSE) library. The Object Manager is implemented as a FUSE daemon that mounts a virtual filesystem, e.g., `/mnt/engines`. Each available Engine Component appears as a subdirectory, e.g., `/mnt/engines/ocr_engine`. The engine's settings are exposed as simple text files within that directory (e.g., `cat /mnt/engines/ocr_engine/language` returns "en-US"). Changing a setting is done via `echo "de-DE" > /mnt/engines/ocr_engine/language`. Engine functions are exposed as special "action" files: writing the path of an input file into an action file triggers the function, and the output is written to a corresponding results file. For example, `echo "/path/to/image.tif" > /mnt/engines/ocr_engine/recognize_text` causes the system to create `/mnt/engines/ocr_engine/recognize_text.result` containing the OCR'd text. This makes every engine on the system accessible and scriptable from any standard command-line shell or programming language with basic file I/O capabilities.
- Distributed Processing Pipeline with Apache Kafka: The distributed architecture is integrated with Apache Kafka to create a scalable, asynchronous processing pipeline. An "ingest" process drops image files into a network storage location and publishes a message to a Kafka topic named `images-for-processing`. Multiple consumer services are deployed, each containing an Engine Component (e.g., OCR, barcode, image-cleanup). These services subscribe to the `images-for-processing` topic. When a message is received, the Engine Component processes the corresponding image file and publishes its results (e.g., a JSON object with the extracted text) to a different topic, such as `ocr-results`. The Object Manager becomes a monitoring and control dashboard, using Kafka's APIs to view consumer lag, manage the deployment of new engine consumers, and re-route data flows, providing a resilient and decoupled system for high-volume document processing.
- Standardized Web Services via OpenAPI/Swagger: The "consistent interface" of the engine components is formally defined using the OpenAPI 3.0 specification. Each Engine Component is wrapped in a microservice with a RESTful API that conforms to this specification. The Engine Management Layer handles service startup and shutdown. The Engine Configuration Layer is exposed through `GET` and `PUT` requests to endpoints like `/config`. The Engine Function Layer is exposed via `POST` requests to endpoints like `/actions/recognize`. The Object Manager is then no longer a monolithic component but can be auto-generated from the OpenAPI schema in any language (Python, Java, JavaScript, etc.) using tools like Swagger Codegen. This makes the entire ecosystem of engines instantly accessible as standard web services, discoverable, and usable by any modern web or enterprise application without needing proprietary client libraries.
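The engine-as-a-filesystem pattern can be sketched without FUSE by substituting an ordinary temporary directory for the mount point: settings become files, and writing an input path into an "action" file yields a `.result` file. The `FileEngine` class and its stubbed OCR step are hypothetical:

```python
# Sketch of the file-based engine interface: a directory plays the role
# of the FUSE mount, settings are plain files, and triggering an action
# file produces a corresponding ".result" file.
import os
import tempfile

class FileEngine:
    def __init__(self, root):
        self.root = root
        self.write_setting("language", "en-US")    # default setting
    def _path(self, name):
        return os.path.join(self.root, name)
    def write_setting(self, name, text):
        # Equivalent of: echo "text" > <mount>/<engine>/<name>
        with open(self._path(name), "w") as f:
            f.write(text)
    def read(self, name):
        # Equivalent of: cat <mount>/<engine>/<name>
        with open(self._path(name)) as f:
            return f.read()
    def trigger(self, action, input_path):
        # Writing an input path into the action file runs the function;
        # the OCR itself is stubbed to a formatted string.
        self.write_setting(action, input_path)
        lang = self.read("language")
        self.write_setting(action + ".result", f"[{lang}] text from {input_path}")

root = tempfile.mkdtemp()
eng = FileEngine(root)
eng.write_setting("language", "de-DE")
eng.trigger("recognize_text", "/path/to/image.tif")
result = eng.read("recognize_text.result")
```

A FUSE daemon would intercept these same reads and writes at the VFS layer instead of using a real directory, which is what makes the engine scriptable from any shell.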