Patent 10430015
Derivative works
Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.
Defensive Disclosure and Prior Art Generation for U.S. Patent 10,430,015
Publication Date: May 13, 2026
Subject: Derivative Implementations and Obvious Variations of U.S. Patent 10,430,015 ("Image analysis")
This document details a series of derivative works, alternative embodiments, and cross-domain applications of the core method described in US patent 10,430,015. The purpose of this disclosure is to place these variations into the public domain, thereby establishing them as prior art for any future patent applications.
Derivatives Based on Core Method (Claim 1)
The core method involves receiving a query with start/end points, collecting images from different users, selecting a subset based on user criteria, ordering the subset, and displaying the resulting "virtual tour." The following sections disclose variations on this process.
1. Component & Data Source Substitution
1.1. AI-Based Aesthetic and Relevance Filtering
Enabling Description: This variation replaces the "user-specified filter criteria" (Claim 1) with an automated, model-driven filtering process. Upon collecting the non-ordered set of images, each image is passed through a pre-trained Convolutional Neural Network (CNN) that has been trained on a large dataset of images with human-assigned aesthetic scores (e.g., AVA: Aesthetic Visual Analysis dataset). The CNN outputs a numerical score for technical quality and aesthetic appeal. A second model, a natural language inference (NLI) model, compares the user's search query (e.g., "scenic train ride from New York to DC") with image metadata (tags, descriptions) to generate a relevance score. The final subset of images is selected based on a weighted combination of these aesthetic and relevance scores, bypassing explicit "like/dislike" user profiles.
Mermaid Diagram:
```mermaid
flowchart TD
    A["Receive Query: Start/End Points"] --> B{Collect Non-Ordered Images}
    B --> C{For each image}
    subgraph AI Filtering
        C --> D[CNN Aesthetic Scoring]
        C --> E[NLI Relevance Scoring]
        D --> F[Combine Scores]
        E --> F
    end
    F --> G{Select Subset Based on Score Threshold}
    G --> H[Order Subset Spatially/Temporally]
    H --> I[Display Virtual Tour]
```
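A minimal Python sketch of the weighted selection step. The scorer callables and the weights/threshold are illustrative stand-ins for the trained CNN aesthetic model and NLI relevance model, not values taken from the patent:

```python
# Sketch of the weighted filtering step. The scorers below stand in for the
# CNN aesthetic model and the NLI relevance model described above.
def select_subset(images, aesthetic_score, relevance_score,
                  w_aesthetic=0.4, w_relevance=0.6, threshold=0.5):
    """Keep images whose combined weighted score clears the threshold."""
    selected = []
    for img in images:
        combined = (w_aesthetic * aesthetic_score(img)
                    + w_relevance * relevance_score(img))
        if combined >= threshold:
            selected.append((combined, img))
    selected.sort(reverse=True)          # highest-scoring images first
    return [img for _, img in selected]

# Example with toy scorers; a real system would invoke the trained models.
images = ["a.jpg", "b.jpg", "c.jpg"]
scores = {"a.jpg": (0.9, 0.8), "b.jpg": (0.2, 0.3), "c.jpg": (0.7, 0.6)}
subset = select_subset(images,
                       aesthetic_score=lambda i: scores[i][0],
                       relevance_score=lambda i: scores[i][1])
print(subset)  # ['a.jpg', 'c.jpg'] - b falls below the 0.5 threshold
```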
1.2. Alternative Geolocation and Positioning Systems
Enabling Description: This derivative expands the method to function with non-standard positioning systems, particularly in GPS-denied environments. The system is configured to ingest and process location data from Bluetooth Low Energy (BLE) beacons for indoor tours (e.g., a museum), Wi-Fi Round-Trip Time (RTT) for campus-wide tours, or geocoding systems like what3words for high-precision outdoor locations. The ordering step normalizes these disparate location data types into a common internal coordinate system before calculating relative positions. For example, a tour from "what3words address ///filled.count.soap" to "///models.fancy.glad" would collect images tagged with what3words addresses in their metadata.
Mermaid Diagram:
```mermaid
sequenceDiagram
    participant User
    participant System
    participant GeolocationModule
    participant ImageDB
    User->>System: Query(start="///filled.count.soap", end="///models.fancy.glad")
    System->>GeolocationModule: Normalize(start, end)
    GeolocationModule-->>System: Normalized Coordinates
    System->>ImageDB: Collect images within boundary
    ImageDB-->>System: Non-ordered images with mixed location data (GPS, what3words, BLE)
    loop for each image
        System->>GeolocationModule: Normalize(image.location)
    end
    System->>System: Order images using normalized coordinates
    System->>User: Display Ordered Tour
```
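The normalization step can be sketched as a dispatch over location type. The `W3W_TABLE` and `BLE_BEACONS` fixtures are hypothetical: a real deployment would call the what3words resolver API and a surveyed beacon map rather than these in-memory tables:

```python
# Minimal sketch of normalizing mixed location metadata to a common
# (lat, lon) coordinate system before ordering. The lookup tables are
# assumed fixtures standing in for the real resolver services.
W3W_TABLE = {"///filled.count.soap": (51.520847, -0.195521)}   # assumed values
BLE_BEACONS = {"beacon-17": (48.8606, 2.3376)}                 # assumed values

def normalize(location):
    """Map GPS, what3words, or BLE-beacon references to (lat, lon)."""
    kind, value = location
    if kind == "gps":
        return value                      # already (lat, lon)
    if kind == "w3w":
        return W3W_TABLE[value]           # would call the what3words API
    if kind == "ble":
        return BLE_BEACONS[value]         # would use a beacon survey map
    raise ValueError(f"unknown location type: {kind}")

mixed = [("gps", (40.7128, -74.0060)),
         ("w3w", "///filled.count.soap"),
         ("ble", "beacon-17")]
print([normalize(loc) for loc in mixed])
```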
2. Operational Parameter Expansion
2.1. Microscopic Virtual Tour Generation
Enabling Description: This variation applies the method to microscopic imaging. The "virtual tour" visualizes a biological or chemical process over time. The 'start point' and 'end point' are defined as specific states, e.g., "protein folding initiation" and "protein fully folded." The image set is collected from multiple time-lapse microscopy experiments (e.g., electron microscopy, fluorescence microscopy). The "spatial metric" is the physical position of key molecules, and the "temporal metric" is the time-stamp of the image frame. Image density is defined as "frames per microsecond." This allows researchers to construct a canonical, high-density visualization of a process from fragmented observations.
Mermaid Diagram:
```mermaid
stateDiagram-v2
    [*] --> State_A : Start Point - Protein Unfolded
    State_A --> State_B : Image from Exp1, t=1µs
    State_B --> State_C : Image from Exp2, t=2µs
    State_C --> State_D : Image from Exp1, t=3µs
    State_D : Intermediate Folded State
    State_D --> State_E : Image from Exp3, t=4µs
    State_E --> [*] : End Point - Protein Fully Folded
```
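The merge-and-order step above can be sketched as follows; the experiment data is an invented toy fixture, and density is computed in frames per microsecond as described:

```python
# Sketch: merging frames from several time-lapse experiments into one
# ordered sequence, then computing density in frames per microsecond.
def build_micro_tour(experiments):
    """experiments: dict mapping experiment name -> list of (t_us, frame_id)."""
    frames = [(t, exp, fid)
              for exp, series in experiments.items()
              for t, fid in series]
    frames.sort()                         # temporal metric: timestamp in µs
    span = frames[-1][0] - frames[0][0]
    density = len(frames) / span if span else float("inf")
    return frames, density

exps = {"Exp1": [(1, "f1"), (3, "f3")],
        "Exp2": [(2, "f2")],
        "Exp3": [(4, "f4")]}
tour, density = build_micro_tour(exps)
print([fid for _, _, fid in tour])   # frames interleaved across experiments
print(density)                       # 4 frames spanning 3 µs
```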
2.2. Real-Time Event Tour Synthesis
Enabling Description: This derivative operates in a real-time, high-frequency environment. It generates an evolving virtual tour of a live event (e.g., a marathon). The system subscribes to real-time data streams from social media APIs (e.g., Twitter, Instagram) and filters for geotagged images posted along the marathon route. The 'start' and 'end' points are the race's start and finish lines. As new images are ingested, the system continuously re-calculates the ordering and updates the displayed tour with sub-minute latency. The image density is dynamically adjusted based on the volume of incoming images to prevent display overload. The ordering algorithm uses a weighted function prioritizing temporal recency to ensure the tour reflects the current state of the event.
Mermaid Diagram:
```mermaid
flowchart TD
    A["Event Boundary Defined: Marathon Route"] --> B(Real-time Image Stream Ingest)
    subgraph Processing Pipeline
        B --> C{"Geotag & Timestamp Filter"}
        C --> D[Append to Non-Ordered Pool]
        D --> E{"Re-order Pool by Time & Location"}
        E --> F[Select Subset for Display]
    end
    F --> G((Live Virtual Tour Display))
    B -- new image --> C
```
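A simplified sketch of the streaming pool update and density-capped selection. The full recency-weighted ordering function is reduced here to a pure youngest-first trim; the field names (`route_km`, `t`, `id`) are illustrative assumptions:

```python
import bisect

# Sketch of the streaming re-ordering step: insert_image keeps the pool
# ordered by position along the route; select_for_display trims to the
# density cap by keeping the most recent images, then restores spatial order.
def insert_image(pool, image):
    """Keep the pool sorted by distance along the route (km from start)."""
    bisect.insort(pool, (image["route_km"], image["t"], image["id"]))

def select_for_display(pool, now, max_images):
    """Prefer the most recent images when trimming to the display cap."""
    youngest_first = sorted(pool, key=lambda e: now - e[1])
    keep = {e[2] for e in youngest_first[:max_images]}
    return [e[2] for e in pool if e[2] in keep]   # spatial order preserved

pool = []
for img in [{"route_km": 5.0, "t": 100, "id": "a"},
            {"route_km": 1.0, "t": 300, "id": "b"},
            {"route_km": 3.0, "t": 250, "id": "c"}]:
    insert_image(pool, img)
print(select_for_display(pool, now=320, max_images=2))  # ['b', 'c']
```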
3. Cross-Domain Applications
3.1. Aerospace: Planetary Rover Traverse Analysis
Enabling Description: In this application, the system assembles a virtual tour of a planetary surface to aid in mission planning and scientific analysis. The image collection comprises images from multiple assets, such as the Mars Perseverance rover, the Curiosity rover, and the Mars Reconnaissance Orbiter. The user defines a traverse path with a start and end coordinate on the Martian surface. The system collects all available images within a corridor along this path, normalizes them for lighting and color differences, and orders them spatially. The final tour provides a high-density, ground-level preview of the terrain a future mission might encounter.
Mermaid Diagram:
```mermaid
classDiagram
    class PlanetaryTourSystem {
        +createQuery(startCoord, endCoord, corridorWidth)
        +collectImages(boundary)
        +normalizeImages(imageSet)
        +orderByTraverse(imageSet)
        +displayTour()
    }
    class ImageSource {
        <<interface>>
        +getImagesByRegion()
    }
    class RoverImageDB {
        -roverName: string
    }
    class OrbiterImageDB {
        -instrument: string
    }
    ImageSource <|.. RoverImageDB
    ImageSource <|.. OrbiterImageDB
    PlanetaryTourSystem ..> ImageSource : uses
```
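The corridor filter and traverse ordering can be sketched with planar geometry for a single straight segment (real Martian coordinates would require a proper map projection; the image fixtures are invented examples):

```python
import math

# Sketch of corridor filtering along a straight traverse segment.
def along_and_offset(start, end, p):
    """Project p onto the start->end segment; return (fraction_along, offset)."""
    (x0, y0), (x1, y1), (px, py) = start, end, p
    dx, dy = x1 - x0, y1 - y0
    seg2 = dx * dx + dy * dy
    t = ((px - x0) * dx + (py - y0) * dy) / seg2
    # Perpendicular distance from the segment's line.
    offset = abs((px - x0) * dy - (py - y0) * dx) / math.sqrt(seg2)
    return t, offset

def corridor_tour(images, start, end, half_width):
    kept = []
    for img_id, pos in images:
        t, off = along_and_offset(start, end, pos)
        if 0.0 <= t <= 1.0 and off <= half_width:
            kept.append((t, img_id))
    kept.sort()                     # order by progress along the traverse
    return [img_id for _, img_id in kept]

imgs = [("perseverance_001", (2.0, 0.1)),
        ("curiosity_042", (8.0, 0.4)),
        ("hirise_tile_7", (5.0, 3.0))]   # too far off-corridor
print(corridor_tour(imgs, start=(0.0, 0.0), end=(10.0, 0.0), half_width=0.5))
```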
3.2. AgTech: Crop Phenotyping and Growth Monitoring
Enabling Description: This system creates a time-lapse virtual tour of crop development over a growing season. The 'start point' is "planting date" and the 'end point' is "harvest date." The 'boundary' is a specific farm field defined by GPS coordinates. Images are collected from a heterogeneous set of sources: daily satellite imagery (e.g., from Planet Labs), weekly drone flyovers, and fixed-position IoT cameras within the field. The system orders the images primarily by their timestamp, creating a sequential view of the crop's growth. The user can set the 'image density' to 'one composite image per day' to track key growth stages and identify anomalies like pest infestation or nutrient deficiency.
Mermaid Diagram:
```mermaid
gantt
    title Crop Growth Virtual Tour
    dateFormat YYYY-MM-DD
    axisFormat %m-%d
    section Field A
    Planting          :done, p1, 2026-04-01, 1d
    Germination Phase :g1, after p1, 14d
    Vegetative Phase  :v1, after g1, 30d
    Flowering Phase   :f1, after v1, 20d
    Harvest           :h1, after f1, 1d
    %% Data points represent images collected for the tour
    Satellite Image   :crit, 2026-04-10, 1d
    Drone Flyover     :crit, 2026-04-25, 1d
    IoT Camera Snap   :crit, 2026-05-15, 1d
    Satellite Image   :crit, 2026-05-20, 1d
```
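The 'one composite image per day' density setting can be sketched as a bucketing step that keeps the best image per calendar day across the heterogeneous sources. The selection criterion (highest resolution) and the fixtures are illustrative assumptions:

```python
from datetime import date

# Sketch of the daily-composite density setting: bucket all sources by
# calendar day and keep the highest-resolution image in each bucket.
def daily_composite(images):
    """images: list of (date, source, resolution_mpx, image_id)."""
    best = {}
    for d, source, mpx, img_id in images:
        if d not in best or mpx > best[d][0]:
            best[d] = (mpx, source, img_id)
    # Time-ordered sequence: one (date, source, image_id) triple per day.
    return [(d,) + best[d][1:] for d in sorted(best)]

season = [(date(2026, 4, 10), "satellite", 0.5, "sat_0410"),
          (date(2026, 4, 10), "drone", 12.0, "drone_0410"),
          (date(2026, 4, 11), "iot_cam", 2.0, "cam_0411")]
for day, source, img in daily_composite(season):
    print(day, source, img)   # the drone image wins on 2026-04-10
```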
4. Integration with Emerging Technology
4.1. Generative AI for Tour Completion
Enabling Description: This variation addresses gaps in a virtual tour where no images are available. After collecting and ordering existing images, the system identifies segments of the path with a density below a user-defined threshold. For each gap, a conditional Generative Adversarial Network (cGAN) is employed. The cGAN is conditioned on the two images chronologically or spatially bracketing the gap, as well as on underlying map data (e.g., satellite or street view imagery) for that location. It then generates a synthetic, photorealistic image that provides a plausible transition between the real images, resulting in a complete and continuous virtual tour.
Mermaid Diagram:
```mermaid
sequenceDiagram
    participant User
    participant TourSystem
    participant GAN_Module
    participant MapAPI
    User->>TourSystem: createTour(A, Z)
    TourSystem->>TourSystem: Collect & order images [img1, img5, img10]
    TourSystem->>TourSystem: Identify gap between img1 and img5
    TourSystem->>MapAPI: Get map data for gap location
    MapAPI-->>TourSystem: Satellite/Street View data
    TourSystem->>GAN_Module: GenerateImage(context=img1, context2=img5, mapData)
    GAN_Module-->>TourSystem: Synthetic image [img_synth_3]
    TourSystem->>TourSystem: Insert synthetic image into sequence
    TourSystem->>User: Display completed tour [img1, img_synth_3, img5, ...]
```
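The gap-detection step can be sketched as follows. The cGAN invocation is reduced to a placeholder (a midpoint marker), since the generative model itself is outside the scope of this sketch; the density threshold semantics (minimum images per km) are an assumption:

```python
# Sketch of detecting under-dense segments that need synthetic frames.
def find_gaps(positions, min_density):
    """positions: sorted route positions (km) of real images.
    Returns (left, right) pairs whose spacing exceeds 1/min_density."""
    max_gap = 1.0 / min_density
    return [(a, b) for a, b in zip(positions, positions[1:]) if b - a > max_gap]

def fill_tour(positions, min_density):
    filled = list(positions)
    for left, right in find_gaps(positions, min_density):
        # Placeholder for the cGAN conditioned on the two bracketing
        # images plus map data; here we just insert a midpoint marker.
        filled.append(round((left + right) / 2, 3))
    return sorted(filled)

real = [0.0, 0.4, 2.0, 2.3]
print(find_gaps(real, min_density=1.0))   # segments wider than 1 km
print(fill_tour(real, min_density=1.0))   # midpoint inserted at 1.2 km
```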
4.2. Blockchain for Provenance and Royalties
Enabling Description: To ensure image authenticity and manage creator rights, this system integrates a blockchain ledger. When an image is submitted for inclusion in a tour, a perceptual hash (e.g., aHash or dHash) is calculated and registered on a smart contract, creating an immutable record of the image content and its creator's wallet address. When a user's query results in a tour that includes the image, the smart contract automatically logs the usage. This framework can be extended to handle micropayments, where viewing a tour triggers a transaction that distributes fractional royalties to the original photographers whose work was included. This establishes a transparent and verifiable marketplace for tour content.
Mermaid Diagram:
```mermaid
erDiagram
    USER ||--o{ VIRTUAL_TOUR : creates
    VIRTUAL_TOUR ||--|{ TOUR_IMAGE : contains
    TOUR_IMAGE }o--|| IMAGE_LEDGER : references
    IMAGE_LEDGER {
        string imageHash PK
        string creatorWalletAddress
        datetime timestamp
        string metadataURI
    }
    USER {
        string userID PK
        string walletAddress
    }
```
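A toy sketch of the registration flow: a drastically simplified average hash (aHash) over a grayscale pixel grid, with an in-memory dict standing in for the smart contract's on-chain state. Real deployments would hash a resized, filtered image and submit transactions to an actual contract:

```python
# Simplified aHash: each pixel contributes one bit, set when the pixel is
# at or above the mean brightness of the image.
def ahash(pixels):
    """pixels: 2D list of grayscale values; returns the hash as an int bitmask."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for v in flat:
        bits = (bits << 1) | (1 if v >= mean else 0)
    return bits

LEDGER = {}   # imageHash -> record; stand-in for on-chain contract state

def register(pixels, wallet, metadata_uri):
    h = ahash(pixels)
    if h not in LEDGER:               # first registration wins, as on-chain
        LEDGER[h] = {"creator": wallet, "uri": metadata_uri, "uses": 0}
    return h

def log_usage(image_hash):
    LEDGER[image_hash]["uses"] += 1   # would emit a royalty event on-chain

img = [[10, 200], [30, 220]]
h = register(img, wallet="0xABC", metadata_uri="ipfs://example")
log_usage(h)
print(LEDGER[h]["uses"])  # 1
```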
5. Inverse and Failure Mode Operation
5.1. Bandwidth-Optimized "Minimalist Tour"
Enabling Description: This derivative is designed for low-power or low-bandwidth environments. When the system detects constrained network conditions, it enters a "minimalist" mode. It first selects a drastically reduced subset of images. The selection algorithm prioritizes "keyframe" images that represent significant changes in scenery or direction, discarding visually redundant images. Redundancy is calculated by comparing the perceptual hashes of spatially adjacent images; if the Hamming distance between two hashes is below a threshold, one image is discarded. The remaining images are then heavily compressed before being sent to the client, ensuring a functional, albeit sparse, tour with minimal data consumption.
Mermaid Diagram:
```mermaid
flowchart TD
    A[Create Tour Request] --> B{Network Condition Check}
    B -- High Bandwidth --> C[Standard Tour Generation]
    B -- Low Bandwidth --> D[Minimalist Mode]
    D --> E[Select Keyframe Images]
    E --> F[Calculate Perceptual Hashes]
    F --> G["Discard Redundant Images (Low Hamming Distance)"]
    G --> H[Aggressive Image Compression]
    H --> I[Display Minimalist Tour]
    C --> J[Display Standard Tour]
```
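The redundancy filter above can be sketched directly: compare each image's perceptual hash against the last kept keyframe and drop it when the Hamming distance falls below the threshold. The 8-bit hashes and the threshold value are toy assumptions (real perceptual hashes are typically 64-bit):

```python
# Sketch of the minimalist-mode redundancy filter.
def hamming(h1, h2):
    """Number of differing bits between two integer hashes."""
    return bin(h1 ^ h2).count("1")

def keyframes(ordered_hashes, threshold=5):
    """ordered_hashes: list of (image_id, perceptual_hash) in tour order."""
    kept = [ordered_hashes[0]]                  # always keep the first frame
    for img_id, h in ordered_hashes[1:]:
        if hamming(h, kept[-1][1]) >= threshold:
            kept.append((img_id, h))            # scenery changed enough
    return [img_id for img_id, _ in kept]

tour = [("a", 0b10110010), ("b", 0b10110011),   # b is a near-duplicate of a
        ("c", 0b01001101), ("d", 0b01001100)]   # d is a near-duplicate of c
print(keyframes(tour))  # ['a', 'c']
```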
Combination Prior Art Scenarios
Combination with OpenStreetMap (OSM) and Overpass API: The user defines the start and end points on an OSM map interface. The system queries the Overpass API to retrieve the specific nodes and ways (roads, paths) that constitute a viable route between the points. The "boundary" for image collection is not a simple corridor but is defined precisely by the geometry of the retrieved OSM route. The final ordering of images is constrained by the sequence of nodes in the OSM data, ensuring the tour perfectly follows a real-world path.
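The Overpass step can be sketched as composing an Overpass QL query for highway ways (with node geometry) inside the bounding box spanned by the start and end points. Only the query string is built here, offline; extracting and sequencing a specific route from the response happens downstream, and the padding value is an illustrative assumption:

```python
# Sketch of composing an Overpass QL query for the image-collection boundary.
def overpass_route_query(start, end, pad=0.01):
    """Build an Overpass QL query for highway ways in the padded bbox."""
    (lat1, lon1), (lat2, lon2) = start, end
    south, north = min(lat1, lat2) - pad, max(lat1, lat2) + pad
    west, east = min(lon1, lon2) - pad, max(lon1, lon2) + pad
    bbox = f"{south},{west},{north},{east}"
    # 'out geom;' returns each way's node coordinates, which later
    # constrain the ordering of collected images along the route.
    return f'[out:json];way["highway"]({bbox});out geom;'

q = overpass_route_query(start=(40.7128, -74.0060), end=(40.7306, -73.9866))
print(q)
```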
Combination with EXIF and IPTC Open Standards: The entire filtering and ordering mechanism is built on open metadata standards, without proprietary data structures. The system exclusively uses embedded EXIF data, reading `GPSLatitude`, `GPSLongitude`, and `DateTimeOriginal` for spatiotemporal ordering. User-defined filters for content (e.g., "show only sunsets") are implemented as string searches against the `ImageDescription` and `Keywords` fields within the standardized IPTC metadata block of each image file.

Combination with ActivityPub (Fediverse) Protocol: The system operates as a federated service. A user on one ActivityPub instance (e.g., a "Travel-Tour" server) initiates a query. This query is broadcast as an ActivityPub `Question` object to other federated instances. Each instance locally searches its public, geotagged images that match the query's boundary and responds. The originating server collects these responses, compiles the images from across the Fediverse, orders them, and presents the final tour. The tour itself can be published as an `OrderedCollection` object, making it natively shareable across the decentralized network.
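The EXIF/IPTC combination described above can be sketched with plain dicts standing in for the metadata blocks that a real system would parse from image files with an EXIF library:

```python
# Sketch of filtering and ordering purely on open-standard metadata fields.
# Each dict stands in for the EXIF/IPTC block parsed from one image file.
def order_and_filter(images, keyword=None):
    if keyword:
        images = [m for m in images
                  if keyword.lower() in m.get("ImageDescription", "").lower()
                  or keyword.lower() in (k.lower() for k in m.get("Keywords", []))]
    # Spatiotemporal ordering on standard tags only.
    return sorted(images, key=lambda m: (m["DateTimeOriginal"],
                                         m["GPSLatitude"], m["GPSLongitude"]))

photos = [{"ImageDescription": "Sunset over the bay", "Keywords": ["sunset"],
           "DateTimeOriginal": "2026:05:01 19:40:00",
           "GPSLatitude": 37.8, "GPSLongitude": -122.4},
          {"ImageDescription": "Morning market", "Keywords": ["street"],
           "DateTimeOriginal": "2026:05:01 08:00:00",
           "GPSLatitude": 37.7, "GPSLongitude": -122.4}]
print([p["ImageDescription"] for p in order_and_filter(photos, keyword="sunset")])
```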