Patent 11644693
Derivative works
Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.
Defensive Disclosure: Enhancements and Alternative Embodiments for Wearable Audio Systems
Publication Date: May 1, 2026
Abstract: This document discloses a series of derivative inventions and alternative embodiments that build upon the core concepts of U.S. Patent No. 11,644,693. The purpose of this disclosure is to place into the public domain a range of foreseeable modifications, extensions, and applications of the technology, thereby establishing prior art against future patent applications on these incremental improvements. The disclosures herein cover alternative materials and components, expanded operational parameters, novel applications in disparate industries, integration with emerging technologies, and alternative operational modes.
Derivatives of Claim 1: A Wearable Audio System
Claim 1 describes a pair of eyeglasses comprising a frame with two temples, where at least one temple houses a speaker, a wireless receiver, a battery, and a processor. The processor is configured to apply a hearing enhancement to wirelessly received audio signals based on a user's hearing profile.
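As a concrete illustration of the claimed processing step, the sketch below applies per-band gains derived from a user's audiogram to a block of audio samples. The audiogram values, the half-gain rule, and the FFT-based equalizer are illustrative assumptions, not the patented implementation.

```python
import numpy as np

# Hypothetical audiogram: hearing threshold in dB HL per test frequency (Hz).
AUDIOGRAM = {250: 10, 500: 15, 1000: 20, 2000: 35, 4000: 50, 8000: 60}

def band_gains(audiogram, max_gain_db=30.0):
    """Half-gain rule of thumb: amplify each band by half the measured
    loss, capped to avoid feedback and listener discomfort."""
    return {f: min(loss / 2.0, max_gain_db) for f, loss in audiogram.items()}

def apply_enhancement(samples, rate, gains):
    """Crude FFT-based multi-band equalizer: scale each frequency bin
    by the gain of the nearest audiogram band."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), 1.0 / rate)
    bands = sorted(gains)
    for i, f in enumerate(freqs):
        nearest = min(bands, key=lambda b: abs(b - f))
        spectrum[i] *= 10.0 ** (gains[nearest] / 20.0)
    return np.fft.irfft(spectrum, n=len(samples))
```

With the audiogram above, a 4 kHz tone is boosted by 25 dB (roughly 17.8x in amplitude) while a 250 Hz tone receives only 5 dB.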
1. Material & Component Substitution
Derivative 1.1: Graphene-Based Audio Transducer and Power System
- Enabling Description: The conventional speaker is replaced with a micro-transducer constructed from a graphene-based diaphragm. This material offers superior frequency response and lower power consumption compared to traditional voice coil speakers. The battery is substituted with a flexible, thin-film lithium-ceramic battery integrated into the acetate or polycarbonate frame material itself, allowing for a more seamless design. The wireless receiver is a low-power Bluetooth 6.0 module with an integrated antenna printed directly onto the inner surface of the temple using conductive ink.
- Mermaid Diagram:
```mermaid
graph TD
    A[External Audio Source] -- Bluetooth 6.0 --> B{BT 6.0 Module w/ Printed Antenna}
    B --> C[DSP/Processor]
    D[Flexible Li-Ceramic Battery] --> C
    C -- Hearing Profile Applied --> E[Graphene Diaphragm Transducer]
    E --> F(Audio Output)
```
Derivative 1.2: Bone Conduction Actuators with Piezoelectric Power Generation
- Enabling Description: The air-conduction speaker is replaced with a pair of piezoelectric bone conduction transducers located on the temple tips, designed to make direct contact with the user's mastoid process. This provides audio transmission while leaving the ear canal open. The battery is supplemented or recharged by piezoelectric nanogenerators embedded within the hinge mechanism of the eyeglasses. These generators convert the mechanical stress and motion from opening and closing the temples into electrical energy, which is stored in a supercapacitor.
- Mermaid Diagram:
```mermaid
graph TD
    A[Hinge Movement] --> B(Piezoelectric Nanogenerators)
    B --> C[Supercapacitor Power Store]
    D[Wireless Audio Signal] --> E{Wireless Transceiver}
    C --> E
    C --> F[Processor with Audiogram Data]
    E --> F
    F --> G(Piezoelectric Bone Conduction Transducers)
    G -- Vibrations --> H("User's Mastoid Process")
```
2. Operational Parameter Expansion
Derivative 1.3: Cryogenic-Cooled Superconducting Electronics for High-Fidelity Audio
- Enabling Description: For applications requiring absolute audio fidelity, such as professional audio mixing or medical diagnostics, the processor and key amplifier components are replaced with superconducting circuits. These are housed in a thermally insulated module within a larger, industrial-style goggle frame. A miniaturized Stirling-cycle cryocooler, powered by an external source, maintains the necessary low temperatures. This configuration eliminates thermal noise, allowing for unparalleled signal-to-noise ratios in the audio processing and amplification stages. The system is designed to operate at temperatures below 77 kelvin (-196 °C).
- Mermaid Diagram:
```mermaid
graph TD
    subgraph GoggleFrame
        A[Wireless Receiver] --> B{Superconducting Processor}
        C(Miniature Stirling Cryocooler) --> B
        B --> D[Superconducting Amplifier]
        C --> D
        D --> E(High-Fidelity Speaker)
    end
    F[External Power] --> C
    G[Audio Source] --> A
```
Derivative 1.4: High-Frequency Ultrasonic Communication and Powering
- Enabling Description: The radio-frequency (RF) wireless receiver is replaced by an array of MEMS-based ultrasonic transducers. These transducers receive audio data modulated onto high-frequency sound waves (e.g., in the 40-100 kHz range) from a dedicated room-based transmitter. This avoids RF interference and enhances security. The same ultrasonic array can be configured to receive power via acoustic energy harvesting, converting ambient ultrasonic energy into electrical power to trickle-charge the onboard battery, making it suitable for secure facilities or environments with high RF interference.
- Mermaid Diagram:
```mermaid
sequenceDiagram
    participant T as Transmitter
    participant G as Glasses
    participant P as Processor
    T->>G: Modulated Ultrasonic Signal (Data + Power)
    G->>P: Demodulated Audio Data
    G->>P: Harvested Electrical Energy
    P->>P: Apply Hearing Profile
    P->>G: Processed Audio Signal
    G-->>T: (Optional) Acknowledgment Signal
```
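The data-receiving leg of this scheme can be illustrated with a toy amplitude-modulation round trip: audio is modulated onto a 40 kHz carrier and recovered by rectification followed by a moving-average low-pass filter. The carrier frequency, modulation depth, and filter design are assumptions made for this sketch.

```python
import numpy as np

def am_modulate(audio, fs, fc=40_000.0, depth=0.5):
    """Transmitter side (assumed parameters): amplitude-modulate audio
    onto an ultrasonic carrier in the 40-100 kHz band."""
    t = np.arange(len(audio)) / fs
    return (1.0 + depth * audio) * np.cos(2.0 * np.pi * fc * t)

def envelope_demodulate(signal, fs, cutoff=8_000.0):
    """Receiver side: rectify, then low-pass with a moving average whose
    window spans roughly one cutoff period; remove the carrier pedestal."""
    rectified = np.abs(signal)
    window = max(1, int(fs / cutoff))
    kernel = np.ones(window) / window
    envelope = np.convolve(rectified, kernel, mode="same")
    return envelope - envelope.mean()
```

The recovered waveform is a scaled copy of the input (reduced by the 2/pi rectification factor), which the downstream DSP would renormalize before applying the hearing profile.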
3. Cross-Domain Application
Derivative 1.5: Aerospace - Augmented Auditory Cues for Pilots
- Enabling Description: The system is integrated into an aviator's headset or helmet visor. Beyond hearing enhancement, the processor receives data from the aircraft's avionics bus (e.g., ARINC 429 or AFDX) and generates spatially localized 3D audio cues that are superimposed onto the pilot's normal hearing. For example, a warning about an aircraft approaching from the left is rendered as a distinct audio tone that appears to emanate from that direction. The pilot's hearing profile is used to ensure these critical alerts always fall within their optimal hearing range, compensating for any frequency-specific hearing loss.
- Mermaid Diagram:
```mermaid
flowchart LR
    subgraph Cockpit
        A[Avionics Data Bus] --> B[Wireless Gateway]
    end
    subgraph PilotHeadset
        C{Wireless Receiver} --> D[Processor]
        B -- Data Stream --> C
        E[Pilot Hearing Profile] --> D
        D -- 3D Audio Cues --> F[Spatial Audio Engine]
        F --> G((Speakers/Transducers))
    end
```
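A minimal sketch of how such spatial cues might be parameterized, using the Woodworth interaural-time-difference model and constant-power panning. Both are standard psychoacoustic approximations chosen for illustration, not details taken from the patent.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s at room temperature
HEAD_RADIUS = 0.0875     # m, assumed average adult head radius

def spatial_cue_params(azimuth_deg):
    """Return (itd_seconds, left_gain, right_gain) for a virtual source
    at azimuth_deg (0 = dead ahead, negative = listener's left).
    ITD: Woodworth model. Gains: constant-power pan."""
    az = math.radians(azimuth_deg)
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + math.sin(az))
    left_gain = math.sqrt(0.5 * (1.0 - math.sin(az)))
    right_gain = math.sqrt(0.5 * (1.0 + math.sin(az)))
    return itd, left_gain, right_gain
```

A threat cue at -90 degrees (hard left) yields a left-dominant gain pair and an ITD of about 0.66 ms; the spatial audio engine delays and scales the two channels accordingly.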
Derivative 1.6: Agricultural Technology (AgTech) - Livestock Health Monitoring
- Enabling Description: The eyeglass form factor is adapted into a durable, head-mounted sensor for livestock (e.g., cattle). The "speaker" is replaced with a low-frequency vibration motor for providing haptic feedback to the animal. The "microphone" (an additional component) is a contact-based transducer that monitors the animal's ruminations and heart rate. The processor analyzes these sounds for anomalies indicative of illness, using a pre-loaded "health profile" instead of a hearing profile. The wireless transceiver transmits alerts and raw data to a central farm management system via a LoRaWAN network for long-range, low-power communication.
- Mermaid Diagram:
```mermaid
classDiagram
    class AnimalHeadset {
        +UUID animalID
        +LoRaWANTransceiver transceiver
        +ContactTransducer sensor
        +DSP processor
        +VibrationMotor hapticFeedback
        +Battery powerSource
        +analyzeHealth(audioData)
        +transmitAlert(alertCode)
    }
    class FarmGateway {
        +receiveData(data)
        +forwardToCloud()
    }
    class FarmManagementSystem {
        +analyzeFleetHealth()
        +generateDashboard()
    }
    AnimalHeadset "1" -- "1" LoRaWANTransceiver
    AnimalHeadset "1" -- "1" DSP
    LoRaWANTransceiver ..> FarmGateway : Transmits Data
    FarmGateway ..> FarmManagementSystem : Forwards Data
```
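A hypothetical analyzeHealth routine matching the class diagram might compare readings against a per-animal baseline. The alert codes, tolerance, and threshold values below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class HealthProfile:
    """Assumed per-animal baseline, loaded in place of a hearing profile."""
    rumination_per_min: float   # expected rumination contractions per minute
    heart_rate_bpm: float
    tolerance: float = 0.25     # fractional deviation allowed before alerting

# Hypothetical alert bitmask values for transmitAlert(alertCode).
ALERT_RUMINATION = 0x01
ALERT_HEART_RATE = 0x02

def analyze_health(profile, rumination, heart_rate):
    """Return a bitmask of alert codes for readings outside tolerance."""
    code = 0
    if abs(rumination - profile.rumination_per_min) > profile.tolerance * profile.rumination_per_min:
        code |= ALERT_RUMINATION
    if abs(heart_rate - profile.heart_rate_bpm) > profile.tolerance * profile.heart_rate_bpm:
        code |= ALERT_HEART_RATE
    return code
```

A nonzero code would be queued on the LoRaWAN uplink so the farm gateway can forward the alert to the management system.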
Derivative 1.7: Consumer Electronics - Dynamic Gaming Soundscape Personalization
- Enabling Description: The system is integrated into a pair of gaming glasses that connect wirelessly to a gaming console or PC. The processor receives multi-channel audio and real-time game telemetry. It uses the player's audiogram (hearing profile) not merely to amplify the game's audio but to dynamically remix it. For instance, if a player has high-frequency hearing loss, the system can transpose the audio frequencies of crucial in-game cues (such as enemy footsteps or bullet whizzes) down into a range where the player's hearing is more sensitive, without altering the overall sound mix. This provides a competitive advantage and a more immersive experience.
- Mermaid Diagram:
```mermaid
stateDiagram-v2
    [*] --> Idle
    Idle --> ReceivingAudio: Game Started
    ReceivingAudio --> Processing: Audio Packet Received
    Processing --> Mixing: Apply Hearing Profile
    Mixing --> Transposing: Analyze Game Telemetry for Cues
    Transposing --> Output: Transpose Critical Cues
    Output --> ReceivingAudio: Send to Speakers
    ReceivingAudio --> Idle: Game Ended
```
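The transposition step can be sketched as a crude FFT bin-shift. A production system would use a proper pitch shifter, but this shows the idea of moving a high-frequency cue into a range the player can hear; the shift factor is an assumption.

```python
import numpy as np

def transpose_cue(samples, rate, shift_factor=0.5):
    """Shift a cue's spectrum down by moving each FFT bin k to bin
    round(k * shift_factor). Crude, but adequate for short cues such
    as footsteps; shift_factor=0.5 halves every frequency."""
    spectrum = np.fft.rfft(samples)
    shifted = np.zeros_like(spectrum)
    for k in range(len(spectrum)):
        j = int(round(k * shift_factor))
        if j < len(shifted):
            shifted[j] += spectrum[k]
    return np.fft.irfft(shifted, n=len(samples))
```

An 8 kHz footstep transient processed with shift_factor=0.5 re-emerges centered at 4 kHz, inside a typical residual-hearing range, while non-cue channels in the mix are passed through untouched.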
4. Integration with Emerging Tech
Derivative 1.8: AI-Driven Real-Time Environmental Adaptation
- Enabling Description: The processor is an edge-AI-capable System-on-Chip (SoC) running a lightweight neural network. In addition to a stored hearing profile, the glasses include an array of microphones. The AI model continuously analyzes the ambient soundscape (e.g., a quiet library, a noisy restaurant, a concert) and identifies the primary audio source (e.g., a conversation partner). It then adjusts the hearing enhancement profile in real time, going beyond the static pre-set profile: aggressive noise cancellation is applied to background noise while beamforming isolates and clarifies the primary speaker's voice. The AI model is updated periodically via a wireless connection to a cloud-based machine learning platform.
- Mermaid Diagram:
```mermaid
flowchart TD
    A[Ambient Sound] --> B(Microphone Array)
    C[Wireless Audio] --> D{Wireless Receiver}
    B --> E[Edge AI Processor]
    D --> E
    F[Stored Hearing Profile] --> E
    E -- "Analyzes Environment & Source" --> E
    E -- Creates Dynamic Profile --> G(DSP Core)
    G -- Applies Dynamic Profile --> H((Speaker))
    E -- Telemetry --> I(Cloud ML Platform)
    I -- Model Updates --> E
```
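A toy stand-in for the adaptation logic: a rule-based scene classifier (replacing the neural network for this sketch) feeding a per-scene adjustment of the static profile. All labels, thresholds, and adjustment values are illustrative assumptions.

```python
def classify_environment(ambient_db_spl, speech_detected):
    """Rule-based stand-in for the neural classifier: pick a scene label
    from ambient sound level and a voice-activity flag."""
    if ambient_db_spl < 40:
        return "quiet"
    if ambient_db_spl < 70:
        return "conversation" if speech_detected else "moderate"
    return "loud"

def dynamic_profile(base_gain_db, scene):
    """Adjust the static profile per scene: heavier noise cancellation
    (anc) and beamforming weight (beam) as the environment gets louder."""
    adjustments = {
        "quiet":        {"gain": base_gain_db,     "anc": 0.0, "beam": 0.0},
        "moderate":     {"gain": base_gain_db + 3, "anc": 0.4, "beam": 0.2},
        "conversation": {"gain": base_gain_db + 3, "anc": 0.5, "beam": 0.8},
        "loud":         {"gain": base_gain_db + 6, "anc": 0.9, "beam": 0.9},
    }
    return adjustments[scene]
```

In the real device the classifier would run continuously on microphone frames, and the returned parameters would be handed to the DSP core each block.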
Derivative 1.9: IoT-Enabled Situational Awareness and Safety
- Enabling Description: The wearable device is part of an IoT ecosystem. It is equipped with a UWB (Ultra-Wideband) transceiver for precise indoor positioning. When a user wearing the glasses enters a hazardous area in a factory (geofenced and marked with IoT beacons), the system automatically receives a safety alert from the beacon. The processor overrides any active audio streaming and plays a loud, pre-recorded warning message, with equalization adjusted by the user's hearing profile to ensure it is heard. The system can also transmit the user's precise location back to a central safety monitoring system.
- Mermaid Diagram:
```mermaid
sequenceDiagram
    participant UserGlasses
    participant IoTBeacon
    participant SafetySystem
    UserGlasses->>+IoTBeacon: Enters Geofence
    IoTBeacon-->>-UserGlasses: Hazard Alert Signal
    UserGlasses->>UserGlasses: Processor overrides audio stream
    UserGlasses->>SafetySystem: Transmit UWB Location
    UserGlasses->>User: Play Profile-Adjusted Warning
    SafetySystem->>SafetySystem: Log event and location
```
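The override behavior can be modeled as a small priority arbiter in which a safety alert preempts the active stream. The class, priority values, and source names are hypothetical.

```python
import heapq

PRIORITY_SAFETY = 0   # lower value = higher priority (assumed convention)
PRIORITY_STREAM = 5

class AudioArbiter:
    """Hypothetical priority mixer: the highest-priority submitted
    source owns the speaker, so a safety alert preempts streaming."""

    def __init__(self):
        self._queue = []
        self._seq = 0   # tie-breaker so the heap never compares sources

    def submit(self, priority, source):
        heapq.heappush(self._queue, (priority, self._seq, source))
        self._seq += 1

    def active_source(self):
        return self._queue[0][2] if self._queue else None

    def finish(self, source):
        self._queue = [e for e in self._queue if e[2] != source]
        heapq.heapify(self._queue)
```

When the hazard alert finishes playing, finish() removes it and the Bluetooth stream resumes automatically, matching the sequence in the diagram.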
5. The "Inverse" or Failure Mode
Derivative 1.10: Failsafe Audio Passthrough Mode
- Enabling Description: The system includes a fail-safe analog circuit that bypasses the digital signal processor (DSP) and battery-powered components in the event of a critical power failure. If the battery is fully depleted or the processor fails, a relay or analog switch defaults to a state that physically connects the wireless receiver's output directly to the speaker's input, bypassing all processing. This "limp mode" allows the user to still hear the raw, unenhanced audio from the wireless source, ensuring the device does not become completely non-functional. This is critical for applications where the audio stream contains important information (e.g., navigation prompts).
- Mermaid Diagram:
```mermaid
graph LR
    subgraph Normal_Operation
        A(Wireless Receiver) --> B(Processor/DSP)
        B --> C(Amplifier)
        C --> D(Speaker)
    end
    subgraph Failsafe_Mode
        A -- Analog Bypass --> D
    end
    E{Power Monitor} -- Power Low --> F(Activate Bypass)
    E -- Power OK --> G(Enable Normal Operation)
```
Derivatives of Claim 16: A Method for Providing Audio
Claim 16 describes a method of using a head-worn device for audio provision, involving wirelessly receiving audio signals, processing them, applying a hearing enhancement based on a user profile, and outputting the audio via a speaker, all powered by an internal battery.
1. Material & Component Substitution
Derivative 16.1: Method Using Optical Wireless Communication (Li-Fi)
- Enabling Description: This method replaces the step of "wirelessly receiving an audio signal" via radio frequency with receiving the audio signal via a modulated light source (Li-Fi). An optical sensor on the eyeglass frame detects high-frequency intensity changes from a Li-Fi-enabled LED light source. The method involves demodulating this optical signal to reconstruct the digital audio stream, which is then processed according to the user's hearing profile by the onboard processor before being converted to sound. This provides a high-bandwidth, secure communication channel that is immune to RF interference.
- Mermaid Diagram:
```mermaid
flowchart TD
    A[Li-Fi Emitter Modulates Light] --> B(Optical Sensor on Glasses)
    B --> C{Demodulator}
    C --> D[Processor]
    E[Hearing Profile Storage] --> D
    D -- Applies Enhancement --> F("DAC & Amplifier")
    F --> G((Speaker))
    H[Battery] --> D & F
```
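The demodulation step can be illustrated with simple on-off keying, one plausible Li-Fi line code: average the sensed light intensity over each bit period and compare it to a threshold. The samples-per-bit and threshold values are assumptions for the sketch.

```python
def demodulate_ook(light_samples, samples_per_bit, threshold=0.5):
    """Recover a bitstream from light-intensity samples using on-off
    keying: average each bit period, then threshold the average."""
    bits = []
    for i in range(0, len(light_samples) - samples_per_bit + 1, samples_per_bit):
        chunk = light_samples[i:i + samples_per_bit]
        bits.append(1 if sum(chunk) / len(chunk) > threshold else 0)
    return bits
```

The recovered bitstream would then be framed into the digital audio stream that the onboard processor enhances per the user's hearing profile.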
2. Operational Parameter Expansion
Derivative 16.2: Method for Hypersonic Sound Beaming
- Enabling Description: The method's output step is modified to use a phased array of ultrasonic transducers instead of a conventional speaker. The processor, after applying the hearing profile, modulates the enhanced audio signal onto an ultrasonic carrier wave. The phased array then emits a highly directional, focused beam of ultrasound. This beam travels to the user's ear, where the non-linear properties of the air demodulate the signal, making the audio audible only to the user and inaudible to bystanders a few inches away. This allows for private listening in public spaces without earpieces. The method includes beam-steering algorithms to track the position of the user's ear canal.
- Mermaid Diagram:
```mermaid
graph TD
    A(Receive Wireless Audio) --> B("Process & Apply Hearing Profile")
    B --> C{Ultrasonic Modulation}
    C --> D(Phased Array Controller)
    D --> E[Ultrasonic Transducer Array]
    E -- Focused Beam --> F(Air column near ear)
    F -- Self-Demodulation --> G(Audible Sound at Ear)
```
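For a linear array, the beam-steering algorithm mentioned above reduces to computing per-element time delays; the array geometry used below is assumed for illustration.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def steering_delays(num_elements, pitch_m, angle_deg):
    """Per-element delays (seconds) so a linear ultrasonic array's
    wavefront steers toward angle_deg off the array normal. The delay
    step is pitch * sin(angle) / c; delays are shifted non-negative."""
    theta = math.radians(angle_deg)
    raw = [i * pitch_m * math.sin(theta) / SPEED_OF_SOUND
           for i in range(num_elements)]
    offset = min(raw)
    return [d - offset for d in raw]
```

An ear-tracking loop would recompute angle_deg each frame so the focused beam stays on the user's ear canal as the head moves.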
3. Cross-Domain Application
Derivative 16.3: Method for Sub-Aqua Diver Communication
- Enabling Description: The method is adapted for a diver's mask. "Wirelessly receiving" is accomplished via a short-range hydro-acoustic modem that receives sonar-based communication signals. The received signal is processed to filter out underwater noise (e.g., bubbles, engine sounds). The "hearing enhancement profile" is adapted to be an equalization profile that compensates for the way sound travels differently through water and the diver's skull. The final output step utilizes a bone conduction transducer pressed against the diver's temple, as standard speakers are ineffective underwater. The entire system is housed in a pressure-resistant, waterproof enclosure integrated into the mask frame.
- Mermaid Diagram:
```mermaid
sequenceDiagram
    participant SurfaceUnit
    participant DiverMask
    SurfaceUnit->>DiverMask: Transmits Hydro-Acoustic Signal
    DiverMask->>DiverMask: Receive & Demodulate
    DiverMask->>DiverMask: Process (Noise Filter + Water EQ Profile)
    DiverMask->>DiverMask: Output via Bone Conduction
```
4. Integration with Emerging Tech
Derivative 16.4: Method Utilizing Blockchain for Secure Profile Management
- Enabling Description: This method enhances security and portability of the "hearing profile." The user's audiogram and enhancement parameters are stored as a non-fungible token (NFT) or a secure record on a private blockchain. The method includes a step where the eyeglasses, upon startup, use their secure element and a wireless connection (e.g., to a smartphone) to authenticate with the blockchain and retrieve the encrypted hearing profile. This ensures that the highly sensitive medical data is secure, tamper-proof, and can be easily authorized for use on any compatible device the user owns, without being tied to a single manufacturer's cloud service.
- Mermaid Diagram:
```mermaid
flowchart TD
    subgraph UserDevice["User's Device"]
        A(Eyeglasses) -- Request Profile --> B(Paired Smartphone)
    end
    subgraph Network
        B -- Authenticates --> C(Blockchain Node)
        C -- Verifies Ownership --> C
        C -- Returns Encrypted Profile --> B
    end
    B -- Sends Profile --> A
    A -- "Decrypts & Applies Profile" --> D(Audio Processing)
```
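As a stand-in for the on-chain integrity check, the sketch below seals a profile record with an HMAC tag and verifies it before use. A real deployment would use the blockchain's own signature scheme rather than a shared-key HMAC.

```python
import hashlib
import hmac
import json

def seal_profile(profile, key):
    """Produce a tamper-evident record: canonical JSON plus an HMAC tag
    (standing in for the on-chain signature in this sketch)."""
    payload = json.dumps(profile, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def verify_profile(record, key):
    """Return the decoded profile if the tag verifies, else None."""
    expected = hmac.new(key, record["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, record["tag"]):
        return json.loads(record["payload"])
    return None
```

On startup the glasses would fetch the sealed record via the paired smartphone, verify it in the secure element, and only then hand the audiogram to the DSP.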
5. The "Inverse" or Failure Mode
Derivative 16.5: Method for Graceful Degradation of Audio Enhancement
- Enabling Description: The method incorporates a power-aware processing step. The processor continuously monitors the battery level. As the battery depletes below predefined thresholds (e.g., 50%, 25%, 10%), the "applying a hearing enhancement" step is gracefully degraded. At 50%, complex multi-band compression is reduced to simple equalization. At 25%, equalization is disabled, and only volume amplification is applied. At 10%, all processing is disabled, and the system enters an analog passthrough mode. This method prioritizes longevity of basic function over feature-richness, ensuring the user maintains at least a basic audio connection for as long as possible.
- Mermaid Diagram:
```mermaid
stateDiagram-v2
    state "Full Enhancement" as S1
    state "Reduced EQ" as S2
    state "Volume Only" as S3
    state "Analog Passthrough" as S4
    [*] --> S1: Battery > 50%
    S1 --> S2: Battery < 50%
    S2 --> S3: Battery < 25%
    S3 --> S4: Battery < 10%
    S4 --> [*]: Power Off
    S2 --> S1: Charging
    S3 --> S2: Charging
    S4 --> S3: Charging
```
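The threshold logic maps directly to a small lookup; the function below mirrors the tiers in the state diagram (tier names are illustrative).

```python
def enhancement_mode(battery_pct):
    """Map battery level to a processing tier, per the thresholds above:
    full multi-band compression, EQ only, volume only, then analog
    passthrough as power runs out."""
    if battery_pct > 50:
        return "full_enhancement"
    if battery_pct > 25:
        return "eq_only"
    if battery_pct > 10:
        return "volume_only"
    return "analog_passthrough"
```

A firmware loop would poll the fuel gauge, call this mapping, and reconfigure the DSP pipeline only when the tier actually changes, adding hysteresis around each threshold to avoid oscillation while charging.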
Combination Prior Art Scenarios
1. Combination with WebRTC for Real-Time Communication: The wearable audio system of the '693 patent is combined with the Web Real-Time Communication (WebRTC) open standard. A browser-based application on a smartphone or computer establishes a peer-to-peer audio link with another WebRTC client. The audio stream is then relayed from the smartphone to the eyeglasses via Bluetooth. The method involves receiving the WebRTC audio stream, applying the user's hearing profile for clarification, and using the glasses' microphone to send audio back into the WebRTC session. This creates a hearing-enhanced, hands-free, secure communication device that works natively with modern web applications without proprietary software.
2. Combination with the Matter IoT Standard: The wearable audio system is configured as a Matter-compliant device. This allows it to seamlessly integrate into a smart home ecosystem. The method involves using the Matter protocol over Wi-Fi or Thread (relayed via the user's phone) to receive audio notifications from other smart home devices (e.g., a doorbell, smoke alarm, or washing machine). The processor applies the hearing enhancement profile to ensure these critical alerts are audible to a hearing-impaired user. The user could also use voice commands via the glasses' microphone to control other Matter-certified devices.
3. Combination with Android Open Source Project (AOSP) Accessibility Features: The method is integrated at the operating-system level within an AOSP-based device. The hearing profile is not stored on the glasses themselves but is managed through the standard Android Accessibility settings. When the eyeglasses connect to any AOSP-compliant device (phone, tablet, etc.), the operating system automatically recognizes the glasses as an "Enhanced Audio Output" device. The OS itself then performs the audio processing and enhancement using the user's centrally stored profile before streaming the modified audio to the glasses. This makes the enhancement feature universal across the ecosystem rather than a proprietary feature of the glasses.