Patent 11871174

Derivative works

Defensive disclosure: derivative variations of each claim designed to render future incremental improvements obvious or non-novel.


Defensive Disclosure: Enhancements and Alternative Embodiments for Personalized Directional Audio Systems in Head-Worn Devices

Publication Date: May 10, 2026
Reference: Based on concepts disclosed in U.S. Patent No. 11,871,174 B1

This document describes systems, methods, and apparatuses that expand upon the core technologies of personalized directional audio integrated into eyewear. The following disclosures are intended to enter the public domain to serve as prior art for future patent applications in this and related fields.


Part 1: Derivative Embodiments of Integrated Eyewear Audio Systems

Based on the principles described in independent claims 1 and 9 of US 11,871,174.

1.1 Material & Component Substitutions

1.1.1 Biphasic Polymer Temple with Tuned Acoustic Zones

  • Enabling Description: The temple arm is constructed from a co-molded biphasic polymer. The inner surface of the acoustic chamber (1604) is molded from a high-density, rigid polymer such as polyether ether ketone (PEEK) or a ceramic-infused ABS plastic to maximize acoustic reflectivity. The remainder of the temple body (1608) is molded from a viscoelastic polymer with a high mechanical loss factor, such as a thermoplastic polyurethane (TPU) with a Shore A hardness of 70. This composite structure ensures that acoustic energy from the micro-speaker (1612) is efficiently reflected towards the acoustic port (1610) while mechanical vibrations are absorbed by the outer body, preventing tactile vibration transfer to the user's skull and minimizing sound leakage through the frame material itself. The co-molding process is achieved via a multi-shot injection molding technique.

  • Mermaid Diagram:

    graph TD;
        A[Micro-Speaker Actuation] --> B{Acoustic Waves};
        B --> C[Inner Chamber: High-Density PEEK];
        C -- Reflected Sound --> D["Acoustic Port → Ear Canal"];
        B -- Mechanical Vibration --> E[Outer Temple: Viscoelastic TPU];
        E -- Damping --> F[Vibration Attenuated];
        C -- Co-molded Interface --> E;
    

1.1.2 Piezoelectric Film Transducer and Conformal Chamber

  • Enabling Description: The conventional electro-dynamic micro-speaker is replaced with a laminated piezoelectric film transducer. This transducer, composed of a polyvinylidene fluoride (PVDF) film, is bonded directly to a flexible, curved section of the inner chamber wall. The audio signal, amplified and conditioned by a dedicated charge amplifier, causes the film to vibrate, generating sound waves. This allows the acoustic chamber to be significantly flatter and more conformal to the temple's thin profile. The chamber itself can be thermoformed from a thin sheet of polycarbonate, with the PVDF transducer applied post-forming. This reduces the overall bulk and weight of the temple arm compared to designs requiring a discrete speaker driver.

  • Mermaid Diagram:

    sequenceDiagram
        participant AMP as Audio Amplifier
        participant PVDF as Piezoelectric Film
        participant Chamber as Acoustic Chamber
        participant Ear as User's Ear
        AMP->>PVDF: Electrical Signal (Audio)
        PVDF->>PVDF: Vibrate based on signal
        PVDF->>Chamber: Generate Sound Waves
        Chamber->>Ear: Direct/Focus Sound
    

1.1.3 Liquid-Filled Acoustic Lens Port

  • Enabling Description: The acoustic port (1610) is not an open aperture but is instead sealed with a thin, flexible, and acoustically transparent membrane. The small cavity between this membrane and a secondary, outer membrane is filled with a non-toxic, inert, and low-viscosity fluid (e.g., mineral oil or a silicone-based liquid). This fluid-filled structure acts as an acoustic lens, further shaping and focusing the sound wavefront as it exits the chamber. The curvature of the membranes and the acoustic refractive index of the fluid (set by its sound-speed contrast with air) are tuned to collimate specific frequency ranges, improving directivity and perceived loudness at the user's ear while also providing a high degree of water and dust resistance (IP68 rating).

  • Mermaid Diagram:

    graph TD;
        subgraph Temple
            A[Speaker] --> B(Acoustic Chamber);
            B --> C{Acoustic Port};
        end
        subgraph Liquid Lens
            C -- Sound Waves --> D[Inner Membrane];
            D -- Vibrates --> E[Acoustic Fluid];
            E -- Transmits & Refracts --> F[Outer Membrane];
        end
        F --> G["Focused Sound Wave → Ear"];
    

1.2 Operational Parameter Expansion

1.2.1 Cryogenic/High-Temperature Operation using Shape Memory Alloy (SMA) Actuators

  • Enabling Description: For applications in extreme temperature environments (-100°C to +200°C), the electro-dynamic speaker is replaced with an acoustic diaphragm driven by a Shape Memory Alloy (SMA) wire actuator (e.g., Nitinol). An electrical current passed through the SMA wire induces a phase transition, causing it to contract and move the diaphragm. The chamber and temple are constructed from a high-performance thermoplastic like Ultem (PEI), which maintains its structural integrity across this temperature range. This design eliminates components like voice coils and magnets that can fail or have their performance significantly altered by extreme temperatures. The control electronics are housed in a separate, thermally-insulated module.

  • Mermaid Diagram:

    graph LR
        subgraph Control Module
            A[Audio Signal Processor] --> B{Pulse-Width Modulator};
        end
        subgraph Temple
            B --> C[SMA Actuator Driver];
            C -- Electrical Pulses --> D(Nitinol Wire);
            D -- Contraction/Relaxation --> E(Diaphragm);
            E -- Generates Sound --> F(Acoustic Chamber);
        end
        F --> G[Sound Port];
    
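  • Illustrative Code Sketch: The amplitude-to-heat drive described above can be sketched as a mapping from audio samples to PWM duty cycles. The 50% bias point, input clamp, and `max_duty` limit are assumptions of this sketch, not values from the disclosure.

```python
# Hypothetical audio-sample -> PWM duty mapping for the SMA wire driver.
# The 50% bias and 0.8 maximum swing are illustrative assumptions.

def sample_to_duty(sample: float, max_duty: float = 0.8) -> float:
    """Map a normalized audio sample in [-1, 1] to a PWM duty cycle.

    The SMA wire only contracts when heated, so the bipolar audio
    signal is offset around a 50% bias point to a unipolar drive level.
    """
    sample = max(-1.0, min(1.0, sample))   # clamp out-of-range input
    duty = 0.5 + 0.5 * sample * max_duty   # bias plus scaled swing
    return min(max(duty, 0.0), 1.0)
```

  In practice the pulse rate would sit far above the audio band so that the wire's thermal mass low-pass filters the pulses into smooth diaphragm motion.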

1.2.2 High-Pressure Hyperbaric Audio System

  • Enabling Description: For deep-sea diving or hyperbaric chamber use, the acoustic chamber is pressure-equalized. A micro-diaphragm pressure equalization valve, similar to those used in diving watches, is integrated into the chamber wall. This allows the internal chamber pressure to match the ambient external pressure, preventing the speaker driver from being crushed or failing to actuate. The temple body is machined from a solid billet of titanium or high-density composite to withstand pressures exceeding 10 atmospheres. Audio is transmitted to the device via hydro-acoustic modem signals received by a piezoelectric transducer on the frame, which are then decoded and converted to audible sound.

  • Mermaid Diagram:

    stateDiagram-v2
        [*] --> Ambient
        Ambient --> HighPressure: Descending
        HighPressure --> Ambient: Ascending
        state HighPressure {
            direction LR
            [*] --> Equalizing
            Equalizing --> Stable: Valve Open, Pressure Matched
            Stable --> Equalizing: Pressure Change Detected
            note right of Stable
                Speaker operates normally
                as internal/external pressures
                are balanced.
            end note
        }
    
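  • Illustrative Code Sketch: A rough check of the 10-atmosphere design target using the hydrostatic relation. The seawater constants are standard values; the valve-trigger tolerance is an assumption of this sketch.

```python
# Ambient pressure vs. depth, used to decide when the micro-diaphragm
# equalization valve must act. The tolerance value is illustrative.

SEAWATER_DENSITY = 1025.0  # kg/m^3
G = 9.81                   # m/s^2
ATM_PA = 101_325.0         # Pa per atmosphere

def ambient_pressure_atm(depth_m: float) -> float:
    """Absolute pressure in atmospheres at a given seawater depth."""
    return 1.0 + (SEAWATER_DENSITY * G * depth_m) / ATM_PA

def needs_equalization(depth_m: float, internal_atm: float, tol: float = 0.05) -> bool:
    """True when chamber pressure deviates from ambient by more than tol."""
    return abs(ambient_pressure_atm(depth_m) - internal_atm) > tol
```

  At roughly 90 m of seawater the ambient pressure approaches the 10-atmosphere figure cited above.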

1.3 Cross-Domain Applications

1.3.1 Aerospace: Cockpit Alert & Communication System

  • Enabling Description: The personal projection micro speaker system (PPMS) is integrated into the temple arms of a pilot's or astronaut's mandatory flight sunglasses or helmet visor frame. The system provides discreet, non-occluding audio alerts (e.g., stall warnings, altitude callouts, master caution tones) directly into the user's near-ear field. This allows the pilot to maintain full situational awareness of ambient cockpit sounds and radio communications through their primary headset. The audio is spatially localized, so an alert for a system on the left panel can be programmed to originate from the left speaker, providing an intuitive directional cue. The system is hardened against EMI and rapid depressurization.

  • Mermaid Diagram:

    graph TD
        subgraph Flight Computer
            FC[Flight Management System]
            EICAS[EICAS]
        end
        subgraph Eyewear System
            CPU[Central Processing Unit]
            LS[Left Temple PPMS]
            RS[Right Temple PPMS]
        end
        FC -- "Left Engine Fire" --> CPU;
        EICAS -- "Altitude Alert" --> CPU;
        CPU -- "Play 'Engine Fire' alert" --> LS;
        CPU -- "Play 'Altitude' alert" --> RS;
        LS -- Directed Sound --> LeftEar[Pilot's Left Ear];
        RS -- Directed Sound --> RightEar[Pilot's Right Ear];
    
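  • Illustrative Code Sketch: The left/right spatial cueing described above reduces to a routing table from alert type to temple speaker. The alert names and panel-side assignments below are hypothetical.

```python
# Hypothetical alert-to-speaker routing for the cockpit PPMS.
# Panel-side assignments are illustrative, not from the disclosure.

ALERT_PANEL_SIDE = {
    "LEFT_ENGINE_FIRE": "left",
    "ALTITUDE_ALERT": "right",
    "MASTER_CAUTION": "both",
}

def route_alert(alert: str) -> list[str]:
    """Return the temple speaker(s) that should play the alert."""
    side = ALERT_PANEL_SIDE.get(alert, "both")  # unknown alerts go to both ears
    if side == "both":
        return ["left_ppms", "right_ppms"]
    return [side + "_ppms"]
```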

1.3.2 Agricultural Technology (AgTech): Smart-Farm Headgear

  • Enabling Description: The PPMS is built into ruggedized safety glasses or a wide-brimmed hat used by farm workers. The system connects via Bluetooth Low Energy (BLE) to a network of IoT sensors in the field (e.g., soil moisture, crop health monitors) and farm equipment (e.g., tractor diagnostics). It provides real-time audio updates and instructions, such as "Moisture level low in Sector 4" or "Fertilizer hopper level at 15%." This hands-free, non-occluding system allows the worker to hear their environment for safety (approaching vehicles, animal sounds) while receiving critical data without needing to look at a screen.

  • Mermaid Diagram:

    sequenceDiagram
        participant IoT as Field Sensor Network
        participant Eyewear as Smart Headgear
        participant Worker as Farm Worker
        loop Real-time Monitoring
            IoT->>Eyewear: Transmit sensor data (e.g., moisture level)
            Eyewear->>Eyewear: Process data & generate voice prompt
            Eyewear-->>Worker: "Sector 4 moisture low"
        end
    
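  • Illustrative Code Sketch: The sensor-to-prompt step can be sketched as a simple threshold check; the 20% default threshold is an assumption of this sketch.

```python
# Hypothetical moisture-alert generator for the smart-farm headgear.

def moisture_prompt(sector: int, moisture_pct: float, low_threshold: float = 20.0):
    """Return a voice prompt when a sector's soil moisture is below threshold."""
    if moisture_pct < low_threshold:
        return f"Moisture level low in Sector {sector}"
    return None  # no prompt needed
```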

1.3.3 Consumer Electronics: Interactive Museum Audio Guide

  • Enabling Description: The PPMS is integrated into stylish, transparent-lens eyewear rented or sold by a museum. As a visitor approaches an exhibit, an indoor positioning system (using UWB or Bluetooth beacons) identifies their location. The eyewear automatically plays the relevant audio commentary, directed precisely to the user's ears. This creates a "personal sound bubble" that doesn't require unhygienic in-ear headphones and doesn't bleed sound to disturb other patrons. The system can overlay audio, allowing the visitor to hear both the guide and the ambient sounds of the museum.

  • Mermaid Diagram:

    graph LR
        A[Visitor with Eyewear] -- Enters Zone --> B{Beacon A};
        B -- Beacon ID --> C[Eyewear Receiver];
        C -- "Request Content for A" --> D[Museum Content Server];
        D -- "Audio Track for 'Mona Lisa'" --> C;
        C --> E[PPMS Audio Playback];
        E -- Directed Audio --> F[Visitor's Ears];
        A -- Walks to new exhibit --> G{Beacon B};
        G -- Beacon ID --> C;
    
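  • Illustrative Code Sketch: The beacon-to-content resolution step, as a minimal lookup. Beacon IDs and track names are hypothetical; a real deployment would fetch tracks from the museum content server shown in the diagram.

```python
# Hypothetical beacon-ID -> audio-track lookup for the museum guide.

EXHIBIT_BY_BEACON = {
    "beacon-A": "mona-lisa-commentary",
    "beacon-B": "winged-victory-commentary",
}

def select_track(beacon_id: str):
    """Resolve the nearest beacon to the audio track the PPMS should play."""
    return EXHIBIT_BY_BEACON.get(beacon_id)  # None when no exhibit is nearby
```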

1.4 Integration with Emerging Tech

1.4.1 AI-Driven Adaptive Acoustic Beamforming

  • Enabling Description: The single micro-speaker is replaced by a phased array of three or more MEMS (Micro-Electro-Mechanical Systems) speakers within the acoustic chamber. Two or more microphones are integrated into the exterior of the eyewear frame. An onboard neural processing unit (NPU) runs a real-time AI model that analyzes the ambient noise profile from the microphones. The model then dynamically adjusts the phase and amplitude of the signal sent to each MEMS speaker. This creates a highly focused, steerable beam of sound that can be actively directed towards the user's ear canal, and can even generate an "anti-noise" field in other directions to further minimize sound leakage and cancel external noise. The model can be trained to recognize the user's specific head-related transfer function (HRTF) for a fully personalized audio experience.

  • Mermaid Diagram:

    graph TD
        A[External Microphones] --> B["Neural Processing Unit (NPU)"];
        C[User Audio Source] --> B;
        B -- "Analyzes Ambient Noise & User HRTF" --> B;
        B -- "Calculates Phase/Amplitude Shifts" --> D{MEMS Speaker Array Controller};
        D --> E[MEMS Speaker 1];
        D --> F[MEMS Speaker 2];
        D --> G[MEMS Speaker 3];
        E -- "Phased Wavelet 1" --> H(Constructive Interference @ Ear);
        F -- "Phased Wavelet 2" --> H;
        G -- "Phased Wavelet 3" --> H;
        E & F & G -- "Destructive Interference" --> I(Minimized Sound Leakage);
    
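  • Illustrative Code Sketch: For a uniform line array, the per-speaker phase control described above reduces to time delays, sketched below as classic delay-and-sum steering. The element count, pitch, and angles in the example are illustrative; a real system would refine the delays with the user's measured HRTF.

```python
import math

# Delay-and-sum steering for an n-element MEMS line array (sketch).

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def steering_delays(n_elements: int, pitch_m: float, angle_deg: float) -> list[float]:
    """Per-element delays (seconds) that steer the main lobe to angle_deg
    off broadside for a uniformly spaced line array."""
    theta = math.radians(angle_deg)
    step = pitch_m * math.sin(theta) / SPEED_OF_SOUND
    delays = [i * step for i in range(n_elements)]
    ref = min(delays)  # shift so no delay is negative
    return [d - ref for d in delays]
```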

1.4.2 IoT-Enabled Environmental Context Awareness

  • Enabling Description: The eyewear's electronic module includes a multi-sensor IoT package (accelerometer, gyroscope, GPS, ambient light sensor, barometer). This data is fused and processed locally to determine the user's context (e.g., walking outdoors, driving, sitting in a quiet office). The system automatically adjusts the audio profile based on this context. For example, when the accelerometer detects a running cadence, it may boost bass frequencies and enable a "transparency mode" that mixes in more ambient sound for safety. When the GPS detects the user is in a vehicle moving over 15 mph, it may increase the overall volume and use noise-cancelling microphone arrays for phone calls. The device state and context data can be published to an MQTT broker for integration with other smart devices or life-logging applications.

  • Mermaid Diagram:

    classDiagram
        class EyewearDevice {
          +sensorSuite: IoT_Sensors
          +audioProcessor: AudioDSP
          +ppms: PPMS_Speaker
          +getContext()
          +adjustAudioProfile()
        }
        class IoT_Sensors {
          +accelerometer
          +gyroscope
          +gps
          +barometer
        }
        class AudioDSP {
          +volume
          +equalizer_settings
          +noise_cancellation_level
        }
        EyewearDevice "1" -- "1" IoT_Sensors : contains
        EyewearDevice "1" -- "1" AudioDSP : controls
    
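  • Illustrative Code Sketch: The context-to-profile logic described above, as a heuristic decision function. The 15 mph figure comes from the text; the running-cadence threshold and the profile fields are assumptions of this sketch.

```python
# Hypothetical context classifier for the sensor-fusion behaviour above.

def audio_profile(cadence_spm: float, speed_mph: float) -> dict:
    """Pick an audio profile from fused motion and GPS data."""
    if speed_mph > 15:        # in a vehicle, per the description above
        return {"volume": "high", "transparency": False, "call_noise_cancel": True}
    if cadence_spm > 140:     # running cadence (assumed threshold)
        return {"volume": "normal", "transparency": True, "bass_boost": True}
    return {"volume": "normal", "transparency": False}  # quiet/stationary default
```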

1.5 The "Inverse" or Failure Mode

1.5.1 Graceful Degradation & Failsafe Audio Mode

  • Enabling Description: The system incorporates a power management integrated circuit (PMIC) that monitors the battery level. When the battery drops below a critical threshold (e.g., 5%), the PMIC triggers a "Limp-Home Mode." In this mode, the main power-hungry digital signal processor (DSP) and wireless radio are shut down. A secondary, ultra-low-power pathway is enabled, connecting a single, high-impedance piezoelectric buzzer directly to a simplified audio alert generator circuit. This allows the device to still provide essential, pre-programmed audible alerts (e.g., a low-battery tone, a "find my device" ping) using minimal power, long after normal audio streaming has ceased. The acoustic port is designed with a secondary, passive Helmholtz resonator channel that amplifies the specific frequency of the piezoelectric buzzer, making the low-power alert audible without electronic amplification.

  • Mermaid Diagram:

    stateDiagram-v2
        state "Full Power Mode" as Full
        state "Limp-Home Mode" as Limp
        Full : DSP Active, Bluetooth On, Stereo Audio
        Limp : DSP/BT Off, Piezo Buzzer Active
    
        [*] --> Full : Power On
        Full --> Limp : Battery < 5%
        Limp --> Full : Charging
        Limp --> [*] : Battery Depleted
        Full --> [*] : Power Off
    
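  • Illustrative Code Sketch: The passive resonator channel can be tuned with the standard Helmholtz lumped-element approximation; the neck and cavity dimensions in the example are illustrative, not from the disclosure.

```python
import math

# Helmholtz resonator tuning for the failsafe buzzer channel (sketch).

SPEED_OF_SOUND = 343.0  # m/s

def helmholtz_freq(neck_area_m2: float, cavity_vol_m3: float, neck_len_m: float) -> float:
    """Resonant frequency (Hz) of a Helmholtz resonator:
    f = (c / (2 * pi)) * sqrt(A / (V * L))."""
    return (SPEED_OF_SOUND / (2 * math.pi)) * math.sqrt(
        neck_area_m2 / (cavity_vol_m3 * neck_len_m))
```

  For example, a 1 mm² neck of 2 mm effective length on a 0.1 cm³ cavity resonates near 3.9 kHz, a plausible piezo buzzer frequency.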

Part 2: Derivative Embodiments of Modular Eyewear Audio Systems

Based on the principles described in independent claim 15 of US 11,871,174.

2.1 Material & Component Substitutions

2.1.1 Overmolded Elastomeric Interlock with Magnetic Pogo Pin Interface

  • Enabling Description: The temple interlock (e.g., 920, 922) is constructed from a soft, high-friction silicone elastomer overmolded onto a semi-rigid internal skeleton. This allows the interlock to stretch and conform to a wide variety of temple shapes and sizes, from thin wire frames to thick acetate arms. Instead of a direct physical connector, the electrical interface between the smart cord (926, 928) and the interlock module utilizes a set of gold-plated pogo pins on the cord side and a corresponding set of surface pads on the interlock, with neodymium magnets embedded in both housings to ensure precise, self-aligning, and robust electrical contact. This makes attaching and detaching the module effortless and weather-resistant.

  • Mermaid Diagram:

    graph TD;
        A[Eyewear Temple]
        B[Silicone Interlock Body]
        C[Internal Semi-Rigid Skeleton]
        D{Magnetic Pogo Pin Connector}
        E[Smart Cord]
        F[PPMS Module in Interlock]
    
        A -- "Slips Into" --> B;
        C -- "Embedded In" --> B;
        F -- "Housed In" --> B;
        E -- "Attaches to" --> D;
        D -- "Mates with" --> F;
    

Part 3: Combination with Open-Source Standards

3.1 Web Real-Time Communication (WebRTC) Native Integration

  • Enabling Description: The firmware of the eyewear's main processor (804) includes a full WebRTC stack. The device can connect directly to a Wi-Fi network (624) and establish a peer-to-peer, encrypted audio (and optionally video, if a camera is present) communication session with any other WebRTC-compliant device (e.g., a web browser, another pair of smart glasses) without requiring a smartphone as an intermediary. The integrated microphones are used for voice capture, and the Personal Projection Micro Speaker (PPMS) system serves as the audio output. This enables hands-free, low-latency communication for applications like remote assistance, where a remote expert can speak directly to a field technician as if they were standing next to them.

  • Mermaid Diagram:

    sequenceDiagram
        participant UserA as Eyewear A
        participant SignalingServer as STUN/TURN Server
        participant UserB as Web Browser
        UserA->>SignalingServer: I am UserA at IP:Port
        UserB->>SignalingServer: I am UserB at IP:Port
        SignalingServer-->>UserA: UserB is at IP:Port
        SignalingServer-->>UserB: UserA is at IP:Port
        UserA->>UserB: Establish Peer-to-Peer WebRTC Connection
        loop Audio Stream
            UserA->>UserB: Microphone Audio
            UserB->>UserA: Speaker Audio (to PPMS)
        end
    

3.2 Android Open Source Project (AOSP) Standardized HAL

  • Enabling Description: A new Hardware Abstraction Layer (HAL) for "Directional Audio" is defined and proposed for inclusion in AOSP. This HAL provides a standardized interface for the Android OS to control advanced features of audio eyewear. The interface includes methods like setBeamformingAngle(float angle), setLeakageCancellationLevel(int level), and getHeadRelatedTransferFunction(). Eyewear manufacturers can implement this HAL in their device drivers. This allows any third-party Android application with the appropriate permissions to finely control the audio output, enabling use cases like augmented reality apps that can place virtual sound sources in 3D space relative to the user's head, or communication apps that can optimize the audio beam for clarity in noisy environments.

  • Mermaid Diagram:

    graph TD
        subgraph Android Application
            A[AR Game App]
        end
        subgraph Android OS Framework
            B[AudioManager API]
        end
        subgraph Directional Audio HAL
            C["IDirectionalAudio.hal"]
            D["setBeamformingAngle()"]
            E["setLeakageCancellation()"]
        end
        subgraph Vendor Specific Driver
            F[PPMS Driver]
        end
        subgraph Hardware
            G[Eyewear PPMS Hardware]
        end
    
        A -->|Request Spatial Audio| B
        B -->|Calls HAL Interface| C
        C -->|"Executes Methods (D, E)"| F
        F -->|Controls Hardware| G
    
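  • Illustrative Code Sketch: The HAL surface named above (setBeamformingAngle, setLeakageCancellationLevel, getHeadRelatedTransferFunction) is mirrored here as a Python interface sketch for clarity; a production AOSP HAL would be defined in AIDL or HIDL, and the mock driver is purely illustrative.

```python
from abc import ABC, abstractmethod

# Python mirror of the proposed IDirectionalAudio HAL surface (sketch).

class IDirectionalAudio(ABC):
    @abstractmethod
    def set_beamforming_angle(self, angle: float) -> None: ...

    @abstractmethod
    def set_leakage_cancellation_level(self, level: int) -> None: ...

    @abstractmethod
    def get_head_related_transfer_function(self) -> bytes: ...

class MockPpmsDriver(IDirectionalAudio):
    """In-memory stand-in for a vendor driver, for app-side testing."""

    def __init__(self) -> None:
        self.angle = 0.0
        self.level = 0

    def set_beamforming_angle(self, angle: float) -> None:
        self.angle = angle

    def set_leakage_cancellation_level(self, level: int) -> None:
        self.level = level

    def get_head_related_transfer_function(self) -> bytes:
        return b""  # placeholder for a per-user HRTF blob
```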

3.3 RISC-V Open Standard for Audio Processing Core

  • Enabling Description: The central processing unit (CPU) within the Temple Insert Module (TIM) or electronics pod (ePOD) is designed around an open-source RISC-V ISA core (e.g., a "Rocket" or "BOOM" core). This avoids licensing fees and allows for deep customization of the processor for audio-centric tasks. Custom instructions are added to the RISC-V core specifically for accelerating audio processing algorithms like FFTs (Fast Fourier Transforms), acoustic echo cancellation (AEC), and multi-channel audio mixing. The entire system, from the processor RTL to the RTOS (e.g., Zephyr or FreeRTOS) and the audio codec libraries, is built on open-source components, enabling a fully auditable and customizable platform for secure communication and specialized audio research applications.

  • Mermaid Diagram:

    graph BT
        subgraph Hardware
            A[Custom RISC-V Core]
            B[DSP Extensions]
            C[Memory]
            D[Peripherals]
        end
        subgraph Software
            E[Zephyr RTOS]
            F["Open Source Codec (e.g., Opus)"]
            G[AEC/Beamforming Library]
            H[Application Logic]
        end
    
        A -- Has --> B
        A -- Accesses --> C
        A -- Controls --> D
        E -- Runs on --> A
        F -- Runs on --> E
        G -- Runs on --> E
        H -- Utilizes --> F & G
    
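  • Illustrative Code Sketch: One kernel the custom RISC-V instructions would target is the radix-2 FFT; its butterfly stage (a complex multiply-accumulate) is the inner loop an ISA extension accelerates. Pure Python, as a functional reference only.

```python
import cmath

# Reference radix-2 decimation-in-time FFT; len(x) must be a power of two.

def fft(x: list) -> list:
    """Recursive radix-2 FFT; each level applies the butterfly
    even[k] +/- twiddle(k) * odd[k] that a hardware MAC would fuse."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddled[k] for k in range(n // 2)] +
            [even[k] - twiddled[k] for k in range(n // 2)])
```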
