Target Speech Hearing


The latest version of Apple’s AirPods Pro automatically adjusts sound levels for wearers, sensing when they’re in conversation, but the user has little control over whom to listen to or when this happens.

A University of Washington team has developed an artificial intelligence system that lets a user wearing headphones look at a person speaking for three to five seconds to “enroll” them. 

The system, called “Target Speech Hearing,” then cancels all other sounds in the environment and plays only the enrolled speaker’s voice in real time, even as the listener moves around in noisy places and no longer faces the speaker.
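The post doesn't describe the underlying model, but the pipeline it outlines (a few seconds of noisy enrollment audio yields a speaker embedding, which then conditions a real-time extraction network) can be sketched roughly. The PyTorch code below is a hypothetical illustration only; the class names, network shapes, and feature sizes are assumptions for clarity, not the researchers' implementation.

```python
# Minimal sketch of an enroll-then-extract pipeline (illustrative, not the
# authors' code): a short "look at the speaker" clip produces an embedding,
# and a separate network uses that embedding to keep only that voice.
import torch
import torch.nn as nn

class EnrollmentEncoder(nn.Module):
    """Maps a short (3-5 s) noisy enrollment clip to a speaker embedding."""
    def __init__(self, n_mels: int = 64, embed_dim: int = 128):
        super().__init__()
        self.rnn = nn.GRU(input_size=n_mels, hidden_size=embed_dim, batch_first=True)

    def forward(self, mel_frames: torch.Tensor) -> torch.Tensor:
        # mel_frames: (batch, time, n_mels) from the enrollment window
        _, h = self.rnn(mel_frames)
        return h[-1]                      # (batch, embed_dim) speaker embedding

class TargetSpeechExtractor(nn.Module):
    """Mask-estimation network conditioned on the enrolled speaker embedding."""
    def __init__(self, n_mels: int = 64, embed_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.rnn = nn.GRU(input_size=n_mels + embed_dim, hidden_size=hidden,
                          batch_first=True)
        self.mask = nn.Sequential(nn.Linear(hidden, n_mels), nn.Sigmoid())

    def forward(self, mixture: torch.Tensor, speaker: torch.Tensor) -> torch.Tensor:
        # mixture: (batch, time, n_mels); speaker: (batch, embed_dim)
        cond = speaker.unsqueeze(1).expand(-1, mixture.size(1), -1)
        out, _ = self.rnn(torch.cat([mixture, cond], dim=-1))
        return mixture * self.mask(out)   # suppress everything but the enrolled voice

# Usage: enroll once, then stream the noisy mixture through the extractor.
enc, tse = EnrollmentEncoder(), TargetSpeechExtractor()
embedding = enc(torch.randn(1, 400, 64))          # ~4 s enrollment clip
clean = tse(torch.randn(1, 100, 64), embedding)   # ~1 s chunk of noisy audio
```

The key point the article describes is that enrollment happens once, during the three-to-five-second look at the speaker; after that, only the extraction step needs to run on each incoming chunk of audio, which is what allows playback to continue in real time as the listener moves around.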
