Microsoft uses AI to transform smartphones into meeting room mics

Microsoft is presenting a research paper this week at Interspeech 2019 in Austria, entitled 'Meeting Transcription Using Asynchronous Distant Microphones', which shows how meeting participants could use the microphones already built into their smartphones, laptops, and tablets instead of specially designed meeting-room mics.

The full details are posted in a blog post on the Microsoft website.

The central idea is to use the internet-connected devices that attendees typically bring to meetings, such as laptops and smartphones, to form an ad hoc microphone array in the cloud. With this approach, teams could use the smartphones, laptops, and tablets they already have for high-accuracy transcription, without needing special-purpose hardware.
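To make the idea concrete, the client side of such a system could be as simple as each device recording short audio chunks and posting them to a cloud service that assembles the array. The sketch below is purely illustrative and not Microsoft's implementation; the endpoint URL, meeting ID, chunk length, and the use of the Python sounddevice library are all assumptions.

```python
# Hypothetical device-side role in an ad hoc microphone array: record short
# audio chunks from the local microphone and upload them to a cloud service,
# which does the alignment, beamforming, and transcription. The service URL
# and chunking scheme are placeholders, not Microsoft's API.
import io
import wave

import requests
import sounddevice as sd

SAMPLE_RATE = 16000          # 16 kHz mono is typical for speech recognition
CHUNK_SECONDS = 5            # upload audio in short chunks
SERVICE_URL = "https://example.com/meetings/1234/audio"  # placeholder endpoint


def record_chunk(seconds: int) -> bytes:
    """Record a mono 16-bit PCM chunk from the device's default microphone."""
    frames = sd.rec(int(seconds * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                    channels=1, dtype="int16")
    sd.wait()  # block until the recording is finished

    # Wrap the raw samples in a WAV container so the server knows the format.
    buffer = io.BytesIO()
    with wave.open(buffer, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)       # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(frames.tobytes())
    return buffer.getvalue()


if __name__ == "__main__":
    # Stream a few chunks; a real client would run until the meeting ends.
    for _ in range(3):
        chunk = record_chunk(CHUNK_SECONDS)
        requests.post(SERVICE_URL, data=chunk,
                      headers={"Content-Type": "audio/wav"}, timeout=10)
```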

"While the idea sounds simple, it requires overcoming many technical challenges to be effective. The audio quality of devices varies significantly. The speech signals captured by different microphones are not aligned with each other. The number of devices and their relative positions are unknown. For these reasons and others, consolidating the information streams from multiple independent devices in a coherent way is much more complicated than it may seem. In fact, although the concept of ad hoc microphone arrays dates back to the beginning of this century, to our knowledge it has not been realized as a product or public prototype so far. Meanwhile, techniques for combining multiple information streams were developed in different research areas. At the same time, general advances in speech recognition, especially via the use of neural network models, have helped bring transcription accuracy closer to usable levels.
[Diagram: Microsoft cloud mics – the processing pipeline for the ad hoc microphone array]

"The diagram shown above depicts the resulting processing pipeline. It starts with aligning signals from different microphones, followed by blind beamforming. The term 'blind' refers to the fact that beamforming is achieved without any knowledge about the microphones and their locations. This is achieved by using neural networks optimised to recover input features for acoustic models, as we reported previously. This beamformer generates multiple signals so that the downstream modules (speech recognition and speaker diarisation) can still leverage the acoustic diversity offered by the random microphone placement. After speech recognition and speaker diarisation, the speaker-annotated transcripts from multiple streams are consolidated by combining confusion networks that encode both word and speaker hypotheses and they are sent back to the meeting attendees. After the meeting, the attendees can choose to keep the transcripts available only to themselves or share them with specified people."

The work published at Interspeech 2019 is part of a longer-term, focused research effort codenamed Project Denmark.
