Microsoft has announced a set of updates to Microsoft Teams, bringing new AI-driven improvements to audio and video quality.
The new features include echo cancellation, audio adjustment for poor acoustic environments, and the ability for users to speak and hear at the same time without interruption.
Microsoft uses AI to recognise the difference between sound from a loudspeaker and the user’s voice, eliminating echo without suppressing speech or preventing multiple users from speaking simultaneously.
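Microsoft has not published details of its AI echo canceller, but the underlying problem is classically solved with an adaptive filter that models the path from the loudspeaker to the microphone and subtracts the predicted echo. The sketch below is an illustrative normalised-LMS (NLMS) baseline, not Teams' actual method; the function name and parameters are assumptions for demonstration.

```python
import numpy as np

def nlms_echo_cancel(far_end, mic, filter_len=64, mu=0.5, eps=1e-8):
    """Illustrative NLMS echo canceller (not Microsoft's AI approach).

    far_end: samples played through the loudspeaker (the remote voice)
    mic:     samples captured by the microphone (echo + local voice)
    Returns the mic signal with the estimated echo removed.
    """
    w = np.zeros(filter_len)              # adaptive estimate of the echo path
    out = np.array(mic, dtype=float)      # early samples pass through unchanged
    for n in range(filter_len - 1, len(mic)):
        x = far_end[n - filter_len + 1:n + 1][::-1]  # newest far-end sample first
        echo_est = w @ x                   # predicted loudspeaker echo at the mic
        e = mic[n] - echo_est              # residual = local (near-end) speech
        w += mu * e * x / (x @ x + eps)    # normalised LMS weight update
        out[n] = e
    return out
```

Because the filter only models what correlates with the far-end reference, the local speaker's voice survives in the residual, which is the property the article describes: echo is removed without suppressing simultaneous speech.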
A machine learning model converts the captured audio so that it sounds as though the user is speaking into a close-range microphone, reducing reverberation.
A similar AI model, trained on 30,000 hours of speech samples, retains desired voices while suppressing unwanted audio signals, allowing natural interruptions and more fluid conversation on a call.
Machine-learning-based noise suppression is now enabled by default for Teams users on Windows (including Microsoft Teams Rooms), as well as on Mac and iOS. Support for Teams on Android and the web is planned for a future release.
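Teams' noise suppression is a deep model trained on speech data, and Microsoft has not released it. As a rough illustration of the idea of keeping speech while attenuating background noise, the sketch below uses classical spectral subtraction: it estimates the noise spectrum from the first few frames (assumed speech-free) and scales down frequency bins near that level. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def spectral_gate(signal, frame=256, noise_frames=8, floor=0.1):
    """Toy spectral-subtraction noise suppressor (not the Teams model).

    Assumes the first `noise_frames` frames contain only noise, which is
    used to estimate the noise magnitude spectrum.
    """
    window = np.hanning(frame)
    hop = frame // 2
    n_frames = 1 + (len(signal) - frame) // hop
    # Short-time Fourier transform of overlapping windowed frames
    spec = np.stack([np.fft.rfft(window * signal[i * hop:i * hop + frame])
                     for i in range(n_frames)])
    noise_mag = np.abs(spec[:noise_frames]).mean(axis=0)   # noise estimate
    mag = np.abs(spec)
    # Attenuate bins close to the noise floor; keep bins well above it
    gain = np.maximum(1 - noise_mag / (mag + 1e-12), floor)
    spec *= gain
    # Overlap-add resynthesis back to a time-domain signal
    out = np.zeros(len(signal))
    for i in range(n_frames):
        out[i * hop:i * hop + frame] += window * np.fft.irfft(spec[i], n=frame)
    return out
```

A learned model improves on this baseline chiefly by distinguishing speech from non-stationary noise (keyboard clicks, barking dogs) that a fixed noise estimate cannot track.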
Video features

AI optimisation is also available for video feeds, adjusting playback in challenging bandwidth conditions so that speakers can continue to use video and screen sharing. Additionally, an AI-powered filter in Teams allows brightness adjustment and adds a soft focus for meetings, with a toggle in the device settings category to accommodate low-light environments.
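The low-light filter combines two familiar image adjustments: lifting dark tones and softening the result. The sketch below is a hypothetical illustration using a gamma lift for brightness and a small box blur as the "soft focus"; Teams' actual filter is AI-driven, and the function name and parameters are invented for this example.

```python
import numpy as np

def lowlight_filter(frame, gamma=0.6, blur=1):
    """Hypothetical low-light filter: gamma lift plus a soft-focus blur.

    frame: 2D uint8 greyscale image; returns an adjusted uint8 image.
    """
    img = frame.astype(np.float64) / 255.0
    img = img ** gamma                       # gamma < 1 brightens shadows
    if blur:
        k = 2 * blur + 1                     # separable box-blur kernel width
        kernel = np.ones(k) / k
        for axis in (0, 1):                  # blur rows, then columns
            img = np.apply_along_axis(
                lambda m: np.convolve(m, kernel, mode="same"), axis, img)
    return (np.clip(img, 0, 1) * 255).astype(np.uint8)
```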
Real-time screen optimisation is also available, adjusting for the content shared on screen. This technique uses machine learning to detect the characteristics of the content being presented in real time and adjust encoding accordingly, optimising the legibility of documents or smoothing video playback.
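Microsoft's detector is a proprietary ML model, but the detect-then-adjust idea can be illustrated with a simple heuristic: document and text frames have far more sharp edges than natural video, so edge density alone separates the two cases. The sketch below is a toy stand-in, with an assumed function name and threshold, not the Teams algorithm.

```python
import numpy as np

def shared_content_type(frame, edge_thresh=0.15):
    """Toy content classifier for screen sharing (illustrative only).

    frame: 2D greyscale image. Text/document frames have high edge
    density and suit crisp, high-fidelity encoding at a low frame rate;
    video frames have low edge density and suit smooth motion instead.
    """
    img = frame.astype(np.float64)
    gx = np.abs(np.diff(img, axis=1)).mean()   # mean horizontal gradient
    gy = np.abs(np.diff(img, axis=0)).mean()   # mean vertical gradient
    edge_density = (gx + gy) / 255.0
    return "document" if edge_density > edge_thresh else "video"
```

An encoder could then trade frame rate for spatial fidelity when "document" is detected, which matches the behaviour the article describes.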