All audio is processed on device, but is the goal to use the public training/learning to tailor a more robust model that they can sell commercially or integrate into apps, phones, etc.?
My suspicion is that the founders/engineers will be acqui-hired for a not-insignificant sum by one of the existing big players in the teleconference space (Zoom, Google, Cisco, etc.).
Further, this is probably their goal. In fact, I wouldn't be surprised if Apple bought something like this as system-wide background noise canceling would be a fantastic OS feature.
iPhones already have this feature if you hold the phone up to your ear. There is a microphone on the back of the phone near the camera that phase-cancels background noise out of the front mic's input.
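The two-mic idea above can be sketched in a few lines. This is a toy illustration of the principle (subtracting a phase-inverted noise reference from the primary signal), not Apple's actual implementation; the signals and frequencies here are made up for demonstration.

```python
import math

rate = 8000  # samples per second (arbitrary for this toy example)
t = [i / rate for i in range(rate)]

# Hypothetical signals: a 440 Hz "voice" and a 60 Hz ambient hum.
speech = [0.8 * math.sin(2 * math.pi * 440 * x) for x in t]
noise = [0.5 * math.sin(2 * math.pi * 60 * x) for x in t]

front = [s + n for s, n in zip(speech, noise)]  # front mic: voice + noise
rear = noise                                    # rear mic: mostly just noise

# Phase cancellation: add the rear signal with inverted phase, i.e. subtract it.
clean = [f - r for f, r in zip(front, rear)]

def rms(xs):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# The residual difference between the cleaned signal and the original
# speech should be near zero, while the speech itself survives.
residual = rms([c - s for c, s in zip(clean, speech)])
```

In practice the rear mic does not capture an identical copy of the noise (different position, different acoustic path), so real systems estimate an adaptive filter rather than doing a raw subtraction, but the core idea is the same.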
They need this for the AirPods then. Everyone I call while outside in NYC says the microphone is super sensitive and picks up all the surrounding noise.
They also recently added 4 (or 5?) mics to the new iPad Pro so that it could do FaceTime calls while canceling out the feedback.
Wouldn’t be surprised if a software update makes it possible to use all of those mics for some very good ambient noise cancellation during FaceTime calls.