Google announced a new AI model for multimodal search called MUM (Multitask Unified Model) at its developer conference, Google I/O, in May. Last night, the firm announced a bunch of consumer-facing features, including visual search, that'll be coming to your screen in the next few months. The Big G currently serves you contextual information such as Wikipedia snippets, lyrics, or recipe videos based on your search phrase. Now, for the next step of search, it aims to get you results by understanding context beyond just the phrase you've used. The first of the features announced last night is visual search. When…