A few highlights from Google's annual developer conference in California: the main events of the first day.
Google goes all-in on artificial intelligence and rebrands its research unit as Google AI.
Just before the keynote, Google announced that it was rebranding its Google Research division as Google AI. The move signals how heavily Google is focusing its R&D on computer vision, natural language processing, and neural networks.
Google makes talking to its voice assistant more natural.
What Google announced: the Google Assistant will carry on conversations in a more natural way. Instead of saying "Hey Google" or "OK Google" before every command, you now only need to say it once at the start of a conversation. The company is also adding a feature that lets you ask multiple questions in a single request. All of this will roll out in the coming weeks.
Why it matters: in a typical conversation, you are likely to ask follow-up questions if you don't get the answer you need. But saying "Hey Google" every time is annoying; it breaks the flow and makes the whole exchange feel unnatural. If Google wants to be a leading player in voice interfaces, interacting with its assistant should feel like a conversation, not just a series of requests.
Google Photos gets a boost from artificial intelligence.
What Google announced: Google Photos already makes it easy to fix photos with built-in editing tools and AI-powered features that automatically create collages, movies, and stylized photos. Photos is now getting more AI-based features, such as colorizing black-and-white photos and suggesting brightness and rotation corrections. The new version of the Google Photos app offers quicker fixes and adjustments.
Why it matters: Google is working to become the storage hub for all your photos, and it can attract users by offering powerful tools for editing, sorting, and transforming them. Every additional photo gives Google more data and helps it recognize images better and better, which ultimately improves not only the user experience but also Google's own tools and services. Google is, at its core, a search company, and visual search takes a lot of data.
Google Assistant and YouTube come to smart displays.
What Google announced: smart displays. At the conference we got a little more information about the company's efforts in this direction. The first Google smart displays will arrive in July and, of course, will feature the Google Assistant and YouTube. It is already clear that the company has invested real resources in this first version of a visual Assistant, adding a video interface on top of the voice one.
Why it matters: users are increasingly getting used to having a smart device in the living room that answers their questions. But Google wants to build a system where the user can ask a question and then get a visual display for tasks that a voice interface alone simply cannot handle. The Google Assistant handles the voice side of that equation, and having YouTube on board makes interacting with the visual assistant easy.
Google Assistant comes to Google Maps.
What Google announced: the Google Assistant is coming to Google Maps, available on iOS and Android this summer. Adding the Assistant is meant to give users better recommendations. Google has long been working to make Maps more personalized, and since Maps does far more than navigation, the company is introducing new features to give users better local recommendations.
The Maps integration also combines the camera, computer-vision technology, and Google Maps with Street View. With the camera/map combination, it really feels as if you are standing inside Street View. Google Lens can identify buildings or even dog breeds: just point the camera at the subject. It can also recognize text.
Why it matters: Maps is one of Google's largest and most important products. A huge market is forming around augmented-reality technology (just remember the Pokémon Go phenomenon), and companies are only beginning to feel out the best uses for it. Augmented reality seems like such a natural use case for the camera, and while the feature is something of a technical feat, it gives Google another advantage with its users. It is one more way to keep users inside the Google ecosystem instead of losing them to alternatives. Again, at Google everything revolves around data, and Google can capture more data when users stay inside its apps.
Google announces a new generation of its machine-learning hardware.
What Google announced: as the race to build custom AI hardware heats up, Google said it is releasing its third-generation processor, the Tensor Processing Unit 3.0. Google CEO Sundar Pichai said the new TPU is eight times more powerful than last year's, with performance of up to 100 petaflops. Google joins almost every other large company seeking to build custom silicon for its AI workloads.
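Taking the article's figures at face value, the claimed 8x generational jump implies roughly where the previous TPU pod landed. A quick back-of-the-envelope check in Python (the numbers are the article's quoted claims, not an official spec sheet):

```python
# Back-of-the-envelope math using the figures quoted in the article:
# a TPU 3.0 pod delivers ~100 petaflops and is ~8x last year's generation.
tpu_v3_pod_petaflops = 100.0
generational_speedup = 8.0

# Implied performance of the previous-generation (TPU 2.0) pod.
tpu_v2_pod_petaflops = tpu_v3_pod_petaflops / generational_speedup

print(f"Implied TPU 2.0 pod performance: {tpu_v2_pod_petaflops:.1f} petaflops")
# → Implied TPU 2.0 pod performance: 12.5 petaflops
```

For comparison, the figure Google itself gave for a TPU 2.0 pod in 2017 was 11.5 petaflops, so the "8x" claim appears to be a rounded marketing number rather than an exact ratio.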
Why it matters: there is a race to build the best machine-learning tools for developers. Whether at the platform level, with frameworks such as TensorFlow or PyTorch, or at the hardware level itself, a company that can keep developers in its ecosystem will have an edge over its competitors. This is especially important as Google tries to grow its cloud platform while competing with Amazon's AWS and Microsoft Azure. By giving the many developers who already use TensorFlow a way to speed things up, Google can continue to draw more and more of them into its ecosystem.
Google News is getting an AI-based redesign.
What Google announced: watch out, Facebook! Google also plans to use AI in an updated version of Google News. The refreshed, AI-powered news app will "let users keep up with the news they care about, understand the full picture of what is happening in the world, and enjoy and support their favorite publishers." The new design borrows elements from Google's digital magazine app, Newsstand, and from YouTube, and introduces features such as "newscasts" and "full coverage" to help people get a more complete view of a story.
Why it matters: Facebook's main product is literally called the News Feed, and it serves as the primary source of information for a non-trivial share of the planet. But Facebook is embroiled in a scandal over the personal data of 87 million users that ended up in the hands of a political research firm, and there are many questions about Facebook's algorithms and whether the company can stay on the right side of the law. This is a major Facebook stumble that Google can exploit by offering a better news product and, once again, locking users into its ecosystem.