Google won’t ship tech from Project Astra, its wide-ranging effort to build AI apps and “agents” for real-time, multimodal understanding, until next year at the earliest.
Google CEO Sundar Pichai revealed the timeline in remarks during Google’s Q3 earnings call today. “[Google is] building out experiences where AI can see and reason about the world around you,” he said. “Project Astra is a glimpse of that future. We’re working to ship experiences like this as early as 2025.”
Project Astra, which Google demoed at its I/O developer conference in May 2024, encompasses a range of technologies, from smartphone apps that can recognize the world around them and answer related questions to AI assistants that can perform actions on a user’s behalf.
In a pre-recorded demo during I/O, Google showed a Project Astra prototype answering questions about things within view of a smartphone’s camera, like which neighborhood a user might be in or the name of a part on a broken bicycle.
The Information reported this month that Google was planning to launch a consumer-focused agent experience as early as this December — one capable of purchasing a product, booking a flight, and other such chores. That now seems unlikely — unless the experience in question is divorced from Project Astra.
Anthropic recently became one of the first companies with a large generative AI model able to control apps and web browsers on a PC. But, illustrating how challenging building AI agents can be, Anthropic’s model struggles with many basic tasks.
Kyle Wiggers is a senior reporter at TechCrunch with a special interest in artificial intelligence. His writing has appeared in VentureBeat and Digital Trends, as well as a range of gadget blogs including Android Police, Android Authority, Droid-Life, and XDA-Developers. He lives in Brooklyn with his partner, a piano educator, and dabbles in piano himself occasionally, if mostly unsuccessfully.