AI continues to evolve at breakneck speed, and this week brings a host of exciting announcements, especially with CES 2025 taking center stage. From Nvidia’s hardware announcements to Sam Altman’s thoughts on AGI, there’s a lot to unpack. Let me simplify it for you—here’s everything exciting and new in AI this week.
Must read: Did you catch our last Week in AI? Give it a read to stay up to date on everything that happened last week.
Nvidia Unveils RTX 50 Series GPUs
Nvidia stole the spotlight at CES this year with a series of major announcements, including the much-anticipated RTX 50 Series GPUs. These GPUs, built on Nvidia’s new Blackwell architecture, are designed to meet the needs of gamers, video editors, and especially AI enthusiasts. With up to twice the performance of the previous generation, they can run many generative AI models locally.
The lineup features four GPUs: the RTX 5070, RTX 5070 Ti, RTX 5080, and RTX 5090. The standout is the RTX 5070, Nvidia’s budget-friendly offering: it delivers performance comparable to last year’s top-tier RTX 4090 for just $549, about a third of the RTX 4090’s price.
Nvidia Launches DIGITS, a Personal AI Supercomputer
Another announcement from Nvidia is DIGITS, a personal AI supercomputer. Picture a Mac-mini-sized device capable of running AI models with up to 200 billion parameters, yet available to end users like you and me. At its heart is Nvidia’s Grace Blackwell Superchip, paired with 128GB of memory and 4TB of NVMe SSD storage.
Essentially, it’s like having an AI server on your desk, designed to let you run or build AI models locally. DIGITS runs on Nvidia’s DGX OS, a system based on open-source Ubuntu Linux, and supports Nvidia’s AI tools as well as popular AI frameworks. Priced at $3,000, it’s expected to hit the market in May 2025.
DeepMind Develops World Simulation Models for AI Training
DeepMind is working on AI models that simulate real-world environments. These are perfect for training robots and autonomous systems without needing real-world data. Imagine testing a self-driving car in a snowy environment or optimizing a factory layout—all in a virtual space. This tech speeds up AI development while cutting costs, making the end products more affordable.
Google Introduces Daily Listen for Personalized AI-Generated Podcasts
Google’s new Daily Listen feature turns your Google Discover feed into a personalized, AI-generated podcast. Think of it as your own custom daily news update in podcast form.
As in Google’s NotebookLM, the podcast features two AI hosts. The feature is rolling out gradually, so keep an eye out for it.
Microsoft Open-Sources Phi-4 Model
Phi-4, Microsoft’s latest small language model, is now open-source and available on Hugging Face. It’s built for tasks like math, multilingual problem-solving, and functional code generation. At just 14 billion parameters, it’s lightweight yet powerful, making it an excellent option for developers and users who want to run smaller AI models locally or integrate them into their apps.
xAI Releases Grok as a Standalone iPhone App
Grok, the AI chatbot from Elon Musk’s xAI, now has an iPhone app in the US. Previously, Grok was only accessible through the X (formerly Twitter) website or app.
While the new app is simple and stripped-down, it promises less censorship than other AI chatbots. Whether you’re looking to chat or generate creative content, Grok is free to try, making it worth checking out.
Sam Altman Outlines Plans for AGI and ASI
OpenAI’s CEO, Sam Altman, says the company is now confident it knows how to build AGI (Artificial General Intelligence) and is turning its aim toward ASI (Artificial Superintelligence). AGI refers to AI roughly as capable as humans, while ASI goes beyond human intelligence. According to Altman, we could see AI agents transforming workplaces as early as 2025. Too soon for humanity?
Gaze-LLE Tracks Eye Focus in Videos and Images
This AI tool predicts where someone is looking in an image or video. For example, you can upload a video, and it will show where each person in the frame is looking. The AI generates heatmaps to highlight focus areas, allowing you to analyze attention in real time.
Gaze-LLE is an open-source model, so you can download it to run locally or try it on platforms like Hugging Face and Google Colab. It’s useful for surveillance, research, and interactive experiences.
StereoCrafter Converts 2D Videos Into 3D
This new AI model can transform 2D videos into 3D with ease. Normally, creating a 3D effect requires a VFX artist to manually separate each layer for depth, but this model automates the entire process. Once converted, you can watch the video with classic anaglyph 3D glasses.
It doesn’t stop there, though. StereoCrafter can also make videos compatible with VR headsets like the Apple Vision Pro: it uses your input as the left-eye view and generates the corresponding right-eye view, creating an immersive VR experience. The model is open-source and available on GitHub.
Razer Introduces Project Ava, an AI Coach for Gamers
Razer’s Project Ava is an AI-powered gaming assistant built to take your gaming skills to the next level. The tool analyzes your gameplay in real time, identifying attack patterns, pinpointing mistakes, and suggesting smarter strategies. The gaming community is divided on this one, as some feel it amounts to cheating and takes the shine away from gamers who rely on experience, skill, and strategy to win.
Whether you’re battling a tough boss or strategizing for competitive multiplayer, Ava has your back. Once the game ends, it generates detailed post-game reports, including stats, replays of critical moments, and personalized advice to help you improve. Think of it as a professional coach who’s always by your side, ready to help you level up with every match.
Stability AI’s Spar3D Generates 3D Models From Single Images
Stability AI’s Spar3D is an AI-powered tool that generates a 3D model from a single image in under a second. It claims to create accurate, detailed 3D representations for AR, VR, game design, and animation using advanced point-cloud and mesh-building techniques.
Spar3D also supports real-time editing—you can tweak models, change colors, and reshape objects on the fly. It’s fast and accessible, making it suitable for designers and developers looking to save time.
VLC Debuts AI-Generated Subtitles and Translations
At CES this year, VLC introduced a new AI feature that provides real-time subtitles and translations for videos in over 100 languages. The best part? It works entirely offline, meaning no internet connection is needed. This not only ensures faster processing but also protects your privacy. There’s no word yet on when the feature will be available.
Adobe Enables Transparent Video Creation
Adobe’s new AI feature allows users to generate videos with transparent backgrounds. You can use it to generate green screen animations, graphics, and special effects for your existing footage. For example, you can generate smoke effects, explosions, lighting, and weather overlays. This should make video editing easier and faster.
Video Anydoor Lets Users Edit, Replace, and Add Objects in Videos
Video Anydoor is a cutting-edge AI tool that lets you add, replace, or manipulate objects in videos seamlessly. You could use it to swap a face, add a logo, or insert an entirely new object, and it automatically adjusts lighting, shadows, and colors to match the original video’s environment. For example, you could add a butterfly to a nature clip or swap a character’s outfit in a movie scene, and the final output should blend in naturally. It could be a valuable tool for filmmakers, advertisers, and content creators.
Hailuo’s AI Keeps Video Characters Consistent Across Scenes
One of the major issues with most AI video generation models is that they create videos with inconsistent characters from scene to scene. But what if you want all your videos to feature the same character? With the Hailuo video generator’s new Subject Reference feature, you can upload a character’s image, type in a prompt, and the generated video will consistently feature your chosen character.
This feature works with real human faces, animations, cartoons, and even animal faces. The Subject Reference feature is rolling out, and you can check it out on the Hailuo site.
Movano EvieAI Medical Chatbot Claims 99% Accuracy
EvieAI is a medical chatbot that claims to deliver 99% accurate answers without hallucinations or guesswork. Trained on 10,000 medical journals from trusted sources like the Mayo Clinic, it promises reliability and precision.
If the chatbot doesn’t have the information you need, it simply says, “I don’t know,” instead of providing a random or inaccurate response. Currently available in beta, EvieAI is free for Evie Ring users through the companion app.
Omnia AI Smart Mirror Monitors Your Health
Withings’ Omnia, revealed at CES 2025, is a smart mirror designed to make health monitoring part of your daily routine. It scans your body to measure health metrics like weight, heart health, and body composition. Using AI, it provides real-time feedback and personalized insights to help you stay on track with your health goals.
Roborock’s Saros Z70 Vacuum Cleaner Comes With a Robo Arm
Roborock’s Saros Z70 is a smart robot vacuum with a twist: it has a five-axis OmniGrip arm. The arm can pick up objects like socks, toys, and other small items (up to 300 grams) that usually block regular vacuums. By clearing the way first, it ensures a better, uninterrupted clean. The Z70 uses advanced AI and sensors to navigate your home and control the arm.
Mudra Link Brings Hand Gestures to VR Devices
Mudra Link, showcased at CES 2025, is a wristband that lets you control devices using simple hand gestures. It detects subtle finger and wrist movements and translates them into commands for your smartphone, computer, and AR/VR headsets.
Imagine playing VR games with no controllers and nothing but your hands. The wristband was recognized in the XR Technologies category at CES for its innovative design.
Movie Released with Putin’s Deepfake
A new English-language movie called Putin, about Vladimir Putin’s life, just hit theaters this week. But instead of relying on heavy makeup or the usual visual effects, it uses advanced AI and deepfake technology to map Putin’s face onto Polish actor Slawomir Sobala, who spent two years studying Putin’s body language and mannerisms to perfect the portrayal.
While the technology is impressive, it once again raises ethical questions about the use of AI in storytelling. The movie was released on January 10, 2025, in multiple countries, including the United States and Ukraine, but it will not be released in Russia.
So, what did you read in AI this week? Let us know on X.
Ravi Teja KNTS
Tech writer with over 4 years of experience at TechWiser, where he has authored more than 700 articles on AI, Google apps, Chrome OS, Discord, and Android. His journey started with a passion for discussing technology and helping others in online forums, which naturally grew into a career in tech journalism. Ravi's writing focuses on simplifying technology, making it accessible and jargon-free for readers. When he's not breaking down the latest tech, he's often immersed in a classic film – a true cinephile at heart.