Luma Labs, the startup behind the popular Dream Machine AI creativity platform, is releasing a new video model. Ray2 promises to be faster, more realistic, and to offer a better understanding of real-world physics than similar models such as Runway’s Gen-3 or even OpenAI’s Sora.
According to Luma, Ray2 can distinguish interactions between different objects and object types, including humans, creatures and vehicles, adding to the realism.
Ray2 will be able to create up to 10 seconds of high-resolution video from a text or image prompt and will be available through Amazon Bedrock on AWS as well as in Dream Machine.
According to Luma, the model will create clips showcasing advanced cinematography, smooth motion and "eye-catching drama." It is a native multimodal architecture that was trained directly on video data to improve the accuracy of characters.
While it isn’t available yet, dozens of clips made using Ray2 are appearing on social media, and I’ve gathered some of the best I’ve seen.
1. A ball of matter
Scaling video model pretraining is leading to far more accurate physics! this is #Ray2 pic.twitter.com/BPoFx32VNX (January 13, 2025)
First up is a video shared by Luma CEO Amit Jain showing a ball of opaque matter floating and rotating counter to the motion of the camera. This would be great for planet scenes. He said: “Scaling video model pretraining is leading to far more accurate physics!”
2. Woman on a motorbike
I've had access to @LumaLabsAI #Ray2 for the last 24 hours, enjoying the improvements they have made to motion. 🏍️ pic.twitter.com/1uiI0YayIe (January 14, 2025)
The next clip to draw my attention was shared by self-titled “AI rockstar” Ryan J Phillips (aka Uncanny Harry) and shows stunning Mad Max-esque footage of a woman on a motorbike in a desert. He said he was “enjoying the improvements they have made to motion.”
3. Sweeping mountain vista
New frontier. This is @LumaLabsAI #Ray2 pic.twitter.com/aQbIMoTCKD (January 14, 2025)
I loved this sweeping mountain vista from Luma’s Theo Panagiotopoulos. It shows how well the model handles camera motion over complex terrain. Other models develop a slight “game-like” style in these scenes.
4. Cat on a couch
"a cat running and leaping on a couch"This is @LumaLabsAI #Ray2 in motion. pic.twitter.com/p723Qi4Q36January 13, 2025
Slow but complex motion can be difficult for AI models, especially when accurately depicting a living creature, but this video shows how Ray2 gets us closer to the ideal. Shared by Luma’s William Shen, it depicts a cat tentatively moving across a couch before leaping.
5. Man on a tightrope
Look at the physics in the @LumaLabsAI Ray2 generation... Holodeck is coming. #Ray2 pic.twitter.com/oesvP8t9X1 (January 13, 2025)
Finally, this might be one of my favorite videos I’ve ever seen; it was created by filmmaker Allen T using Ray2. In it, a man tentatively crosses a ravine on a tightrope with some very accurate footwork. Allen wrote: “Look at the physics in the Ray2 generation… Holodeck is coming.”
Final thoughts
AI video is advancing faster than almost any other form of artificial intelligence. The next big leap is an understanding of reality, and with Google’s Veo 2 and the new Ray2 from Luma, we might be getting close.
I haven’t tried Luma Ray2 yet; it hasn’t been released. But based on what I’ve seen from both Luma staff and others who were given early access, it’s one to watch.