New AI Model Turns Photos Into Explorable 3D Worlds, With Caveats
September 4, 2025 at 13:00
An anonymous reader quotes a report from Ars Technica: On Tuesday, Tencent released HunyuanWorld-Voyager, a new open-weights AI model that generates 3D-consistent video sequences from a single image, allowing users to pilot a camera path to "explore" virtual scenes. The model simultaneously generates RGB video and depth information to enable direct 3D reconstruction without the need for traditional modeling techniques. However, it won't be replacing video games anytime soon.
The results aren't true 3D models, but they achieve a similar effect: the AI tool generates 2D video frames that maintain spatial consistency as if a camera were moving through a real 3D space. Each generation produces just 49 frames, roughly two seconds of video, though multiple clips can be chained together for sequences lasting "several minutes," according to Tencent. Objects stay in the same relative positions when the camera moves around them, and the perspective changes as you would expect in a real 3D environment. While the output is video with depth maps rather than true 3D geometry, that information can be converted into 3D point clouds for reconstruction (a sketch of this conversion follows below). The tool has other caveats as well: errors compound during longer or more complex camera motions such as full 360-degree rotations; because it relies heavily on training data patterns, its ability to generalize is limited; and it demands substantial GPU memory (60-80GB) to run effectively. On top of that, licensing restricts use in the EU, UK, and South Korea, with large-scale deployments requiring special agreements.
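Turning an RGB frame plus its depth map into a point cloud is standard pinhole back-projection rather than anything Voyager-specific. A minimal NumPy sketch, assuming a simple pinhole camera model (the intrinsics fx, fy, cx, cy are illustrative placeholders, not values published for the model):

```python
import numpy as np

def depth_to_point_cloud(depth, rgb, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into a colored 3D point cloud.

    depth: (H, W) array of depth values along the camera's z-axis.
    rgb:   (H, W, 3) array of colors for the same frame.
    fx, fy, cx, cy: pinhole intrinsics (focal lengths, principal point).
    """
    h, w = depth.shape
    # Pixel coordinate grid: u varies across columns, v across rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    # Pinhole back-projection: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    # Drop samples with invalid or non-positive depth.
    valid = np.isfinite(points[:, 2]) & (points[:, 2] > 0)
    return points[valid], colors[valid]

# Toy usage with synthetic data; a real run would feed in one generated
# frame, its depth map, and the actual camera parameters for that frame.
h, w = 480, 640
depth = np.random.uniform(1.0, 5.0, size=(h, w))
rgb = np.random.randint(0, 255, size=(h, w, 3), dtype=np.uint8)
pts, cols = depth_to_point_cloud(depth, rgb, fx=500.0, fy=500.0, cx=w / 2, cy=h / 2)
print(pts.shape, cols.shape)  # (N, 3) points and matching colors
```

Chaining this per-frame conversion across a generated sequence, with the camera poses applied, is what allows reconstruction of the scene without traditional modeling.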
Tencent published the model weights on Hugging Face.
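The weights can be fetched with the standard huggingface_hub client like any other open release; a minimal sketch, assuming the repository ID follows Tencent's usual naming (verify the exact path on Hugging Face before running):

```python
from huggingface_hub import snapshot_download

# Download the released weights for local inference. The repo ID below is
# an assumption based on Tencent's naming convention; check the actual
# listing on Hugging Face before running.
local_dir = snapshot_download(repo_id="tencent/HunyuanWorld-Voyager")
print("Weights downloaded to:", local_dir)
```

Note that actually running inference requires a GPU with the 60-80GB of memory mentioned above.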