I used an app to create amazing 3D models with my iPhone

The pace of innovation in AI imaging is phenomenal. One company, Luma Labs, provides an excellent example of practical, yet highly entertaining, use of the latest technology in 3D imaging.

Luma AI is in beta testing on the iPhone and will eventually be available on Android as well. I joined the beta testing group and can share what this amazing app does and how easy it is to get impressive results.

What is Luma AI?

Alan Truly captures a 3D model of a figure with an iPhone 13 Pro Max.
Tracey Truly

Luma AI is an application and service developed by Luma Labs. It captures three-dimensional images using a technique known as Neural Radiance Fields (NeRF). It’s similar to the ray tracing technique that makes high-end game graphics look so realistic.
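
To give a rough sense of how a NeRF produces an image: a trained network maps any 3D point (and viewing direction) to a color and a density, and a view is rendered by sampling that field along each camera ray and compositing the samples. The Python sketch below is a minimal illustration of that volume-rendering step only; the stand-in field function, sample counts, and all numbers are my own simplified assumptions, not Luma's actual model.

```python
import numpy as np

def radiance_field(points, view_dir):
    """Stand-in for a trained NeRF network: maps 3D points (and a view
    direction) to RGB color and volume density. Here: a solid unit sphere."""
    dist = np.linalg.norm(points, axis=-1)
    density = np.where(dist < 1.0, 8.0, 0.0)        # opaque inside the sphere
    color = np.tile([0.8, 0.4, 0.2], (points.shape[0], 1))  # constant color
    return color, density

def render_ray(origin, direction, near=0.5, far=4.0, n_samples=64):
    """Classic NeRF volume rendering: sample the field along one camera ray
    and alpha-composite the samples front to back."""
    t = np.linspace(near, far, n_samples)            # depths along the ray
    points = origin + t[:, None] * direction         # 3D sample positions
    color, density = radiance_field(points, direction)
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))  # spacing between samples
    alpha = 1.0 - np.exp(-density * delta)           # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = alpha * trans
    return (weights[:, None] * color).sum(axis=0)    # composited RGB

pixel = render_ray(origin=np.array([0.0, 0.0, -3.0]),
                   direction=np.array([0.0, 0.0, 1.0]))
print(pixel)  # the color the camera would see along this ray
```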

NeRFs have been around for a few years, but until very recently, they mainly existed in research facilities. With the explosion of AI image generation, spearheaded by the photorealistic renderings of DALL-E, NeRFs are beginning to reach a much broader audience. The first wave of NeRF software required some developer skills: installing packages from GitHub, then training the AI on a set of photos. It was a bit much for the average person.

Luma Labs is about to dramatically simplify that with its Luma AI app. From start to finish, the entire process can be managed from an iPhone, and the end result is more accessible, too.

Luma AI iPhone support

Someone holding the iPhone 14 Pro Max.
Joe Maring/Digital Trends

Since Apple has made a point of showcasing the 3D depth-sensing capabilities of its LiDAR sensor, you might expect Luma AI to require the more expensive iPhone 14 Pro or iPhone 14 Pro Max to capture 3D models. However, the clever developers at Luma Labs use photogrammetry instead, which makes the technology compatible with iPhones as old as the iPhone 11.
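
For a sense of what photogrammetry does under the hood: it matches the same feature points across overlapping photos and triangulates their 3D positions from the camera geometry, so no depth sensor is required. Here's a toy Python sketch of that triangulation step using OpenCV; the camera matrices and the single "matched" point are made up for illustration, whereas a real pipeline estimates camera poses and matches thousands of features automatically.

```python
import numpy as np
import cv2

# Two hypothetical camera projection matrices (3x4): identical intrinsics,
# with the second camera shifted one unit sideways (a simple stereo baseline).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera 1
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # camera 2

# A known 3D point, projected into each image to fake a feature match.
X = np.array([0.2, -0.1, 4.0, 1.0])

def project(P, X):
    x = P @ X
    return (x[:2] / x[2]).reshape(2, 1)

pt1, pt2 = project(P1, X), project(P2, X)

# Triangulate the matched pixel back into 3D from the two views.
X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)
print((X_h[:3] / X_h[3]).ravel())  # recovers approximately [0.2, -0.1, 4.0]
```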

In the future, the app will be available on Android and a web version is already in beta testing. In an interview, Luma Labs CEO Amit Jain said the iPhone app is expected to be ready for public release in a few weeks.

How to use Luma AI

The rear cameras of the iPhone 14 Pro Max.
Joe Maring/Digital Trends

To use Luma AI, you slowly circle an object at three different heights. An AR overlay guides you through the capture, which gets easier after a few tries. Before long, you’ll be able to capture a medium-sized object like a chair in a couple of minutes.

Objects of any size can be handled because, for Luma AI, it’s just a series of images, no matter how big the subject is. Whether you circle a cup, a statue, or a building, the general idea remains the same.

The app will notify you when it has enough images, and when that happens, a Finish button will appear. You can also keep going around and filling spaces in the AR cloud of rings and rectangles representing the photos taken so far. The app will automatically stop capturing when an ideal number of photos have been collected. There’s also a freeform mode that lets you capture even more photos, at different angles and distances. You can see the process in the YouTube video I created below. It’s an iPhone app, so it’s a portrait video.

Processing is the next step, which happens on Luma Labs’ servers. After about an hour, the finished NeRF will be available in the app in several different forms. The first view given is a generated video, showing a flyby of the object in its natural environment. Below is an interactive version that allows you to rotate the view by dragging a finger or mouse across the image.

Most impressive of all, the subject of the shot, extracted from its background, is also available. With this representation, you can rotate the 3D object on any axis and zoom in for a closer look. The sharpness depends on how many images were collected and how slow and steady you were during the capture.

Getting better all the time

Luma Labs is updating the app and service at a remarkable rate. One week after I received my beta invite, two powerful new features arrived that greatly expand the possibilities. The first is a web upload option that lets you capture video without the app and then upload it to the Luma Labs website for processing. Results appear online and in the app.

This means it’s possible to use any of the iPhone’s camera modes, capture video with a dedicated camera, or even shoot video with camera glasses like Ray-Ban Stories. For example, a drone video becomes even more epic when you can smooth the motion and change direction after the flight. Luma Labs shared a nice example showing an aerial view of fall leaves in this tweet.

Fall in Palo Alto is beautiful! 🍂 https://t.co/EwNkiv0DQV pic.twitter.com/hdd7iBLYgV

– Luma AI (@LumaLabsAI) October 22, 2022

The other new feature opens up editing, painting, and 3D printing opportunities. 3D meshes can be exported, with textures, in OBJ or GLTF format. They aren’t optimized, but they can be viewed with textures intact even in a browser, using the free, open-source Online 3D Viewer website.
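
If you’d rather inspect an export programmatically than in a browser, a third-party library such as trimesh can open both formats. A minimal sketch, assuming a downloaded export saved as scan.obj (the filename is hypothetical):

```python
import trimesh

# Load a Luma AI export; trimesh handles both OBJ and GLTF/GLB files.
mesh = trimesh.load("scan.obj", force="mesh")  # hypothetical filename

print(f"{len(mesh.vertices)} vertices, {len(mesh.faces)} faces")
print("watertight:", mesh.is_watertight)         # scanned meshes usually aren't
print("bounding box extents:", mesh.bounding_box.extents)

mesh.show()  # opens a simple interactive preview window
```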

A Luma AI capture of a figurine being refined in MeshLab.
Fairy Sprout Sprite Figure

It is also possible to open the 3D files in a mesh editor like the free, open-source MeshLab to remove stray artifacts that appear as floating specks, as well as to clean up and simplify the model before exporting it in a variety of formats. The figure shown above is approximately three inches tall and was sculpted by my wife, Tracey, for her business, A Little Character. Luma AI captured a remarkable amount of detail in the sculpture and the log she was resting on. MeshLab could also have been used to select and delete the log.
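
That artifact cleanup can also be scripted. As a rough sketch, again using the third-party trimesh library rather than MeshLab’s GUI (and the same hypothetical filename), splitting the scan into connected components and keeping only the largest piece discards most free-floating specks:

```python
import trimesh

mesh = trimesh.load("scan.obj", force="mesh")   # hypothetical Luma AI export

# Stray artifacts are usually small, disconnected islands of triangles.
# Split the scan into connected components and keep the largest piece.
parts = mesh.split(only_watertight=False)
largest = max(parts, key=lambda m: len(m.faces))
print(f"kept {len(largest.faces)} of {len(mesh.faces)} faces "
      f"across {len(parts)} components")

# Note: unlike MeshLab, this coarse approach may drop texture data.
largest.export("scan_clean.obj")
```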

The ups and downs of 3D scanning

Kyle Russell shared a display of desserts from a party and mentioned that he asked adults to wait for their treats so he could capture them as a digital diorama.

Use @LumaLabsAI at a birthday party last night i made a group of adults skip dessert so i could circle the table with my phone to make a 3d AI dream of setting up like a cool person pic.twitter.com/sP0vVPB3yx

– Kyle Russell (@kylebrussell) October 30, 2022

Although Luma AI can process video, it relies on still images to build a three-dimensional scene, so a moving subject can reduce the quality or clarity of a capture. A 3D image of a seated person, as shown in Albert Bozesan’s tweet below, comes out well. In the same tweet, the second capture, of a sculpture, shows what happens when there is movement within the scene: people walking near the subject appear in the background as distorted shapes.

took two @LumaLabsAI #NeRFs by a bavarian lake today. Great way to capture memories, feels like Minority Report. #Tegernsee pic.twitter.com/HLC0ekF7uD

– Albert Bozesan (@AlbertBozesan) October 30, 2022

Luma AI price and availability

Luma AI is currently in beta testing, and invites are regularly handed out via the company’s Twitter account. If you have a compatible iPhone and are interested in this technology, you might be able to get early access. There is also a waitlist on Luma Labs’ website.

Luma Labs CEO Jain said pricing is yet to be determined and will depend on how broad the user base is and how the scan results are used. Based on those comments, there could be a professional subscription with more advanced features and a cheaper personal tier. For now, the app remains free to use.
