Luma AI’s Dream Machine – New AI Video Generator Launched and Available to the Public




Since the tremendous buzz surrounding OpenAI’s Sora, no month goes by without the announcement of a new AI video generator. This time around, we’re looking at Luma AI’s Dream Machine. According to the product page, their freshly launched model makes high-quality, realistic videos from text, and does it fast. What’s even more exciting about this generator is that anyone can try it out right now, for free. Let’s give it a go, shall we?

It’s not the first time we’ve written about Luma AI. I am a big fan of their automated 3D scans, which users can create from simple smartphone videos. In my opinion, this feature is particularly useful for location scouting (you can watch the entire workflow explained in this video post). The developers even call themselves “The 3D AI Company”, so it was rather unexpected to see them join the video generation race. Then again, maybe they could transfer their knowledge and tons of scanned footage into a working model. You never know until you try.

What Luma AI’s Dream Machine promises

In the description, Luma AI presents Dream Machine as a high-quality text-to-video (and image-to-video) model capable of generating physically accurate, consistent, and eventful shots. They also praise its speed: the neural network can allegedly generate 120 frames in 120 seconds (spoiler: my tests showed that’s not always the case, as some generations took up to 7 minutes). Another highlighted advantage of this tool is its consistency:

Dream Machine understands how people, animals and objects interact with the physical world. This allows you to create videos with great character consistency and accurate physics.
From the model description on Luma AI’s webpage

Just a side note: most AI video generators currently on the market struggle with consistency and accurate physics, as we demonstrated in some thorough tests.

At the moment, Dream Machine generates 5-second shots (with the possibility to extend them) and is said to understand and recreate camera motions, both cinematic and naturalistic.

Testing the language understanding

When you head over to Luma AI’s website and log in, Dream Machine launches automatically. It has a simple interface consisting of a text field and an icon for image upload (we will take a closer look at that below).

For the sake of fair comparison, the first prompt I fed to the model was the same one I used in my previous AI video generator tests. I made a few adjustments, though, adding a description of the camera motion and of how the character should act. After several minutes, the neural network spat out the following result.

A black-haired woman in a red dress stands by the window without motion and looks at the evening snow falling outside, the camera slowly pushes in.
My prompt

As you can see, just like its competitors, this video generator struggled to keep the snow outside the window. (Maybe that’s why the woman looks so sad and confused in the resulting scene.) Additionally, although I asked the AI to place my character by the window motionless, Dream Machine decided to add some action and drama.

At the same time, the overall understanding of the described scene is impressive. I got everything I asked for: a window, snow, a black-haired woman in a red dress. When the woman turns around, her face and figure do not suffer from dysmorphia. She stays consistent and looks quite normal.
Personally, I haven’t witnessed such consistency in AI video generators so far (excluding Sora and Google’s Veo, as they are not available for public testing). What about you?

Enhanced prompt and prompting tips

The only setting you can try out so far in Luma AI’s generator is called “enhanced prompt.” After you enter your description into the text field, a corresponding checkbox appears. It is enabled by default, so my previous result already featured this option. According to the developers of Dream Machine, it gives the model more creative freedom, so you don’t have to elaborate much to get beautiful and realistic results. Your prompts can be short, and the model will fill in the gaps with the best-matching details.

If you disable this option, you need to describe your scene, action, movements, and objects in as much detail as possible. Since my previous text request was already elaborate enough, for the second run I used it again and unchecked the “Enhance Prompt” box. Here is the result:

Whoa! What happened to my lovely woman? I don’t know about you, but I get chills when I look at this result. The reason is not only the displacement of the character’s left hand but also the way she moves her shoulders and turns her head. I swear, it could be a very fitting sequence for a witch-hunt horror movie. Apart from that, the model had the same contextual issues as with the enhanced prompt above.

Image-to-video approach

Like other AI video generators, Luma AI’s Dream Machine allows users to upload an image as input and accompany it with additional text. In that case, the developers recommend enabling the “Enhance Prompt” option and describing what motions and actions (both of the camera and of your characters) should happen in the scene.

Let’s give it one more try. For this experiment, I asked the image generator Midjourney to create the same dark-haired woman, but as a still image. My original prompt was left unchanged, albeit without the camera directions. This is when I realized that text-to-image AI also has problems with windows and weather conditions:

I managed to get a better result with some additional parameters, but for some unknown reason, my character became an anime figure. No matter; let’s stick with the first attempt, since the rest of the picture was quite good for a test:

What do you think? Although snow falls everywhere, the woman keeps still this time, except for a few hair movements. A bigger problem is that the video generator didn’t get the camera motion right. I tried several times, but for some reason, I always got a boom-up instead of a simple zoom-in. So much for precision.

Current limitations of Luma AI’s Dream Machine

As the developers themselves point out, the model is still in the research and beta phase, so it does have some limitations. For example:

- This AI video generator (like the others already on the market) can really struggle with the movement of humans or animals. Try generating a running dog, and you will notice it doesn’t move its paws at all.
- In the current version, Luma AI’s Dream Machine cannot insert or create any coherent and/or meaningful text.
- Morphing is also an issue and can occur regularly, meaning that your objects can change their forms during complicated moves or actions.
- There is a current lack of flexibility: you cannot generate clips longer than 5 seconds from the get-go, add negative prompts, or change the aspect ratio. At least for now.
The developers state in the FAQ section that they are working on additional controls for upcoming versions of Dream Machine and are open to feedback on their Discord channel.

Luma AI’s Dream Machine is available for tryouts

All in all, Luma AI’s Dream Machine feels more advanced than the other AI video generators I’ve tested so far. The consistency of results is higher, people’s faces look more realistic, and the motion is not bad either. However, it’s still a far cry from what OpenAI’s Sora promises and showcases. But as long as we can’t get our hands on it, promises stay promises.

You can try out Dream Machine here. Currently, users get 5 free generations per day. There are also paid plans that will get you watermark-free downloads, commercial rights, and 30 free + 120 paid generations.

What are your first impressions of Luma AI’s Dream Machine? Have you tried it already? We’re aware there is a huge discussion about AI video generators in our industry. What is your take on it? Let’s talk in the comments below, and please, stay kind and respectful to each other.

Feature image source: Luma AI
