Lightricks challenges AI giants with open-source text-to-video platform

Text-to-video model produces high-quality clips in less time than it takes to watch them.

Four seconds to generate a 5-second video: Israeli company Lightricks launched a text-to-video model on Friday that it claims can produce high-quality videos from text prompts faster than the length of the videos themselves. Unlike similar models, which demand expensive computing resources and are released as closed source, Lightricks's model can run on a home computer equipped with a high-end graphics card and is available as open source.
“We also want to appeal to AI enthusiasts who work with their home computers,” the company’s founder and CEO, Dr. Zeev Farbman, told Calcalist. 
Zeev Farbman, founder and CEO of Lightricks. (Photo: Amit Shavi)
In February, OpenAI unveiled Sora, a text-to-video model based on generative artificial intelligence (GenAI) capable of delivering spectacular results at a level not seen before. In doing so, the company reignited an AI revolution that threatens to disrupt many industries. Major Hollywood studios, for instance, have already begun integrating text-to-video models into their workflows.
Now, Lightricks is entering the arena with the launch of LTX Video, the first text-to-video model developed in Israel. According to the company, it boasts several capabilities that set it apart from competitors, such as rapid video generation, with processing times shorter than the video's length, and the ability to run on consumer-grade hardware.
Video created by LTX Video. (Lightricks)
“The model represents a new era of artificial intelligence-based video,” said the company’s CTO and co-founder, Yaron Inger, in a press release. “By designing a powerful video encoding model that compresses video into a very compact representation, we achieved unprecedented speed while improving motion consistency and visual continuity. The ability to produce videos faster than their playback speed opens up possibilities for uses beyond content creation, such as games and interactive experiences in online shopping, learning, or social gatherings. We are excited to see what researchers and developers will build based on this foundational model.”
The videos generated by LTX Video are striking. In one clip, a black woman dressed in white is shown standing in an office and speaking to another woman whose back is to the camera. In another, the camera moves forward while a man in armor straightens his body and gazes into the distance. Yet another features two police officers walking down a corridor with their faces shifting in and out of shadows. 
Other scenes include a squirrel sitting on a sidewalk, a herd of elephants roaming, a Native American tribal camp, and a man driving a car through changing landscapes. Despite their short duration—only five seconds—these videos exhibit complex transitions and realistic visuals that make it difficult to discern they were created by AI. 
Video created by LTX Video. (Lightricks)
In addition to text-based prompts, users can also upload still images as starting points for video creation. For example, Farbman uploaded an image of Tyrion Lannister from Game of Thrones (played by Peter Dinklage) and instructed the model to create a video of the character sipping from a glass. The result was a lifelike video of Tyrion performing the action, even though no such scene was ever filmed. For copyright reasons, the company has not distributed this video.
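As a rough illustration of that image-conditioned workflow, a sketch like the one below is plausible. It assumes the open weights are published on Hugging Face as Lightricks/LTX-Video and that the diffusers library exposes an image-to-video variant of the LTX pipeline; the model ID, pipeline name, and parameters here are assumptions for illustration, not details confirmed by the article.

```python
# Sketch of image-conditioned generation: a still image as the starting frame.
# ASSUMPTIONS: weights published as "Lightricks/LTX-Video" on Hugging Face,
# and an image-to-video variant of the LTX pipeline in diffusers.
import torch
from diffusers import LTXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = LTXImageToVideoPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
).to("cuda")

image = load_image("character_still.png")  # hypothetical input image
video = pipe(
    image=image,
    prompt="The character slowly raises a glass and takes a sip",
    num_frames=121,  # about 5 seconds at an assumed 24 fps
).frames[0]

export_to_video(video, "sip.mp4", fps=24)
```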
While the results are impressive, LTX Video is currently limited to generating clips of up to five seconds. Farbman explained why: “The system has learned to produce between 41 and 257 frames. You can create videos of 11 or 12 seconds, but the final result may include more errors. The more frames you produce, the more likely errors become. It also depends on the complexity of the scene. For simple scenes with stationary objects, errors accumulate more slowly.” 
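To put those frame counts in perspective, here is the simple arithmetic, assuming a playback rate of 24 frames per second (the article does not state the model's frame rate, so 24 fps is an assumption):

```python
# Converting LTX Video's frame-count range into clip durations.
# ASSUMPTION: 24 fps playback; the article gives only frame counts.
FPS = 24

for frames in (41, 121, 257):
    print(f"{frames:3d} frames -> {frames / FPS:4.1f} seconds")

# Output:
#  41 frames ->  1.7 seconds   (the model's minimum)
# 121 frames ->  5.0 seconds   (the 5-second clips described above)
# 257 frames -> 10.7 seconds   (roughly the "11 or 12 seconds" Farbman mentions)
```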
Video created by LTX Video. (Lightricks)
A key advantage of LTX Video is its fast response time. “We developed a system that enables interactive work and allows changes to be made very quickly,” Farbman said. “This is something that cannot be done with other models. If you have to wait five minutes after each prompt, it’s not suitable for regular work. In our model, it takes more time to watch the video than to generate it. 
“This allows for very fast workflows, enabling users to produce multiple videos and connect them into a complete creation. At the end of the process, you can use a more advanced model to refine the output and correct distortions. With our model, you don’t need sophisticated hardware to run it. In the future, we plan to release a larger and better model to improve upon the results of the existing one.” 
Lightricks is emphasizing the accessibility of its open-source model. Farbman criticized the trend among companies like OpenAI to limit access to their best models. 
“When OpenAI unveiled their models in 2022, there was a moment of euphoria that AI would be open and accessible to everyone,” Farbman said. “But in practice, OpenAI decided to leverage the tactical advantage they gained by closing their models and limiting access. Today, the best models on the market are closed. 
“This creates problems beyond cost. Gaming companies, for example, want to produce simple graphics and then use these models to experiment with visual styles, but closed models don’t allow for that. It also stifles academic research and gives an advantage to large companies. 
“Therefore, we realized that if we want to be competitive, the models must be open. We embarked on the adventure of distributing an open model so that academia and industry could use it, add capabilities, and develop new features. This will make us more competitive. 
“We also want to appeal to AI enthusiasts who work with their home computers. Graphics cards for gamers today are very powerful—on par with Nvidia’s AI processors in terms of computing power, but with less memory. We made a significant effort to ensure the model can run on these graphics cards, allowing users to run it at home.” 
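For readers who want to try that at home, a minimal sketch of running the model locally might look like the following. It assumes the open weights are published on Hugging Face under Lightricks/LTX-Video and that a recent release of the diffusers library ships a text-to-video pipeline for them; both the model ID and the pipeline API are assumptions for illustration, not details from the article.

```python
# Minimal sketch: generating a clip locally with the open LTX Video model.
# ASSUMPTIONS: weights published as "Lightricks/LTX-Video" on Hugging Face,
# and a diffusers release that includes an LTX text-to-video pipeline.
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
)
pipe.to("cuda")  # a consumer gaming GPU, per the article's claim

video = pipe(
    prompt="Two police officers walk down a corridor, their faces "
           "shifting in and out of shadow",
    num_frames=121,  # about 5 seconds at an assumed 24 fps
    width=704,       # assumed base resolution
    height=480,
).frames[0]

export_to_video(video, "ltx_clip.mp4", fps=24)
```

On cards with less memory, the same sketch could be run at a lower precision or smaller resolution, which lines up with Farbman's point that gaming GPUs match AI processors in compute but not in memory.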
Farbman argued that the traditional methods for training GenAI models may be nearing their limits: “When the technologies reached a certain level of maturity, Sam Altman (CEO of OpenAI) acted as though OpenAI had unique knowledge no one else possessed. But today, those in the know understand that claim doesn’t hold water. There are already 10 different entities at the same level as OpenAI, and it’s unclear how much more performance can be extracted from the same architecture. 
“OpenAI poured $150 million into training Sora, while we trained our model for just $10 million. Investors are starting to realize that pouring billions into model training won’t create a moat. The future lies in open-source collaboration.”