Meta continues its foray into the AI art game, allowing users to create computer-generated images based on their own sketches

Meta AI, the artificial intelligence arm of Facebook’s parent company, this week released a report on an exploratory research project it’s working on called Make-A-Scene, “which demonstrates the potential of AI to empower anyone to use their imaginations to create… to bring to life,” according to a July 14 blog post.

Make-A-Scene aims to go a step further than the typical text-to-image generator by adding the option for users to draw a free-form digital sketch of a scene on which the network can base its final image.

“Our model generates an image with a text input and an optional scene layout,” the report reads. “As demonstrated in our experiments, by conditioning scene layout, our method provides a new form of implicit controllability, improves structural consistency and quality, and adheres to human preferences.”

Images created by Refik Anadol in collaboration with Meta’s Make-A-Scene program. Photo: courtesy of Meta.

Generative art is a long-established, critically serious form, although the medium has only recently flourished in the mainstream. Many may recall that a work created using AI by the French art collective Obvious fetched $432,500 at Christie’s in 2018. And new media pioneers such as Herbert Franke and Refik Anadol have employed the medium to create works of conceptual depth and poignancy.

“To realize the potential of AI to boost creative expression, humans should be able to shape and control the content generated by a system,” the Meta AI blog post states. “It should be intuitive and easy to use so people can use the expressions that work best for them.”

“Imagine creating beautiful impressionist paintings in compositions you envision without ever picking up a brush,” the post continues. “Or instantly create imaginative storybook illustrations to accompany the words.”

To fine-tune their results, the Meta team asked human reviewers to rate the images created by the AI program. “Each was shown two images generated by Make-A-Scene: one generated from text input only, and one generated from both sketch and text input,” their post reads.

Adding the sketch resulted in an image that matched the text description better 66.3 percent of the time. Meta notes that Make-A-Scene can also “generate its own scene layout with plain text prompts if the creator chooses to do so.”

Images created by Sofia Crespo in collaboration with Meta’s Make-A-Scene program. Photo: courtesy of Meta.

The software has not yet gone live to the public. So far it has only been made available to a few employees and select AI artists such as Sofia Crespo, Scott Eaton, Alexander Reben and, of course, Refik Anadol. “I stimulated ideas, mixed and matched different worlds,” Anadol noted of his experience with the program. “You literally dip a brush into the head of a machine and paint with machine consciousness.”

At the same time, Andy Boyatzis, a program manager at Meta, used Make-A-Scene “to create art with his young children, ages two and four. They used playful drawings to bring their ideas and imagination to life.”

Since the report was published, Meta has quadrupled the potential resolution of Make-A-Scene’s output to 2048 x 2048 pixels. They have also promised to give open access to demos in the future. For now, though, they advise keeping an eye out for more details during their presentation at the European Conference on Computer Vision (ECCV) in Tel Aviv in October.
