It’s brushes at dawn as artists feel the pressure of AI-generated art

If you’ve spent any time around the interwebs lately, you’ve been hearing about DALL-E and MidJourney. The kinds of art these neural networks can create – and a deeper understanding of the technology’s strengths and weaknesses – mean we’re faced with a whole new world of hurt. Computer-generated art is often the target of hacky jokes (How do you get an artist’s attention? Shout “Hey, waiter!”) and is yet another punchline in the “they took our jobs” tale of man versus machine.

For me, the interesting part is that robots and machines doing certain jobs have been reluctantly accepted because those jobs are repetitive, boring, dangerous, or just plain awful. Machines that weld car chassis do a far better job – faster and more safely – than humans ever could. Art, however, is another matter.

As with any technology, there will come a time when you won’t believe your own eyes or ears; machines will learn and evolve at breakneck speed.

In the recent film Elvis, Baz Luhrmann quotes Colonel Tom Parker as saying that a great performance “evokes feelings in the audience that they weren’t sure they should enjoy.” To me, that’s one of the greatest quotes I’ve heard about art in a long time.

Commercial art is nothing new; whether you’re thinking of Pixar movies, music, or the prints that come with the frames at Ikea, art has long been peddled on a grand scale. But what it broadly has in common is that it was created by people with some sort of creative vision.

The image at the top of this article was generated with MidJourney when I fed the algorithm a somewhat ridiculous prompt: A man dances like Prozac is a cloud of laughter. As someone with a lifetime of mental health issues, including some major depression and anxiety, I was curious to see what a machine would create. And, my goodness: none of these generated graphics are anything I would have come up with conceptually myself. But, not gonna lie, they did something to me. I feel more vividly represented by these machine-generated artworks than by almost anything else I’ve seen. And the wild thing is: I did this. These illustrations were not drawn or conceived by me. All I did was type a bizarre prompt into Discord, but those images wouldn’t have existed if it hadn’t been for my crazy idea. Not only did it produce the image at the top of this article, but it also spat out four wildly different – and oddly perfect – illustrations of a concept I find hard to even think about:

It’s hard to put into words what this means for conceptual illustrators around the world. When someone can create artworks of anything at the touch of a button, mimic any style, and create just about anything imaginable in minutes—what does it mean to be an artist?

In the last week or so I may have gone a little overboard and created hundreds upon hundreds of images of Batman. Why Batman? I have no idea, but I wanted a theme that would help me compare the different styles MidJourney can produce. If you really want to go down the rabbit hole, check out AI Dark Knight Rises on Twitter, where I share some of the best generated pieces I’ve come across. There are hundreds and hundreds of candidates, but here’s a selection that shows the breadth of styles available:

Generating all of the above – and hundreds more – ran up against only three bottlenecks: the amount of money I was willing to spend on my MidJourney subscription, the depth of creativity I could muster for the prompts, and the fact that I could only generate 10 concurrent drafts.

Now, I have a visual mind, but there isn’t an artistic bone in my body. Turns out I don’t need one. I come up with a prompt – for example, Batman and Dwight Schrute engage in a fistfight – and the algorithm spits out four versions of something. From there I can reroll (i.e., generate four new images from the same prompt), render a high-resolution version of one of the images, or iterate based on one of the versions.
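If it helps to picture that loop as code, here’s a minimal sketch. To be clear: MidJourney is driven through a Discord bot and has no public API, so the MidJourneyClient below and all of its methods are hypothetical stand-ins that just model the imagine/reroll/upscale/variation cycle described above.

```python
# Hypothetical sketch of the MidJourney workflow, not a real API.
# The actual service runs through a Discord bot, so this class and
# its methods are stand-ins that model the loop described above.

class MidJourneyClient:
    def imagine(self, prompt: str) -> list[str]:
        # Each prompt yields a 2x2 grid of four draft images.
        return [f"draft {i} for {prompt!r}" for i in range(1, 5)]

    def reroll(self, prompt: str) -> list[str]:
        # Reroll: four brand-new drafts from the same prompt.
        return self.imagine(prompt)

    def upscale(self, draft: str) -> str:
        # Render a high-resolution version of one chosen draft.
        return f"high-res render of [{draft}]"

    def variation(self, draft: str) -> list[str]:
        # Iterate: four new drafts riffing on one chosen version.
        return [f"variation {i} of [{draft}]" for i in range(1, 5)]


client = MidJourneyClient()
drafts = client.imagine("Batman and Dwight Schrute engage in a fistfight")
favorite = drafts[2]             # pick whichever draft comes closest
print(client.upscale(favorite))  # ...and render it in high resolution
```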

Batman and Dwight Schrute engage in a fistfight. Because… well, why not. Photo credit: Haje Kamps / MidJourney

The algorithm’s only real flaw is that it’s very much a “take what you’re given” affair. Of course, you can get much more detailed with your prompts to gain more control over the final image – both in terms of what’s happening in it and in terms of style and other parameters. For a visual thinker like me, the algorithm is often frustrating, because my creative vision is hard to put into words and even harder for the AI to interpret and render. But the scary thing (for artists) and the exciting thing (for non-artists) is that this technology is still in its infancy, and we’re going to get a lot more control over how images are generated.

For example, I tried the following prompt: Batman (left) and Dwight Schrute (right) fistfight in a parking lot in Scranton, Pennsylvania. Dramatic lighting. Photorealistic. Monochrome. High level of detail. If I gave that prompt to a human, they would probably tell me to fuck off for talking to them like they were a machine, but if they did make a drawing, I suspect they would interpret the prompt in a way that makes conceptual sense. I made quite a few attempts, but there weren’t many illustrations that made me think, “Yeah, that’s what I was looking for.”
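For what it’s worth, those detailed prompts have a fairly regular shape: subject, composition, lighting, style, level of detail. Here’s a tiny illustrative helper that assembles one – to be clear, build_prompt is my own invention, not anything MidJourney provides; the bot only ever sees the final string.

```python
def build_prompt(subject: str, composition: str = "", lighting: str = "",
                 style: str = "", detail: str = "") -> str:
    """Assemble a detailed image prompt from labeled parts.

    Purely illustrative: the structure is just how I organize my own
    prompts; MidJourney itself only receives the final string."""
    parts = [subject, composition, lighting, style, detail]
    # Keep non-empty parts, normalize trailing periods, join with spaces.
    return " ".join(p.strip().rstrip(".") + "." for p in parts if p)


prompt = build_prompt(
    subject="Batman (left) and Dwight Schrute (right) fistfight",
    composition="in a parking lot in Scranton, Pennsylvania",
    lighting="Dramatic lighting",
    style="Photorealistic. Monochrome",
    detail="High level of detail",
)
print(prompt)  # "Batman (left) and Dwight Schrute (right) fistfight. in a parking lot ..."
```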

What about copyright?

There is another interesting quirk here: many of the styles are recognizable, and some of the faces are recognizable as well. Take this one, for example, where I asked the AI to imagine Batman as Hugh Laurie. I don’t know about you, but I’m very impressed; it has the style of Batman, and Laurie is recognizable in the drawing. What I don’t know, though, is whether the AI ripped off another artist in a big way, and I wouldn’t want to be MidJourney or TechCrunch in a courtroom trying to explain how that went horribly wrong.


Hugh Laurie as Batman. Photo credit: MidJourney, with a prompt by Haje Kamps, under a CC BY-NC 4.0 license.

Problems like this are more common in the art world than you might think. One example is the case of Shepard Fairey, in which the artist allegedly based his famous “Hope” poster of Barack Obama on a photograph taken by AP freelance photographer Mannie Garcia. It all turned into a fantastic mess, especially when a bunch of other artists started creating art in the same style. Now we have a multi-layered plagiarism sandwich, with Fairey allegedly plagiarizing someone else and being plagiarized himself. And of course it’s possible to generate AI art in Fairey’s style, which makes things infinitely more complicated. I couldn’t resist trying it out: a Shepard Fairey-themed Batman with the text HOPE at the bottom.


HE HOPES. A great example of how the AI can get close to the specific vision I had for this image – close, but no cigar. Still, the style is so close to Fairey’s that it’s instantly recognizable. Photo credit: Haje Kamps / MidJourney

Kyle has a lot more thoughts on where the legal future of this technology lies:

So where does this leave the artists?

I think the most frightening thing about this development is how quickly we went from a world where creative pursuits like photography, painting, and writing were safe from machines to a world where that’s no longer quite so true. But as with any technology, there will come a time very soon when you won’t be able to believe your own eyes or ears; machines will learn and evolve at breakneck speed.

Of course, it’s not all doom and gloom; if I were a graphic artist, I would start using the latest generation of tools for inspiration. Many times I’ve been surprised at how well something came out, only to think, “But I wish it were a bit more [insert creative vision here].” If I had the graphic design skills, I could take what the AI gives me and turn it into something closer to my vision.

This may not be as common in the art world yet, but in product design these techniques have been around for a long time. For printed circuit boards, machines have been generating first drafts of track layouts for many years – drafts that are then, of course, optimized by engineers. The same goes for product design; Autodesk demonstrated its generative design capabilities five years ago:

It’s a brave new world for any job (including my own — I had an AI write most of a TechCrunch story last year) as neural networks continue to get smarter and there are ever richer datasets to work with.


Let me finish with this extremely disturbing image, in which several of the people the AI placed in the photo look recognizably like me and other TechCrunch employees:

“A TechCrunch Disrupt staff group photo with confetti.” Photo credit: MidJourney, with a prompt by Haje Kamps, under a CC BY-NC 4.0 license.

The MidJourney images used in this post are all licensed under a Creative Commons Attribution-NonCommercial (CC BY-NC) license, and are used with the express permission of the MidJourney team.
