Published On: October 16, 2023

We’re all in on generative AI. It’s the secret ingredient in our creative process and it adds huge value to our team by helping us generate copy and ideas within certain defined practices.

But the image thing? Whoa there, robot face. That’s a sacred place you’re treading on.

Maybe it’s fine for cosplay flyers and personal projects, but not for commercial use. It’s non-ownable. And it’s stolen from human creators who will not have any recourse for the infringement.

The truth is that many AI image-generation tools are trained on anything their developers can get their beady little computer eyes on. None of it can be trademarked or copyrighted, and the original creator of a digital piece, photo, image, or style isn’t compensated in any way.

This presents a problem for an agency like ours that may want to use AI-generated images in the work we do for our clients.

Fortunately, the two 900-pound gorillas in the industry have our backs, both offering decent products and both attempting to do the right thing.

Getty Images’ tool is trained only on fully licensed, creator-compensated images and is fully indemnified, with no cap on the amount of protection. The interface is easy to use and gets better with more detailed prompts. It struggles a little when the image library it was trained on doesn’t have much reference material. It does a good job of creating realistic humans (at first glance), and its ability to make convincing conceptual images is impressive. Cotton candy house? No problem.

Adobe’s AI tools are built into its creative suite. Photoshop’s Generative Fill and Generative Expand features make editing images a piece of cake. Need more grass? Instead of recreating it yourself, just tell it to add grass, and bam, you’re done. Generating from scratch is more hit-and-miss (how many toes do kittens have?), but like all things AI, it depends on the instructions you give it. Adobe says images created with its tools will not infringe on anyone’s intellectual property and that “original creations” made using these tools may be copyrightable, but that’s still in flux.

With the ethics issue addressed (for now), let’s move on to the sexier side of AI image generation. Having recently emerged from beta-testing land, we asked our designers to weigh in further on these game-changing tools.

Sam is a designer, developer, illustrator, painter, artist, gamer, and coder.
Beth is a creative director, designer, developer, and AI fine tuner.

As a professional graphic designer, how do you feel about generative AI as a tool?

SAM: I’m excited about the efficiencies it could offer in the process of visually representing an idea. Many ideas I have as a designer don’t make it past the presentation-ready execution stage. If an idea is too time-consuming to create, or I’m unable to rapidly prototype and refine it, project scope and timeline restrictions sometimes force me to leave it behind. The better AI tools get, the less often that might happen.

BETH: I’m interested in continuing to refine the list of tasks I can use it for. And I’m excited for it to evolve to a place where we can use it to really push ourselves conceptually.

As an artist, how do you feel about generative AI in general?

SAM: I think the sensibilities an artist needs to create good work might become more accessible with generative AI. I believe it won’t be relied upon to create masterful works of art on its own, but it could be a powerful addition to an artist’s toolbox. I’m concerned about the new legal gray areas being opened up by artwork created with generative AI and what they could mean for the future of artists in a professional sense. Generative AI has certainly already prompted an adjustment in how artists and their work are valued. Where that will shake out remains to be seen.

BETH: I’m less likely to call myself an artist than a craftsperson, but I’m hugely excited about being able to get what I see in my head onto the screen without relying on the talent in my hands.

What types of projects might you use an AI generated image for?

SAM: I don’t see a limit to what projects could be aided by generative AI. If it can help ‘put pen to paper’ it can be used for anything visual.

BETH: Presentations and proposals will never be the same. Also, quick mockups and prototypes, conceptual images, and social posts. Using it for client work will require more thought, but it’ll be there soon.

How do you think AI tools need to improve to be more useful?

SAM: If the tools could visualize how prompts are affecting the output, adjusting prompts would be easier and the overall utility of the tools far greater. As it stands, if a prompt isn’t yielding the expected result, it often seems easier to start from scratch, or to give up and try something more manual. I could also see how relying less on words and making the UI more visual could improve generative tools.

BETH: I know people like to think of it as ‘intelligent’, but it isn’t. And it has no emotional investment in being good or right. It tends not to pay attention to details the way a human does. One expression in the human eye is very like another to the prediction algorithm. But a person will know in a heartbeat that something is wrong with the image if the eyeline is off by a micron. Until some of those things get worked out it’ll be tough to rely on.

What kinds of guardrails or limits do you think are important to have around AI tools for image or text generation?

SAM: Use of private photos and information should obviously be off-limits for AI training. I don’t believe it will be possible to police the use of these tools to manipulate facts and create fabricated images; the public should be educated, and should expect to educate itself, on what they are capable of. In terms of content, I think companies can place their own guardrails and should be held accountable by the public for any abuses.

BETH: I think we’re already too late with most guardrails. Adobe’s and Getty’s tools are built by ethical, well-respected companies with a lot on the line. Not everybody has that kind of pressure on them. I’m afraid it’s going to be another case of black hats vs. white hats until things settle down a little.

Any specific insights/comments about any of the image or text generation tools you’ve used?

SAM: So far, I’ve seen impressive execution, but most artwork looks like mimicry rather than anything truly unique. I expect this could change as the tools get better and artists improve their mastery of them.

BETH: What Sam said. Plus, I’ve been fine-tuning LLMs for a few months now and I’m most excited about how the curation of specialized training data will take things in a remarkable direction.

About the Author: cat-tonic

Born of curiosity and enthusiasm, we’re a scrappy group of smart, passionate marketers who work hard and play hard. We show up every day and fight for our clients who are making the world a better place. We listen with curiosity, explore deeply, ask hard questions, and sometimes put forth ideas that might make you squirm. Because we believe the status quo is good for growing mold but not much else. The way we see it, change is the way forward and the magic happens when curiosity, math, science, instinct, and talent intersect.