an informal op-ed nobody asked for

free-form ramblings on the role of Ai image & text generation (among other disruptions) in creative industries + education

(and some other teaching stuff)

Jarred Elrod, DALL•E 2, Writing, Creative Culture

“Tile mosaic of a human heart” one of my early experiments with DALL•E 2. Seems like a fitting image for this post 😂

Like a lot of folks in the creative world, I’ve had thoughts/fears/questions about this topic floating around in my head for months now. I thought some good old fashioned journaling might free up some space in my mind so I can get on with life. So… Let’s start with a little foundation work. I identify as a creative practitioner here on my website because I’m not a huge fan of discipline labels—especially since most people who work in creative domains jump in and out of a variety of disciplines. That said, I am trained as, and have worked as, a graphic designer. I’m also trained to teach graphic design in a traditional sense, like, early-2000s sense 😅. Things have obviously changed and the general discipline has broadened a lot since then. The term graphic designer doesn’t even make much sense anymore—so I landed on creative practitioner. I’ve tried my best to keep up, but my cranky-grandpa streak really seems to be coming out quite a lot these days. At any rate, I’m coming at this topic more or less as a graphic designer who likes to illustrate, print, cut, build, and write. I love using my hands and moving my body as part of my work. I still shoot film, for god’s sake.
 
This post isn’t going to be about the different Ai generation platforms out there for image and writing generation—just hop on instagram and look at some hashtags for that. I’m not an Ai expert either. My goal here is to attempt to broadly conceptualize what is happening and offer some insights or questions about where it might lead us culturally. I’d also like to acknowledge that we have, in fact, enthusiastically onboarded Ai for a while now and that it is deeply baked into the creative software we use (among many other things that are part of everyday life)—ahhh, sweet content aware. What I’m talking about here is the explosion of imagery and the high-level public discussion about ChatGPT that has circulated on social media, popular podcasts, and mainstream news outlets in the past year.
 
Before any questions or insights, though, I want to start with a story. The other day I was scrolling on instagram (as one does). I happened upon an ad. In that ad there was a beanie-clad dude raving about how you can now build an entire ad using Ai platforms to write copy, build the imagery AND size the ad / build the layout. He proceeded to generate an image of a candle using prompts on one of the platforms (maybe DALL•E 2 from OpenAI) and then ChatGPT to write the copy. Then there was some other browser-based software with a zingy name I can’t remember for generating sized ads for any media platform. Voila—a beautiful ad ready for publication. Obviously, the issue here is that there is no real product to back the ad, which is both sad and stupid (this might be different for students making concept work, but we’ll get into that later). But I felt a pang of real sorrow as someone who enjoys the process of making things. These prompted and automated processes pretty much eliminated the making aspect at every step. There was no candle or packaging prototyping, no product photography, no layout design. No kerning, for god’s sake!! Just beanie guy in his garage writing prompts. The quirky struggles that happen along the way as we make are the things I enjoy most about the creative process—something that can be practiced, but never fully mastered. This was just a click-baity ad that is certainly not 100% indicative of the end of graphic design, but it still illustrates a model of alleged “creativity” devoid of actual making that I’m not comfortable with.
 
The most common narratives I’ve heard designers use about coming to terms with Ai image and text generation are these: It’s another tool for the tool belt. A good way to rapidly ideate. If something generated is liked, a professional comes in and rebuilds, or essentially post-processes and dials in, the imagery to a higher level. In the future it will make our work faster as Ai training models improve and move more into motion graphics. I hear this, and it all makes total sense to me. And yet… I’m a little concerned about the tone of overconfidence in having a handle on how this thing will develop. Some designers—especially illustrators and those who write—are freaked out. And rightly so. Christoph Niemann (among others) recently posted an idea about a kind of Ai-free certification for imagery, and things got spicy in the comments. I made the mistake of going into those comments, and it seems everybody thinks they know exactly how it works, yet nobody seems to be able to come to a consensus as to whether training models are ripping artists off or creating original work built from the “influence” or “inspiration” of other works.
 
Getty seems to think they are getting ripped off, as they are suing Stability AI (the company behind Stable Diffusion) for using a shitload of watermarked images in its training sets. That’s a lot of infringement. Do Ai training sets rip artists off—in many cases using their work without legal permission? Yes. Do Ai training models build images from scratch using other images as “inspiration”? Yes. Will lawsuits stop development of platforms like this? No. More than likely litigation will just produce a crappier version of the inevitable—a Windows 8 version, if you will. Truth is, it’s a both-and scenario. The difference is the speed and scale. Artists and designers have been ripping each other off… a-hem… I mean making work “inspired” by previous works forever. Where we had individual people building copycat works before, we now have browser and app-based platforms that allow anyone with a pulse access to machines that possess insane amounts of “inspiration” and processing power… just cranking out droves of imagery and text in seconds. Whether you like this shift or not, it is happening, and the toothpaste ain’t going back in the tube.
 
I think what is happening, and the arguments around this whole thing, boil down to a “complicated” vs. “complex” issue. I’m reading “Team of Teams” by Gen. Stanley McChrystal right now. Love or hate the guy, there are some deep insights about how technology and our interconnectedness have reshaped the global communication landscape (and so much more). I won’t get into any more specifics, but I do want to extract the “complicated vs. complex” concept from the book because I think it really applies here. According to McChrystal, “complicated” is something that has a lot of moving parts but can ultimately be broken down and rebuilt in a predictable way—in other words, it can be figured out. A complicated machine (like a car engine) has many parts, but it behaves predictably, and problems can be more or less diagnosed and repaired objectively. “Complex” is by nature organic and unpredictable. Complex problems are non-linear (like weather), and solutions cannot necessarily be planned for strategically using traditional, linear planning methods. I think Christoph’s idea of having non-Ai images certified (while well meaning), or even viewing this issue through a traditional copyright lens (which was already woefully outdated pre-internet), proposes complicated solutions to a host of very complex problems that we haven’t even begun to understand. But guess what? It’s happening anyway.
 
So, I am wary of this whole “it’s just another tool” justification. Until recently I taught at large universities. Everything within the structure of these universities, from the president’s office down to the single classroom, is built around hierarchical, predictable, complicated frameworks. I’ll be honest, I loved the predictability in the structure—being able to plan 5 years out and actually believe we’d have the stability within our department, let alone the world, to carry out that plan was super comforting. That’s just not the case anymore. At my last job, I witnessed (at least in my opinion) the full collapse of the complicated model in academia. When the global Covid pandemic swung into full force, we saw most strikingly what it looks like when complicated systems clash with complex problems. We responded by attempting to push on in a fashion that was as “normal” as possible. In many cases, this resulted in wearing students to the nub trying to deliver educational content through traditional teaching methods via zoom or other video conferencing platforms. Or even better, forcing teachers back into the physical classroom super early against a strong consensus of actual expert advice. Of course, we got better at teaching remotely quickly, but it was all definitely still very much a round peg / square hole scenario, as McChrystal puts it in his book. But even aside from Covid, the traditional complicated model in education was breaking down, and had been for several years. In many cases students don’t go to the teacher if they have a problem anymore; they go straight to the dean—or to social media, where some of them have very large followings. Traditional hierarchies are definitely becoming a thing of the past. It’s not right or wrong, it just is. How could we not expect everything to shift given the massive technological changes and cultural upheavals fueled by our new, “complex” interconnectedness?
I’m getting away from Ai a bit, but the complexity of Ai and how it fits into traditional education models is a perfect example of the disruption that will continue to be the norm in academia and other large corporations. Heck, after working at one university with a massive lazy river and another that paid its football coach a 12-million-dollar buyout, firing him less than a year after giving him a contract extension… public universities are basically corporations.
 
Short of running design programs like independent design studios that don’t have to answer to schools or colleges, and abolishing tenure, I’m not sure where traditional design programs can go from here. Is there enough need in the market for school-studios, and would existing professional studios want that? Probably gonna be a real hard no. Do I want that? Heck no! I want a safe space for students to learn and experiment. I want to add at least 6-9 credit hours of mental-skills training and self-care-related curricula to all design programs. I want teachers who are hired and paid well and fully supported to kick ass teaching students at high-level universities—the students deserve it for the price. I want students to be required to study abroad in a fully school-funded program for at least one summer. If we have to give up the lazy rivers for that, so be it. I’d definitely like to see some adjustments to the tenure process and some actual checks and balances once tenure is achieved. I would love to see a reasonable—like, 150K/year MAX—salary cap for athletic coaches. I’d like to see university presidents who have a recent background in teaching, not politics. All that said, I suppose as long as schools are making enough money from enrollment, things will continue on.
 
How do we make and teach successfully in complex environments? I wish I knew for sure. McChrystal’s answer would likely be to increase our resilience to disruption. I’ve been informally asking talented young folks just out of school, in addition to seasoned teachers and designers, about this. How are they thinking about Ai image and text generation within the scope of their own creative practices? Nobody has a satisfying answer. One thing I do find both comforting and disturbing is that humans, too, are very “complex”. Even with insanely powerful making tools at their fingertips—so powerful that you can just ask a platform to build something for you on demand in seconds—people still get a high from making things themselves and from collaboration at human-to-human scale. I see a bit of a creative insurgency on the horizon for those of us who like to get our hands dirty (figuratively or literally). The eff-it-just-delete-everything side of me thinks… can we just block this out and pretend it isn’t happening by moving back towards true specialization? BTW, that would be letterpress or film photography for me. If you wanna get real funky with something you’ve gotta get real focused, right? Then again—and this is something else McChrystal mentions in Team of Teams—specialization generally leads to less overall resilience, because we leave ourselves more vulnerable to external disruption when we are locked into one specialization or way of thinking. This makes a lot of sense to me, especially within the “complexity” context when I think about my last few years of teaching. Our day-to-day teaching was impacted deeply by political upheaval, a global health pandemic, natural disasters, etc. These outside factors obviously impacted everyone—we don’t come to school or work in a vacuum. Pain manifests itself regardless of context.
My role as someone who was a specialist at teaching graphic design felt like it had shifted into something far more nuanced and complex by 2021.

Jarred Elrod, design, all or nothing, tutto o niente

⬆️ I’ve been afflicted with this sort of all or nothing mentality (tutto o niente in Italiano) for a while now. Stay in or come out of the shadow. Hmmm. I’ve made a lot of work about it. My MFA thesis was titled “Everything and Nothing.” Yeah—it’s a thing. This mentality really flares up when it comes to onboarding emerging technology I’m not comfortable with. Example: I can’t just have instagram and not existentially obsess about whether I should have it or not… how I should use it, what social media is doing to us in general, etc. I get sick of hearing myself talk to myself about this, lol. I’d say an all or nothing mentality is definitely not a resilient way to think. My urge to run back towards the safety of specialization / familiarity is just a deep urge to “figure things out.” Figuring something out is most certainly a “complicated” mode of thinking. The immensity and organic nature of—ethics and truth in social media, for example—is an incredibly “complex” problem. These things cannot be figured out, just navigated. I know this, yet I still find myself wasting precious creative fuel trying to “figure out” social media. Ain’t gonna happen! The Buddhists say desire is the root of all suffering… they’re right. So… how do we creative practitioners shift from thinking about complex things in complicated ways? Guess I’m not quite ready to answer that yet, but if I had to now: it’s becoming clear (for me at least) that letting go of the “figure it out” mentality, and of anything that feels reactive or like a “quick fix” to a complex disruption, is a good start. Pursuing long-term goals within a domain of genuine interest that requires daily discipline, patience, collaboration, and humility seems to be the best way forward. I’ve written a lot about my ongoing struggle to learn Italian here in the journal. This is a good basic example of this concept. The process of learning the language has taught me so much more than just the words.

Students and professionals alike have been using stock photography and found images forever in their work—this is more or less the super-slow, inefficient version of dialing up something with Ai. It will be interesting to see how Ai imagery and text change the complexion of student and professional portfolios as we move forward. There’s no sense in trying to prohibit it unless it’s for a focused exercise. Will we even notice a difference? And what about when training sets are trained on other Ai-generated images? It will be very interesting/scary/exciting to see how this alters our visual and written communication landscapes. Typography is also an exciting frontier. We’ve had variable and parametric fonts for a while. But now people are using Ai to do everything from pairing and kerning type to designing full typefaces that leverage Ai image generation. I suppose the best of the best will rise to the surface in any domain regardless. This will both thin and thicken the herd (creative community) at the same time, but will the new complexion be ethical? Socially just? Not saying it’s those things now—and maybe this will make our existing problems exponentially worse as we ramp up speed and scale with Ai riffing off our existing biases.

Continue the daily creative practice in the face of uncertainty—focus on big projects that demand deep sincerity, time, developing relationships with others, and of course, detail/nuance. Resist the urge to produce outcomes for outcomes’ sake. That’s the space I try to occupy. As an educator and someone who went to school when building a static website was a big deal (I learned about HTML in undergrad by building myspace profiles 😅), this is all super scary and also exciting. It just depends on the day you catch me. One day I want to sample and design it all; the next day I want to move into our van and start chopping firewood for the impending apocalypse. I suppose that’s why I’m writing this—continue the practice. I’ve been experimenting with DALL•E 2 from OpenAI. It’s good at some things and terrible at others. I’m sure this will improve quickly. It’s out there. That’s really all I can say about it at this point. There is something super creepy about how Ai generates faces and snippets of words—they feel like they’re from some other universe or level of consciousness. I can’t say I’m 100% comfortable with it. It’s almost like looking at a dead person or an unfamiliar language from outside this planet.
 
We all know where this is heading… and the train left the station a while ago. My greatest fear within this context is living in a world where we don’t know if any piece of media we consume is real or fake. Younger folks seem to be more comfortable with this notion. I am not. Without sound ethics and sensible policy implementations (with broad public support) that protect citizens / consumers from the highest level of government, we’re in trouble. I think we should be spending less time litigating copyright in legacy systems and more time trying to figure out how we are going to come to an ethical consensus on what Truth is, where the lane bumpers are in our world of communication, and how we are going to navigate this thing together. The 45th U.S. President, among others whose names will not be mentioned here, has already shown us there is a community of folks out there who will do literally anything in any domain to gain political / corporate power or leverage. Given the polarized nature of politics right now, I’m fearful of potential outcomes. This is a discussion that goes way beyond creativity. And definitely fodder for another post.
 
When I think about this too much I still can’t help but have escapist fantasies of 23-year-old grad-school me blissfully printing away on our Vandercook SP-15 at 3 AM. Good times. Ignorant times? And this is where I have to leave it for now—alla prossima (until next time).

and for no particular reason a few more selects from the mosaic binge I went on with DALL•E 2