AI-First Game Development. Part 3. The Chasm Between Design and AI-Generated Art

Mark Rodseth
8 min read · Feb 22, 2024

Welcome to part three of my series, AI-First Game Development.

This series sets out to answer the following questions:

Can AI help a rusty ex-coder with a passion for games, and not a lot of free time between work and family, create a decent computer game? If so, how much of the load can the AI take, how has the software engineering process changed since I was last in the driver’s seat, and what other aspects of game design and development can AI help with beyond writing code? And finally, what are the best tools out there to help me with the job?

If you want to catch up or skip ahead, here is a list of all posts in the series.

Part 0: AI First Game Development (Intro)

Part 1: AI-First Game Development. Training the GPT

Part 2: AI-First Game Development. My AI Teacher

Part 3: AI-First Game Development. The Chasm Between Design and AI-Generated Art

Part 4: AI-First Game Development. The Ideas Machine

Part 5: AI-First Game Development. The Engineer Lives

In the previous posts, I talked about setting up my AI tools to build a fun, casual game based on the idea of popping bubble wrap. My tools included an OpenAI GPT customized to support me with Unity Development and GitHub Copilot. I also talked about how AI can be a great teacher if you know how to ask the right questions.

But Is It Art?

This is the part of my journey I was most excited about. I love art and graphic design and have worked with many great designers throughout my career.

Being a closet designer myself, I have also tried to design things like websites, apps, games, decks, book covers, and more. The problem is that I am constantly reminded I lack the natural ability to do a good job. Sometimes I can make something look OK, but never good, and great is beyond my wildest dreams.

And this is why using AI art generators excited me the most. After all these years of frustration looking at my rather ham-fisted attempts at style, usability and beauty, AI could do it with the magic of prompting. I’ve been playing around with image generators like DALL-E, Midjourney and Stable Diffusion since they came out. I’ve used them successfully to help me with projects like writing my last book, Hello, World!

(They were brilliant in this regard, helping visualize settings, characters and future technology, and creating assets to promote my book).

Using them to design a game is a completely different story. Here is my story of disappointment, drafting in human support, conflict and then perfect unity… in Unity.

It Began With a Bubble

For those who have been following this series, my game idea was based on popping bubble wrap, and I decided to learn Unity by building a bubble-wrap-popping experiment. In this experiment, little characters would hide behind the bubbles for you to pop and collect, along with booby traps you had to avoid.

Nipping off to GameSpark GPT, which is now multi-modal, I asked it to create a bubble-wrap bubble. This is what it produced.

Many attempts later, I realized bubble wrap wasn’t working well. I pivoted to a plain old bubble. The result was much better, and something I could turn into a sprite with a bit of Photoshop jiggery-pokery.

In my experiment, I created a grid of these bubbles, but needed to make them explode on tap. I asked GPT to make me a sprite sheet of a bubble popping.

Hmmm. Not quite what I had in mind. And what the hell happened in the last frame? After fiddling around with some different prompts, I still couldn’t achieve the result I had in my mind.
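For context on what I was aiming for: in Unity, a sprite sheet like this gets sliced into individual frames in the Sprite Editor, and the pop animation is then just a matter of stepping through those frames. Here is a minimal sketch of that idea (the class and field names are my own illustration, not actual project code):

```csharp
using System.Collections;
using UnityEngine;

// Minimal sketch: play a "pop" by stepping through sliced sprite-sheet frames.
// BubblePopAnimation, frames and frameRate are illustrative names.
public class BubblePopAnimation : MonoBehaviour
{
    [SerializeField] private Sprite[] frames;       // frames sliced from the sprite sheet
    [SerializeField] private float frameRate = 24f; // frames per second

    private SpriteRenderer spriteRenderer;

    private void Awake()
    {
        spriteRenderer = GetComponent<SpriteRenderer>();
    }

    // Call this when the bubble is tapped.
    public void Pop()
    {
        StartCoroutine(PlayPop());
    }

    private IEnumerator PlayPop()
    {
        // Step through each frame of the pop, then remove the bubble.
        foreach (Sprite frame in frames)
        {
            spriteRenderer.sprite = frame;
            yield return new WaitForSeconds(1f / frameRate);
        }
        Destroy(gameObject);
    }
}
```

The catch is that this only works if the generator hands you clean, consistent frames, which is exactly where it was falling down.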

Never mind; let’s move on to the characters and booby traps behind the bubble. My idea was to create cute little creatures behind the bubbles, plus a smoke bomb that, when tapped, would explode in a puff of smoke, making it harder to tap the right bubbles.

The game mechanics were starting to brew in my mind, including a countdown in which you had to collect a certain number of creatures within a time limit while avoiding booby traps.
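To make that concrete, the countdown mechanic would reduce to something like the sketch below, where the names, thresholds and penalty values are placeholder assumptions rather than anything tuned:

```csharp
using UnityEngine;

// Sketch of the countdown mechanic: collect enough creatures before the
// timer runs out; popping a booby trap costs you time. All numbers are
// illustrative assumptions, not real game code.
public class LevelTimer : MonoBehaviour
{
    [SerializeField] private float timeLimit = 60f;        // seconds per level
    [SerializeField] private int creaturesToCollect = 10;  // win threshold
    [SerializeField] private float trapPenalty = 5f;       // seconds lost per trap

    private float timeRemaining;
    private int creaturesCollected;

    private void Start() => timeRemaining = timeLimit;

    private void Update()
    {
        timeRemaining -= Time.deltaTime;
        if (timeRemaining <= 0f)
        {
            Debug.Log("Time's up!");
            enabled = false; // stop ticking once the level is over
        }
    }

    // Called when the player pops a bubble hiding a creature.
    public void OnCreatureCollected()
    {
        creaturesCollected++;
        if (creaturesCollected >= creaturesToCollect)
        {
            Debug.Log("Level complete!");
            enabled = false;
        }
    }

    // Called when the player pops a booby trap.
    public void OnBoobyTrapHit() => timeRemaining -= trapPenalty;
}
```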

Here are some of the monster characters and a smoke bomb GPT created for me. Not bad. Good enough for a POC.

A cute, 2D monster
A smoke bomb

I got to work building my grid, adding an object spawner to randomly place monsters and booby traps behind bubbles, and responding to the player tapping a bubble. This post is about design, so I’ll leave how GPT helped me code to a later post.
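For a taste of the structure, though, the whole POC boils down to roughly this shape: a spawner lays out the grid, hides either a monster or a booby trap behind each bubble, and pops whatever the player taps. This sketch reuses the hypothetical BubblePopAnimation from earlier, and every prefab name and number in it is a stand-in, not the project’s actual code:

```csharp
using UnityEngine;

// Sketch of the POC's shape: build a grid of bubbles, hide a randomly
// chosen monster or booby trap behind each one, and pop bubbles on tap.
public class BubbleGridSpawner : MonoBehaviour
{
    [SerializeField] private GameObject bubblePrefab;
    [SerializeField] private GameObject monsterPrefab;
    [SerializeField] private GameObject boobyTrapPrefab;
    [SerializeField] private int rows = 6;
    [SerializeField] private int columns = 4;
    [SerializeField] private float spacing = 1.2f;
    [Range(0f, 1f)]
    [SerializeField] private float trapChance = 0.2f; // illustrative odds

    private void Start()
    {
        for (int row = 0; row < rows; row++)
        {
            for (int col = 0; col < columns; col++)
            {
                Vector3 position = new Vector3(col * spacing, row * spacing, 0f);

                // Spawn the hidden object first, then the bubble in front of it.
                GameObject hidden = Random.value < trapChance ? boobyTrapPrefab : monsterPrefab;
                Instantiate(hidden, position, Quaternion.identity, transform);
                Instantiate(bubblePrefab, position, Quaternion.identity, transform);
            }
        }
    }

    private void Update()
    {
        // On tap/click, pop whichever bubble was hit.
        if (Input.GetMouseButtonDown(0))
        {
            Vector2 worldPoint = Camera.main.ScreenToWorldPoint(Input.mousePosition);
            Collider2D hit = Physics2D.OverlapPoint(worldPoint);
            if (hit != null && hit.TryGetComponent(out BubblePopAnimation bubble))
            {
                bubble.Pop();
            }
        }
    }
}
```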

Now, with my POC done and the result satisfying enough, I needed to think about what to do next.

The limitations I had experienced in creating sprites were starting to bother me. At some point, I would need to productionize my graphics, which would mean creating images, sprite sheets, animations, menus, buttons and (dear God!) graphical text, all of which would have to be unified under a broader game design and style.

I was beginning to see the chasm between Design and AI-generated art.

Not beaten yet, I wanted to push the image generators a bit more to see how good they were at game graphics beyond simple sprites.

Here is a “bubble popping level screen for a bubble popping game called bubble pop hero. make it clean, simple, bright and engaging.”

OK, quite nice conceptually… but it seemed pretty derivative of other games out there, and it had made up its own Candy Crush-like dynamic. And how on earth would I decompose this into individual graphics to rebuild the screen?

Side Note: It’s also an optical illusion where the bubbles on the left and right seem to move around when you stare at the center.

Yes, I needed help.

And I knew just the guy: my mate Daz, a talented creative, illustrator and designer with lots of experience designing graphics for games.

Enter the Designer

At this point, I had a conundrum. My experiment was called AI-First Game Development, not AI-First Game Development plus my mate, Daz, the Designer and Illustrator. Was calling in human help cheating?

No! Needing humans to complete even a small project in an AI world was a valuable (and, for humanity, positive) lesson to learn in this process. I was also very interested to see whether I could get a designer/illustrator to work in an AI-first way, and whether he could speed up his workflow using AI image generators.

After a quick Zoom chat, Daz was on board. We have worked on a few pet projects in the past, including apps, T-shirts, digital stickers and a Kickstarter. We work well together, and our pet projects have never gotten in the way of our friendship. We had never created a game together, though.

I told Daz about my idea of being AI-first, and he said he had started experimenting with Midjourney but had not yet found a practical use for it. My little experiment above made me realize why. He was on board with the AI-first way, though, so this would be a good learning experience for him too.

He looked at my experiment and had some good ideas about the overall game mechanic, AND had a bank of 90 cute, illustrated animals we could use for the collectables part of the game. Handy! We also decided to change the name from Bubble-Wrap Hero to Bubble-Pop Hero, as pure bubble popping would be more visually appealing and easier to implement.

We were set up for success and wanted to get cracking. How to work together was the first thing to figure out, and this is when our first collaborative bubble burst.

(Apologies for the bubble-popping metaphors, but I’m a dad and can’t help myself)

The Great War Between Engineering and Design

Daz and I butted heads over two things: How to design a game and how to work together.

My game design had been bottom-up so far, inventing new features as I experimented in Unity. Daz was used to a top-down approach: designing the game in its entirety, then sending the assets and specifications over to the engineering team, with fine-tuning beyond that.

Secondly, Daz had his own workflow: producing assets and shipping them over to the dev team to integrate into the game, along with a specification document describing sizes, states, behaviors and more.

We discussed how to work together. I wanted Daz to work directly in Unity, so I gave him a short overview of the game I had developed and how Unity worked, and said: it’s easy, right?

We also discussed how to design a game. I wanted to experiment, tweak and build the game up from there. Daz wanted to design the game to 80-90% fidelity before we started building.

This post isn’t about collaboration, and AI was no help when we needed to defuse a few heated debates and a near bail-out from the collaboration.

One WhatsApp message from Daz sums up this fractious stage:

What about a word game?

After an honest chat, we worked through our stubborn, middle-aged man points of view and agreed to find a middle ground. And that middle ground has been brilliant so far.

Daz is working in Unity. We don’t have long specification documents, and I don’t have to load assets into Unity and try to get the visuals and animations working.

We are also doing more top-down design.

And, importantly, the collaboration is fun and rewarding.

You will have to stay tuned to this series to see the results of a human-first design approach.

What’s in a Game?

But I’m jumping ahead.

With our new collaborative model in place, we needed to flesh out the game mechanics. And this is when AI came back into the picture.

Read on in Part 4 when I explore how Generative AI can help with the ideation process.
