AI-First Game Development: Part 5: The Engineer Lives

Mark Rodseth
6 min read · Mar 21, 2024
MidJourney: software engineer frankenstein 1940s --ar 4:1

Welcome to Part Five of my series, AI-First Game Development, where I try to answer the following questions:

Can AI help a rusty ex-coder with a passion for games, and not a lot of free time between work and family, create a decent computer game? And if so, how much of the load does the AI take, how has the software engineering process changed since I was last in the driver’s seat, and what other aspects of game design and development can AI help with beyond writing code? And finally, what are the best tools out there to help me with this job?

If you want to catch up, check out my previous posts in the series.

Part 0: AI First Game Development (Intro)

Part 1: AI-First Game Development. Training the GPT

Part 2: AI-First Game Development. My AI Teacher

Part 3: AI-First Game Development. The Chasm Between Design and AI-Generated Art

Part 4: AI-First Game Development. The Ideas Machine

This part of the series has been the most daunting to tackle as it questions a skillset I have taken decades to develop: the craft of software engineering.

The Dusty Rusty Coder

Writing about how AI can help a rusty coder dust off his …rust?… and tackle a fairly complex coding challenge has involved a substantial period of remembering, updating myself on new engineering approaches and techniques, experimenting with GameSpark GPT and GitHub Copilot, learning Unity development, lots of trial and error, and self-evaluation and reflection to get to a point where I have a meaningful point of view.

I think I’ve got there, and have three main perspectives that encapsulate my learnings: The AI Software Engineer, The Empowered Product Designer and The New Workflow.

Perspective One: The AI Software Engineer

A colleague said to me that having Generative AI help you write code is like being given a Junior Developer.

Through this AI-First Game Development experience, I’d say it’s also like being given an Ivory Tower Architect, and the role of the Engineer is to integrate the outputs of both into the broader context of the solution.

Here is a diagram to illustrate this.

The N-Pizza Team

I’m calling this idea an N-Pizza Team as a play on Jeff Bezos’ 2-Pizza Rule:

We try to create teams that are no larger than can be fed by two pizzas.

In this perspective, AI-first engineers become N-Pizza Teams. That is, they become a one-person team with an infinite number of bots to help them architect and write code, without a pizza limitation on how many additional AI members they have.

I may have been hungry when thinking this up.

Nevertheless, the AI-First Software Engineer sits in the middle, integrating the guidance of the AI-Architect and the code of the AI-Engineer into the wider (technical and business/product) context of the solution.

The opportunity to make our N-AI-Software Architects and N-AI-Software Engineers better is through context mapping: the more context they have, the less ivory-tower the Architects will be, and the less junior the Software Engineers will be.

Some additional thoughts on the AI-First Software Engineer:

  • Language and skill. Knowing what questions to ask and what instructions to give the LLM requires strong language skills and a high degree of knowledge and experience.
  • Judgement. Judging the output of the AI bots requires a strong foundational knowledge of engineering best practices in the context of your solution.
  • Auto-complete vs Prompt-first vs Prompt-supported. There are three main modes of AI-assisted development. Auto-complete, where you rely on your IDE tooling (e.g. GitHub Copilot) to suggest code that you tab through and accept. Prompt-first, where you interface with your code only through prompts and make every change through prompts. And Prompt-supported, where you typically start with a prompt but then shift into directly editing the code yourself, in a back-and-forth with the LLM. I found myself mainly operating in this last mode (a sketch of what that looks like follows this list).
  • Some things never change. Mysterious bugs, bad documentation, rabbit holes and bad estimation (fortunately I didn’t have a deadline) all still remain. AI can help with these a lot, but I found myself in this all-too-familiar territory more often than I expected, even with my AI assistant.
  • The Context Problem. I often found myself wishing for a better way to have the LLM understand the broader context of my solution, rather than having to inject that context into my prompts. Whoever solves this best will be a winner. My guess is GitHub Copilot will be first over the line.
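
To make the prompt-supported mode concrete, here is a minimal, hypothetical sketch of the kind of back-and-forth I mean, written as Unity C#. The prompt wording, the class, and the field names (Bubble, popSound) are invented for illustration and are not code from the actual game.

```csharp
using UnityEngine;

// Prompt to the assistant: "When the player clicks a bubble, play a pop
// sound and remove the bubble from the scene."
public class Bubble : MonoBehaviour
{
    public AudioClip popSound; // assigned in the Inspector

    // OnMouseDown requires a Collider on the same GameObject.
    void OnMouseDown()
    {
        // A first suggestion might play the clip through an AudioSource on this
        // GameObject, which is destroyed on the same frame. The hand edit is to
        // use PlayClipAtPoint so the sound keeps playing after the Destroy call.
        AudioSource.PlayClipAtPoint(popSound, transform.position);
        Destroy(gameObject);
    }
}
```

The point is less the code itself than the rhythm: prompt, accept a suggestion, then edit it by hand once you spot what the assistant couldn’t know about your scene.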

Perspective Two: The Empowered Product Designer

Generative AI tools give power to product designers. By product designers, I mean anyone who has an idea/vision for a product or feature and wants to test it out by building a proof of concept, but has limited engineering ability or resources.

This is best explained through Daz, my creative and design partner working with me on this game.

If you’ve been following the series, you will recall we had some debate about how to work. I wanted Daz to work in Unity and use AI. Daz was initially resistant, as this wasn’t how he had worked in the past. After a healthy debate, Daz started working directly in Unity rather than providing me with design assets and specifications to integrate into Unity. He also started playing around with ChatGPT.

Fast forward through some great collaboration and experimentation with this new way of working, and Daz dropped me a message with a short video of another idea he had been working on.

The video showed a fully fleshed-out level idea for another game. Daz had created a fully functional demo in Unity with animations, physics and great design. And he used ChatGPT to write the code.

This made me realize that Product Designers are now able to take their ideas much further into working software that can be used to validate thinking, test hypotheses, put something tangible in the hands of others, and even prove engineering viability.

Perspective Three: The New Workflow

Generative AI will have a major impact on our ways of working.

This will start with individuals who learn how to use Generative AI to help them improve their efficiency, quality, and creativity.

Then it will move to teams as newly empowered individuals figure out better ways of working together, taking advantage of their new AI team members.

Sequencing humans and bots together to complete a goal will result in different workflows that will revolutionize how we work.

This was demonstrated at a micro-scale with the Software Engineer becoming the AI Software Engineer, and the Product Designer becoming the Empowered Product Designer. But it is a glimpse into something much greater and more profound, where knowledge is no longer a barrier to achieving a goal and the best of humans work with the best of the bots in concert to create wonderful new things.

I’m reaching the end of my series, and hopefully will have Bubble Pop Hero out soon!

In the next chapter, I will cover some other areas I’ve missed so far in this series, such as AI Music production, AI Product Management, and the perils of Corporate Politics.

Stay tuned!
