AI-First Game Development. Part 2. My AI Teacher

Mark Rodseth
6 min read · Feb 17, 2024
MidJourney: Socrates in the stars, illustration style — ar 4:1

Welcome to part two in my series, AI-First Game Development.

This series sets out to answer the following questions:

Can AI help a rusty ex-coder with a passion for games, and not a lot of free time between work and family, create a decent computer game? And if so, how much of the load can the AI take, how has the software engineering process changed since I was last in the driver’s seat, and what other aspects of game design and development can AI help with beyond writing code? And finally, what are the best tools out there to help me with the job?

If you want to catch up or skip ahead, here is a list of all posts in the series.

Part 0: AI First Game Development (Intro)

Part 1: AI-First Game Development. Training the GPT

Part 2: AI-First Game Development. My AI Teacher

Part 3: AI-First Game Development. The Chasm Between Design and AI-Generated Art

Part 4: AI-First Game Development. The Ideas Machine

Part 5: AI-First Game Development. The Engineer Lives

Part 6: AI-First Game Development. Launch Day

In the previous post, I talked about setting up my AI tools to build a fun, casual game based on the idea of popping bubble wrap. My tools included an OpenAI GPT customized to support me with Unity Development and GitHub Copilot.

I was keen to flesh out the idea and start building the game, but I first needed to get to grips with Unity. I had played with it a little in the past as part of my relentless tinkering with development tools, but I would classify myself as a noob.

I’m an advocate of learning by doing, so decided to build a rough proof of concept for popping bubbles, and in the process learn the fundamentals of Unity game development. I would do this, of course, in an AI-First Way and use my new tools to support the learning process.

Before I get into my experience learning with AI, here is a short video of my early game prototype.

Please forgive the amateur shaky-cam footage of this.

You will also see some graphics inside the bubbles; I’ll talk about these and AI-generated art in the next post.

To build the bubble-popping proof of concept, there were a number of core Unity topics I had to familiarize myself with, including (a non-exhaustive list):

  • Game architecture (GameObjects, Cameras, Canvas, Components, Collision Detection, Animations, Prefabs, Sprites)
  • Code architecture (configuration, dynamic layout, game object generation and manipulation, common patterns, debugging, core libraries; see the sketch after this list)
  • Target platform architecture (testing on iOS and Android devices).
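
To make a couple of these concrete, here is the kind of script the proof of concept needed. This is a minimal, hypothetical sketch rather than the actual project code: the names (BubbleSpawner, bubblePrefab, the "Bubble" tag) are my own illustrative choices, and it assumes a 2D scene with an orthographic camera where each bubble prefab carries a Collider2D.

```
using UnityEngine;

// Hypothetical sketch: spawn a grid of bubble prefabs and pop (destroy) one when tapped.
public class BubbleSpawner : MonoBehaviour
{
    [SerializeField] private GameObject bubblePrefab; // assigned in the Inspector
    [SerializeField] private int rows = 6;
    [SerializeField] private int columns = 4;
    [SerializeField] private float spacing = 1.2f;

    private void Start()
    {
        // Instantiate a grid of bubbles as children of this GameObject.
        for (int r = 0; r < rows; r++)
        {
            for (int c = 0; c < columns; c++)
            {
                Vector3 position = transform.position + new Vector3(c * spacing, r * spacing, 0f);
                Instantiate(bubblePrefab, position, Quaternion.identity, transform);
            }
        }
    }

    private void Update()
    {
        // On tap/click, find the collider under the pointer and destroy the bubble it belongs to.
        if (Input.GetMouseButtonDown(0))
        {
            Vector2 worldPoint = Camera.main.ScreenToWorldPoint(Input.mousePosition);
            Collider2D hit = Physics2D.OverlapPoint(worldPoint);
            if (hit != null && hit.CompareTag("Bubble"))
            {
                Destroy(hit.gameObject);
            }
        }
    }
}
```

Attached to an empty GameObject, with the bubble prefab assigned in the Inspector, a script along these lines touches prefabs, instantiation, input and collision detection in one place.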

I also had to dust off my rusty coding chops. But this was, as the adage goes, like riding a bike again. Note: the last time I rode a bike, I hit an invisible bump in the road and ended up throwing my 90 kg body onto the tarmac and seeing how far I could slide on a combination of denim and flesh.

I digress.

This was a sizeable amount of learning to do, but building a basic POC would let me prove that rapid bubble popping could be fun, get me started on the development road, and teach me a whole lot in the process.

On AI-First Learning

The first thing to say about learning is that we all have different learning styles that work best for us. As mentioned, I am a strong advocate of learning by doing, and I believe that applies to everyone. Beyond that, we all have our preferences for absorbing knowledge, whether through instructor-led training, self-learning, watching videos, reading articles, doing online courses, code walkthroughs or searching the hell out of Google. In my learning process I did use these traditional approaches, but my GameSpark GPT became a valuable guide and teacher throughout this journey in the question-and-answer modality of learning.

Socratic Questioning

GPTs are brilliant at question answering (Q&A). The latest models have a vast body of knowledge that can be tapped to answer questions in a way that makes sense to you. Further to that, the ability to tune your GPT to be an expert in a specific domain gives you a knowledge deity at your beck and call.

But, with self-learning through Q&A, knowing what questions to ask is key, and this is where human knowledge and experience comes into play.

If I didn’t have an engineering background or a basic level of understanding of how to build games, my first question would have been, “How do I get started learning Unity?” That question would probably be better answered by watching an instructor-led training video.

But asking precise questions, targeted and oriented towards deeper goals (and where finding pre-canned answers would be difficult), is where GPT Q&A excels.

A simple example. In Unity there are two key views: the Camera View and the Canvas View. Both have very different contexts and usages within the platform. A quick query to GameSpark provided an excellent explanation of the difference between these two.

User
explain the difference between the canvas view and the camera view in unity

GameSpark
In Unity, the terms "Canvas View" and "Camera View" refer to two distinct aspects of the user interface and scene rendering, respectively. Understanding the difference between these two is crucial for both UI design and game scene setup.
...
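
To ground that distinction in code (my own illustrative sketch, not part of GameSpark’s answer): gameplay objects live in world space and are rendered through the Camera, while UI elements live on a Canvas in screen space. The hypothetical script below bridges the two by projecting a bubble’s world position into screen space so a UI label can follow it. It assumes a Screen Space - Overlay Canvas, and the names WorldToCanvasLabel, worldTarget and uiLabel are invented for the example.

```
using UnityEngine;

// Hypothetical sketch: make a Canvas UI element follow an object rendered by the Camera.
public class WorldToCanvasLabel : MonoBehaviour
{
    [SerializeField] private Transform worldTarget;  // a bubble in the scene (Camera/world space)
    [SerializeField] private RectTransform uiLabel;  // a label on a Screen Space - Overlay Canvas

    private void LateUpdate()
    {
        // Project the world position through the camera into screen space,
        // then place the UI element at that screen point.
        Vector3 screenPoint = Camera.main.WorldToScreenPoint(worldTarget.position);
        uiLabel.position = screenPoint;
    }
}
```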

Once I got a feel for the main building blocks of Unity, I needed to figure out the best way of structuring Unity projects, both in the Unity Editor and in the code behind the scenes. There are many ways of doing this, many of them not best practice (or outright bad practice), and I wanted to circumvent learning the hard way. So I asked GameSpark GPT for some guidance.

GameSpark describing a typical hierarchical structure in Unity.

User
What are the best software design patterns to use in Unity and when should you apply them

GameSpark
Design patterns in software development provide reusable solutions to common problems, and in game development with Unity, they can significantly streamline the process and enhance the maintainability and scalability of your games. Here are some of the best software design patterns to use in Unity, along with scenarios when they should be applied:

Singleton Pattern: This pattern ensures a class has only one instance and provides a global point of access to it. It's ideal for managing game managers, sound managers, or any other service-like component that should be globally accessible without duplicating the instance.

When to apply: Use the Singleton pattern for components that manage game states, settings, or resources that are needed globally and persistently throughout your game, such as a GameManager or AudioManager.
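
As a concrete illustration of the pattern GameSpark described, here is a minimal sketch, assuming a hypothetical GameManager that tracks the score and should survive scene loads (the class and its members are my own example, not code from the project):

```
using UnityEngine;

// Minimal Unity Singleton sketch: one globally accessible GameManager instance.
public class GameManager : MonoBehaviour
{
    public static GameManager Instance { get; private set; }

    public int Score { get; private set; }

    private void Awake()
    {
        // Enforce a single instance and keep it alive across scene loads.
        if (Instance != null && Instance != this)
        {
            Destroy(gameObject);
            return;
        }
        Instance = this;
        DontDestroyOnLoad(gameObject);
    }

    public void AddScore(int points)
    {
        Score += points;
    }
}
```

Any other script can then call GameManager.Instance.AddScore(1) when a bubble pops, without holding a direct reference to the manager.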

Asking the GPT targeted questions to understand and drill into foundational Unity areas, at different levels of depth, in any order, and tailored to my own goals, was invaluable for building up my understanding.

I quickly moved from abstract Q&A learning (asking questions about concepts not specifically related to my mission) to targeted learning: baking elements of my game directly into my questions and prompts.

I will talk more in another post about the development process using my GPT and GitHub Copilot, which gets much deeper into executing specific tasks, from setting out the high-level architecture of the game to mechanistic challenges like getting the damned thing to run on iOS and Android.

But if there is one lesson I’ve learned from learning with AI, it is that the more you know, the better the questions you can ask, and the greater the educational value you will get from an LLM. Using this tool in conjunction with more traditional types of learning will accelerate the learning journey and get you to the point of applying your knowledge in useful and practical ways much faster.

In the next post, I take a detour into the design side of my game and experience the gulf between design and AI-generated art.
