Giles Penfold

Wands - Neural Networks Casting Spells In Unreal VR: Progression Page

Any updates on the progress of the project will be added here.

31/01/2019 - Quality of Life Changes

Summary: Two controllers (and two neural networks!), player movement, meshes/textures updated, and more QOL additions

Another busy week, and more progress made! Most of the changes this time have been quality of life fixes for the current build, aside from a defensive dome spell that I added (more out of interest in how to create a sphere spell). The first thing I did was to add a second controller. My idea with this is that the player can move and perform utility gestures (spellbook switches) with the second controller, whilst casting with the main controller. This could be changed to dual-hand casting later on, but for the moment, I like it as a utility controller. I can also load in the same neural network, or have completely separate ones linked to each controller. The touchpad of the right controller now changes the radius of the casting cylinder, which lets the player cast further from or closer to themselves. If turning this into a game, I'd like the player to be able to control the speed at which the cylinder changes radius, from slow to very fast. The touchpad of the left controller lets the player move about, nothing too complex.
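For the curious, the radius adjustment is nothing fancy; conceptually it's just the touchpad's Y axis scaled by a speed value and clamped. A quick sketch in Python rather than Blueprint (the numbers and names here are made up, not what's in the project):

```python
def update_cast_radius(current_radius, touchpad_y, delta_time,
                       change_speed=100.0,      # hypothetical growth rate in cm/s
                       min_radius=50.0, max_radius=1000.0):
    """Grow or shrink the casting cylinder based on touchpad input.

    touchpad_y is assumed to be in [-1, 1]; pushing up enlarges the radius,
    pulling down shrinks it. change_speed is the knob I'd expose to the
    player to go from slow to very fast adjustment.
    """
    new_radius = current_radius + touchpad_y * change_speed * delta_time
    return max(min_radius, min(max_radius, new_radius))
```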

I also made a quick rock in Maya and slapped a default UE4 texture on it, which makes the spells feel more earthy. However, I do like the more geometric look of the cubes. The project could move towards a more geometric, shape-oriented style, which would be interesting to play around with. The raycast for the spells has been changed to a sphere trace, which I've found gives more precise results. I also spent some time working on casting spells based on controller orientation instead of camera orientation (the spells currently cast towards where the player is looking, not where the controller is pointing). This has proved fickle, as the forward vector of the controller is not always quite where the player thinks it is (hence the big red line in the video). I want to play around with this some more, but I've made it possible to toggle between hand-driven and camera-driven casting while I work on it.

I'm away next week in Northern Ireland, so no progress will be made on the project (I'm going to be brushing up on my rendering and game design knowledge instead). I'm hoping to add some basic opponents when I get back: something along the lines of test dummies that fire projectiles at the player, which need to be blocked. The addition of a mana/chakra bar would be good as well, to stop the player from spamming too many spells (though it is really fun to do that, so I'm a bit torn on that choice). If I have time before the end of February, I may look into the NEAT algorithm to make some more intelligent AI, as I've always been interested in that concept - however, I feel that might be more of a project in its own right.

23/01/2019 - Earthbending in VR

Summary: Full omni-directional gesture tracking, spell blueprints and being able to feel like you're an earthbender

This week has been a doozy! The network has come on leaps and bounds since my last update. I feel the biggest change was how I was recording the data. I mentioned in a previous post that having the data tracked locally to the player was a good idea. What I neglected to realise was that if the player turns 90 degrees to the left and does the same motion, the network doesn't recognise it. To counter this, I take each point of the vector and rotate it around the player's position to align with the player's forward vector. That way, no matter which way they are facing, all the data will line up in the network and it'll be able to figure it out. This did involve re-making my training data, which only took about 20 minutes, but it proved successful.
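The implementation lives in UE4, but the idea boils down to undoing the player's yaw for every sampled point, so every recorded gesture ends up expressed as if the player were facing the same direction. A rough Python/numpy sketch of that normalisation (the yaw-only rotation on the horizontal plane is my assumption here):

```python
import numpy as np

def normalise_gesture(points, player_pos, player_yaw_rad):
    """Rotate gesture points around the player so the player's forward
    vector lines up with a fixed axis, making gestures orientation-invariant.

    points: (N, 3) array of world-space samples from the controller path.
    player_pos: (3,) player location.
    player_yaw_rad: the player's yaw (in radians) at the time of the gesture.
    """
    local = np.asarray(points) - np.asarray(player_pos)   # translate into player-local space
    c, s = np.cos(-player_yaw_rad), np.sin(-player_yaw_rad)
    rot = np.array([[c, -s, 0.0],                          # undo the player's yaw
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return local @ rot.T
```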

That's all fair and well, but now I need to figure out where in the game world the spell is going to be cast. To do this, I have a cylinder around the player and raycast from the controller outwards. When the cast hits the cylinder, it'll use that location as the spawn point for the spell. When I put in a second controller (it will happen soon, I swear!), I'd like the cylinder radius to be controlled by the player so they can cast spells further from or closer to them. The spells themselves are pretty basic. I went with instanced cubes to form the structures and simply replicated them in different patterns to simulate an earthbending style. I also allocated a flick gesture to the left as a "spellbook switch", which scrolls through a list of spellbooks. This means that I don't actually have to train dozens of gestures into the network, but can instead stick with a solid 6-8 gestures and map them to different spells in the different spellbooks. At the moment, I just have two: Offensive and Defensive.
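Going back to the cylinder for a moment: the trace itself is handled by UE4, but the maths underneath is just a ray hitting a vertical cylinder centred on the player, which means solving a quadratic in the horizontal plane. A rough sketch of that maths in Python (not the actual UE4 trace, and it assumes an infinite vertical cylinder):

```python
import numpy as np

def cast_onto_cylinder(origin, direction, player_pos, radius):
    """Return the point where a ray from the controller hits a vertical
    cylinder of the given radius centred on the player, or None on a miss.

    origin/direction: controller position and (normalised) aim direction.
    """
    # Work in the horizontal plane relative to the player; the cylinder is infinite in Z.
    o = np.asarray(origin[:2]) - np.asarray(player_pos[:2])
    d = np.asarray(direction[:2])

    a = d @ d
    b = 2.0 * (o @ d)
    c = o @ o - radius * radius
    disc = b * b - 4.0 * a * c
    if a < 1e-8 or disc < 0.0:
        return None                        # aiming straight up/down, or no intersection

    t = (-b + np.sqrt(disc)) / (2.0 * a)   # take the far root: the exit point of the cylinder
    if t < 0.0:
        return None
    return np.asarray(origin) + t * np.asarray(direction)
```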

I posted my progress on the /r/gamedev subreddit to get some critical feedback from the game development community. There was a lot of positive feedback from everyone who commented, and a lot of questions about how, why, what, etc. I've connected with a few others who are doing similar things and have learnt a lot from the feedback, gathering a few new ideas to boot! I'm hoping to progress on with the project well into February to see how far I can push the current system.

17/01/2019 - It's Alive!

Summary: Proof of concept successful, neural network works and basic "spellcasting"

It's been a busy week and I've not been able to work on this as much as I'd have liked, but I have made some major breakthroughs with the project. The first was successfully training a neural network from UE4 input and data. This involved a lot of JSON wrangling, as I was passing through an array of arrays of vectors, which the default JSON creator in UE4 did not like in the slightest. After that was finished, I spent about twenty minutes training some data for the network. This was just a simple left swipe and downwards swipe to see if it could learn some simple movements. This worked well, but I then realised: what if the player tries the same movement, but in reverse? Of course, it didn't work on the current training set, so I created an omni-directional set to see if it could cope there. It did, very successfully in fact.
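On the Python side, that array-of-arrays structure is easy to pull back into something trainable. A rough sketch of the consuming end (the field names and layout here are illustrative rather than a fixed schema):

```python
import json
import numpy as np

def load_training_data(json_path):
    """Load gestures exported from UE4 into arrays Keras can train on.

    Assumed layout: {"samples": [{"label": 0, "points": [[x, y, z], ...]}, ...]}
    where every gesture has already been resampled to the same point count.
    """
    with open(json_path) as f:
        data = json.load(f)

    x = np.array([s["points"] for s in data["samples"]], dtype=np.float32)
    y = np.array([s["label"] for s in data["samples"]], dtype=np.int32)
    return x, y  # x shape: (num_gestures, points_per_gesture, 3)
```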

The next thing on my mind was: how far can it go? With my current training data setup, I can only efficiently record two different movements at a time, as the buttons on the VR controller are limited at best. I would make a movement, such as a twirl, then press a button to allocate a specific tag to that movement and save it into a data set. I am planning on adding a UI menu on the left controller to select tags for movements instead, but working with what I have now, I was able to train it to differentiate between two similar but more complex omni-directional movements: a twirl and flick above the head (think casting a spell with a wand), and a circle. This proved a bit more challenging for the network to handle, but after some adjustments to the model and layers, the network coped suitably well. At the moment the network has a Flatten input layer, a 128-node dense layer, 0.5 dropout, a 24-node dense layer, 0.2 dropout, and a 2-node dense output. Nothing too complex. The data is normalized before training, which also helped with accuracy. With more varied data, it might need a batch normalization layer further down the line, but time will tell. I added some very quick actor blueprints in UE4 to show the process working, which can be seen below.
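For reference, that layer structure written out in Keras looks roughly like this (the activations, optimiser and fixed gesture length are illustrative choices on top of what I described; only the layer sizes are set):

```python
from tensorflow import keras
from tensorflow.keras import layers

POINTS_PER_GESTURE = 64   # assumed fixed length after curve fitting

model = keras.Sequential([
    layers.Flatten(input_shape=(POINTS_PER_GESTURE, 3)),  # (N, 3) points -> flat vector
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(24, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(2, activation="softmax"),                 # two gesture classes for now
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The nice part of keeping it this simple is that adding more gestures later should mostly mean widening the output layer and retraining.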

The next steps are fairly open for the project. I could work on the visuals of the spell casting and make that more fancy, but I'm leaning more towards creating a full training data set with about 6-8 different movements and seeing how well the network handles all the variety. Setting up a left controller with a UI is something that has been on the list for a while, which will decrease the training data collection time drastically. As I'm heading to Northern Ireland for a week on the 1st Feb, I want to try and get as much done next week as possible, since I'll be without my VR setup whilst there.

09/01/2019 - Curses and Curve Fitting

Summary: Problems with TensorFlow Plugins, now fixed, and curve fitting for the neural network input done.

I do wonder if I've crossed some evil witch at some point who's put a curse on me that specifically states, "Whenever Giles starts a new project using some API, said API shall always break with inexplicable errors." It's been a running theme over the past 3 years, but luckily most of the API creators have been very helpful in assisting with the problems. I noticed some very weird behaviour from my TensorFlow testing project, which ended up becoming corrupted and not loading. It appears that building the project in Visual Studio was causing issues, which I managed to track down to the UnrealPythonPlugin. I contacted the creator on GitHub and we spent a while trying to figure out the issue. In the end, I don't think either of us knew what specifically was going wrong, but at least we managed to fix the problem. The full story is on the GitHub issue thread. Using the examples from the TensorFlow Plugin, I set up a JSON system in Blueprint to carry the data across from UE4 to TensorFlow.

After working on the plugin bug for the better part of a day, I moved on to reading up on curve fitting, as I find my maths knowledge slips away from me if I don't use it. I have some experience with splines in Unreal already, as the previous course used a USplineComponent to cast an arc from the controller to the ground. That was a parabolic curve, and I need a cubic curve, but the component will make life a lot easier. The player will be able to draw for as long as they like, creating however many points they want, and this needs to be converted into a fixed number of points for the neural network input. It is possible to give a distance along a spline in Unreal and have it spit out an FVector on that spline at the given distance. Perfect! I can put all the player-drawn points into the spline, then interpolate my own points out of it, which can then be thrown into the neural network. The coding for this didn't take long at all; most of the time went into researching curve fitting. We now have player input being translated into viable vectors and passed along into Python for TensorFlow to get its hands on. Next, I'll be working on creating my training data for this and constructing the neural network itself.
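In UE4 the resampling falls out of USplineComponent almost for free: feed in the drawn points, then ask for a location at evenly spaced distances along the spline. As a rough Python equivalent of the same idea (linear interpolation rather than the spline's cubic curve, so only an approximation of what the component does):

```python
import numpy as np

def resample_path(points, n_out):
    """Resample an arbitrary-length drawn path to exactly n_out points,
    spaced evenly by distance along the path."""
    pts = np.asarray(points, dtype=np.float32)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # segment lengths
    cum = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative distance along the path
    targets = np.linspace(0.0, cum[-1], n_out)           # evenly spaced sample distances
    out = np.empty((n_out, pts.shape[1]), dtype=np.float32)
    for axis in range(pts.shape[1]):
        out[:, axis] = np.interp(targets, cum, pts[:, axis])
    return out
```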

03/01/2019 - 3D Drawing & TensorFlow

Summary: Groundwork completed, learning TensorFlow & Keras, exploring different paths for neural network input.

Christmas is over and we're in a new year! With the new year comes new progress on the Wands project. I've been following the Unreal VR course posted in my previous update, which has given me a really solid grounding in Unreal Engine. This only became apparent to me when I went back to the TensorFlow plugin examples, 2 months after I first looked at them, and suddenly understood what was going on (compared to the abject confusion of the first time). I've made a template VR drawing app to work from and have a number of ideas on where to head with it, though first I need to learn TensorFlow.

I've mainly used Theano and Lasagne in the past, as this is what my original internship project was grounded in. A lot of the theory and structure is the same, and it's taken a couple of days to get used to how TensorFlow operates differently to Theano, but by and large, it was much easier than I expected. The next step in this project is to consider the different options for inputting my data into the neural network, as well as how to train the network with said data. In the original concept, I envisioned a 3D movement being fed into some kind of CNN for pattern recognition - essentially a more complicated MNIST network. Having been able to sit on the idea for a while has made me realise there is a much simpler way of doing this. The path of the user's hand will create a trail of points, which can be stored in an array. That gives us a vector of points describing the movement of the hand from start to finish. At a fundamental level, that is all we need (we could add velocity, acceleration, etc., but that would just overcomplicate things).

With this method, however, we have new challenges to face. Instead of a fixed-size "image" being fed in, we have a variable-length vector. Therefore, we have to specify what the input size for our network is going to be (i.e. how many points we are going to allow in our vector). Before we can make our network, or even create our training data, we have to decide how this is to be done. In essence, we need to record a path in 3D space, transform it into local space relative to the user, perform some curve fitting to add/remove points from the path so that it fits our network, vectorize the data and then store it in a JSON file for the TensorFlow plugin to latch onto. Not very complicated stuff, but a bit of a challenge when in unfamiliar territory. This is where I currently am, and below I'll explain some of the theories I have around the curve fitting.
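Before getting into the curve fitting, here's a rough picture of what a single gesture should look like at the end of that pipeline, ready for the TensorFlow plugin to pick up (the field names, values and point count here are purely illustrative):

```python
import json

# A hypothetical gesture after the local-space transform and curve fitting:
# a fixed-length list of [x, y, z] points plus the tag it was recorded under.
gesture = {
    "label": "swipe_right",
    "points": [[0.0, 0.0, 0.0],
               [12.5, 0.4, 1.1],
               [25.0, 0.9, 1.8]],   # ... continuing up to the fixed input length
}

payload = json.dumps(gesture)
print(payload)  # this string is what gets written out for TensorFlow to pick up
```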

I'm working on the assumption that this mechanic will fit into a wider game and am designing it with this in mind. This means I need to consider what variables might be passed into the network from other areas of the game and how to develop the system to cope with that. For example, the UI developers might want to use the network to create an interactive menu for the players to use with gestures. If I design this system for only 5 gestures, it'll fall flat on anything more complex. Therefore, to make this adaptable, I need to decide on a suitable length for the network input. Too few points and we lose the detail of our gestures. Too many and we run the risk of taking up too much computation time. In the (very) basic example on the right, we have two separate gestures. Figure 1 is a simple swipe to the right. Figure 2 is a more complex flame shape. The swipe to the right can be described in two or three points, whereas the flame takes many more points to be detailed properly. In figure 3, we see a third gesture, a circle, being defined. If we were to simplify the flame, as in figure 4, we see that figures 3 and 4 look very much the same as each other. The neural network will find it hard to differentiate between the two gestures if there are too few points.

The next steps in this project are to create the curve fitting and JSON functions within UE4, which shall take any 3D gesture and turn it into a JSON format for the neural network. Once this is done, I can adjust the function to test different vector lengths until I find a happy medium between performance and detail. Then I can begin work on the neural network proper and start getting a workout by creating my data sets for each gesture.

13/12/2018 - First Courses Completed!

Summary: Completed 4 courses in UE4 and begun the final course. Started work on the neural network VR project: Wands.

It's been a busy few months, but I've finally completed the courses run by Rob Brooks within UE4. The first course was a crash course on getting used to UE4. I've used Unity quite extensively, as well as other pieces of software like Maya and Houdini, so not all the concepts were alien to me. There are a lot of similarities with Unity in the way games are created, though I prefer the C++ compilation, components and blueprinting to C# scripting - even though I prefer C# as a language. It just feels cleaner and more versatile. The second course took these concepts and built a small puzzle-based game out of them. I like to reinforce ideas when learning, so that they are easier to recall later, and this certainly helped me get into the swing of UE4 properly. While it didn't dip too deeply into the engine, it showed a few valuable areas within it, such as functions and interfaces within blueprints.

The next course took me into making a full game using blueprinting: a 2D bullet hell shooter. It was interesting to see the differences between C++ and blueprinting, but also the similarities. A lot of things in blueprints I could see would be more easily done in a simple C++ function, but others I would have no clue how to do (thank god for documentation). There are a lot of similarities with Houdini's node-based structure, so jumping into blueprints was eerily familiar. The final course from Rob took me back to Pluralsight, where I learnt how to integrate C++ and UE4 properly. While I'm fairly familiar with C++, it was odd seeing all the UPROPERTY and UFUNCTION declarations, which threw me a bit at first, but they were easy enough to understand coming from the blueprinting side. I enjoy jumping away from the courses every now and again and trying to implement bits around them, to really see if I understand what is being taught. This allowed me to explore further using the C++ integration.

With those courses out of the way, I embarked on a much larger course from Udemy. I'm sitting about a third of the way through at the time of writing this update, and it's been a huge challenge so far, which I've been thoroughly enjoying. Unlike Rob, Sam gives a challenge within each video and asks you to go away and work on it before coming back and seeing the answer. This has been invaluable to my learning, as I learn much better by doing, not watching. It's forced me to get familiar with the UE4 documentation and to experiment with all the various aspects of the engine. I've looked into post-process materials, dynamic pawns/actors, TArrays, and various other quirks within UE4. The next part of the course will form the basis for my project, as it gives me the knowledge to create a painting VR game - which is not too dissimilar from tracking the paths of a wand. It seems like I'm on track to start the next part of my project soon, and I'm excited to get into the really tough challenges.

Wands: Deep Learning and Unreal, Oh My! (Oct 2018 - Feb 2019)

Update 13/12/2018: I have started a progress log over on this page right here. Go there if you wish to see my progress so far!

After some incredible feedback from a job application, I've been able to ground myself slightly and begin steering myself towards where I want to be. I always seek to improve my weakest points, the hardest part of which is knowing where they are. Armed with this knowledge, however, I can push forwards and direct myself into a self-improvement project. The first 3 months of this project shall be devoted to ironing out the weaknesses, with the following 2 months devoted to the research and construction of a game mechanic project in Unreal, to see if I've really improved. I'm expecting to fail a lot, and I'm looking forward to it!

The first part of all this is striking down my biggest enemy at the moment: problem solving in a large codebase (I'm problem solving my problem solving, which is pretty amusing to think about. Unfortunately, I can't unit test it.). I'll be the first to admit that, while I can problem solve fairly well, I have little to no experience in a larger codebase. How can I improve this without working at a larger company? Well, Rob Brooks has produced a number of Unreal Engine tutorials on Pluralsight, which throw you into the world of blueprinting and C++ in UE4. He gives you a project structure to work with and encourages you to go ahead of the lessons and try to implement bits yourself before watching the videos. This is ideal, as it gives me a code base to work within (his project structure AND UE4) and gets me learning the new skills I want to learn. It feels like killing two birds with one stone: learning UE4 and working on my weak points (especially learning how to unit test properly).

The second part, where I currently am, is to expand this further. I don't want to go into my project blindly with no knowledge of the A to Z pipeline for creating an Unreal game. So using two courses on Udemy, I am learning how to first create a full game from scratch, and then a VR game from scratch. The first course is rehashing a lot from the 3 Pluralsight courses, but will reinforce everything I have learnt in the past month. Where things really get interesting is the second course, where I'll be able to make a solid framework for my following project and get to grips with VR development, which I've been excited to jump into since purchasing a VIVE.

The final part will be creating the big project. Using the UE4 TensorFlow plugin as a base for a neural network, I shall be attempting to make a wizard dueling game along the lines of the dueling in the Harry Potter films. I need to be able to capture the movement of the player's hand, and then teach the neural network which hand movement corresponds to which spell. If I have time, I would like to try to implement a NEAT algorithm for the AI, to see if it can learn how to duel properly. This will bring together everything I have learnt since October and will be one of the most exciting challenges I've yet to tackle. Once the final project begins in mid-December, I'll be creating a separate webpage to track the progress of the project with weekly updates.