char-rnn does video games

How this all got started:

A co-worker sent me an Ars Technica article about neural-network-generated paint colors. That led me to Janelle Shane's Tumblr, which is pretty funny and mostly deals with neural networks being applied to various things.

I went to the GitHub repository for char-rnn, a character-level recurrent neural network. And the rest is history.

June 21, 2017

More video game name attempts! I trained on the entire video game file again, and these are the most interesting things I managed to find by varying the seed text and temperature values:

seeding with Devil


Prefect Golf: Street Fighter 2
Warname Stork: Masherat

seeding with Feck, temp 0.4


NBA Shit 2 (...um.)
Crack to Beer
Caltreakio Games: The Dragon Challenge

seeding with Feck, temp 0.3


Nombarelad's Infruck of Final
Blank wonding 2
Creaturs III: The Rocket of the Ninja Challenge
The Legend of War of the Tennis
Crush Tennis
Revilution 2

seeding with Foof


World Combatron
Chaspionship Bootball
Contra Boot
Cobraminge (The Baroness is not amused.)
Contra Slow
Toke (what is my neural network smoking?)
Cobra Super Bool (Commandos fighting aliens and using python!)
Breat Breakers: Black Bass Boyent
Robot Moos (what?)
A.T Doubolanion Goid: The Roboting

June 4, 2017

I took a number of screenplays ("The Matrix", "Aliens", "Buckaroo Banzai Against the World Crime League", and "Wolfcano"), catted them all together, and used the result as training data. Tried this with a temperature of 0.3, then 0.4, then of course 0.5. Coherence? None. The outputs look like a screenplay, and there are some interesting individual lines in there, but this particular neural network model is just not good at telling stories.
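For anyone wondering what "temperature" does here: it scales the network's output scores before sampling, so low temperatures make the model play it safe and high ones make it weird. A minimal sketch of the idea in plain Python (this is the general technique, not char-rnn's actual Lua code):

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, softmax them, sample one index."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(probs)), weights=probs)[0]

# Toy next-character scores: the first character is strongly favored.
logits = [2.0, 0.5, 0.1]
random.seed(0)
cold = [sample_with_temperature(logits, 0.3) for _ in range(1000)]
hot = [sample_with_temperature(logits, 2.0) for _ in range(1000)]
# At 0.3 the top character dominates almost every draw; at 2.0 the
# distribution flattens out and the rarer characters show up more.
print(cold.count(0), hot.count(0))
```

Same scores, very different personalities depending on temperature, which is why the runs above cycle through 0.3, 0.4, and 0.5.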

I was talking to Steve about this a few days ago, and said, "What if we set up and trained multiple neural nets? Feed NN-1 several thousand exposition chunks, NN-2 gets several thousand rising action chunks, NN-3 gets several thousand climax parts, NN-4 gets several thousand falling action/resolution parts." Combine the best outputs from all networks, and see whether it's usable. OK, it probably wouldn't be, but it might give human writers unusual ideas.
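The four-network idea can be sketched out, purely hypothetically. In this toy version, generate_from() and score() are placeholders I made up (a real version would sample each stage's trained checkpoint, and a human would probably do the scoring):

```python
# Hypothetical sketch of the four-network story pipeline: one network
# per narrative stage, best chunk from each, stitched together.
STAGES = ["exposition", "rising_action", "climax", "resolution"]

def generate_from(stage, n_samples=3):
    # Placeholder: a real version would sample from that stage's
    # trained char-rnn checkpoint a few times.
    return [f"[{stage} draft {i}]" for i in range(n_samples)]

def score(chunk):
    # Placeholder "best output" picker; in practice a human editor
    # would choose, which is half the point of the exercise.
    return len(chunk)

def assemble_story():
    """Pick the highest-scoring chunk from each stage, in order."""
    return "\n".join(max(generate_from(s), key=score) for s in STAGES)

print(assemble_story())
```

Even if the stitched result isn't usable as a story, the per-stage drafts might still hand a human writer something unusual to riff on.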

May 29, 2017

More stuff! I added a bunch of data (PlayStation games, N64 games) to the training list and did a few more training runs. This wasn't as productive as I hoped it would be, though. I also used the "seeding text" parameter in a few cases just to see what would happen. At least with this data set, output appears to be deterministic--I get the same stuff out when I feed it the same seeding text and the same temperature. This is not how I thought it'd work.
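My guess (unverified) is that the sampler seeds its random number generator with a fixed default unless you pass it a different seed, so the "random" draws repeat exactly for the same inputs. The same behavior in miniature:

```python
import random

def sample_chars(seed_text, rng_seed=123):
    """Toy sampler: with a fixed RNG seed, output is fully repeatable."""
    rng = random.Random(rng_seed)  # fixed default seed => deterministic
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    out = seed_text
    for _ in range(20):
        out += rng.choice(alphabet)
    return out

run1 = sample_chars("Ninja")
run2 = sample_chars("Ninja")            # identical to run1
run3 = sample_chars("Ninja", rng_seed=99)  # different draws
print(run1 == run2, run1 == run3)
```

So "same seeding text plus same temperature gives same output" is exactly what you'd expect if the RNG seed never changes between runs.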

(forgot to record seeding text)


Cow Racing
Truck's Pron. 2: The Streel of Fire
World Chompions (Chompions. OK then.)
King King of Gorge: Pro Road Dragon
Crash Hookey
Pro Yakmen 2 (Amateur Yakmen? Perish the thought!)
WWF Wars
X-Men: On Dragon
Double 3: The Rescue Store
The Destructa
Wingy Adventue
Toker (...what is my neural network smoking?)
Dragon Duping: Kingei Thinder
Super Golf Star Golf: Buttle Scoak

seeding with Ninja:


Vootball
Dragon The Load
Super Pop Winger
Super Donger
Super Cosrat

seeding with Fart:


World Cops
Power Crave Racing
Wars of the Lart
Wowled Cout
Turtle Batt

seeding with Cat People


Donky Man 2000
The Lootball
Super Bobble Dangers
Super Gars
The Socker

seeding with Wombat Power


Donky Mon
Moth Master
Rocky Kong
Robot of the Mood

seeding with Fart


Mega Man 200 (No, they never made this many.)
Fengual Footing 2
Freet Fighter (so close!)
Legend of the Minns of Tennis
Fream Trank Mansion
Pink Pook
Pop'n Punch

seeding with Power Foofage:


Wara or The Street Clampions
Super Mario Troping
Street Fighter II: The Legend of the Starch of Fire

seeding with Baked Beans:


Frank of Fire
Perchier Man 2000 (Fighting crime in the year 2000 while being more like a freshwater fish?)

seeding with Tacos:


Nimmy: Pawky Spoct Villivation
Winaldor's Power Panicle
Super Bomber's Luger Adventure
Super Buts World

May 21, 2017

My first attempts at doing things with neural networks. Trying char-rnn out on a screenplay and a file of funny quotes didn't work as well as I'd hoped. Copying the Wikipedia list of released NES and SNES video games into a text file and using that as input worked a lot better. Here's the best of a few runs:

  • Star Super Shat Greet
  • The Bunnney Bupkey Bamel
  • Ninja Goofd
  • Firty Kid
  • Adverture: Lighteed of the Boul: The Legle Browherboot
  • Bark Blazer
  • The Blew & Hecterntfion
  • Rack & Sparm
  • Super Black Stam Shat: Sterke Thing Mightons
  • Thunder Ninja
  • Rid Grootferstorbat
  • Super Mario Soccer
  • Super Bark
  • Man Sping
  • Caopal & Pork
  • Roper Clamband
  • Baper Bark
  • Super of the Ball
  • Basty Star
  • Copta Ninja Cambain
  • Terio II: The Herige
  • Super Bool Adventures
  • Super Maria
  • Shampoon Coop
  • Super Street Quest
  • Spine Klunters
  • Castle Kings
  • The Adventures of Superthirk Saccer
  • SpiderTonk
  • Super SpickBall
  • Super Ackition
  • Stars if the Super Bits
  • The Gear of Diley's Divenster Starrice
  • Ballerball
  • Wankerbat (yes, those two showed up exactly like that.)
  • Starship Soccer
  • Ninja Gaiden III: The Secret Pastel

(I couldn't find any images of starships playing soccer. I thought the internet had everything.)

Technical details: The input file I used. The command I ran to train the network was

  th train.lua -data_dir data/videogames/ -gpuid -1 -savefile videogames -batch_size 20 -seq_length 20

and the command I ran to generate output was

  th sample.lua cv/lm_videogames_epoch36.59_2.3867.t7 -gpuid -1 -length 1000 -temperature 0.6 >> output/videogames.txt

I repeated that a few times, then pruned the output file of the most incoherent lines. (This may be cheating, but I'm going for comedy, not purity.)
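The pruning step was manual, but it could be scripted. Here's a rough filter of my own devising (a heuristic sketch, not anything char-rnn provides) that keeps lines which at least look like plausible titles:

```python
import re

def prune(lines, min_len=4, max_len=60):
    """Keep lines that look like plausible game titles: mostly
    letters, reasonable length, no leftover sampling junk."""
    keep = []
    for line in lines:
        line = line.strip()
        if not (min_len <= len(line) <= max_len):
            continue
        # Require title-ish characters only.
        if not re.fullmatch(r"[A-Za-z0-9 .:'&!-]+", line):
            continue
        # Skip lines that are mostly non-letters.
        letters = sum(c.isalpha() for c in line)
        if letters / len(line) < 0.6:
            continue
        keep.append(line)
    return keep

raw = ["Super Mario Soccer", "x", "@@##$$", "Thunder Ninja", "1234567890"]
print(prune(raw))  # -> ['Super Mario Soccer', 'Thunder Ninja']
```

It can't judge comedy, of course, so the final pass would still be by hand.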

This could probably be improved. Not sure how just yet. Larger data set containing arcade, PlayStation, Wii, and N64 games? Will keep fiddling.