Monthly Archives: July 2023

HuggingFace Jam 2023 happened to coincide with this month’s project. While I’d always intended to have the plan in place by the third day so I could get to work, practicality beats purity.

The theme of the jam is “expand”, which I’m going to take to mean “expand your skills”. A long time ago I wrote “The Rage of Painting” and I’m still proud of what I’ve done there. I’d like to rebuild that game to run on mobile and web platforms, and to use more of the latest advancements in computer vision.

The jam is 48 hours long, but at least 16 of that will go to sleep, and another 12 to everything else I have to do. That leaves a time budget of 20 hours.

4 hours: get a screen in Godot with the ability to draw to it and a variety of colors.

4 hours: get a text UI up and have the tutor give the prompts. (No judging yet.)

4 hours: polish the assets and smooth off the rough edges. Playable build uploaded to itch.io.

4 hours: try to get uploads of the image sent to a service for ‘judging’.

That’s 16 hours. The remaining 4 I’m going to write off as buffer.

Let’s gooooo.

Hour 1:

My first big decision is how to actually draw the points to the screen. While I’ve historically used a TextureRect and set pixels based on the user’s mouse coordinates, I’m wondering if I should try to use a shader here to make it faster and more memory-efficient. I’m going to spend 30 minutes trying to figure it out before I fall back to the dumb solution (since I have some afternoon obligations).

Hours 2-3:

Unfortunately, using a shader to do the drawing was a no-go. I can’t write to the texture buffer from the shader unless I use a compute shader, and the web build doesn’t support compute shaders. Fortunately, using Image and blitting a brush works fairly well. I’m trying not to overengineer the brushes by adding support for movement speed, brush dynamics, etc. Perhaps I’ll come back and add that.
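
The idea is roughly this; here’s a Python/Pillow sketch of the brush-stamp approach rather than the actual GDScript (the canvas size, brush radius, and color are all made up for illustration):

```python
from PIL import Image, ImageDraw

CANVAS_SIZE = (640, 480)  # illustrative only
BRUSH_RADIUS = 8

def make_brush(radius, color):
    """Build a round stamp with an alpha channel so it can be blitted cleanly."""
    brush = Image.new("RGBA", (radius * 2, radius * 2), (0, 0, 0, 0))
    ImageDraw.Draw(brush).ellipse((0, 0, radius * 2 - 1, radius * 2 - 1), fill=color)
    return brush

def stamp(canvas, brush, x, y):
    """Paste the brush centered at (x, y), using its own alpha as the mask."""
    half = brush.width // 2
    canvas.paste(brush, (x - half, y - half), brush)

canvas = Image.new("RGBA", CANVAS_SIZE, (255, 255, 255, 255))
brush = make_brush(BRUSH_RADIUS, (200, 30, 30, 255))
stamp(canvas, brush, 100, 120)  # e.g. at the latest mouse position
```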

Hour 4:

I did end up adding some interpolation to the brush to help with smoothness. I thought the number of substeps might become a performance issue, but it still performs okay. I did a build and uploaded it to itch.io to make sure everything was fine on the web, and it works even better there.
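
The interpolation amounts to stamping at evenly spaced points between two mouse samples. A Python sketch of the idea (the 4-pixel spacing is a guess, and stamp() is the helper from the earlier sketch):

```python
import math

def interpolate_stroke(prev, curr, spacing=4.0):
    """Return evenly spaced points from prev to curr so fast strokes don't leave gaps."""
    dx, dy = curr[0] - prev[0], curr[1] - prev[1]
    steps = max(1, int(math.hypot(dx, dy) / spacing))
    return [(prev[0] + dx * i / steps, prev[1] + dy * i / steps) for i in range(steps + 1)]

# Per input event, stamp the brush along the segment from the previous mouse
# position to the current one:
# for x, y in interpolate_stroke(last_pos, mouse_pos):
#     stamp(canvas, brush, round(x), round(y))
```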

Hours 5-48:

Things have developed, not necessarily for the better. A planned visit to scout a new apartment and a short stay at a birthday party turned into a full day-and-a-half adventure. My Sundays are already spoken for, leaving no time to complete much of anything. This was perhaps a complete failure.

I also found myself unable to get motivated enough to carry on with the project through the rest of the month, between the disenchantment, apartment hunting, and life in general. I’m going to call July a failure and start again.

This month’s project was something of a success, though I’m late on the write-up. The suggestion came from my partner Lisa, who proposed something like it for our friends’ group.

GooglyEyeBot sits in a server and, with a configurable random probability, replies to a message with googly eyes applied. It can also be set to only apply googly eyes when tagged, which I’ve made the default.
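
The trigger logic boils down to something like this discord.py sketch (the setting names and the 5% probability are illustrative, not the bot’s real configuration, and the googly-eye step is stubbed out):

```python
import random

import discord

# Illustrative settings; the real bot's configuration lives in the repo.
REPLY_PROBABILITY = 0.05  # chance of replying to an ordinary message
MENTION_ONLY = True       # default: only act when the bot is tagged

intents = discord.Intents.default()
intents.message_content = True
client = discord.Client(intents=intents)

@client.event
async def on_message(message: discord.Message):
    if message.author.bot:
        return  # ignore bots, including ourselves

    mentioned = any(u.id == client.user.id for u in message.mentions)
    if MENTION_ONLY and not mentioned:
        return
    if not mentioned and random.random() > REPLY_PROBABILITY:
        return

    # The real bot would apply googly eyes to the attached image and reply with
    # the result; a placeholder reply stands in for that here.
    await message.reply("👀")

# client.run(TOKEN)  # token supplied via config/env
```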

Things that went well:

  • First version was very fast to produce. I used MediaPipe with the expectation that I’d find time to retrain a face model of my own later (a rough sketch of the detection step follows this list).
  • Discord.py is rather nice, even if “the right way” to do something is a little impenetrable at times. I found myself looking for how messages would connect with each other. I think Discord itself is moving through some changes, so it makes sense.
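
The detection step looks roughly like this, a sketch assuming MediaPipe’s face-detection solution and OpenCV for image handling (the thresholds and helper name are mine, not necessarily what the bot ships with):

```python
import cv2
import mediapipe as mp

mp_face = mp.solutions.face_detection

def find_eye_centers(image_bgr):
    """Return (x, y) pixel coordinates for the eyes of every detected face."""
    height, width = image_bgr.shape[:2]
    eyes = []
    with mp_face.FaceDetection(model_selection=0, min_detection_confidence=0.5) as detector:
        results = detector.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
        for detection in results.detections or []:
            for key in (mp_face.FaceKeyPoint.LEFT_EYE, mp_face.FaceKeyPoint.RIGHT_EYE):
                point = mp_face.get_key_point(detection, key)  # normalized [0, 1] coords
                eyes.append((int(point.x * width), int(point.y * height)))
    return eyes

# A googly eye (white disc plus an offset pupil) can then be drawn at each point,
# e.g. with cv2.circle, or by pasting a sprite scaled to the face's bounding box.
```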

Things that were tricky:

  • I’m still not entirely sure how to determine if the user sending a command is an administrator of the server (guild); the approach I’d try next is sketched after this list.
  • Never got around to making my own face or training set.
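
For future reference, the permission check I’d try next looks something like this (an untested discord.py sketch, not what the bot currently does):

```python
import discord

def is_guild_admin(user) -> bool:
    """True if the message author has the Administrator permission in the guild.

    Note: message.author is only a discord.Member (with guild_permissions)
    when the message was sent in a guild channel, not in a DM.
    """
    return isinstance(user, discord.Member) and user.guild_permissions.administrator

# Usage inside on_message, gating hypothetical config commands:
# if is_guild_admin(message.author):
#     ...allow changing the reply probability, etc...
```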

Repo: https://github.com/JosephCatrambone/googlyeyebot