When we were experimenting with random word and AI image generators a few weeks ago, one thing that fascinated me was that different AI image generators seem to have different... "personalities," if you will (yes, I'm aware how dystopian it feels to use that word in this context). I suspect this reflects differences in their training data. Take, for example, these two images I generated during the Getting Started with AI Lab:
Both images were generated from the same prompt (about 20 random words strung together using a "fancy words generator"), but with different models. The one on the left was generated with NightCafe, and the one on the right with Stable Diffusion. The two have very different feels, from color scheme to atmosphere.
I'm aware this project is about giving in to random chance, but I feel it could be very interesting to see how different generators react to similar prompts, and curate my potential results that way. I admittedly enjoy AI imagery with Stable Diffusion's vibe a little more (I've been feeling a little ✨existential✨ lately), so I'll likely be aiming for a darker tone for this project overall.
As for glitching techniques, I really liked pixel sort (I enjoy algorithm-based things, like any walking comp sci student stereotype should), so I'll likely end up playing around with that a lot. I also enjoyed how dramatically text edit glitches and Audacity glitches can mess with the colors of an image, so I'm interested to see how those methods will interact with each other and with pixel sort.
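For anyone curious what pixel sorting actually does under the hood, here's a toy sketch of one common variant: within each row, runs of pixels brighter than a threshold get sorted by brightness, which produces those characteristic smeared streaks. This is a simplified illustration, not any particular tool's implementation; it assumes the image is just a plain list of rows of RGB tuples rather than a real image file.

```python
def brightness(pixel):
    # Simple luminance estimate: average the RGB channels.
    r, g, b = pixel
    return (r + g + b) / 3

def pixel_sort_row(row, threshold=100):
    # Sort contiguous runs of "bright" pixels; dark pixels break up the runs,
    # which is what keeps the effect streaky instead of sorting whole rows.
    out, run = [], []
    for px in row:
        if brightness(px) > threshold:
            run.append(px)
        else:
            out.extend(sorted(run, key=brightness))
            run = []
            out.append(px)
    out.extend(sorted(run, key=brightness))  # flush a trailing bright run
    return out

def pixel_sort(image, threshold=100):
    # `image` is a list of rows, each a list of (r, g, b) tuples.
    return [pixel_sort_row(row, threshold) for row in image]
```

Swapping the threshold, the sort key, or sorting columns instead of rows all give noticeably different looks, which is part of why the technique is fun to play with.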