AI is slowly getting more creative, and as it does it’s raising questions about the nature of creativity itself, who owns works of art made by computers, and whether conscious machines will make art humans can understand. In the spooky spirit of Halloween, one engineer used an AI to produce a very specific, seasonal kind of “art”: a haunted house.
It’s not a brick-and-mortar house you can walk through, unfortunately; like so many things these days, it’s virtual, and was created by research scientist and writer Janelle Shane. Shane runs a machine learning humor blog called AI Weirdness where she writes about the “sometimes hilarious, sometimes unsettling ways that machine learning algorithms get things wrong.”
For the virtual haunted house, Shane used CLIP, a neural network built by OpenAI, and VQGAN, a neural network architecture that combines convolutional neural networks (typically used for images) with transformers (typically used for language).
CLIP (short for Contrastive Language–Image Pre-training) learns visual concepts from natural language supervision: trained on images paired with their descriptions, it can rate how well a given image matches a phrase. Because that supervision comes from free-form text rather than a fixed set of labels, CLIP is capable of zero-shot learning, recognizing objects and images it was never explicitly trained to categorize.
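Under the hood, that rating is a similarity score between an image embedding and a text embedding. Here is a minimal sketch, assuming the Hugging Face transformers port of CLIP; the checkpoint name, filename, and candidate phrases are illustrative, not details from Shane's setup:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a pretrained CLIP checkpoint (illustrative choice of model size).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("victorian_house.jpg")  # hypothetical input photo
phrases = ["a haunted Victorian house", "a Victorian house"]

# The processor tokenizes the phrases and resizes/normalizes the image.
inputs = processor(text=phrases, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds one match score per phrase; softmax turns them
# into relative probabilities over the candidate descriptions.
probs = outputs.logits_per_image.softmax(dim=-1)
for phrase, p in zip(phrases, probs[0]):
    print(f"{phrase}: {p.item():.3f}")
```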
For this experiment, Shane focused on the phrase "haunted Victorian house": she started with a photo of a regular Victorian house, then let the AI use CLIP's feedback to modify the image with details it associated with the word "haunted."
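In effect, this is an optimization loop: generate an image, ask CLIP how closely it matches "haunted Victorian house," and nudge the image to raise that score. The sketch below is a heavy simplification assuming OpenAI's clip package; real VQGAN+CLIP pipelines optimize VQGAN's latent codes and add random crops and augmentations, but optimizing raw pixels keeps the idea self-contained. All names and settings here are illustrative:

```python
import torch
import clip  # OpenAI's CLIP: pip install git+https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()  # full precision so gradients flow cleanly

# Encode the target phrase once; it stays fixed throughout the loop.
tokens = clip.tokenize(["a haunted Victorian house"]).to(device)
with torch.no_grad():
    text_features = model.encode_text(tokens)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

# Stand-in for the starting photo: a trainable (1, 3, 224, 224) tensor in
# [0, 1]. A real run would load and normalize the actual house photo.
image = torch.nn.Parameter(torch.rand(1, 3, 224, 224, device=device))
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    image_features = model.encode_image(image.clamp(0, 1))
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    # Negative cosine similarity: the lower the loss, the more "haunted
    # Victorian house" CLIP judges the current image to be.
    loss = -(image_features * text_features).sum()
    loss.backward()
    optimizer.step()
```

Working in VQGAN's compressed latent space rather than raw pixels is what keeps the outputs looking like coherent photographs instead of adversarial noise, which is why the two models are paired.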
The results are somewhat ghoulish, though also perplexing. In the first iteration, the home’s wood has turned to stone, the windows are covered in something that could be cobwebs, the cloudy sky has a dramatic tilt to it, and there appears to be fire on the house’s lower level.
Shane then upped the ante and instructed the model to create an “extremely haunted” Victorian house. The second iteration looks a little more haunted, but also a little less like a house in general, partly because there appears to be a piece of night sky under the house’s roof near its center.
Shane then tried taking the word “haunted” out of the instructions, and things just got more bizarre from there. She wrote in her blog post about the project, “Apparently CLIP has learned that if you want to make things less haunted, add flowers, street lights, and display counters full of snacks.”
“All the AI’s changes tend to make the house make less sense,” Shane said. “That’s because it’s easier for it to look at tiny details like mist than the big picture like how a house fits together. In a lot of what AI does, it’s working on the level of surface details rather than deeper meaning.”
Shane’s description matches up with where AI stands as a field. Despite impressive progress in areas like protein folding, RNA structure prediction, and natural language processing, AI has not yet approached “general intelligence” and is still very much in the “narrow” domain. Researcher Melanie Mitchell argues that common fallacies in the field, like using human language to describe machine intelligence, are hampering its advancement; computers don’t really “learn” or “understand” the way humans do, and adjusting the language we use to describe AI systems could help do away with some of the misunderstandings around their capabilities.
Shane’s haunted house is a clear example of this lack of understanding, and a playful reminder that we should move cautiously in allowing machines to make decisions with real-world impact.
Banner Image Credit: Janelle Shane, AI Weirdness
* This article was originally published at Singularity Hub