
PUBLISHED ARTICLE

Prof. Alex Dimakis Talks AI Dreaming at TMI 2022

Originally Published in Cognitive Times Vol. 7 No. 2 // 2023

The future applications of generative models that could change the way we solve problems

The Time Machine Interactive 2022 event boasted dozens of conversations on exciting new technology, especially around artificial intelligence (AI). Speakers shed light on AI in defense, energy, and business, but one speaker, Professor Alex Dimakis, set the stage with a different topic: dreaming.

Artificial intelligence is now dreaming more than ever, he said. From DeepDream to DALL-E to ChatGPT, AI is learning to imitate human cognitive behavior. What started as a desire to understand how computers learn has become a playground for discovering what AI dreams are made of.

Dimakis, a Professor and Co-director of the National AI Institute for Foundations of Machine Learning at UT Austin, is exploring this gateway to AI’s ability to dream. His research centers on AI for imaging, focusing on handling unlabeled data and using it to generate realistic images. His lively presentation ranged from adversarial examples, where a tiny change to an input can have an outsized effect, to predictive computing through imaging.

The solution to the problem of unlabeled data is generative models, Prof. Dimakis explained. These are neural networks with imagination, able to dream up fictional data. But, as he showed, these models can do much more.

“What I call generative models solve problems like denoising, tomography, super-resolution, and many others,” he stated.

One of the more prominent threads of Prof. Dimakis’ research deals with self-supervision, or creating customized labels from the data itself and then training models on them. The first type of network, called a classifier, takes an image as input and produces a label as output. Training such models is routine today: an image goes in, and the network decides, for example, whether or not it shows a human. At Time Machine, he spoke of networks he’s developed that run in the opposite direction: given only a small input, perhaps a hundred numbers, they can generate, or dream up, an entire image.
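
To make the contrast concrete, here is a minimal sketch of the two kinds of networks in PyTorch. It is purely illustrative: the architectures, sizes, and labels are assumptions made for the example, not the models discussed in the talk.

```python
import torch
import torch.nn as nn

# Classifier: an image goes in, label scores come out.
classifier = nn.Sequential(
    nn.Flatten(),                    # a 64x64 RGB image becomes 12,288 numbers
    nn.Linear(3 * 64 * 64, 256),
    nn.ReLU(),
    nn.Linear(256, 2),               # two labels, e.g. "human" / "not human"
)

# Generator: a small latent input (here, 100 numbers) goes in, an image comes out.
generator = nn.Sequential(
    nn.Linear(100, 256),
    nn.ReLU(),
    nn.Linear(256, 3 * 64 * 64),
    nn.Tanh(),
    nn.Unflatten(1, (3, 64, 64)),    # reshape the numbers back into an image
)

image = torch.randn(1, 3, 64, 64)    # stand-in for a real photo
label_scores = classifier(image)     # which label fits this image?

z = torch.randn(1, 100)              # roughly a hundred numbers, as in the talk
dreamed_image = generator(z)         # the network "dreams up" a 64x64 image
```

The classifier boils thousands of pixel values down to a handful of label scores, while the generator expands a handful of numbers into a full image, which is what makes it look as if the network is dreaming.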

Websites that use this approach to create seemingly real objects have been springing up for nearly a decade, such as Thispersondoesnotexist.com and Thismapdoesnotexist.com. AI has been using such dreaming techniques to produce people, maps, and many other examples that are very realistic and yet not real at all. Prof. Dimakis suggested this functionality is more useful than just creating random people or fake accounts on social media. He pointed out that if a generative model can be trained to create fake people, it can also be used to detect fake people. The same ability can help remove noise and blur from images, increase resolution, colorize, perform compressed sensing, and much more. It can help accelerate magnetic resonance imaging (MRI) or seismic imaging, an area Prof. Dimakis is working on with SparkCognition.

Besides facial recognition, this ability of AI to dream can also help tackle very challenging tasks. For example, models trained on MRI scans can detect tumors more accurately. Seismic data can be analyzed to predict natural events more reliably and to detect deposits. Prof. Dimakis is using generative models based on public data sets to match the state-of-the-art performance of deep learning in these areas. He has been able to train a generative model on brain scans and use it to reconstruct MRI images of knees. For seismic imaging, he is collaborating with SparkCognition on a contrastive learning technique that can reconstruct and enhance images as well as detect objects.
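
The general recipe behind results like these can be sketched in a few lines. The sketch below reuses the toy `generator` from the earlier snippet and invents a stand-in measurement operator (simple pixel subsampling); it is an assumption-laden illustration of using a trained generator as a prior to fill in missing measurements, not Prof. Dimakis’ actual MRI or seismic pipeline.

```python
import torch

generator.requires_grad_(False)                    # keep the pre-trained generator frozen

true_image = torch.rand(1, 3, 64, 64)              # stand-in for a real scan
mask = (torch.rand(1, 3, 64, 64) < 0.25).float()   # pretend we only measured 25% of the pixels

def measure(image):
    """Toy measurement operator: keep only the observed pixels."""
    return image * mask

y = measure(true_image)                            # the incomplete measurements we start from

z = torch.randn(1, 100, requires_grad=True)        # latent input we will search over
optimizer = torch.optim.Adam([z], lr=0.05)

for step in range(500):
    optimizer.zero_grad()
    guess = generator(z)                           # a "dreamed" image from the frozen prior
    loss = ((measure(guess) - y) ** 2).mean()      # match only the pixels we actually have
    loss.backward()
    optimizer.step()                               # only z is updated; the generator is untouched

reconstruction = generator(z).detach()             # a full image consistent with the measurements
```

The key point is that the missing information is supplied by the generator’s prior: the search is over what the network is able to dream, so the result looks like a plausible image rather than a blur of noise.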

The audience couldn’t help laughing when he spoke about turning people into frogs. His point? To illustrate the benefits of combining a pre-trained generator with a pre-trained classifier to create something completely new, bringing fresh thinking to an old problem, a bit like gluing Legos together to build whatever you want. Even though that’s not how Legos are meant to be used, he argued that it can be useful to experiment with “pre-trained models” that can be downloaded, stacked, and fine-tuned end to end to solve multiple problems across the field of predictive AI.

These models can be used as priors for all kinds of problems, and they can be combined with pre-trained classifiers to guide generation. He predicted that the future of AI would be unsupervised in general. “That is a new paradigm of programming, essentially,” he said. “And I think it’s going to be very impactful.”
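
As a rough illustration of that combination, the sketch below again reuses the hypothetical toy networks from the earlier snippets. Both pre-trained models stay frozen, and only the generator’s latent input is nudged until the classifier is confident the dreamed image belongs to a chosen class, the same basic trick as steering a generator of faces toward frogs.

```python
import torch

generator.requires_grad_(False)                 # both pre-trained "Lego bricks" stay frozen
classifier.requires_grad_(False)

z = torch.randn(1, 100, requires_grad=True)     # the latent "dream" we will steer
optimizer = torch.optim.Adam([z], lr=0.05)
target_class = 1                                # index of the class we steer toward, e.g. "frog"

for step in range(200):
    optimizer.zero_grad()
    image = generator(z)                        # dream an image
    scores = classifier(image)                  # ask the classifier what it sees
    loss = -scores[0, target_class]             # push the target class score higher
    loss.backward()                             # the gradient only changes z
    optimizer.step()

steered_image = generator(z).detach()           # an image the classifier reads as the target class
```
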
Is there a method to the madness of the images AI has been dreaming up? Could the people drifting through a skyscape, or the dog-like faces on the bodies of swimming fish, help us understand the strange link between individual memories and the image associations that evoke emotional or logical responses? Could AI dreaming even help us understand the mysterious nature of our own dreams? Only time will tell.
