As usual, I'm playing with CLIP and VQGAN, the neural network combo that generates images from text prompts.
(openai.com/blog/clip/, compvis.github.io/taming-trans)
This time, I wrote a small script that processed a lot of prompts (mine and others') and built a graph from the tags found in each CLIP prompt.

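The idea can be sketched roughly like this: split each prompt into tags and count how often pairs of tags appear together, which gives you the edges of a co-occurrence graph. This is a hypothetical sketch, assuming tags are separated by "|" as in typical CLIP prompts; the actual tagnet code may parse prompts differently.

```python
# Sketch of tag-graph extraction from CLIP prompts (assumptions:
# tags are "|"-separated; edge weight = co-occurrence count).
from itertools import combinations
from collections import Counter

def extract_tags(prompt):
    """Split a prompt into normalized, non-empty tags."""
    return [t.strip().lower() for t in prompt.split("|") if t.strip()]

def build_edges(prompts):
    """Count how often each pair of tags co-occurs across prompts."""
    edges = Counter()
    for prompt in prompts:
        tags = sorted(set(extract_tags(prompt)))
        for a, b in combinations(tags, 2):
            edges[(a, b)] += 1
    return edges

prompts = [
    "a castle | unreal engine | trending on artstation",
    "a castle | matte painting | trending on artstation",
]
edges = build_edges(prompts)
# ("a castle", "trending on artstation") co-occurs in both prompts
```

The resulting edge weights can then be fed into any graph library or visualizer.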
Here is a visualization of the resulting graph:
6r1d.github.io/CLIP_graph_visu

If you join the EleutherAI Discord and open the "-faraday-cage" channel to play with this neural network, or run it in Colab or locally, this may help you.
(Invite link from the site: discord.gg/zBGx3azzUn)


Update: data extraction code and dataset are available at:
github.com/6r1d/tagnet

Documentation is available here: tagnet.readthedocs.io/en/lates
