As usual, I'm playing more with VQGAN and the CLIP neural network, a combination that generates images from text prompts.
(https://openai.com/blog/clip/, https://compvis.github.io/taming-transformers/)
This time, I wrote a small script that processed many prompts, mine and other people's, and built a graph from the tags found in each CLIP prompt.
Here is a visualization of the resulting graph:
https://6r1d.github.io/CLIP_graph_visualized/index.html
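The actual extraction code lives in the tagnet repository linked below, but the core idea can be sketched in a few lines: split each prompt into tags, then count how often pairs of tags appear together. The function names and the comma-splitting heuristic here are my illustration, not the exact tagnet implementation:

```python
from collections import Counter
from itertools import combinations

def extract_tags(prompt):
    """Split a prompt on commas and normalize each piece into a tag."""
    return [part.strip().lower() for part in prompt.split(",") if part.strip()]

def build_tag_graph(prompts):
    """Count how often pairs of tags co-occur in the same prompt.

    Returns a Counter mapping a sorted (tag_a, tag_b) pair to its
    co-occurrence count, i.e. the weighted edge list of an undirected graph.
    """
    edges = Counter()
    for prompt in prompts:
        tags = sorted(set(extract_tags(prompt)))
        for pair in combinations(tags, 2):
            edges[pair] += 1
    return edges

# Example prompts in the usual VQGAN+CLIP style:
prompts = [
    "a castle, matte painting, trending on artstation",
    "a castle, unreal engine",
    "matte painting, trending on artstation",
]
graph = build_tag_graph(prompts)
# "matte painting" and "trending on artstation" co-occur twice here,
# so that edge gets weight 2.
```

An edge list like this can then be fed to any graph layout or visualization tool.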
If you visit the EleutherAI Discord and open "#the-faraday-cage" to play with said neural network, or run it in Colab or locally, this may help you.
(Invite link from the site: https://discord.gg/zBGx3azzUn)
Update: data extraction code and dataset are available at:
https://github.com/6r1d/tagnet
Documentation is available here: https://tagnet.readthedocs.io/en/latest/