Update: data extraction code and dataset are available at:
github.com/6r1d/tagnet

Documentation is available here: tagnet.readthedocs.io/en/lates


As usual, I'm playing with the CLIP neural network and VQGAN, which together turn text prompts into images.
(openai.com/blog/clip/, compvis.github.io/taming-trans)
This time, I wrote a tiny script that processed a lot of prompts from me and others and built a graph from the tags found in each CLIP prompt.

Here is a visualization of the resulting graph:
6r1d.github.io/CLIP_graph_visu

If you go to the EleutherAI Discord and open the "-faraday-cage" channel to play with said neural network, or run it in Colab or locally, this may help you.
(Invite link from the site: discord.gg/zBGx3azzUn)
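The tag-graph idea above can be sketched in a few lines (my own minimal version, not the actual tagnet code; I'm assuming pipe-separated tags here, which may not match the real prompt format): tags that co-occur in a prompt become connected nodes, and edge weights count how often each pair appears together.

```python
from collections import Counter
from itertools import combinations

def build_tag_graph(prompts):
    """Count co-occurrences of tags across prompts as weighted edges."""
    edges = Counter()
    for prompt in prompts:
        # Assumption: tags are separated by "|"; deduplicate per prompt.
        tags = sorted({t.strip() for t in prompt.split("|") if t.strip()})
        for pair in combinations(tags, 2):
            edges[pair] += 1
    return edges

prompts = [
    "a castle | unreal engine | trending on artstation",
    "a forest | unreal engine | matte painting",
]
graph = build_tag_graph(prompts)
print(graph[("trending on artstation", "unreal engine")])  # 1
```

The edge counter can then be dumped into any graph library or JSON for visualization.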

Got tired while reading articles.

Made a tiny thing to send text from Chrome to RHVoice. Let's call it "Something TTS".

gist.github.com/6r1d/d21a9348d

On one side we have a REST server, on the other side we have a Chrome plugin that adds an item into a right-click menu.
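The server side of that setup could look something like this minimal sketch (assumptions, not the gist's actual code: the endpoint, port and the "RHVoice-test" CLI voice name are all placeholders you'd adjust to your install):

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

def build_command(voice="Aleksandr"):
    # RHVoice-test reads text from stdin and plays it;
    # the voice name here is an assumption, pick one you have installed.
    return ["RHVoice-test", "-p", voice]

class TTSHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        text = self.rfile.read(length).decode("utf-8")
        # Pipe the text received from the Chrome plugin into RHVoice.
        subprocess.run(build_command(), input=text.encode("utf-8"))
        self.send_response(204)
        self.end_headers()

def run(port=8080):
    # Blocking; bound to localhost only, since the plugin runs locally.
    HTTPServer(("127.0.0.1", port), TTSHandler).serve_forever()
```

The Chrome side then just POSTs the selected text to `http://127.0.0.1:8080/` from the context-menu handler.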

Typically I would ask myself why bother, but the popular Chrome plugins either didn't work for me or kept interrupting mid-read. I am not so happy about that part.

Sometimes it's hard to explain the homunculus problem / argument.

(Yes, there's no single part of the brain doing the thinking, because by that logic a part of that part would soon be doing the thinking, and so on and so forth.)

From now on, this webcomic page will make my life easier: falseknees.com/247.html

If you haven't played with VQGAN / CLIP programs that draw images based on text you enter, you're missing quite a bit.

I've made a collection of tags I'm using to change CLIP rendering behavior.

There it is with the code to parse tags: gist.github.com/6r1d/fd3dca357

For context: I'm using two CLIP instances, EleutherAI's and a Google Colab notebook by Katherine Crowson.
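A hedged sketch of the tag parsing (the gist's actual format may differ; I'm assuming everything after the first "|" is a pipe-separated modifier tag):

```python
from collections import Counter

def parse_tags(prompt):
    """Split a prompt into its subject and a list of modifier tags."""
    parts = [p.strip() for p in prompt.split("|")]
    return parts[0], parts[1:]

def tag_frequencies(prompts):
    """Count how often each modifier tag is used across prompts."""
    counts = Counter()
    for prompt in prompts:
        _, tags = parse_tags(prompt)
        counts.update(tags)
    return counts

freqs = tag_frequencies([
    "a lighthouse | oil on canvas | 4k",
    "a glacier | oil on canvas",
])
print(freqs["oil on canvas"])  # 2
```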

There's a very nice 3D brain atlas tool called Allen Brain Explorer. It lets you enable or disable the visibility of different brain regions and do some other things.

connectivity.brain-map.org/sta

portal.brain-map.org/

Thanks to GrimSqueaker for recommending it to me on EleutherAI's Discord.

6r1d boosted
#Google and #Harvard #Unveil the #Largest #High-Resolution #Map of the #Brain Yet

Last Tuesday, teams from Google and Harvard published an intricate map of every cell and connection in a cubic millimeter of the human brain.

singularityhub.com/2021/06/06/…

A tiny test with the CLIP network: what if I feed it three inputs, "sphere", "a sphere" and "the sphere"?

At first glance, the results differ. Sometimes smooth, low-frequency features are very visible. Then the amount of noisy, higher-frequency features and textures grows.

Another way to think about it is to tinker with Gimp's "wavelet decompose" plugin or look at an image here: docs.gimp.org/2.10/en/plug-in-
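The low/high-frequency split can be illustrated with a toy 1-D analogue of that wavelet-decompose idea (my own sketch, nothing to do with Gimp's actual plugin): a moving average gives the smooth layer, and the residual holds the high-frequency detail.

```python
def decompose(signal, window=3):
    """Split a signal into a smooth layer and a high-frequency residual."""
    half = window // 2
    low = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        low.append(sum(signal[lo:hi]) / (hi - lo))  # local average
    high = [s - l for s, l in zip(signal, low)]     # detail layer
    return low, high

signal = [0, 0, 10, 0, 0, 0, 10, 0]
low, high = decompose(signal)
# Adding the layers back reconstructs the original signal exactly.
print(all(abs((l + h) - s) < 1e-9 for s, l, h in zip(signal, low, high)))
```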

Often, only one side of the object or a section of it is being filled, which is interesting by itself.

CLIP fills edges with all kinds of textures very well, but generally, for a given token, it picks some zones and continually raises the frequency detail of the image there.

These three images are in no way a representative sample, but it's fun to play with. Adding cubes and cylinders to the same kind of prompt may help test more things.

For context: I am using BoneAmputee's CLIP+VQGAN.

Returning to the topic of brain implants interacting with the hippocampus given a small number of channels: there's an article called "Developing a hippocampal neural prosthetic to facilitate human memory encoding and recall" and a related video.

Apparently, electrodes already in place for treating epileptic seizures, plus a bit of research, were enough to find the correct activation patterns and improve patients' memory.

Short-term/working memory improved by 37%, short- and long-term retention of visual information improved by 35%, and this did not require hundreds or thousands of electrodes.

Article: iopscience.iop.org/article/10.

Video: facebook.com/ioppublishing/vid

I haven't written about the "Computational Cognitive Neuroscience" book by R. C. O'Reilly et al. and the accompanying video course on Qoto, only mentioned some moments I noticed in the videos; time to fix that.

The idea behind both is simple: "how to build a brain". We can't just go and build a fully working organic brain yet, but by thinking about how brain regions may function, how they should be connected to function properly, and what signals should be passed, and by running simulations, we get a pretty good understanding.

I was lucky to find it a while ago, and it's free for everyone to watch, so if you are interested in how our brains work, I can't recommend it enough.

Site with PDF: compcogneuro.org/

Video course on YouTube: youtube.com/playlist?list=PLu0

Emergent simulator: emersim.org/

Model downloads for Emergent: github.com/CompCogNeuro/sims

An article "A Hippocampal Cognitive Prosthesis: Multi-Input, Multi-Output Nonlinear Modeling and VLSI Implementation" by Theodore W. Berger, et al.
(ncbi.nlm.nih.gov/pmc/articles/) describes a brain implant with 16 analog-to-digital converters and 32 output channels.

I've been discussing it with Prozion (github.com/prozion), and he asked a very good question:
"How does the implant function at all with that number of channels?"

The earlier explanation from "Computational Cognitive Neuroscience" about memory de-indexing sheds some light on that: the article covers the same brain regions, CA1, CA3 and DG.

A mental note: the "Computational Cognitive Neuroscience" videos have a very informative moment about the inner workings of the hippocampus in the context of memory retrieval, where Randall O'Reilly compares retrieval to a hash lookup:

The CA1 region of the hippocampus acts as a "memory de-indexer", unpacking the activity pattern when we retrieve a memory and activating the related cortical areas.
The dentate gyrus and CA3 are the networks forming said pointer.

youtube.com/watch?v=AdRx73BfJr
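The hash analogy above can be made concrete with a toy sketch (my own illustration, not code or terminology from the course): DG/CA3 map a rich cortical pattern to a compact index, and CA1 "de-indexes" it back on retrieval.

```python
def dg_ca3_index(cortical_pattern):
    # Pattern separation: map a rich pattern to a compact key,
    # loosely like hashing.
    return hash(frozenset(cortical_pattern))

class Hippocampus:
    def __init__(self):
        self.ca3_store = {}  # index -> full cortical pattern

    def encode(self, cortical_pattern):
        key = dg_ca3_index(cortical_pattern)
        self.ca3_store[key] = set(cortical_pattern)
        return key

    def ca1_deindex(self, key):
        # Retrieval: the compact index reactivates the full
        # distributed cortical pattern.
        return self.ca3_store[key]

hippocampus = Hippocampus()
memory = {"smell:coffee", "place:kitchen", "sound:radio"}
key = hippocampus.encode(memory)
print(hippocampus.ca1_deindex(key) == memory)  # True
```

Of course the real circuit does pattern completion from partial cues rather than exact key lookup; this only captures the index/de-index direction of the analogy.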

Continuing the topic of "how our brains work", both the Neuronify program (ovilab.net/neuronify/) and "Handbook of Brain Microcircuits" book by Gordon M. Shepherd and Sten Grillner are very relevant.

We have many types of neurons, arranged in different ways to keep us functioning.
Some of them help control our heart, some help with detecting motion or other kinds of input.
The fact remains, though: certain neuron arrangements are suited to specific groups of tasks.

Neuronify has a library of models, "Handbook of Brain Microcircuits" has a lot of explanations.

There's a blog / newsletter about AI called Import AI by Jack Clark, co-founder of Anthropic and previously a policy director at OpenAI:
jack-clark.net/

It might be interesting for people playing with Transformers and NLP in general; Anthropic plans to do quite a bit:
anthropic.com/news/announcemen

A tiny note for myself about "neurons that fire together, wire together": there's a temporal precedence; one cell should fire before the other so that learning, the actual wiring, can happen.

en.wikipedia.org/wiki/Hebbian_
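That timing caveat is what spike-timing-dependent plasticity (STDP) captures; here's a toy version of the rule (constants are illustrative, not from any particular model):

```python
import math

def stdp_delta_w(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for one pre/post spike pair, times in ms."""
    dt = t_post - t_pre  # positive when the pre-synaptic cell fires first
    if dt > 0:
        return a_plus * math.exp(-dt / tau)   # potentiation
    return -a_minus * math.exp(dt / tau)      # depression

print(stdp_delta_w(10.0, 15.0) > 0)  # pre before post: strengthen
print(stdp_delta_w(15.0, 10.0) < 0)  # post before pre: weaken
```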

Listening to "CCN Course 2020, Neuron 11: Ions". It covers the membrane potential and the sodium and potassium pumps that allow our neurons to work.
youtu.be/B0ziLwEhHfM?list=PLu0
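The membrane-potential math there boils down to the Nernst equation; a quick sketch (the ion concentrations below are typical textbook values, not numbers from the lecture):

```python
import math

def nernst_mv(conc_out, conc_in, z=1, temp_k=310.0):
    """Equilibrium potential E = (RT / zF) * ln([out]/[in]), in mV."""
    R, F = 8.314, 96485.0  # gas constant, Faraday constant
    return 1000.0 * (R * temp_k) / (z * F) * math.log(conc_out / conc_in)

# Typical mammalian concentrations in mM (textbook values):
e_k = nernst_mv(5.0, 140.0)    # potassium: strongly negative, ~ -89 mV
e_na = nernst_mv(145.0, 12.0)  # sodium: positive, ~ +67 mV
```

The pumps spend energy maintaining those concentration gradients, which is what keeps the resting potential negative.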

…Have you ever thought that neurons evolved in sea water? We carry a trace of an ancient ocean with us throughout our lives.

(Using a Silas Baisch image from Unsplash as a base for CLIP render.)

Qoto Mastodon

QOTO: Question Others to Teach Ourselves