A dream comes true: at a party I'm asked why the alphabet starts with A-B-C of all things. I don't know the whole story, I say, but I can contribute something. I reduce and simplify as much as I like: the Egyptians and the Phoenicians; originally the A was a glottal stop and the C was a G (see Hebrew, Greek, and Latin C. for Gaius), so that the first four letters were all plosives. - That the question came from a first-grader: never mind.

Possibly the most frightening talent of the brown bear is an uncanny ability to blend seamlessly into the environment in an almost chameleon-like manner, allowing it to easily surprise its unsuspecting prey.

Over on Twitter, Mike Sharples posted a link to this paper. (twitter.com/sharplm/status/162)

Worth a read if this is your interest (genesis of technological ideas):

arxiv.org/abs/2301.05570

Circular windows in the long staircase allow views of the Gustav Gull building and the park landscape outside.

Full photo series: dominikgehl.com/museums-landes

#architecture #photography #museum

The new extension building by the Basel architects Christ & Gantenbein was inaugurated in 2016. Connecting two wings of the Gustav Gull building, the extension makes it possible for the first time to walk a complete circuit of the entire museum. #architecture #photography #zurich

The building was designed so that entire historically important rooms from all over #Switzerland could be dismantled and reinstalled inside the museum. Two examples are a room from Casa Pestalozzi in Chiavenna dating back to 1585 and the baroque hall from the "Langen Stadelhof" house built in 1667.

Completed in 1898, the Landesmuseum Zürich was built in the historicist style as a castle-like building to plans by Swiss architect Gustav Gull. #architecture #museum

So far, #ChatGPT mainly proves that most people are content with clichés and calendar-motto wisdom. And consider that intelligent.

#RichardDavidPrecht has been pulling that off for years, with lucrative book and speaking contracts.

Also in 1545, Zurich’s only printer, Christoph Froschauer, used his trademark frog on the title page of Conrad Gessner’s 'Bibliotheca Universalis'. #bookhistory

The trademark frog derives from the printer's last name: "Frosch" is German for frog. So Frosch-auer had an easy trademark decision. #histodons

Today I passed these railings walking near Oval station.
They may not look like much, but they are part of the city's history.
They were originally medical stretchers used during the Blitz. When the war was over, they were welded into place as railings during the rebuilding of the city to save on metal.
#london #history #londonHistory

So my question is this: what are the chances that a large language model could be trained which is large enough to work as a calculator-for-words, but small enough to run on, say, an M2 Max MacBook Pro with 64GB of RAM?

Is that already known to be impossible, or is there research that hints that this could be achieved given the right optimizations and a really well chosen training set?
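
A rough way to frame the "small enough to run locally" half of the question is a back-of-envelope memory estimate. The sketch below is a minimal illustration, not a definitive answer: the parameter counts and the ~20% overhead factor are assumptions, and it only covers inference, not training.

```python
# Back-of-envelope sketch: rough inference-time memory footprint of an LLM
# at different weight precisions, compared with 64 GB of unified memory.
# The parameter counts and the ~20% activation/KV-cache overhead factor are
# assumptions for illustration, not measurements; training would need far more.

GIB = 1024 ** 3

def inference_footprint_gib(n_params: float, bits_per_weight: int,
                            overhead: float = 0.2) -> float:
    """Approximate RAM needed to hold the weights plus runtime overhead."""
    weight_bytes = n_params * bits_per_weight / 8
    return weight_bytes * (1 + overhead) / GIB

budget_gib = 64  # the M2 Max MacBook Pro from the question

for name, n_params in [("7B", 7e9), ("13B", 13e9), ("30B", 30e9), ("65B", 65e9)]:
    for bits in (16, 8, 4):
        need = inference_footprint_gib(n_params, bits)
        verdict = "fits" if need < budget_gib else "too big"
        print(f"{name} @ {bits}-bit: ~{need:5.1f} GiB -> {verdict}")
```

By this crude estimate, models in the tens of billions of parameters fit in 64GB once quantized to 8 or 4 bits, which is roughly the territory the question is asking about.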

Using machine learning to try to sift the knowledge out of the rest of it is another lever for thinking -- like writing, language, mathematics. I think & communicate today in ways that weren't possible when my tools were a typewriter, a library card, and a telephone. If I were starting out instead of retiring, I'd be making LLM tools part of my mind-extending toolkit.

(2/2)

It helps me think about the "human-computer symbiosis" potential of large language models by forgetting "AI" and focusing instead on the extensions I use for my thinking & communicating, from the keyboard to the WWW. In aggregate, the enormous corpus of material that humans uploaded provides an incredibly rich stew of knowledge, nonsense, & bullshit.

(1/2)

#ToolsForThought #ai

Well, that didn't take long. We're starting to see almost-believable autogenerated text being used to spam our issue tracker.

There's a legitimate chance that ChatGPT and its ilk are going to kill participatory open source. How can you keep any forums open to the public, when anyone can just pour an arbitrary amount of generated garbage into them?

How do you tell a smart but green contributor who's still learning the language from a thousand bots spewing veracity-free trash?

A.I. Like ChatGPT Is Revealing the Insidious Disease at the Heart of Our Scientific Process
"vetting a scientific document takes a lot of thought and work, and the scientists who do it aren’t generally paid by the journals they’re doing all this labor for. It shouldn’t come as a surprise that often they—or the graduate students they dragoon into doing the work for them—don’t always do the best job of review. And as the number of publications..."
slate.com/technology/2023/01/a

I don't post much on Mastodon yet, but here are a couple of CfPs for low-resource #nlp and related topics:

* [loresmt](sites.google.com/view/loresmt/)
* [field matters](field-matters.github.io)

Really charming "topologist's world map."

Forget size or position, this map *only* shows which countries border which other countries: tafc.space/qna/the-topologists
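
Just to make the "adjacency only" idea concrete, here's a tiny Python sketch of that relation as a graph; the handful of countries is an illustrative, abridged fragment, not the map's actual data.

```python
# Toy sketch of the idea behind a "topologist's world map": keep only the
# border-adjacency relation and throw away size, shape, and position.
# The countries listed here are a small illustrative fragment, not a dataset.

borders = {
    "Switzerland": {"Germany", "France", "Italy", "Austria", "Liechtenstein"},
    "Liechtenstein": {"Switzerland", "Austria"},
    "Portugal": {"Spain"},
    "Spain": {"Portugal", "France"},                          # abridged
    "France": {"Spain", "Switzerland", "Italy", "Germany"},   # abridged
}

# Everything such a map shows is this adjacency structure; any drawing that
# preserves it is "the same map" to a topologist.
for country, neighbours in sorted(borders.items()):
    print(f"{country} borders: {', '.join(sorted(neighbours))}")
```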

Getty’s new complaint is much better than the overreaching class action lawsuit I wrote about last month. The focus is where it should be: the input-stage ingestion of copyrighted images to train the model. This will be a fascinating fair use battle.
