Parienve boosted

Does anyone know Arc Search's user agent so we can block it?
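Once the token is confirmed, blocking by user agent is a few lines at the proxy. A minimal sketch in OpenResty (nginx with Lua); the "arc"/"search" matching below is a placeholder, not a confirmed UA string:

-- access_by_lua_block body: reject requests whose User-Agent looks like Arc Search
-- placeholder match; substitute the real token once it is known
local ua = string.lower(ngx.var.http_user_agent or "")
if ua:find("arc", 1, true) and ua:find("search", 1, true) then
  return ngx.exit(ngx.HTTP_FORBIDDEN) -- answer the crawler with 403
end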

alt text 

@nixCraft
A screenshot of an exchange between two Reddit users:

[–] **SlowDownBrother** • 9 points
I thought SSL certificates were around $100 a year. Is there a free way?

[–] **isometricpanda** • 41 points
lets encrypt

[–] **SlowDownBrother** • 39 points
Yes, let's. But that doesn't answer my question..

Parienve boosted

"Miyazaki absolutely eviscerating an AI art demonstration" is my new standard for measuring how badly my presentation went

alt text 

@hungry_joe
First image:
[Top text]
"Nobuo Kawakami, Chairman, DWANGO Co., Ltd.
(Japanese telecommunications and media company)"

[Bottom text]
"This is a presentation of an artificial intelligence model which learned certain movements."

Second through fourth images,
Hayao Miyazaki, with English subtitles:
"I am utterly disgusted."
"I would never wish to incorporate this technology into my work at all."
"I strongly feel that this is an insult to life itself."

Parienve boosted

@timnitGebru
Perhaps there should be more funding for fully reproducible models like LLM360-Amber and LLM360-Crystal.

arxiv.org/abs/2312.06550

LLM360: Towards Fully Transparent Open-Source LLMs

The recent surge in open-source Large Language Models (LLMs), such as LLaMA, Falcon, and Mistral, provides diverse options for AI practitioners and researchers. However, most LLMs have only released partial artifacts, such as the final model weights or inference code, and technical reports increasingly limit their scope to high-level design choices and surface statistics. These choices hinder progress in the field by degrading transparency into the training of LLMs and forcing teams to rediscover many details in the training process. We present LLM360, an initiative to fully open-source LLMs, which advocates for all training code and data, model checkpoints, and intermediate results to be made available to the community. The goal of LLM360 is to support open and collaborative AI research by making the end-to-end LLM training process transparent and reproducible by everyone. As a first step of LLM360, we release two 7B parameter LLMs pre-trained from scratch, Amber and CrystalCoder, including their training code, data, intermediate checkpoints, and analyses (at https://www.llm360.ai). We are committed to continually pushing the boundaries of LLMs through this open-source effort. More large-scale and stronger models are underway and will be released in the future.

Parienve boosted

Naming my new AI startup DemiUrge™

Parienve boosted

Reposting this because frankly, it's some of the best writing I've done in a while, and I'm damn proud of it.

There's nothing a user interface designer loathes more than complexity. Every design—at least, every modern design—seeks to minimize clicks, icons, visual noise. What if instead of a button, we had a borderless icon? What if instead of navigation controls, we used gestures?

And what if—hear me out—instead of search results, we had language model-distilled text delivered to you, hot and fresh?

taggart-tech.com/interfaces

@sachinsaini
I've had three Motorola phones with LineageOS, and all three times the chop-chop flashlight gesture worked without sketchy apps.
@Linux_in_a_Bit

Parienve boosted

OK, poll time, and this one is a simple one (please share so I can get a decent sample). What is your main OS?

@foolishowl
Maybe a combination of hyperbolic discounting, anchoring effect, and gambler's fallacy.

Parienve boosted

I've been playing around with using a message bus system in #love2d to loosely couple modules.

One of the weaknesses of love is that its built-in message passing feature (handlers) requires modules to know about each other. This makes it difficult to reuse components between games.
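A bus sidesteps that: modules publish and subscribe to named events instead of calling each other directly. A minimal sketch of the idea in plain Lua (the names here are illustrative, not part of love's API):

-- bus.lua: tiny publish/subscribe message bus (illustrative sketch)
local bus = {listeners = {}}

-- register a callback for a named event
function bus.subscribe(event, fn)
  bus.listeners[event] = bus.listeners[event] or {}
  table.insert(bus.listeners[event], fn)
end

-- notify every subscriber; the publisher never learns who is listening
function bus.publish(event, ...)
  for _, fn in ipairs(bus.listeners[event] or {}) do
    fn(...)
  end
end

return bus

With this, an enemy module can call bus.publish("enemy_died", x, y) and a score module can call bus.subscribe("enemy_died", add_points), and neither needs to require the other.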

Parienve boosted

"Pixelosis"
Since the #genuary4 prompt is "pixels", the best way to show them is by serious upscaling of sprites. Here's a #winter Outrun-like #tweetcart

#genuary4 #genuary #pico8 #generative #codeart #sizecoding #pixelart

for i=0,999do
x=rnd(i/40)-i/90c=3if(i>799)x/=9c=1
sset(x+9,i/30,c+i%4)end::_::?"\^1\^cc\^!5f11█░⬇️3⬅️"
for i=2,63do line(0,i+64,127,i+64,(28/i+t())%2+6)end
j=t()*4for k=40,1,-1do
i=k-j%1x=(cos((k+j\1)/9)+.3)*4^6z=i*9+9s=600/z
sspr(0,0,32,40,x/z+64-2*s,64-3*s,4*s,6*s)end
goto _

Parienve boosted

“We obtain three decades of computer vision research papers and downstream patents (more than 40,000 documents) and present a rich qualitative and quantitative analysis. This analysis exposes the nature and extent of the Surveillance AI pipeline, its institutional roots and evolution, and ongoing patterns of obfuscation.” arxiv.org/abs/2309.15084

The Surveillance AI Pipeline

A rapidly growing number of voices argue that AI research, and computer vision in particular, is powering mass surveillance. Yet the direct path from computer vision research to surveillance has remained obscured and difficult to assess. Here, we reveal the Surveillance AI pipeline by analyzing three decades of computer vision research papers and downstream patents, more than 40,000 documents. We find the large majority of annotated computer vision papers and patents self-report their technology enables extracting data about humans. Moreover, the majority of these technologies specifically enable extracting data about human bodies and body parts. We present both quantitative and rich qualitative analysis illuminating these practices of human data extraction. Studying the roots of this pipeline, we find that institutions that prolifically produce computer vision research, namely elite universities and "big tech" corporations, are subsequently cited in thousands of surveillance patents. Further, we find consistent evidence against the narrative that only these few rogue entities are contributing to surveillance. Rather, we expose the fieldwide norm that when an institution, nation, or subfield authors computer vision papers with downstream patents, the majority of these papers are used in surveillance patents. In total, we find the number of papers with downstream surveillance patents increased more than five-fold between the 1990s and the 2010s, with computer vision research now having been used in more than 11,000 surveillance patents. Finally, in addition to the high levels of surveillance we find documented in computer vision papers and patents, we unearth pervasive patterns of documents using language that obfuscates the extent of surveillance. Our analysis reveals the pipeline by which computer vision research has powered the ongoing expansion of surveillance.

Parienve boosted

📢 Calling all creators, publishers, and content contributors on the web! 🌐

Today we are announcing an important open letter that proposes a simple specification to enable fair usage of content for search and AI. This is a threat now, not a far-off #AISafetySummit future one.

#NoML #OpenLetter #AI

Join us by signing & sharing the open letter 👇

noml.info/
