
Also, one observation about LLaMA 2: I have to exit almost all other programs to free up as much memory as I can.
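A rough back-of-envelope calculation (my own estimate, not anything from the LLaMA docs) of why memory gets so tight: weight memory is roughly parameter count times bytes per parameter, before even counting the KV cache and runtime overhead.

```shell
# weight memory ≈ parameters × bytes per parameter (ignores KV cache and overhead)
awk 'BEGIN {
  printf "13B @ f16   (2 bytes/param):   %.0f GB\n", 13e9 * 2   / 1e9
  printf "70B @ f16   (2 bytes/param):   %.0f GB\n", 70e9 * 2   / 1e9
  printf "70B @ 4-bit (0.5 bytes/param): %.0f GB\n", 70e9 * 0.5 / 1e9
}'
```

That 26 GB just for the 13B f16 weights is why everything else has to be closed first.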

It supports multiple languages, including Chinese, but when it faces questions it doesn't know how to handle, the language becomes broken and creepy (I know it's generated, but seeing those texts randomly on the internet still feels creepy). English is generally fine, though: no matter what I ask, it can at least generate proper human language.

Also, WTF is Google's WEI?

"a way for browser clients to establish trust with a server through a third party (eg, Google Play) that presents a token attesting to the integrity of the client environment."

They just want to control your PC the way they control Android.

Well, only the Nintendo users will figure their own way out...

Maybe those handheld PCs are going to change this?
----
I cant believe I beat Nintendo to it...
by Zac Builds
youtube.com/watch?v=KboTw3NBuu

https://www.solidot.org/story?sid=75626

I strongly condemn this behavior, and I call on all Chrome and Chromium-based browser users (including but not limited to M$ Edge and Opera) to switch to Firefox as soon as possible and do their part for an open web.

Further reading:
Google vs. the Open Web
Unpacking Google’s new “dangerous” Web-Environment-Integrity specification
The Mozilla Manifesto
Download Firefox

@board@ovo.st

Those people who
didn't jump ship when Elon bought Twitter,
didn't jump ship when Elon harassed women,
didn't jump ship when Elon persecuted trans people,
didn't jump ship when Elon openly supported the manosphere and neo-Nazis,
but declared "that's it, I'm leaving" the moment Elon renamed the blue bird to X...
what are they thinking?

Although, to be fair, every one who gets out is one more gone.

Tried llama.cpp with the 70B model. For now it can only run on CPU, so it's very slow. But from the text it generated, I can tell it's even better than the 13B model.
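For reference, a minimal llama.cpp invocation of that (ggml-era) CLI looks roughly like this; the model filename and flag values are placeholders, not the exact ones I used:

```shell
# hypothetical model path; ggml-era llama.cpp CLI
# -t: CPU threads, -n: max tokens to generate, -p: prompt
./main -m ./models/llama-2-70b-chat.ggmlv3.q4_0.bin -t 8 -n 256 -p "Why is the sky blue?"
```

On CPU, generation speed scales with thread count up to the number of physical cores, which is why it crawls compared to GPU inference.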

The 13B model feels like a robot: it answers what you ask. The 70B feels more like a conversation: it will actually discuss the questions you give it, much like ChatGPT.

And it can run on my local hardware (albeit very slowly).

Infuriated by domestic media yet again: they're back to whitewashing the July 20, 2021 rainstorm. As a local, I think it's all nonsense. Zhengzhou may be a dry city year-round, but it has no shortage of flood memories: never mind Yu the Great taming the waters, the Huayuankou dike breach and the August 1975 floods were extremely painful lessons.

During the July 20 rainstorm, the weather service issued five consecutive red alerts, yet the bureaucracy was paralyzed and blundered one move after another; there were even rumors that water was discharged from reservoirs without warning. The rainstorm itself was unavoidable, but the paralyzed bureaucracy is the real culprit behind the enormous loss of life.

Setting up a NUC for my friend. I was planning to use an off-the-shelf solution like TrueNAS, until he told me his requirements.

He wants a usable Linux desktop that also provides private remote access. He also wants a media center and a storage device. And it would be great if I could somehow fit a software router in too.

So I decided to go with openSUSE Tumbleweed with GNOME on Wayland. Setting up a proxy eased the internet issues. Then I use Tailscale for private remote access and Plex as the media center. I also installed Cockpit for management (so I can see what's going on if something goes wrong in the future).
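The service setup can be sketched roughly as follows; the package and service names for Tailscale are assumptions (it ships its own repo and isn't in the stock Tumbleweed repos), while Cockpit is packaged by openSUSE:

```shell
# Cockpit is in the Tumbleweed repos; enable its socket-activated web UI
sudo zypper install cockpit
sudo systemctl enable --now cockpit.socket   # web UI on https://<host>:9090

# Tailscale: assumes its openSUSE repo has already been added
sudo zypper install tailscale
sudo systemctl enable --now tailscaled
sudo tailscale up    # prints a login URL to join the tailnet
```

Plex is similar in spirit: it's installed from Plex's own RPM rather than the distro repos.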

A lot changed from the original plan, mostly because I don't know how to create a bridge over the WAN interface under NetworkManager (I'd prefer wicked), so KVM and OpenWrt will not run on this NUC.

Plex was a huge pain too. It kept failing to scan the files, and it turned out I had forgotten to change the permissions. I eventually set up a cron script running every minute to apply 777 to that folder; hopefully it won't cause performance issues.
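The workaround as a crontab entry (the media path is a placeholder; I'm not claiming 777 is the right fix, group ownership or ACLs would be cleaner):

```shell
# crontab -e  --  run every minute; /srv/media is a hypothetical path
* * * * * chmod -R 777 /srv/media
```

A one-time `chgrp -R` to the Plex group plus a setgid bit on the directory would avoid the recursive chmod every minute.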

Also, the btrfs support in openSUSE is great. If something goes wrong, I can just tell him to boot from a previous working snapshot.
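On openSUSE that recovery flow is roughly the following, via snapper (the snapshot number is a placeholder):

```shell
# at the GRUB menu you can also boot a read-only snapshot directly;
# to make a known-good snapshot the new default:
sudo snapper list            # find a known-good snapshot number
sudo snapper rollback 42     # 42 is hypothetical
sudo reboot
```

This works because openSUSE puts the root filesystem on btrfs subvolumes and snapshots it automatically around zypper transactions.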

Definitely going to wait for the llama.cpp patch for the LLaMA 2 70B model. Even with 4-bit quantization, I still want to see what this 70B model can do.

The 13B really amazed me. It's about 10% of ChatGPT's parameter count, but it's free with (almost) no restrictions (please don't use it for illegal purposes), and it can run on my own laptop.

LLaMA 2 13B chat f16 model show-off.

That's brilliant!

Can't imagine that's running on my laptop. The excitement reminds me of when I made an LED blink with a C51 microcontroller at age 12.

It's much better than the 7B model!!!

Can't wait to see the 70B models. The LLaMA 2 70B model uses a different attention design than LLaMA 1, so llama.cpp currently won't work on the 70B model.

But that's amazing! An LLM running on my laptop!

Switched to 13B chat f16: it can speak Chinese!!!!!

Well, Chinese is very slow. I think it might be due to token-related issues: I guess it treats each character as a token.

Context: 7B f16. I'll try 7b-chat f16 and 13b-chat f16 later...

Qoto Mastodon
