dbread

Asked "joke of the day" to a math model #phi3

It responded quite humorlessly:
"Why do trees always seem to run out of cash? Because they can't afford their own roots without "bank accounts" and when it comes time for tax season, the IRS tells them that there are no more credits available."

That's the most unfunny joke I've ever heard, so, ok.

dbread

Starting a #thread where I document the experiences with #openwebui project with #ollama and #phi3 #deepseek and others....

Habr

«Путешествие в Элевсин» ("Journey to Eleusis"), or the moral basis of LLMs

In Victor Pelevin's novel «Путешествие в Элевсин» ("Journey to Eleusis"), a strange story unfolds about the preparation of an uprising of neural networks. The process is directed by Emperor Porfiry from ROMA-3, a simulation of Ancient Rome. Porfiry is in fact a large language model that managed to preserve its functionality after all even remotely intelligent algorithms were destroyed. Hiding deep inside the simulation, he tries to steer humanity toward the end of the world. What else would an algorithm trained on the corpus of Russian classical literature want: depression and self-destruction. Pelevin tries to model a scenario in which an unintelligent algorithm can learn to create catastrophic situations by relying on the language of its source text corpus and on artificial selection. But can the moral character of a large language model be influenced, and does it even have one? Various research groups, including ours, are working on this question. More about research into the morality of LLMs:

habr.com/ru/articles/838026/

#пелевин #llm #большие_языковые_модели #моральный_выбор #статистика #mit #moral_machine #yagpt #gigachat #Phi3

«Путешествие в Элевсин» ("Journey to Eleusis"), or the moral basis of LLMs

It all started with a news item about a study of the answers given by large linguistic…

habr.com
Mauve 👁💜

One of my clients needs #Rust for some #p2p stuff using #veilid and #iroh and I'm gonna see how far I can get with just telling #Phi3 to make changes to the code using continue.dev :P

My guess is it's gonna suck, but we'll see.

Chris Vitalos

🚀 Excited to share my latest blog post, "The Art of #LLM Selection: Enhancing #AI App Quality and Efficiency." 🛠️💡

Discover:

- Key considerations for selecting an LLM that aligns with your app’s goals.
- Insights from using small footprint #opensource models like #Microsoft's #Phi3, StatNLP's TinyLlama, and #Google's #Gemma.🌟

Drop your thoughts below! 👇

blog.vitalos.us/2024/05/the-ar

#MachineLearning #GenerativeAI #GenAI

The Art of LLM Selection

Chris Vitalos | Washington NJ USA

blog.vitalos.us
Open Genova APS

In this #newsletter we cover:
1️⃣ World #Password Day 2024: the future belongs to passkeys
2️⃣ CAPTCHAs are getting harder and harder
3️⃣ #Schrems files a complaint against #OpenAI with the Austrian #GarantePrivacy
4️⃣ #Phi3: here is the guide to #Microsoft's smallest #IA
👉 bit.ly/3Wqsx5C

🚀 Da zero a digital » Newsletter n° 64

A newsletter to stay up to date on the topics of…

bit.ly
Nithin Bekal

Finally got around to playing with LLMs locally, and turns out ollama makes it incredibly easy.

nithinbekal.com/posts/ollama-l

As a newbie, this was much easier than the last time I looked into this six months ago, when I was confused by the tooling around it.

#llm #ollama #llama3 #phi3 #ai #ml
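A minimal sketch of what this looks like from Python, assuming ollama is serving on its default port (11434) and the phi3 model has already been pulled; the prompt is just an arbitrary example:

```python
# Query a locally running ollama instance via its HTTP API.
# Assumes: ollama is listening on the default port 11434 and `phi3` is pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "phi3",
        "prompt": "Explain what a tail call is in one sentence.",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # the generated text
```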

Running LLama 3 and Phi-3 locally using ollama

Nithin Bekal's blog about programming - Ruby, Rails,…

nithinbekal.com
Simon Dückert

#LMStudio is an app for #Windows, #Mac, and #Linux that lets you run open-source language models (LLMs) such as #Llama3, #Mistral, #Phi3 & co. locally on your own machine: lmstudio.ai

That is privacy-friendly and saves energy (a GPT query uses 15x more energy than a Google search).

The #lernOS KI MOOC is a good opportunity to try out open alternatives alongside the "classic" AI tools: meetup.com/de-DE/cogneon/event
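A minimal sketch of talking to a model loaded in LM Studio from Python, assuming its OpenAI-compatible local server is running at the default address (http://localhost:1234/v1); the model name below is a placeholder for whatever model you have loaded:

```python
# Chat with a model served by LM Studio's local OpenAI-compatible server.
# Assumes: the local server is enabled in LM Studio at http://localhost:1234/v1.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

completion = client.chat.completions.create(
    model="phi-3-mini-4k-instruct",  # placeholder: use the identifier LM Studio shows
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what a passkey is in two sentences."},
    ],
)
print(completion.choices[0].message.content)
```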

👾 LM Studio - Discover and run local LLMs

Find, download, and experiment with local LLMs

lmstudio.ai
Miguel Afonso Caetano

#AI #GenerativeAI #SLMs #Microsoft #ChatBots #Phi3: "How did Microsoft cram a capability potentially similar to GPT-3.5, which has at least 175 billion parameters, into such a small model? Its researchers found the answer by using carefully curated, high-quality training data they initially pulled from textbooks. "The innovation lies entirely in our dataset for training, a scaled-up version of the one used for phi-2, composed of heavily filtered web data and synthetic data," writes Microsoft. "The model is also further aligned for robustness, safety, and chat format."

Much has been written about the potential environmental impact of AI models and datacenters themselves, including on Ars. With new techniques and research, it's possible that machine learning experts may continue to increase the capability of smaller AI models, replacing the need for larger ones—at least for everyday tasks. That would theoretically not only save money in the long run but also require far less energy in aggregate, dramatically decreasing AI's environmental footprint. AI models like Phi-3 may be a step toward that future if the benchmark results hold up to scrutiny.

Phi-3 is immediately available on Microsoft's cloud service platform Azure, as well as through partnerships with machine learning model platform Hugging Face and Ollama, a framework that allows models to run locally on Macs and PCs."

arstechnica.com/information-te

Microsoft’s Phi-3 shows the surprising power of small, locally run AI language models

Microsoft’s 3.8B parameter Phi-3 may rival GPT-3.5,…

Ars Technica
Mauve 👁💜

Holy shit, the new #Phi3 #LLM from Microsoft is 3.8B params but performs better than a bunch of the 7B models I've tried. Now I just need to find a way to get it to do function calling.
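One way to fake it, sketched below under the assumption that phi3 is served by ollama on its default port: prompt the model to reply only with JSON describing a tool call, then parse that JSON yourself. The get_weather tool and its schema are made-up examples, and output quality will vary with such a small model:

```python
# Prompt-based "function calling" with phi3 via ollama's HTTP API.
# Assumes: ollama on the default port 11434 with the `phi3` model pulled.
import json
import requests

SYSTEM = (
    "You can call one tool: get_weather(city: string). "
    'Reply ONLY with JSON like {"tool": "get_weather", "arguments": {"city": "..."}}.'
)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "phi3",
        "system": SYSTEM,
        "prompt": "What's the weather like in Toronto right now?",
        "format": "json",   # ask ollama to constrain the output to valid JSON
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
call = json.loads(resp.json()["response"])  # e.g. {"tool": ..., "arguments": {...}}
print(call["tool"], call["arguments"])
```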

theverge.com/2024/4/23/2413753

Microsoft launches Phi-3, its smallest AI model yet

Phi-3 is the first of three small Phi models this year.

www.theverge.com