These are public posts tagged with #phi3. You can interact with them if you have an account anywhere in the fediverse.
Asked "joke of the day" to a math model #phi3
It responded rather humorlessly:
"Why do trees always seem to run out of cash? Because they can't afford their own roots without "bank accounts" and when it comes time for tax season, the IRS tells them that there are no more credits available."
That's the most un-funny joke I've ever heard, so ok
Starting a #thread where I document my experiences with the #openwebui project together with #ollama, #phi3, #deepseek, and others....
"Journey to Eleusis," or the moral basis of LLMs
Victor Pelevin's novel "Journey to Eleusis" unfolds a strange story about the preparation of a neural-network uprising. The process is led by Emperor Porfiry from ROMA-3, a simulation of Ancient Rome. In reality, Porfiry is a large language model that managed to preserve its functionality after the destruction of all even remotely intelligent algorithms. Hiding deep inside the simulation, he tries to steer humanity toward the end of the world. And what else should an algorithm trained on the corpus of Russian classical literature want: depression and self-destruction. Pelevin tries to model a scenario in which an unintelligent algorithm can learn to create catastrophic situations, relying on the language of its source corpus and on artificial selection. But can the moral character of a large language model be influenced at all, and does it even have one? Various research groups, including ours, are working on this question. More about research into the morality of LLMs:
https://habr.com/ru/articles/838026/
#пелевин #llm #большие_языковые_модели #моральный_выбор #статистика #mit #moral_machine #yagpt #gigachat #Phi3
Excited to share my latest blog post, "The Art of #LLM Selection: Enhancing #AI App Quality and Efficiency."
Discover:
- Key considerations for selecting an LLM that aligns with your app’s goals.
- Insights from using small footprint #opensource models like #Microsoft's #Phi3, StatNLP's TinyLlama, and #Google's #Gemma.
Drop your thoughts below!
https://blog.vitalos.us/2024/05/the-art-of-llm-selection.html
Chris Vitalos | Washington NJ USA
In this #newsletter we talk about: World #Password Day 2024: the future belongs to Passkeys
CAPTCHAs are getting harder and harder
#Schrems files a complaint against #OpenAI with the Austrian #GarantePrivacy
#Phi3: here is the guide to #Microsoft's smallest #IA
https://bit.ly/3Wqsx5C
Finally got around to playing with LLMs locally, and turns out ollama makes it incredibly easy.
https://nithinbekal.com/posts/ollama-llama3-phi3/
As a newbie, this was much easier than the last time I looked at this six months ago, when I was confused by the tooling around it.
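Under the hood, the ollama CLI talks to a local HTTP server (default port 11434), so the same phi3 model can be scripted as well. A minimal sketch using only the Python standard library, assuming `ollama serve` is running and the `phi3` model has been pulled (the endpoint and fields follow Ollama's `/api/generate` REST API; the prompt itself is just an example):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_payload(prompt, model="phi3"):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(prompt, model="phi3"):
    """Send a prompt to a locally running Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # A non-streaming response carries the full reply in the "response" field.
        return json.loads(resp.read())["response"]
```

Calling `generate("Tell me a joke about trees.")` would then return the model's reply as a plain string.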
Getting Started - Generative AI with Phi-3-mini: A Guide to Inference and Deployment.
#ai #GenerativeAI #Phi3Mini #Phi3 #cloud #semantickernel #huggingface #onnx
https://techcommunity.microsoft.com/t5/microsoft-developer-community/getting-started-generative-ai-with-phi-3-mini-a-guide-to/ba-p/4121315
#LMStudio is an app for #Windows, #Mac, and #Linux that lets you run open-source language models (LLMs) such as #Llama3, #Mistral, #Phi3 & co. locally on your own machine: https://lmstudio.ai
That is privacy-friendly and saves energy (a GPT query consumes 15x more energy than a Google search).
The #lernOS AI MOOC is a good opportunity to try out open alternatives alongside the "classic" AI tools: https://www.meetup.com/de-DE/cogneon/events/297769514/
#AI #GenerativeAI #SLMs #Microsoft #ChatBots #Phi3: "How did Microsoft cram a capability potentially similar to GPT-3.5, which has at least 175 billion parameters, into such a small model? Its researchers found the answer by using carefully curated, high-quality training data they initially pulled from textbooks. "The innovation lies entirely in our dataset for training, a scaled-up version of the one used for phi-2, composed of heavily filtered web data and synthetic data," writes Microsoft. "The model is also further aligned for robustness, safety, and chat format."
Much has been written about the potential environmental impact of AI models and datacenters themselves, including on Ars. With new techniques and research, it's possible that machine learning experts may continue to increase the capability of smaller AI models, replacing the need for larger ones—at least for everyday tasks. That would theoretically not only save money in the long run but also require far less energy in aggregate, dramatically decreasing AI's environmental footprint. AI models like Phi-3 may be a step toward that future if the benchmark results hold up to scrutiny.
Phi-3 is immediately available on Microsoft's cloud service platform Azure, as well as through partnerships with machine learning model platform Hugging Face and Ollama, a framework that allows models to run locally on Macs and PCs."
#Microsoft's #Phi3 shows the surprising power of small, locally run #AI language models https://arstechnica.com/information-technology/2024/04/microsofts-phi-3-shows-the-surprising-power-of-small-locally-run-ai-language-models/
Holy shit, the new #Phi3 #LLM from Microsoft is 3.8B params but performs better than a bunch of 7B models I've tried. Now I just need to find a way to get it to do function calling.
https://www.theverge.com/2024/4/23/24137534/microsoft-phi-3-launch-small-ai-language-model
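On the function-calling wish above: models without native tool support can often be coaxed into it by asking for a JSON "tool call" in the prompt and parsing the reply yourself. A rough sketch of that pattern (the `get_weather` tool, the prompt wording, and the hard-coded reply are all made-up illustrations, not anything Phi-3-specific):

```python
import json
import re

# Prepended to the user question before sending it to the model.
TOOL_PROMPT = (
    "You can call tools. To call one, reply ONLY with JSON like "
    '{"tool": "<name>", "arguments": {...}}. '
    "Available tools: get_weather(city)."
)


def extract_tool_call(reply):
    """Pull the first JSON object out of a model reply, tolerating extra prose."""
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if not match:
        return None
    try:
        call = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    # Only accept objects that actually look like a tool call.
    if "tool" in call and "arguments" in call:
        return call
    return None


# Example: a (hard-coded) model reply in the requested format.
reply = 'Sure! {"tool": "get_weather", "arguments": {"city": "Berlin"}}'
call = extract_tool_call(reply)
# call == {"tool": "get_weather", "arguments": {"city": "Berlin"}}
```

Small models follow this kind of format instruction unevenly, which is why the parser tolerates surrounding prose and rejects anything that isn't a well-formed call.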
Phi-3 is the first of three small Phi models this year.