Starting a thread where I document my experiences with this project, with Open WebUI, Ollama, and others...

Installed Open WebUI on Ubuntu.
It is a ChatGPT-like frontend for your own local or remote models.

Something like docker compose up started the backend.
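For the record, a sketch of what "something like" means here — a minimal compose file, assuming the image name and ports from the Open WebUI README (the upstream defaults; this is illustrative, not my exact setup):

```shell
# Write a minimal docker-compose.yml (image/ports per the Open WebUI
# README; adjust the host port to taste), then bring it up.
cat > docker-compose.yml <<'EOF'
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    volumes:
      - open-webui:/app/backend/data
volumes:
  open-webui:
EOF
docker compose up -d
# The UI should then be reachable at http://localhost:3000
```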

That was the easy part.

Followed by the installation of Ollama, which is a platform for simple LLM installation.
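Assuming the platform meant here is Ollama (the tinyllama and llama3.2 models mentioned further down are pulled through it), the installation really is simple:

```shell
# Official Ollama install script for Linux, then pull a small model.
curl -fsSL https://ollama.com/install.sh | sh
# Download and chat with a model (tinyllama is one of the smallest):
ollama run tinyllama
# Or a one-shot prompt instead of the interactive REPL:
ollama run tinyllama "joke of the day"
```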

The first surprise was that the models are much less verbose in this interface than in Open WebUI.

So annoying, all that generated text I do not want to read, because it is anything but related to my original question.

Asked a math model for a "joke of the day".

It responded quite humorlessly:
"Why do trees always seem to run out of cash? Because they can't afford their own roots without "bank accounts" and when it comes time for tax season, the IRS tells them that there are no more credits available."

That's the most un-funny joke I've ever heard, so ok

I'd better stick to English with this one, after getting:

"Ja, ich kann Deutsch und deutsches Schmettenschoner worken. Wie Sie es weiter fragen möchten, richten Sie mir abovein."

That's not German. It's even more confusing.

Now it mixes Chinese characters into my question:

What is a Las器 Kopf?

A "las器" (laser) in Chinese typically refers to a "laser" in English, which are devices that emit light at a specific wavelength (resonant cavity). The term "las器 kopf" translates to "laser tool" or more commonly as simply "laser."

Created my custom model. Added some of our own "knowledge" to the model. Let's see what it can do. :ablobwave:
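In Ollama terms, a custom model is a Modelfile on top of a base model (the "knowledge" documents themselves are attached separately in Open WebUI). A sketch of that step — the model name, base model, and system prompt here are hypothetical:

```shell
# Sketch of creating a custom Ollama model (names are hypothetical).
cat > Modelfile <<'EOF'
FROM llama3.2
SYSTEM "You answer questions about our local meetups, briefly and in English."
EOF
ollama create meetup-bot -f Modelfile
ollama run meetup-bot "When is the next meetup?"
```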

This belongs here:

User: You are fast!

Deepseek: Hey! Cool to hear you're fast!

😂

Tested some more models. As soon as I add any "knowledge" the stuff gets slow as hell.

Even simple text files (1, 2, 3) are searched very slowly.

One big lesson learned:

At first, a plain mysql export gave me a long text file (~14 kB) with some non-ASCII characters, backslash line continuations, and similar noise.

And the only information the model could extract from the "knowledge" in this file was one specific meetup described at the beginning of the text document.

So worthless.

That was the reason I switched to single files with plain, parseable markdown. It felt like dealing with code itself, so the fuzziness of the LLM was no longer a benefit.
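The cleanup step in between can be scripted. A sketch of flattening such an export — joining the backslash-continued lines and turning the literal \n and \t escapes into spaces (the function name and file names are hypothetical, and your export's escaping may differ):

```shell
# Flatten a raw SQL-export text file: join backslash-continued lines
# and replace literal \n / \t escape sequences with plain spaces.
clean_export() {
  awk '{
    # a trailing backslash means the record continues on the next line
    while (sub(/\\$/, "")) {
      if ((getline nxt) <= 0) break
      $0 = $0 nxt
    }
    # replace escaped newline/tab sequences with plain spaces
    gsub(/\\n|\\t/, " ")
    print
  }' "$1"
}

# tiny demo input: one continued line, one line with a literal \n escape
printf 'alpha \\\nbeta\nfoo\\nbar\n' > sample.txt
clean_export sample.txt
# prints:
# alpha beta
# foo bar
```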

Another moment: I spell "canterbury" and it responds with "canterbrycy".

What should I there from holden*?

*) this is a hint for the right answer ;)

And I forgot to say: the rest is also wrong.

The excuse is that this tinyllama is a VERY tiny model. llama3.2 works much smarter. (But it's harder to test, because it already knows about The Expanse without adding anything to the "knowledge", so I need to invent something...)

Qoto Mastodon
