
Beyond Popper, other possible characteristics of pseudoscience include a lack of progress over time compared to rival theories, or the absence of a clear mechanism for proposed effects. But these criteria have their own issues.

What makes something scientific? Philosophers call this the problem of demarcation: finding a principled way to distinguish genuine sciences from pseudosciences. It's a surprisingly hard question!

Monitoring latency (the time from API request to response) is important for production LLM applications. Optimizing prompt design and token usage can help reduce latency.
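A minimal sketch of one way to measure this, wrapping whatever client call you already use (the function passed in is a placeholder for your own API call):

    import time

    def timed_call(call, *args):
        # Wrap any client call and report request-to-response latency
        start = time.perf_counter()
        result = call(*args)
        print(f"latency: {time.perf_counter() - start:.2f}s")
        return result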

Deploying GenAI applications to production requires managing challenges like latency, scalability, cost, quotas, and observability. Best practices address each of these areas.
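For the quota side specifically, a common pattern is retrying with exponential backoff. A rough sketch (the callable and the exception type are placeholders; narrow retry_on to your SDK's rate-limit error):

    import random
    import time

    def call_with_backoff(call, *args, max_retries=5, retry_on=(Exception,)):
        # Retry on quota/rate-limit errors, doubling the wait each attempt
        for attempt in range(max_retries):
            try:
                return call(*args)
            except retry_on:
                time.sleep(2 ** attempt + random.random())  # 1s, 2s, 4s... plus jitter
        raise RuntimeError("retries exhausted")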

Locke, Berkeley, and Hume emphasized the idea that experience is the ultimate source and justification of knowledge.

Critics argue that Popper's criterion is both too restrictive (excluding some legitimate scientific claims) and too permissive (admitting some pseudoscientific theories).

Popper emphasized the asymmetry between confirmation and falsification, arguing that while no finite amount of evidence can conclusively prove a theory, a single counterexample can potentially disprove it.

A critical step for RAG with large documents is chunking – breaking text into smaller, manageable parts. This ensures relevant sections can be retrieved and fit within the LLM's context window.
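A minimal sketch of fixed-size chunking with overlap (the sizes here are arbitrary; real pipelines often chunk by tokens or sentences rather than characters):

    def chunk_text(text, chunk_size=1000, overlap=200):
        # Split text into overlapping character windows
        step = chunk_size - overlap
        return [text[i:i + chunk_size] for i in range(0, len(text), step)]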

Building a chat application over your own data often involves RAG and a vector database. Vector databases store data embeddings and allow them to be searched efficiently.
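The core operation is nearest-neighbour search over embeddings. A toy version with NumPy and cosine similarity (a real vector database adds indexing, persistence, and filtering on top of this):

    import numpy as np

    def top_k(query_vec, doc_vecs, k=3):
        # Cosine similarity between the query and every stored embedding
        q = query_vec / np.linalg.norm(query_vec)
        d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
        return np.argsort(d @ q)[::-1][:k]  # indices of the k closest documents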

Retrieval-Augmented Generation (RAG) is a powerful technique to ground LLMs on external data. This enhances the relevance of responses and helps reduce hallucinations by providing context.
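Conceptually, the generation step just prepends the retrieved context to the prompt. A rough sketch, with the retriever and model call passed in as placeholders:

    def rag_answer(question, retrieve, generate):
        # retrieve: returns a list of relevant text chunks for the question
        # generate: sends a prompt to the LLM and returns its reply
        context = "\n\n".join(retrieve(question))
        prompt = (
            "Answer using only the context below.\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {question}"
        )
        return generate(prompt)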

Control the randomness of LLM outputs using the temperature or top_p parameters. Lower values yield more deterministic results, while higher values increase creativity and diversity.
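For example, with the OpenAI Python SDK (shown as one illustration; other providers expose the same parameters under similar names, and the model name here is just an example):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": "Suggest a name for a cafe."}],
        temperature=0.2,      # lower = more deterministic
        top_p=1.0,            # usually tune temperature or top_p, not both
    )
    print(response.choices[0].message.content)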

Large Language Models (LLMs) process text by converting it into tokens. For English, a token is roughly 4 characters or 0.75 words. Managing tokens is vital for cost and performance.
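A quick way to check actual counts is a tokenizer library such as tiktoken (here assuming the cl100k_base encoding used by several OpenAI models):

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("Managing tokens is vital for cost and performance.")
    print(len(tokens))  # number of tokens in the sentence above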

Isaac Asimov introduced his famous Three Laws of Robotics in 1942, establishing a hierarchy for robotic decision-making, such as prioritizing the prevention of harm to humans over obedience to orders.

Imagine assessment that focused on: habits, comparison to others, and the ability to create valued work.

There's growing recognition that diverse assessment data is needed for a complete picture of student learning, beyond traditional tests.

Today, I encountered what appear to be Chinese characters in a generative AI response.

Before AI can be widely adopted, people must trust it, and in particular trust that it can make accurate and fair decisions. AI should be aware of and aligned with human values.
