Adopting generative AI requires a strategic approach from enterprises. Considerations include risks like hallucinations (incorrect or fabricated output), biased content, and the need for robust governance and responsible AI policies.

Large Language Models (LLMs) are central to the recent excitement around generative AI. Trained on immense amounts of text, they understand and generate human-like text for diverse tasks like answering questions or writing code.
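
As a minimal sketch of what that looks like in practice, assuming the Hugging Face transformers library and the small gpt2 checkpoint (both illustrative choices, not anything specified above):

```python
# A minimal sketch, assuming "transformers" is installed and the small
# gpt2 checkpoint is available. The prompt is invented for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Question: What is generative AI?\nAnswer:",
    max_new_tokens=40,
    do_sample=True,
)
print(result[0]["generated_text"])
```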

Traditional AI models are generally less complex and require fewer resources, running on various hardware. Generative AI models, especially LLMs, are large, complex, and often require large cloud compute nodes.

Training differs greatly: traditional AI uses smaller, labeled datasets, while generative AI trains on massive datasets of existing content, like millions of images or vast amounts of text.
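
A toy illustration of the two regimes, with invented spam examples and an invented corpus; scikit-learn is assumed available:

```python
# A minimal sketch: supervised learning on labeled data vs. the
# self-supervised next-token objective. All data here is made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Traditional AI: a small, explicitly labeled dataset.
texts = ["win a free prize now", "meeting moved to 3pm"]
labels = [1, 0]  # 1 = spam, 0 = not spam
vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)
print(clf.predict(vec.transform(["free prize inside"])))  # expect [1]

# Generative AI: no human labels needed; the "label" is just the next
# token, so any raw text becomes training data (self-supervised learning).
corpus = "the cat sat on the mat".split()
pairs = [(corpus[i], corpus[i + 1]) for i in range(len(corpus) - 1)]
print(pairs)  # [('the', 'cat'), ('cat', 'sat'), ...]
```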

Traditional AI follows a deterministic, rule-based approach, resulting in predictable outcomes. Generative AI uses a probabilistic approach, leading to varied, non-deterministic outcomes not explicitly programmed.
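
A minimal sketch of that contrast; the spam rule and the token distribution are invented for illustration:

```python
# Deterministic rule vs. probabilistic sampling. Both the rule and the
# toy next-token distribution are hypothetical.
import random

def rule_based(email: str) -> str:
    # Deterministic: the same input always yields the same output.
    return "spam" if "free prize" in email.lower() else "not spam"

def sample_next(distribution: dict[str, float]) -> str:
    # Probabilistic: repeated calls with the same input can differ.
    tokens, weights = zip(*distribution.items())
    return random.choices(tokens, weights=weights)[0]

print(rule_based("Win a FREE PRIZE now"))                 # always "spam"
print(sample_next({"cat": 0.5, "dog": 0.3, "fox": 0.2}))  # varies per run
```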

The most fundamental difference? Traditional AI predicts or classifies (like identifying spam). Generative AI is designed to create entirely new content, such as realistic text, images, code, or music.
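
A toy sketch of "creating new content": a tiny Markov chain that can emit word sequences not found verbatim in its (invented) training text, in contrast to a classifier that only assigns one of a fixed set of labels:

```python
# A minimal generative sketch: a first-order Markov chain over words.
# The corpus is invented; real generative models are vastly larger.
import random

corpus = "the cat sat on the mat the cat ran on the rug".split()
chain: dict[str, list[str]] = {}
for a, b in zip(corpus, corpus[1:]):
    chain.setdefault(a, []).append(b)

word, output = "the", ["the"]
for _ in range(8):
    # Fall back to the whole corpus if a word has no recorded successor.
    word = random.choice(chain.get(word, corpus))
    output.append(word)
print(" ".join(output))  # e.g. "the cat ran on the mat the cat sat"
```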

Traditional AI, also known as narrow AI, operates using classical data science and a systematic approach. It's focused on prediction or classification tasks based on existing data within predefined boundaries.

Generative AI is capturing public interest, driving discussion, and is seen as a catalyst for the next wave of digital transformation. It's fundamentally different from traditional AI.

Can machines truly think or merely simulate thought?

Technologies are non-neutral. They influence how we act, interact, and especially how we think. Information itself may be treated neutrally by systems, but its impact on humans is not.

Problems that are complex, irreducible, and have a social dimension, like education, are recognized as wicked problems. Unlike tame problems, their causes aren't clear, they're hard to understand, and solutions are tentative.

Educational planning has historically focused on setting goals and measuring outcomes, often based on standardized tests. This approach, typical for tame problems, is applied despite contentious debates about the validity of those tests.

Unlike engineered tools, language technology doesn't have a blueprint or a goal of optimality. Its evolution is gradual, piecemeal, and non-unilinear, shaped by adaptation and the co-option of existing structures.

A school assessment system answering the question "Can students create valued work?" seems more valuable than what we have.

Imagine a school assessment system that answers the question: Do students have good work habits?

AI-Induced Bias: Systematic and repeatable errors in an AI system's output that create unfair outcomes, often reflecting biases present in the data the AI was trained on.
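
One way to make that definition concrete is to compare positive-outcome rates across groups (a demographic-parity check). This is a minimal sketch; the predictions and group labels below are invented:

```python
# A toy fairness check: do two groups receive positive model outputs
# at different rates? All values are hypothetical.
preds  = [1, 0, 1, 1, 0, 0, 0, 1]   # 1 = model recommends the candidate
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

def selection_rate(group: str) -> float:
    hits = [p for p, g in zip(preds, groups) if g == group]
    return sum(hits) / len(hits)

gap = abs(selection_rate("a") - selection_rate("b"))
print(f"selection-rate gap: {gap:.2f}")  # 0.50 here; 0.00 would be parity
```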

AI systems are being used for employee surveillance through monitoring emails, tracking online activity, analyzing facial expressions, and even gauging emotions through voice analysis software.

AI-based hiring is no more effective than traditional hiring methods, despite the appearance of efficiency and objectivity provided by AI.

“are obsessed with perpetual evaluation of a very limited range of skills even as they ignore, even discourage, curiosity and explanation.” -Geerat Vermeij

Yup, that pretty much captures what is wrong with it.

Geerat Vermeij wrote of evolution, “Skeptics harbor legitimate questions and reservations as well as ill-informed grievances.” This describes many areas of . Difficulties arise when advocates cannot differentiate the two.
