It's looking increasingly likely that an AI model called QUALIA or Q* may indeed be related to the bizarre firing and re-hiring of Sam Altman at OpenAI.
Q* is not the only theory that attempts to explain the OpenAI coup attempt. Another is that the board of the OpenAI non-profit was compromised by financial interests in OpenAI competitors such as Anthropic. Firing Altman and essentially destroying or weakening OpenAI could plausibly have been a self-interested attempt to pump Anthropic and the rest of OpenAI's competitors.
But rumors, and some more solid information, about Q* are slowly trickling out. Some of the scoop is more trustworthy, reported by reputable outlets like Reuters, and some has allegedly been leaked on much less reputable forums like 4chan. Although 4chan is a cesspit of lies and misinformation, it has already earned a place in the dissemination and advancement of AI knowledge and practice, since it was the forum where the weights for Meta's Llama model were originally leaked.
What we know for sure is that Q* is an AI model that has seemed particularly successful at lower-level (grade school) math and has been given a lot of compute at OpenAI. What is speculation is that Q* turns out to be really, really good at math, better than any LLM (large language model, e.g. ChatGPT) has ever been. (LLMs have so far been really terrible at math.) Q* also allegedly displays the property of "meta-cognition", the ability to think about its own thinking. It supposedly can apply its learnings from one subject to another, much in the way AlphaZero was able to apply its super-human skill from one game to another. AlphaZero is not an LLM, and no LLM has ever displayed this property of meta-cognition before. AlphaZero, although super-human in the domain of certain games, has no ability to read papers about its own architecture.
Meta-cognition is an active area of research. If an LLM truly had the ability to think about its own thinking, combined with a deep knowledge of math, then in theory it could read papers about machine learning (i.e. math) and speculate about how its own architecture could be improved, beginning the feedback loop that could lead to exponential improvements on the path toward AGI (artificial general intelligence) and the "Singularity".
And, allegedly, that's exactly what Q* has already done, having already suggested changes to its own architecture, changes that no human fully understands. And, according to this theory, somebody got spooked and felt they needed to pull the plug.
There is one other extremely speculative theory that caught my interest. Supposedly Q* has displayed the ability to crack certain forms of strong encryption. I'm not sure how this could be possible: math is math, and brute-forcing a modern cipher (AES-256 has 2^256 possible keys) is far beyond any classical computer, no matter how super-intelligent the entity driving it is. But I suppose it could be theoretically possible that it has found novel attacks, structural weaknesses in the algorithms themselves, that human cryptographers have not discovered yet. If so, this has huge implications for our entire architecture of computer security as we know it. And we are now in the realm of national security interest, which could help explain why the OpenAI board was not forthcoming with its reasoning.

I've been closely following the ongoing drama regarding the "coup" at OpenAI. In a shocking move, CEO Sam Altman was fired by the OpenAI board, catching seemingly everyone off guard, even Microsoft, which has invested north of $10B.

The latest I hear is that a huge number of OpenAI employees have threatened to quit if Altman is not reinstated, and Sam Altman may be negotiating to get his job back along with governance reforms that would put the reins of power more firmly in his hands.

Lots of theories have been offered to explain this turn of events, and many are framing it as a battle between Effective Altruism and Effective Accelerationism.

It's important to note that the two philosophies are not polar opposites. In fact, they share many of the same assumptions and are held by more or less exactly the same demographic of rich Silicon Valley men.

But as OpenAI explodes toward a $90B valuation amid rumors that AGI (artificial general intelligence) has already been invented or is about to be, small differences in philosophy can still lead to monumental power struggles.

As a reminder, whichever company capitalizes on AGI first will likely be worth more than all of the other megacap tech companies put together. This is a world-altering amount of money and power we could be talking about here.

On the other hand, it could all still be decades off. The LLM architecture, although a major innovation, has not been shown to be capable of simply scaling up to human-level intelligence. OpenAI might have actually run up against a wall.

Sam Altman is seen as doing everything he can to accelerate, including harnessing capitalism to build products and make huge amounts of money. In contrast, the OpenAI board, led by Ilya Sutskever, may want to take a more careful approach, with less focus on capitalism and more on deliberate academic research. They want an AI that is perfectly aligned with their own corporate goals, even if that means slowing down the release of new products. And they want OpenAI to go back to its roots as a genuine nonprofit.

Effective Altruists have been maligned as doomers or decelerationists, but this isn't accurate. They want AGI too. But they want AGI to be protected and safe. They want regulatory capture; they want open source AI stopped. In short, they want themselves to be fully in control, to be the priests who protect the rest of the world from the AGI god. And it just so happens that by doing this they will make a shit-ton of money, which they will rationally spend on altruism such as ending poverty or colonizing Mars.

Both EA and e/acc are essentially variants of an AGI cult, much in the same way the Abrahamic religions are very similar to each other yet behave diametrically opposed to each other in the global struggle for dominance.

I can't predict what will happen with Sam Altman and OpenAI. But I will say if Altman is reinstated, it could be interpreted as a major victory for the hypercapitalist accelerationists.

The battle over open AI research intensifies at the AI Safety Summit. On one side sit advocates such as Meta's Yann LeCun; on the other, former Google AI researcher Geoffrey Hinton and others.

LeCun accuses the anti-open AI team of being in the pocket of corporations.

Yann LeCun writes (and I agree):
"Now about open source: your campaign is going to have the exact opposite effect of what you seek. In a future where AI systems are poised to constitute the repository of all human knowledge and culture, we *need* the platforms to be open source and freely available so that everyone can contribute to them. Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture. This requires that contributions to those platforms be crowd-sourced, a bit like Wikipedia. That won't work unless the platforms are open.

"The alternative, which will *inevitably* happen if open source AI is regulated out of existence, is that a small number of companies from the West Coast of the US and China will control AI platform and hence control people's entire digital diet. What does that mean for democracy? What does that mean for cultural diversity? *THIS* is what keeps me up at night."

twitter.com/ylecun/status/1718

The _real_ doomsday scenario with AI is that it is monopolized by the 1% and consolidates their power vs everyone else.

Elon Musk escalates his rhetoric against remote workers. It's not just a productivity issue, in his view; it's a moral issue. He proceeds to wag his finger at the "laptop class" and accuses them of being like Marie Antoinette with the "let them eat cake" quote apocryphally attributed to her.

What's ironic is that while Musk is trying to stir up envy against the "laptop class", everything he says about it more accurately applies to the billionaire class. Remote workers are not just sitting back and ordering others to do their bidding; that's what billionaires do. I really have to wonder whether Musk is using remote workers as a proxy to buffer himself and other centibillionaires from class consciousness.

Remote workers need to fight back against this rhetoric. Remote work is work. We are working class. And not all of us are well paid. And if we were forced back into the office, we wouldn't get to work in shiny Tesla factories; most of us would end up back in dreary cubicles or panopticon-like "open office" cube farms. But I guess, according to Musk, that's our "moral" duty?

US President Joe Biden issued an important Executive Order relating to AI. This EO does not create a new top-level AI regulatory organization (a Dept. of AI), as some hoped, but it does order 19 government agencies to form new AI boards and task forces. My understanding is that this is the most significant level of non-crisis intra-governmental cooperation in decades: a very interesting example of proactive government.

The knee-jerk reaction to government regulation is that it will stifle innovation. But the other side of the coin is that US Big Tech companies are very influential in the US government, and it's unlikely that they would regulate themselves into a real disadvantage relative to the global competition.

One of the interesting aspects of this EO is that it requires corporate entities with a certain threshold of compute capability to register themselves with the government. And all significant foundational models (e.g. ChatGPT) must be reviewed by the government before they can be released to the public. Essentially this means the government has granted itself the right to a sneak peek at new models, and to "red team" them, that is, to probe how they bear on safety, national defense, international competition, and so on.

I think big tech companies are mostly OK with this. Indeed, they have a huge incentive to avoid big mistakes that would provoke the ire of the government or the public. Being able to say the government reviewed their model and gave it a stamp of approval is a huge boon for them.

The question is: how will this EO affect open source LLMs (large language models)? I look forward to further analysis.

A sober, rational person can look at a chart like this and only draw one conclusion: space lasers have been working overtime to try to convince us climate change is real.

According to RCP, Trump vs. Biden is a statistical tie in the popular vote. Which, students of US politics will tell you, is a disaster for Biden, since low-population rural states that trend toward Trump have a massive Electoral College advantage. This has nothing to do with the Green Party or Cornel West; this is ALL inherently the weakness of Joe Biden himself within the massively flawed and idiosyncratic system of US realpolitik.

Biden is losing momentum in the polls. It's not too late for him to step down from the 2024 run and let the Democratic primary have vigorous debates to find the strongest candidate. If Biden persists in running in 2024 and loses to Trump, it's on him (and his DNC handlers), not Cornel West. Biden is, according to the best statistics we have so far, the real spoiler.

Congrats to Vanderbilt University for coming to this (now obvious) conclusion:
"we do not believe that AI detection software is an effective tool that should be used."
OpenAI has withdrawn its own deeply flawed AI detection tool. Turnitin should do the same. A 4% false positive rate (if it even is that low) is simply not good enough, and can impugn the innocent.

vanderbilt.edu/brightspace/202

There was, and continues to be, plenty of data. Even if you believe the stat that remote workers are 10-20% less efficient (and I don't believe it), many workers regard working from home as easily worth 10-20% in compensation, so the company essentially breaks even, even in the worst-case scenario, if it can take advantage of that.
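To illustrate with made-up numbers: suppose a $100k/year employee really is 15% less productive at home, costing the company roughly $15k in output. If that employee would accept roughly $15k less in pay (or forgo equivalent raises) in exchange for staying remote, the ledger balances, and that's before counting the office space the company no longer has to provide.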

But, apparently, hiring managers would rather read bogus articles in The Economist and op-eds from billionaires than review the basic facts.

fortune.com/2023/08/14/bosses-

I have zero sympathy for Democrats who shed crocodile tears about the spoiler effect of Cornel West but don't also vocalize support for Ranked Choice Voting or STAR Voting, which would largely fix the spoiler effect once and for all.

How dare these Democrats whine about a serious problem with our election system without simultaneously working to fix it? (Maybe because they profit from their duopolistic power, hmm?)

RCV and STAR Voting in presidential elections are 100% compatible with the US Constitution. The National Popular Vote folks have been working on this for a long time, and many states are already signed up. Let's get this done!

Like poverty, the two-party system is NOT a fact of life. It is a human-caused problem, and it can be fixed by humans.

nationalpopularvote.com/ranked

There is a battle for the mindshare of politicians of both parties. AI is not yet a left or right thing, but we see two lobbyist camps forming: essentially the pro-AI team and the anti-AI team, although I think neither side would characterize themselves so starkly. The pro-AI lobbies do mention the dangers of AI but offer solutions, often in the form of products. The anti-AI folks often mention the benefits of AI done for the public good but stress that it must be strictly regulated. Reminds me somewhat of the GMO debate during the early Bush years.

Stanford, being close to industry, is very much in the pro-AI camp.

washingtonpost.com/technology/


“By the end of their talk, Stanford’s ability to sway Washington sounded almost as powerful as any tech giant.”

That’s because it is a powerful tech giant. Stanford is BigTech and BigTech is Stanford.

By @nitashatiku

washingtonpost.com/technology/

Many communists and socialists opposed FDR's New Deal for almost exactly the same reason that they now oppose UBI: they saw it as a plot to appease the Proletariat and delay the inevitable and necessary Revolution.

"During the first term of the FDR administration both the Socialist Party of America and the CPUSA under William Z Foster opposed the New Deal because they saw it as a Capitalist plot to appease the Proletariat with a mere reform of the Market Economy. And that decision more then anything else is what killed the early Socialist Movement in the United States."

solascripturachristianliberty.

The corporatist Democrats are starting to say the quiet things out loud:

(1) They hope Trump wins the GOP nomination (pied piper strategy times 2) because they think Joe Biden can win against him.

(2) They hope black voters can be "kept in line" (literally used those words) and not defect in sufficient numbers to Cornel West.

I have a feeling these Democrats did not anticipate in their quant analysis that a black candidate would run for the Green Party nomination. Polls show clearly that it's the Achilles' heel of an already extremely risky Democratic strategy. And it was a very similar roll of the dice that got Trump elected in the first place.

youtube.com/watch?v=ekx4YV__1Q

I should add that Republicans/conservatives play right into this when they conflate Democrats with leftists, falsely implying that policies which are actually Democrat/corporatist are leftist and to blame. Democrats and Republicans are really on the same team when it comes to waging information warfare against genuine social and economic justice. This is a key reason why a strong Green Party alternative is crucial.

"The democrat party is the graveyard of political movements"

I stand in solidarity with this indigenous position paper on AI. It's nuanced and carries wisdom that the larger, non-indigenous society can learn from. It balances the very real concern that AI will be just one more tool of colonization against the recognition that some of these new technologies, such as blockchain, could support the privacy and independence needs of an indigenous community striving for increased autonomy.

indigenous-ai.net/position-pap

If OpenAI continues on its trajectory of having the world's best commercially available AI, it could reach a point where its advantage snowballs. Meaning, if OpenAI's programmers have the best AI tools available to them and use those tools to construct even better AI tools, it is possible they could exponentially leave their competition in the dust.

In which case OpenAI will make a lot of money. A world-changing, mind-bogglingly huge amount of money. Potentially, someday, maybe even a majority of all the world's money.

It is in this context that Sam Altman's specific UBI proposals should be studied. Of course, Sam Altman himself is very rich. He has a perspective on what is good for global stability which may not fully intersect with class consciousness and economic justice.

For example, one possible future UBI might be engineered to buy as much global stability as possible while giving the lower economic classes as little as possible. A prison planet, essentially. Is this Sam Altman's vision for UBI? Maybe.

Another vision of UBI might be to maximize the redistribution of wealth.

Another vision of UBI is to fully eliminate poverty and then stop there, letting rich people continue to be (almost) as rich as they already are.

Another vision might be to bring everyone up to at least middle class. In which case rich people would be significantly taxed. I'm guessing this is not what Sam Altman wants.

Which kind of UBI our society lands on, if any, should be vigorously and openly debated by people from all walks of life. We can't just cede this debate to the 1%.

mpost.io/openai-sponsors-major
