It's looking increasingly likely that an AI model called QUALIA or Q* may indeed be related to the bizarre firing and re-hiring of Sam Altman at OpenAI.
Q* is not the only theory that attempts to explain the OpenAI coup attempt. Another is that the board of the non-profit OpenAI was compromised by financial interests in OpenAI competitors such as Anthropic. Firing Altman and attempting to essentially destroy or weaken OpenAI could plausibly have been a self-interested attempt to pump Anthropic and the rest of OpenAI's competitors.
But rumors and some more solid information about Q* are slowly trickling out. Some of the scoop is more trustworthy, reported by reputable outlets like Reuters, and some has allegedly been leaked on much less reputable forums like 4chan. Although 4chan is a cesspit of lies and misinformation, it has already earned a place in the dissemination and advancement of AI knowledge and practice, since it was the forum where the weights for the Llama model were originally leaked.
What we know for sure is that Q* is an AI model that has seemed particularly successful at lower-level (grade school) math and has been given a lot of compute at OpenAI. What is speculation is that Q* turns out to be really, really good at math, better than any LLM (large language model, e.g. ChatGPT) has ever been. (LLMs have so far been really terrible at math.) Q* also allegedly displays the property of "meta-cognition", which is the ability to think about its own thinking. It supposedly has the ability to apply what it learns in one subject to another, much in the way that AlphaZero was able to apply its super-human skill from one game to another. AlphaZero is not an LLM, and no LLM has ever displayed this property of meta-cognition before. AlphaZero, although super-human in the domain of certain games, has no ability to read papers about its own architecture.
Meta-cognition is an active area of research. If an LLM truly had the ability to think about its own thinking, combined with a deep knowledge of math, then in theory it could read papers about machine learning (i.e. math) and speculate about how its own architecture could be improved, beginning the feedback loop that could lead to exponential improvements on the path toward AGI (artificial general intelligence) and the "Singularity".
And, allegedly, that's exactly what Q* has done: it has already suggested changes to its own architecture, changes that no human fully understands. And, according to this theory, somebody got spooked and felt they needed to pull the plug.
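To make the alleged feedback loop concrete, here is a toy sketch in Python of what recursive self-improvement would look like as a loop. To be clear, this is purely illustrative: every function below is a made-up stub standing in for a capability that is, at this point, pure speculation.

import random

def benchmark(model: float) -> float:
    # Stub: "model quality" is just a number in this toy.
    return model

def propose_improvement(model: float) -> float:
    # Stub standing in for "read ML papers, suggest changes to your own architecture".
    return model + random.uniform(-0.1, 0.3)

def self_improvement_loop(model: float, rounds: int = 10) -> float:
    score = benchmark(model)
    for _ in range(rounds):
        candidate = propose_improvement(model)
        if benchmark(candidate) > score:
            # Keep only genuine improvements; each pass builds on the last,
            # which is where the "exponential improvement" worry comes from.
            model, score = candidate, benchmark(candidate)
    return model

print(self_improvement_loop(1.0))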
There is one other extremely speculative theory that caught my interest. Supposedly Q* has displayed the ability to crack certain forms of strong encryption. I'm not sure how this could be possible, since math is math, and it seems extremely unlikely that any entity running on classical computers could crack strong encryption, no matter how super-intelligent it is. But I suppose it could be theoretically possible that it has found novel solutions to decryption problems that human cryptographers have not discovered yet. If so, this has huge implications for our entire architecture of computer security as we know it. And we are now in the realm of national security interest, which could help explain why the OpenAI board was not forthcoming with its reasoning.
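To put some numbers behind "math is math", here is a back-of-the-envelope calculation (my own, purely illustrative) of why brute force on classical hardware is out of the question, and thus why any real break would have to come from a novel mathematical attack:

# Brute-forcing a 256-bit key on classical hardware, with an absurdly
# generous assumption of 10^18 key trials per second (exascale territory).
keyspace = 2 ** 256
trials_per_second = 1e18
seconds_per_year = 365.25 * 24 * 3600

years = keyspace / trials_per_second / seconds_per_year
print(f"{years:.2e} years")  # ~3.7e+51 years -- vastly longer than the age of the universe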

I've been closely following the ongoing drama regarding the "coup" at OpenAI. In a shocking move, CEO Sam Altman was fired by the OpenAI board, catching seemingly everyone off guard, even Microsoft, which has invested north of $10 billion.

The latest I hear is that a huge number of OpenAI employees have threatened to quit if Altman is not reinstated, and Sam Altman may be negotiating to get his job back along with governance reforms that would place the reins of power more firmly in his hands.

Lots of theories have been offered to explain this turn of events, and many are framing it as a battle between Effective Altruism and Effective Accelerationism.

It's important to note that the two philosophies are not polar opposites; in fact, they share many of the same assumptions and are held by more or less exactly the same demographic of rich Silicon Valley men.

But as OpenAI explodes toward a $90B valuation amid rumors that AGI (artificial general intelligence) has already been invented or is about to be, small differences in philosophy can still lead to monumental power struggles.

As a reminder, whichever company capitalizes on AGI first will likely be worth more than all of the other megacap tech companies put together. This is a world-altering amount of money and power we could be talking about here.

On the other hand, it could all still be decades off. LLM architecture, although a major innovation, has not been shown to be capable of simply scaling up to human-level intelligence. OpenAI might have actually run up against a wall.

Sam Altman is seen as doing everything he can to accelerate, including using capitalism to build products, to make huge amounts of money. In contrast, the OpenAI board, led by Ilya Sutskever, may want to take a more careful approach, with less focus on capitalism, and more focus on deliberate academic research. They want an AI that is perfectly aligned to their own corporate goals even if that means slowing down the release of new products. And they want OpenAI to go back to its roots as a genuine nonprofit again.

Effective Altruists have been maligned as doomers or decelerationists, but this isn't accurate. They want AGI too. But they want AGI to be protected and safe. They want regulatory capture; they want open source AI stopped. In short, they want themselves to be fully in control, to be the priests who protect the rest of the world from the AGI god. And it just so happens that by doing this they will make a shit-ton of money, which they will rationally spend on altruism such as ending poverty or colonizing Mars.

Both EA and e/acc are essentially variants of an AGI cult, much in the same way the Abrahamic religions are very similar to each other yet behave as though diametrically opposed in the global struggle for dominance.

I can't predict what will happen with Sam Altman and OpenAI. But I will say if Altman is reinstated, it could be interpreted as a major victory for the hypercapitalist accelerationists.

The battle over open AI research intensifies at the AI Safety Summit. On one side sit advocates such as Meta's Yann LeCun; on the other, former Google AI researcher Geoffrey Hinton, among others.

LeCun accuses the anti-open AI team of being in the pocket of corporations.

Yann LeCun writes (and I agree):
"Now about open source: your campaign is going to have the exact opposite effect of what you seek. In a future where AI systems are poised to constitute the repository of all human knowledge and culture, we *need* the platforms to be open source and freely available so that everyone can contribute to them. Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture. This requires that contributions to those platforms be crowd-sourced, a bit like Wikipedia. That won't work unless the platforms are open.

"The alternative, which will *inevitably* happen if open source AI is regulated out of existence, is that a small number of companies from the West Coast of the US and China will control AI platform and hence control people's entire digital diet. What does that mean for democracy? What does that mean for cultural diversity? *THIS* is what keeps me up at night."

twitter.com/ylecun/status/1718

The _real_ doomsday scenario with AI is that it is monopolized by the 1% and consolidates their power vs everyone else.

Elon Musk escalates his rhetoric against remote workers. It's not just a productivity issue, in his view, it's a moral issue. He proceeds to wag his finger at the "laptop class" and accuses them of being like Marie Antoinette with her apocryphal "let them eat cake".

What's ironic is that while Musk is trying to stir up envy against the "laptop class", everything he says about it more accurately applies to the billionaire class. We remote workers are not just sitting back and ordering others to do our bidding. That's what billionaires do. I really have to wonder whether Musk is using remote workers as a proxy to buffer himself and other centibillionaires from class consciousness.

Remote workers need to fight back against this rhetoric. Remote work is work. We are working class. And not all of us are well paid. And if we were forced back into the office, we wouldn't get to work in shiny Tesla factories; most of us would end up back in dreary cubicles or panopticonic "open office" cube farms. But I guess, according to Musk, that's our "moral" duty?

US President Joe Biden issued an important Executive Order relating to AI. This EO does not create a new top-level AI regulatory organization (a Dept. of AI) as some hoped, but it does order 19 government agencies to form new AI boards and task forces. My understanding is that this is the most significant level of non-crisis intra-governmental cooperation in decades, a very interesting example of proactive government.

The knee-jerk reaction to government regulation is that it will stifle innovation. But the other side of the coin is that US Big Tech companies are very influential in the US government, and it's unlikely that they would regulate themselves into a real disadvantage relative to the global competition.

One of the interesting aspects of this EO is that it requires corporate entities that have a certain threshold of compute capability to register themselves with the government. And all significant foundation models (e.g. ChatGPT) must be reviewed by the government before they can be released to the public. Essentially this means the government has granted itself the right to a sneak peek at new models, and to "red team" them, that is, to probe how they bear on safety, national defense, international competition, and so on.
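For the curious, the compute threshold that has been widely reported is 10^26 operations of training compute; here is a trivial sketch of what the registration rule amounts to, with a rough public estimate of GPT-4-scale compute as a reference point (verify both numbers against the EO text and primary sources before relying on them):

# Hedged sketch: the EO's widely reported reporting threshold for
# training runs is 10^26 operations (check the EO's actual text).
EO_THRESHOLD_OPS = 1e26

def must_report(training_ops: float) -> bool:
    # Would a training run of this size trigger the EO's reporting requirement?
    return training_ops >= EO_THRESHOLD_OPS

# Public estimates put GPT-4-class training around ~2e25 operations, i.e.
# below the line -- the rule seems aimed at the *next* generation of models.
print(must_report(2e25))  # False
print(must_report(3e26))  # True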

I think big tech companies are mostly ok with this. Indeed, they have a huge incentive to not allow big mistakes which will provoke the ire of the government or the public. Being able to say the government reviewed their model and gave it a stamp of approval is a huge boon for them.

The question is: how will this EO affect open source LLMs (large language models)? I look forward to further analysis.

A sober, rational person can look at a chart like this and only draw one conclusion: space lasers have been working overtime to try to convince us climate change is real.

According to RCP (RealClearPolitics), Trump vs Biden is a statistical tie. For the popular vote. Which, students of US politics will tell you, is a disaster for Biden, since low-population rural states that trend toward Trump have a massive electoral college advantage. This has nothing to do with the Green Party or Cornel West; this is ALL inherently the weakness of Joe Biden himself with respect to the massively flawed and idiosyncratic US electoral system.

Biden is losing momentum in the polling. It's not too late for him to step down from the 2024 run and let the Dem Primary have vigorous debates to find the strongest candidate. If Biden persists on running in 2024, and loses to Trump, it's on him (and his DNC handlers), not Cornel West. Biden is, according to the best statistics we have so far, the real spoiler.

Congrats to Vanderbilt University for coming to this (now obvious) conclusion:
"we do not believe that AI detection software is an effective tool that should be used."
OpenAI has withdrawn its own deeply flawed AI detection tool. Turnitin should do the same. A 4% false positive rate (if it is even that low) is simply not good enough, and it can impugn the innocent.

vanderbilt.edu/brightspace/202
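To see why even a 4% rate impugns the innocent, here's a quick base-rate calculation in Python (my own made-up classroom numbers, purely illustrative):

# A 4% false positive rate in a 1,000-student course where 5% actually cheat.
students = 1000
cheat_rate = 0.05
false_positive_rate = 0.04  # the claimed rate, taken at face value
true_positive_rate = 0.90   # generous assumption about detection power

honest = students * (1 - cheat_rate)                 # 950 honest students
falsely_accused = honest * false_positive_rate       # 38 innocent students flagged
caught = students * cheat_rate * true_positive_rate  # 45 actual cheaters flagged

# Nearly half of all accusations would land on innocent students.
print(falsely_accused / (falsely_accused + caught))  # ~0.46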

There was and continues to be plenty of data. Even if you believe the stat that remote workers are 10-20% less efficient (and I don't believe it), many workers regard working from home as easily worth 10-20% of their compensation, so the company essentially breaks even, even in the worst-case scenario, if it takes advantage of that.
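That break-even arithmetic is simple enough to sketch (illustrative numbers only):

# Worst-case break-even: take the disputed 10-20% productivity claim at its
# midpoint and compare it to what workers treat remote work as being worth.
salary = 100_000            # illustrative salary
productivity_loss = 0.15    # midpoint of the disputed 10-20% claim
wfh_value_to_worker = 0.15  # remote work's worth to the worker, as pay

lost_output = salary * productivity_loss            # $15,000 of "lost" output
pay_premium_avoided = salary * wfh_value_to_worker  # $15,000 less pay required
print(lost_output - pay_premium_avoided)            # 0 -- the company breaks even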

But, apparently, hiring managers would rather read bogus articles in The Economist and op-eds from billionaires than review the basic facts.

fortune.com/2023/08/14/bosses-

I have zero sympathy for Democrats who shed crocodile tears about the spoiler effect of Cornel West but don't also vocalize support for Ranked Choice Voting or STAR Voting, which would mostly fix the spoiler effect once and for all.

How dare these Democrats whine about a serious problem with our election system without simultaneously working to fix it? (Maybe because they profit from their duopolistic power, hmm?)

RCV and STAR Voting in presidential elections are 100% compatible with the US Constitution. The National Popular Vote folks have been working on this for a long time. Many states have already signed up. Let's get this done!

Like poverty, the two-party system is NOT a fact of life. It is a human-caused problem, and it can be fixed by humans.

nationalpopularvote.com/ranked
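For anyone who hasn't seen how RCV defuses the spoiler effect mechanically, here's a minimal instant-runoff tally in Python (a toy sketch with made-up ballots, not any official tallying code):

from collections import Counter

def instant_runoff(ballots: list[list[str]]) -> str:
    # Minimal RCV tally: eliminate the last-place candidate and transfer
    # their ballots to each voter's next choice until someone has a majority.
    ballots = [list(b) for b in ballots]
    while True:
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:
            return leader
        loser = min(tally, key=tally.get)
        for b in ballots:
            if b and b[0] == loser:
                b.pop(0)  # this ballot transfers to its next choice

# Toy electorate: under plain plurality, A wins 47-45 and C "spoils" B.
ballots = [["A"]] * 47 + [["B"]] * 45 + [["C", "B"]] * 8
print(instant_runoff(ballots))  # "B" -- C's voters transfer instead of being wasted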

There is a battle for the mindshare of politicians of both parties. AI is not yet a left or right thing, but we see two lobbyist camps forming: essentially the pro-AI team and the anti-AI team, although I think neither side would characterize itself so starkly. The pro-AI lobbies do mention the dangers of AI but offer solutions, often in the form of products. The anti-AI folks often mention the benefits of AI done for the public good but stress that it must be heavily regulated. Reminds me somewhat of the GMO debate during the early Bush years.

Stanford, being close to industry, is very much in the pro-AI camp.

washingtonpost.com/technology/


“By the end of their talk, Stanford’s ability to sway Washington sounded almost as powerful as any tech giant.”

That’s because it is a powerful tech giant. Stanford is BigTech and BigTech is Stanford.

By @nitashatiku

washingtonpost.com/technology/

@EyalL I will take you as sincere, but I will say that many who speak similarly to you in fact consider weakening Russia the primary agenda, and sometimes even say the quiet part out loud. Massacres are a product of war. Western policies ensure the war will continue in a steady state so long as Russia has more convicts to recruit, and it seems like it always does. The West thinks this is "winning" (and gets to trade out its old stock of weaponry for sale to the world), but Russia gets to profit too, in terms of wartime price increases for natural resources and commodities. Plus Russia gets the West to execute all of its convicts, so it's a win-win for everybody.

@EyalL

Do you agree?

"First, a protracted war hurts Russia more than it hurts the United States. The whole point of a proxy war is to weaken a rival without the cost and risk that would come as a result of direct confrontation. It’s especially valuable against Russia because it’s the weaker of the United States’ two big rivals and because it’s a large continental power constantly tempted to expand at the expense of its neighbors. This combination of weakness and temptation is the Kremlin’s Achilles’s heel: As a large land empire with vulnerable frontiers, Russia is continually pulled into conflicts beyond its ability to manage. Britain exploited this problem in an earlier era—for example, by supporting Japan in its 1904 war against Russia, an example of a successful proxy war that effectively evicted Russia from the Far East. Similarly, the United States exploited the Kremlin’s quandary by supporting Afghanistan’s mujahedeen against a decaying Soviet empire in the 1980s."

foreignpolicy.com/2023/06/14/u

@EyalL so that puts you one notch more militant than Joe Biden (so far, anyway). One of the things I really appreciate about Biden is that he has shown restraint, at least with ATACMS and other long-range weapons. I think the Biden administration is wisely very concerned about the war escalating beyond Ukraine's borders. And for good reason. We don't want Russia thinking the Ukraine war is an existential threat that it must win no matter what. The same goes for the West. The goal, in my opinion, should be to calm things down and get both sides to the negotiating table. That's the only way this war ends.

@EyalL do you support sending long range missiles (ATACMS) to Ukraine? If so, are you at all concerned about the war escalating beyond Ukraine's borders?

@EyalL you say that, and yet the front hasn't significantly changed. You speak with such confidence, but how can you really know? Isn't it at least possible you are parroting back propaganda? Don't you think it is at least possible that the West is exaggerating Russian weakness? Can't you see why Western propaganda would want to convince us that the war is winnable, when in fact the strategy is more of a long, drawn-out, bleed-the-beast one?
