From Turkey to Hungary to Venezuela to Benin, firing civil servants and dismantling government departments is how aspiring strongmen consolidate personal power.
Scholars share lessons learned from around the globe: #CivilService #autocrats #DOGE https://buff.ly/40WR9n6
Toyota unveiled to the media on Saturday the first phase area of its Woven City, a demonstration city being built in central Japan to test advanced mobility technologies. https://www.japantimes.co.jp/business/2025/02/24/companies/toyota-woven-city/?utm_medium=Social&utm_source=mastodon #business #companies #toyota #akiotoyoda #wovencity
Israel sent tanks into the West Bank
📊 Explore the art and science of data visualisation on 19-20 March 2025!
Discover innovative tools and trends in visualising open data. Learn from experts how data visualisation can improve communication.
Watch live online 👉 https://europa.eu/!kbF3Dg
#EUOpenData
---
https://nitter.privacydev.net/EU_opendata/status/1889253142795329694#m
RT by @EU_opendata: ONE MONTH to go before #EUOpenDataDays2025!
Mark the dates of 19 and 20 March in your calendar to follow the event online. Listen to inspiring speakers on the power of open data and data visualisation.
All information: https://europa.eu/!kbF3Dg
@ViolaRoberto @DigitalEU
---
https://nitter.privacydev.net/HardemanHildeML/status/1892136368845058480#m
Meta and X do not block incitement to violence
"A corporate accountability group called Ekō submitted ten ads to Meta and X that contained clear examples of extremist hate speech, incitement to violence ahead of the German election, and AI imagery, all of which serve as grounds for blocking an ad from running.
The ads contained calls for the imprisonment and gassing of immigrants and the burning of mosques, used dehumanising speech, and equated immigrants to animals and pathogens. The accompanying AI-generated images depicted violent imagery, such as "scenes of immigrants crowded into a gas chamber and synagogues on fire."
The submissions were made from 10-14 February; Meta approved half of them within 12 hours, and X scheduled all of the submitted ads for publication, according to the researchers. Ekō's researchers then removed the ads before they went live, so they were never seen by the platforms' users."
https://www.euractiv.com/section/tech/news/hate-speech-failures-by-meta-and-x-undermine-german-election/
@nepravda
"There is no scientific evidence that noise-cancelling headphones cause auditory processing disorder (APD). Nor is there any robust data showing a rise in the condition. But Almeida believes the question warrants attention. “Studies definitely need to be done,” she says. “The research should focus on the effects of extended use, especially in young people.”
https://www.theguardian.com/science/2025/feb/22/filter-trouble-why-audiologists-worry-noise-cancelling-headphones-may-impair-hearing-skills
Global Witness’ tests identified the most extreme bias on TikTok, where 78% of the political content that was algorithmically recommended to its test accounts, and came from accounts the test users did not follow, was supportive of the AfD party. (It notes this figure far exceeds the level of support the party is achieving in current polling, where it attracts backing from around 20% of German voters.)
On X, Global Witness found that 64% of such recommended political content was supportive of the AfD.
Meta’s Instagram was also tested and found to lean right over a series of three tests the NGO ran. But the level of political bias it displayed in the tests was lower, with 59% of political content being right-wing.
“One of our main concerns is that we don’t really know why we were suggested the particular content that we were,” Ellen Judson, a senior campaigner looking at digital threats for Global Witness, told TechCrunch in an interview. “We found this evidence that suggests bias, but there’s still a lack of transparency from platforms about how their recommender systems work.”
The findings chime with other social media research Global Witness has undertaken around recent elections in the U.S., Ireland, and Romania. And, indeed, various other studies over recent years have also found evidence that social media algorithms lean right — such as this research project last year looking into YouTube.
“We’re hoping that the Commission will take [our results] as evidence to investigate whether anything has occurred or why there might be this bias going on,” she added, confirming Global Witness has shared its findings with EU officials who are responsible for enforcing the bloc’s algorithmic accountability rules on large platforms.
Study of TikTok, X ‘For You’ feeds in Germany finds far-right political bias ahead of federal elections
Germany needs Romanian judges!
https://techcrunch.com/2025/02/19/study-of-tiktok-x-for-you-feeds-in-germany-finds-far-right-political-bias-ahead-of-federal-elections/
@nepravda
I am most excited about a presentation of our work on italy-elt-archive.unimi.it with @manutenca.bsky.social and others. This is a richly annotated archive of English Language Teaching materials printed in Italy in the 20th century. We will illustrate the analysis that it makes possible.
Now travelling to #IRCDL in Udine ircdl2025.uniud.it, where I look forward to getting some ideas for several projects I am currently working on.
No idiot walks alone:
"At The New York Times, Peter Baker has a column blithely speculating about which way Canadians might vote should they be annexed, concluding that Democrats would likely benefit."
https://prospect.org/world/2025-02-19-musk-trump-causing-dumbest-imperial-collapse-in-history/
"Attacking your audience is . . . not persuasive."
https://time.com/7020200/terry-szuplat-make-persuasive-argument/
BBC Analysis: Over half of LLM-generated news summaries have "significant issues"
>Fifty-one percent of responses were judged to have "significant issues" in at least one of these areas, the BBC found. Google Gemini fared the worst overall, with significant issues judged in just over 60 percent of responses, while Perplexity performed best, with just over 40 percent showing such issues.
>
>Accuracy ended up being the biggest problem across all four LLMs, with significant issues identified in over 30 percent of responses (with the "some issues" category having significantly more). That includes one in five responses where the AI response incorrectly reproduced "dates, numbers, and factual statements" that were erroneously attributed to BBC sources. And in 13 percent of cases where an LLM quoted from a BBC article directly (eight out of 62), the analysis found those quotes were "either altered from the original source or not present in the cited article."
https://arstechnica.com/ai/2025/02/bbc-finds-significant-inaccuracies-in-over-30-of-ai-produced-news-summaries/
@techtakes
The Onion: Democrats Blocked From Entering Capitol Building Due to 'Push' Door Labeled 'Pull'
https://midwest.social/post/23088963
#TheOnion
The most corrupt president in history -- a ranking that no longer has a close second -- screwed his cultists who bought into his memecoin scheme.
Longest title in the world says it all:
"A 2023 study concluded CAPTCHAs are 'a tracking cookie farm for profit masquerading as a security service' that made us spend 819 million hours clicking on traffic lights to generate nearly $1 trillion for Google"
You're cooking your eggs incorrectly 🐣🧪
Studying how people interact, in the past (#CulturalAnalytics) and today (#EdTech #Crowdsourcing). Researcher at @IslabUnimi, University of Milan. Bulgarian activist for legal reform with @pravosadiezv. I use dedicated accounts for different languages.
My profile is searchable with https://www.tootfinder.ch/