I have no idea if this will get traction but am putting it out there to the fediverse. Am also curious to see if anyone has had the same idea.

Come October 2025, Microsoft is going to end support for Windows 10 (see microsoft.com/en-us/windows/en). There are concerns this will result in a sudden influx of eWaste, as many computers won't support the upgrade to Windows 11. Whilst there are options for continuing to use Windows 10 (ranging from "Who cares?" to "Let's pay Microsoft for security updates."), the likely reality is that many of these devices will be sent for scrap.

In the meantime there is a growing digital divide, where people are unable to afford devices that would get them internet access, for a multitude of reasons. This can affect an individual in many areas, from not having the tools to use at school to not being able to access online services.

Which got me thinking of a new social enterprise / venture / approach to address both issues.

I suspect the vast majority of devices that are (a) still working and (b) cannot be upgraded to Windows 11 can still run Linux OSs like Linux Mint or ChromeOS Flex. I've had plenty of personal experience with both and can see the potential, especially when it means getting online. My daily driver is now a Linux-based device which I am using for my current studies. 😁

So how realistic would it be to connect the dots and address both issues?

Curious, very curious. Might have to see what social enterprise funding might be available to make this a reality.

Anyone want to join me on the journey?

#eWaste #Linux #repurpose #reuse #recycle #DigitalDivide #Linux4Good

Study after study also shows that AI assistants erode the development of critical thinking skills and knowledge *retention*. People, finding information isn't the biggest missing skillset in our population, it's CRITICAL THINKING, so this is fucked up

AI assistants also introduce more errors, at a higher volume, and they're harder to spot, too.

microsoft.com/en-us/research/u
slejournal.springeropen.com/ar
resources.uplevelteam.com/gen-
techrepublic.com/article/ai-ge
arxiv.org/abs/2211.03622
pmc.ncbi.nlm.nih.gov/articles/

Recently, Altbot has been targeted by DDoS attacks. While the motive is unclear, it seems tied to a misguided belief that the bot’s alt-text generation is harmful to the environment. Let me set the record straight.

Altbot uses Gemini 1.5 Flash, a deliberately low-power AI model. Processing a single alt-text request consumes around 0.0005 kWh, meaning that in the 4 months since I created Altbot, with tens of thousands of alt-text entries generated, it has consumed roughly 10 kWh in total. That's about the same energy as running a single LED light bulb nonstop for 41 days or driving an electric car just 40 miles.
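A rough back-of-the-envelope check of those figures, assuming 20,000 requests (the low end of "tens of thousands") and a 10 W LED bulb:

$$
20{,}000 \times 0.0005\ \mathrm{kWh} = 10\ \mathrm{kWh},
\qquad
\frac{10\ \mathrm{kWh}}{10\ \mathrm{W}} = 1000\ \mathrm{h} \approx 41\ \text{days}
$$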

These DDoS attacks, however, have already consumed more electricity and computing power than Altbot has across its entire existence... in a matter of hours. This means the attacks have caused more environmental harm than the very thing they seem to be protesting.

Altbot exists to improve accessibility on Mastodon, not to harm the environment. Accessibility and sustainability can coexist, and I’m committed to keeping Altbot energy-efficient and purposeful.

If there are concerns, I encourage constructive dialogue—not destructive actions that undermine both inclusivity and sustainability.

Thank you to everyone supporting Altbot’s mission of making the Fediverse more accessible for all.

I wonder when #ALDI will stop branding their #Toast as "#AMERICAN", because it's definitely not #USian / #USA-style: it's way too healthy and doesn't have #HighFructoseCornSyrup or #YellowNumber5 in it!

Seriously, that branding needs to be wiped off before consumers start boycotting it, Canadian-style...

Mind you, #Germany has an actual #bread #culture, and that #AldiToast is considered "bare minimum slop", not even proper bread, by anyone with taste.

Spent the last 10 hours trying to get any operating system to work on a with .

First tried to get to boot via , but it black screened.

Next tried to use the same USB stick with only to realize that it was too small.

Got myself a second stick, but Windows wasn't reading it via "This PC", thus had to format it via Disk Management before using YUMI to install Ubuntu onto it.

And it didn't help that Windows was very sluggish on the existing hardware, making the whole experience a real test of one's patience!

Nice post on:

"Why Blog If Nobody Reads It?" 🤔

andysblog.uk/why-blog-if-nobod

Short answer (IMO): Writing helps the thought process and helps future you.

Same reason why I post links on social. In the end I don't care if they get traction - they get sucked into davidbisset.social for future me.

I'm just going to leave Freedom of the Press Foundation's excellent guide to leaking to the press right here in case anyone happens to need it: freedom.press/digisec/blog/sha

If you are in the U.S., you can buy produce directly from Black farmers, and they will ship it to you. It can cost less than your supermarket and will piss off people in power.

blackfarmersindex.com/

#interesting #youshouldknow #food #economy #business #smallbusiness

The most fun I had was dealing with a pre-existing reference, as in:

```
function buildPages(someNumber) {
  // a single object created once, outside the loop
  const page = { foo: 0, bar: 0 };
  const pageList = [];
  if (someNumber > 4) {
    for (let ii = 0; ii < someNumber; ii++) {
      page.foo = ii * ii;
      page.bar = ii + ii;
      pageList.push(page); // pushes the same object reference every iteration
    }
    return pageList;
  }
  pageList.push(page);
  return pageList;
}
```

When console-logging within the for-loop everything looked as expected, but after saving the `JSON.stringify(pageList)` output, every item created in the for-loop equaled the last item created: each array entry pointed to the same object, so stringifying captured only its final state.

The solution is to push a `structuredClone(page)` within the for-loop instead of referencing `page` directly.
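A minimal sketch of that fix inside the loop (`structuredClone` is a global in Node 17+ and modern browsers):

```
for (let ii = 0; ii < someNumber; ii++) {
  page.foo = ii * ii;
  page.bar = ii + ii;
  // deep-copy the current state so each array entry is independent
  pageList.push(structuredClone(page));
}
```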

As neat as or is, I miss the abilities of in the browser.

I don't remember how many times I tried to grab certain properties, which would have been available in the browser, but don't exist in cheerio.

And it is a bit annoying to constantly wrap various HTML elements in the cheerio wrapper class to get access to the functionality it offers. Thus I instead grabbed the minimal viable data and just worked further with plain arrays.
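A sketch of that "grab minimal data into arrays" approach; the `html` string and selector are made up, while `cheerio.load`, `.map()`, and `.get()` are the real cheerio API:

```
import * as cheerio from 'cheerio';

const html = '<ul><li><a href="/a">A</a></li><li><a href="/b">B</a></li></ul>';
const $ = cheerio.load(html);

// extract only what is needed into a plain array of objects,
// then forget about the wrapper class entirely
const links = $('a')
  .map((i, el) => ({ text: $(el).text(), href: $(el).attr('href') }))
  .get();

console.log(links); // [ { text: 'A', href: '/a' }, { text: 'B', href: '/b' } ]
```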

After publishing the code, spent 7 hours going through a 71-page PDF.

The 71 pages were reduced to 43 in the `clean-html.js` step, and in the next step, `create-question.js`, expanded back up to 63 pages.

Most of the time I was in the cleaning phase, since this is where one can remove pages and add questions quickly without needing to either copy-paste the wording directly into Anki or manually type it out.

One thing that has been holding me back is not having a in to generate snippets quickly.

Just published the preliminary tool on

codeberg.org/barefootstache/pd

It mainly describes how to do it and is a semi-automation tool to get PDFs into Anki.

In the current version one will still need to modify the pattern constant in the `clean-html.js` file to align with the PDF in use.
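I haven't looked inside the repo, so purely as a hypothetical illustration, such a pattern constant might look something like this:

```
// clean-html.js (hypothetical sketch): noise patterns to strip, per PDF
const pattern = [
  /^Slide \d+ of \d+$/, // page counters
  /^© \d{4} Example University$/, // repeated footer line
];
```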

The last five days I have been working on getting lecture slides semi-automatically into Anki.

For the first three days I was battling with PyMuPDF to extract the data from a 77-page PDF.

On the fourth day I finally got the first complete PDF worth of lecture slides into Anki, after 8 hours. Most of the time went to manual pattern matching and setting up a good-enough data structure.

On the fifth day, got the second set of lecture slides, which was only 44 pages and was reduced down to 12; it took 5 hours. Most of that time went to converting the manual pattern matching from the fourth day into an automatic sequence and writing up documentation on how to reproduce it.

After spending so much time trying to semi-automate the process, I have been questioning whether it would have been faster to do it all manually. Hopefully the upcoming sets of slides will go much faster; I plan to release the code in the next couple of days.

After struggling to get PyMuPDF to work and being close to the deadline, I shifted to using a combination of other commands.

First using the command, which is so much faster than PyMuPDF and packages the result similarly to saving a website.

Next with and format the file so it can be quickly processed with and eventually through to be saved in .

Is it elegant and automatic? No, though it works!

#snake_case is a less frequently used tagging scheme on the Fediverse.

This could be because some services break their internal tagging schema, e.g. doesn't work well with .

Or it could be due to the laziness of users, subjectively arguing that it doesn't add to the readability of or tags.

In return, the argument is that snake_case can add value if the tag has a non-obvious word break, especially if the tag is written completely in lowercase or UPPERCASE.

Or if the underscore replaces a character other than space, like slash, pipe, hyphen, etc.

And in AReallyLongTag / a_really_long_tag it could aid readability.

Thus, if one wants grouping and discoverability of posts while creating a brand identity, consider using snake_case tags.
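As a toy sketch (the helper is hypothetical, not from any library), converting a PascalCase tag to snake_case:

```
// hypothetical helper: PascalCase/camelCase tag -> snake_case tag
function toSnakeCase(tag) {
  return tag
    .replace(/([A-Z]+)([A-Z][a-z])/g, '$1_$2') // split runs of capitals
    .replace(/([a-z0-9])([A-Z])/g, '$1_$2') // break before each new word
    .toLowerCase();
}

console.log(toSnakeCase('AReallyLongTag')); // "a_really_long_tag"
```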
