
I'm just going to leave Freedom of the Press Foundation's excellent guide to leaking to the press right here in case anyone happens to need it: freedom.press/digisec/blog/sha

If you are in the U.S., you can buy produce directly from black farmers and they will ship it to you. It can cost less than your supermarket and will piss off people in power.

blackfarmersindex.com/

#interesting #youshouldknow #food #economy #business #smallbusiness

The most fun I had was dealing with a pre-existing reference, as in

```
const page = { foo: 0, bar: 0 };
const pageList = [];
if (someNumber > 4) {
  for (let ii = 0; ii < someNumber; ii++) {
    page.foo = ii * ii;
    page.bar = ii + ii;
    pageList.push(page);
  }
  return;
}
pageList.push(page);
```

When console-logging within the for-loop everything worked as expected, but when saving `JSON.stringify(pageList)`, every item created in the for-loop ended up equal to the last item created.

The solution is to create a `structuredClone(page)` within the for-loop and push that clone instead of `page`.
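
A minimal sketch of the fix, using the same hypothetical snippet (`structuredClone` is a standard global in modern browsers and in Node.js 17+):

```
const page = { foo: 0, bar: 0 };
const pageList = [];
for (let ii = 0; ii < someNumber; ii++) {
  page.foo = ii * ii;
  page.bar = ii + ii;
  // A deep copy per iteration, so each entry is an independent
  // snapshot instead of the same shared reference.
  pageList.push(structuredClone(page));
}
```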


As neat as cheerio is, I miss the abilities of the DOM in the browser.

I don't remember how many times I tried to grab certain properties that would have been available in the browser but don't exist in cheerio.

And it is a bit annoying to constantly put various HTML elements into the cheerio wrapper class to get access to the various functionalities it offers. Thus I instead grabbed the minimal viable data and worked further with plain arrays, as sketched below.
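
A rough sketch of that approach with cheerio (the input file name and the selectors are made up for illustration):

```
import { readFileSync } from 'node:fs';
import * as cheerio from 'cheerio';

// Hypothetical HTML produced by the PDF-to-HTML step.
const html = readFileSync('lecture.html', 'utf8');
const $ = cheerio.load(html);

// Grab only the minimal viable data as plain arrays of strings,
// instead of passing cheerio wrapper objects around.
const headings = $('h1, h2').map((_, el) => $(el).text().trim()).get();
const paragraphs = $('p').map((_, el) => $(el).text().trim()).get();

console.log(headings.length, paragraphs.length);
```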


After spending 7 hours publishing the code while going through a 71-page PDF.

The 71 pages were reduced to 43 pages in the `clean-html.js` step, and in the next step, `create-question.js`, they were expanded to 63 pages.

Most of the time was spent in the cleaning phase, since this is where one can remove pages and add questions quickly without needing to either copy-paste the wording directly into Anki or manually type it out.

One thing that has been holding me back is not having a snippets plugin in Neovim to generate snippets quickly.


Just published the preliminary tool on

codeberg.org/barefootstache/pd

It mainly describes how to do it and is a semi-automation tool to get PDFs into Anki.

In the current version one will still need to modify the pattern constant in the `clean-html.js` file to align with the PDF in use.
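
A pattern constant of that kind might look roughly like this (the keys and regexes here are purely illustrative, not the ones in the repository):

```
// Purely illustrative: regexes that decide which lines of the converted
// HTML become question fronts, card backs, or get dropped entirely.
const pattern = {
  question: /^<p><b>(.+?)<\/b><\/p>$/,   // bold paragraphs become questions
  answer: /^<p>(.+?)<\/p>$/,             // plain paragraphs become answers
  skip: /^(Slide \d+|Page \d+ of \d+)$/, // repeating slide/page footers
};

module.exports = pattern;
```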

The last five days I have been working on getting lecture slides semi-automatically into Anki.

For the first three days I was battling with PyMuPDF to extract the data from a 77-page PDF.

On the fourth day finally got the first complete PDF worth of lecture slides into Anki after 8 hours. Most of the time went to manual pattern matching and setting up a good-enough data structure.

On the fifth day, got through the second set of lecture slides, which is only 44 pages and was reduced down to 12 pages; it took 5 hours. Most of the time was spent converting the manual pattern matching from the fourth day into an automatic sequence and writing up documentation on how to reproduce it.

After spending so much time trying to semi-automate the process, I have been questioning whether it would have been faster to do it all manually. Hopefully the upcoming sets of slides will go much faster, and I plan to release the code in the next couple of days.

After struggling to get PyMuPDF to work and being close to the deadline, I shifted to using a combination of other commands.

First using a command-line PDF-to-HTML converter, which is so much faster than PyMuPDF and packages the result similarly to saving a website.

Next, formatting the file so that it can be quickly processed and eventually saved in Anki.

Is it elegant and automatic? No, though it works!

snake_case is a less frequently used tagging scheme on the fediverse.

This could be because some services break their internal tagging schema with it, e.g. not handling underscores in tags well.

Or it could be due to the laziness of users, subjectively arguing that the underscore doesn't add to the readability of tags.

In return, the argument is that snake_case can add value if the tag has a non-obvious word break, especially if the tag is written entirely in lowercase or UPPERCASE.

Or if the underscore replaces a character other than a space, such as a slash, pipe, hyphen, etc.

And in AReallyLongTag / a_really_long_tag it could aid readability.

Thus, if one wants grouping and discoverability of posts while creating a brand identity, consider using snake_case tags.

Need emojis for a project?

OpenMoji is a collection of free open source emojis. 🥳

openmoji.org/

#opensource #foss #developers #developer #webdesign

Further, while trying to extract and format data from PDFs using PyMuPDF.

I was trying to create a perfect chain of functions that would format all the edge cases into the final desired format. This is where I quickly realized that running every tweaked version of the functions on the 100-page PDF is quite time-consuming.

Instead, I can run it once and save the results in a database, then create queries to do post-processing on the edge cases, while also having a good enough way to observe the contents of each page compared to the previous method of printing the output and scrolling to the desired page. And in the end, I am one step closer to having the data in a file which is easily exported.
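
One possible way to wire that up, as a sketch only (it assumes the better-sqlite3 package plus a hypothetical extractPage function and pageCount from the extraction step):

```
import Database from 'better-sqlite3';

// Run the slow extraction once and persist every page.
const db = new Database('pages.db');
db.exec('CREATE TABLE IF NOT EXISTS pages (num INTEGER PRIMARY KEY, content TEXT)');

const insert = db.prepare('INSERT OR REPLACE INTO pages (num, content) VALUES (?, ?)');
for (let num = 1; num <= pageCount; num++) {
  insert.run(num, extractPage(num)); // extractPage and pageCount are hypothetical
}

// Later, post-process edge cases with cheap queries instead of re-running
// the extraction or scrolling through pages of printed output.
const suspicious = db
  .prepare("SELECT num, content FROM pages WHERE content LIKE '%?%'")
  .all();
console.log(suspicious.length, 'pages need a closer look');
```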


Currently trying to extract and format data from PDFs using PyMuPDF.

Initially used the `get_text(value)` method with the `"text"` value, only to learn that I could have potentially saved time by directly using the `"html"` value, since I have been creating pattern matchers to format the text into HTML.

After investigation, although the html option exists, the post processing is more strenuous than the initial approach.

My fascination with the `get_text(value)` method is that each value packages the data differently. Whereas `"html"` puts the text in `<p><span>text</span></p>`, `"xhtml"` puts it instead in `<h1>text</h1>`.

When starting a new project, my preferred methods are 'cowboy coding' and 'jumping in at the deep end'. This way I can get a feel for the ecosystem and learn all the ways not to do it.

The initial goal is to get it to work and make it maintainable. Later one can always improve it and automate lots of processes.

The downside of such an approach, especially if one already knows another language, is that more likely than not one will not follow best practices and will thereby create a Frankenstein project.

This is where documentation should be added, so that if one comes back to the project, one can more easily pick up where one left off.

I'm afraid of a world where we effectively lost democracy and individual agency.

There is enough to go around to allow everyone to live a good life. And AI has the opportunity to add even more value to the world. But this will come with huge disruptions. How we distribute the wealth, value and power in the world is going to be one of the major questions of the 21st century. Again.

#ai #economics

7/7


Further, it seems that lots of people are not familiar with the unmaintained state of the packer packaging tool; thus, when asking questions about how to set things up within Neovim, the answers will often try to resolve the question using packer.


Just realized that packer.nvim has been unmaintained since August 2023 and it suggests using either lazy.nvim or pckr.nvim. Thus went with the former, which could have potentially resolved a lot of the headaches of the past couple of months dealing with breaking plugins.

After backing up and tagging the final packer config version, took the opportunity to set up the starter bundle in the existing git repo.

What astonished me is how closely the starter config aligned with the previous config, especially the keymappings.

Sabot in the Age of AI

Here is a curated list of strategies, offensive methods, and tactics for (algorithmic) sabotage, disruption, and deliberate poisoning.

🔻 iocaine
The deadliest AI poison—iocaine generates garbage rather than slowing crawlers.
🔗 git.madhouse-project.org/alger

🔻 Nepenthes
A tarpit designed to catch web crawlers, especially those scraping for LLMs. It devours anything that gets too close. @aaron
🔗 zadzmo.org/code/nepenthes/

🔻 Quixotic
Feeds fake content to bots and robots.txt-ignoring #LLM scrapers. @marcusb
🔗 marcusb.org/hacks/quixotic.htm

🔻 Poison the WeLLMs
A reverse-proxy that serves dissociated-press style reimaginings of your upstream pages, poisoning any LLMs that scrape your content. @mike
🔗 codeberg.org/MikeCoats/poison-

🔻 Django-llm-poison
A django app that poisons content when served to #AI bots. @Fingel
🔗 github.com/Fingel/django-llm-p

🔻 KonterfAI
A model poisoner that generates nonsense content to degenerate LLMs.
🔗 codeberg.org/konterfai/konterf

While looking into how to drop a commit, I have realized that rewriting the history might be a better option. This option is typically used if one wants to change the email or name of the author.

git-scm.com/book/en/v2/Git-Too

The example code from the site is

```
$ git filter-branch --commit-filter '
    if [ "$GIT_AUTHOR_EMAIL" = "schacon@localhost" ];
    then
        GIT_AUTHOR_NAME="Scott Chacon";
        GIT_AUTHOR_EMAIL="schacon@example.com";
        git commit-tree "$@";
    else
        git commit-tree "$@";
    fi' HEAD
```

One might need to force the command with `-f` if one decides to run it multiple times for various `$GIT_AUTHOR_EMAIL` values. Alternatively, one could match the other emails with the OR operator, as in the sketch below.
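
For example, something along these lines (a sketch only; `-f` overwrites the backup ref left by a previous run, and the second email address here is made up):

```
$ git filter-branch -f --commit-filter '
    if [ "$GIT_AUTHOR_EMAIL" = "schacon@localhost" ] ||
       [ "$GIT_AUTHOR_EMAIL" = "old@example.org" ];
    then
        GIT_AUTHOR_NAME="Scott Chacon";
        GIT_AUTHOR_EMAIL="schacon@example.com";
        git commit-tree "$@";
    else
        git commit-tree "$@";
    fi' HEAD
```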

TIL that big specialized forums have started backdating millions of LLM-generated posts. Now you cannot be sure that a reply from 2009 on some forum for physics or maps or flower or drill enthusiasts hasn't been machine-generated and totally wrong.

hallofdreams.org/posts/physics
