Never thought I'd dive so deep into this Win32 rabbit hole.
I originally planned to just write a simple wrapper around Win32's tape API, but then I realized that encryption was missing. With Win32's tape API, you can basically implement something like `/dev/st0` (simple read and write) and the `mt` command (rewind, seek, set compression, etc.). I was thinking of just using LTOEnc for encryption control.
But that software can only read the encryption key from a file, which, to me, is a big security hole: if I can keep the key in memory, why write it to disk, even temporarily?
So I started reading LTOEnc's code. I found that Microsoft really doesn't want you to talk to SCSI directly. Reading their documentation, they never mention IOCTL_SCSI_PASS_THROUGH_DIRECT; instead they recommend things like IOCTL_CDROM and other interfaces that are not so low-level.
Anyway, thanks to LTOEnc, and to Oracle's helpdesk generously providing the tech ref manual for HPE products, I can now implement encryption control in my Win32 lib. And soon I'll be able to do it from Java.
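To give a flavor of what "encryption control" means at the SCSI level, here is a hedged sketch (not a drop-in implementation) of how the CDB for SECURITY PROTOCOL OUT might be built to push a key to an LTO drive, based on my reading of the SPC/SSC specs and the HPE tech ref. The exact field layout and page contents are assumptions, so verify against your drive's manual; the resulting bytes would then be sent via IOCTL_SCSI_PASS_THROUGH_DIRECT.

```java
import java.nio.ByteBuffer;

public class SpoutCdb {
    static final byte SECURITY_PROTOCOL_OUT = (byte) 0xB5; // SPC opcode
    static final byte TAPE_DATA_ENCRYPTION = (byte) 0x20;  // SSC security protocol
    static final short SET_DATA_ENCRYPTION_PAGE = 0x0010;  // page code (assumed)

    // Build the 12-byte CDB; the key itself travels in the data-out buffer,
    // staying in memory the whole time, never written to disk.
    static byte[] buildCdb(int paramListLength) {
        ByteBuffer cdb = ByteBuffer.allocate(12);  // ByteBuffer is big-endian by default
        cdb.put(SECURITY_PROTOCOL_OUT);            // byte 0: opcode
        cdb.put(TAPE_DATA_ENCRYPTION);             // byte 1: security protocol
        cdb.putShort(SET_DATA_ENCRYPTION_PAGE);    // bytes 2-3: protocol specific
        cdb.put((byte) 0);                         // byte 4: INC_512 = 0 (length in bytes)
        cdb.put((byte) 0);                         // byte 5: reserved
        cdb.putInt(paramListLength);               // bytes 6-9: transfer length
        return cdb.array();                        // bytes 10-11 stay zero
    }

    public static void main(String[] args) {
        byte[] aesKey = new byte[32];              // AES-256 key, kept in memory only
        byte[] cdb = buildCdb(20 + aesKey.length); // 20-byte page header is an assumption
        System.out.printf("opcode=0x%02X, transfer length=%d%n",
                cdb[0] & 0xFF, cdb[9] & 0xFF);
    }
}
```

The page header size and page code here are illustrative; the point is only that the whole command, key included, can be assembled in a byte buffer and handed straight to the pass-through IOCTL.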
Another name for CMR (Cat Magnetic Recording) is PMR (Purr-pendicular Magnetic Recording). https://www.youtube.com/watch?v=r48Bj9S0u6w
Excited to enable Wayland on openSUSE #Tumbleweed! Just follow these simple steps in #Yast Control Center & enjoy GNOME 45.2 or KDE Plasma 5.27.10 with Wayland. A smooth switch awaits! https://dbaxps.blogspot.com/2023/12/setting-up-wayland-on-opensuse.html
@freemo I think that's related to the level of detail in thinking and speaking.
While I sometimes think HR people are stupid (from my experience trying to get a job), I acknowledge that they are people just like me (they are just humans doing a job titled HR). If thinking and speaking happen at a very abstract level, then there might be all kinds of stereotypes and discrimination. But too many details make things unthinkable and unspeakable, since you'd have to include everything. (For example, considering that people have different experiences before they become HR, that alone doesn't explain why I think HR people are stupid.)
And correlation does not imply causation: observing a high likelihood of violence doesn't mean it "will cause" violence. I think that's part of logic, I guess?
For now, we still "train" an LLM on a given set of texts and force it to learn to speak just like that text. So to remove racial bias from the model, I think we just remove the racial bias from the training text. Since an LLM basically picks words probabilistically and tries to reproduce the corpus during training, that might be enough. Or maybe add some text stating that all races are equal.
If someone could add a human-understandable logic system to the LLM (that is, not by adding more and more parameters and turning it into an even darker black box), then math/logic could help. Take racism, for example: it doesn't hold up if we look at modern society, where all kinds of people do all sorts of things. That diversity would prove racism wrong. And if the model were smart enough, it might find out that it's not a racial thing but a shared culture that makes people similar, etc.
Maybe make the logical inference part an external tool, like in Q-learning? The model could check its result against the inferred result.
To play with Project Panama, llama.cpp seems too big to start with.
So I'm wondering if I can find something small but interesting and meaningful. I think a tape drive controller is a good fit. Reading and writing data from Java on Windows should be just as easy as working with `/dev/st0` on Linux.
I would say the Win32 API is actually not bad, despite the fact that I don't know how to write C and C++ properly. By "properly" I mean not just submitting code to an online judge for an "AC", but building an actual project that people can use, either a library or an executable.
Microsoft did a decent job with the documentation. The CMake side needs a bit of work; yeah, I don't know how to configure it properly. Thanks to Google, I still don't know how, but it's working.
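For the record, this is roughly the shape of CMake config that ended up working for me; the project and file names are placeholders, and I won't claim this is the proper way to do it.

```cmake
cmake_minimum_required(VERSION 3.20)
project(tape_lib C)

# Build the wrapper as a shared library so it can be loaded from Java
add_library(tape_lib SHARED src/tape.c)
target_include_directories(tape_lib PUBLIC include)

# The tape API functions live in kernel32 on Windows
if(WIN32)
    target_link_libraries(tape_lib PRIVATE kernel32)
endif()
```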
Maybe the most ethical thing to do is not to teach an AI to be ethical at all.
Last week I read an article about uncensored LLMs, where people removed the built-in alignment/ethical requirements from open-source model weights like Llama 2, making the model willing to tell you how to process nuclear material and how to make a bomb.
Those uncensored models should not be offered as a service, since it would be immoral to give advice to potential terrorists. But such models should exist for individuals who want to run them locally on their own hardware. In the article, the author argues that you can't assume your values are the only correct ones. If you teach your model those values, some people may like it and some may not; there should be an option. OpenAI thinks sexual content is not appropriate (I guess it's hard to regulate), but as far as I know, some people are more willing to pay for R18 LLMs than for OpenAI's ethical GPT-4.
So, maybe you could make some kind of pluggable ethics module? Different people could have different flavors. Christians could have a cyber Jesus to tell them how important the traditional family is, while Muslims could have a cyber Allah to tell them not to smoke or drink.
After all, AI can't really do smart things for now. Even if AGI existed, you could try to raise it like your child, and it might accidentally become ethical, but it won't become perfect. We humans haven't figured out which part of our brain forms the "ethical" feeling, so how can we teach it to a bunch of numbers?
----
To address the real-world problem, I think the best way is to add markers: build a robust watermark that is hard to remove or change. That would solve most unethical uses (typically fake photos and the like).
Another option is to feed it fake data. Don't just train your LLM on nuclear tutorials; instead, feed it some fake info so that even if someone asks how to make a nuclear bomb, they just get fooled.
Now that JetBrains has joined the AI hype, they have started forcing users to try out their new stupid AI plugin. And the quality of their products has degraded too.
Just read the reviews: https://plugins.jetbrains.com/plugin/22282-ai-assistant/reviews
And the most annoying part is the dollar signs generated by jextract.
To access a field in a structure, you have to use `llama_grammar_element.type$get(obj)`. But the dollar sign is not allowed in Kotlin identifiers: you can't use it in function names unless you quote the name with backticks, like ``llama_context_params.`seed$get`(contextParams)``. And to make things worse, IDEA's auto-completion is broken when that backtick quoting is involved.
I blame c++ for my degraded sleep quality.
While jextract makes header translation much easier compared to JNA and JNR, the Win32 API still doesn't work, and C++ support is not good.
The standard header from llama.cpp works, but other functionality like grammar parsing and sampling initialization is not included; it ships as a common component in the example code. I want to compile that example code as a shared lib and invoke it from Java, but jextract has problems with C++ headers and is not happy about it.
As for wintun, it includes Win32 headers in its own header files, which forces jextract to process (almost) the whole Win32 API. Pointer size seems to be a problem: one header says it should be 32 bits while another says 64 bits.
No wonder it's still in preview stage.
**I have a job now!**
I'm a Chinese shitizen, but I generally don't post in Chinese, to avoid suffering at the hands of other Chinese users.
I'm physically male, but I don't care how people think about my gender. I can be male, or female, or a cat. But if you ask, I'd prefer to be referred to as male. Also, I support LGBT+ people, and I'm a copyleft supporter. I don't think I'm too aggressive when arguing, but sometimes I am. You should handle me with care.
I post about programming (mostly Java and Kotlin, unless I find a new love) and some random things I find interesting. I also post about my mental health, which is in a stable state of instability, thanks to my parents and Chinese society.
Anyway, if you want to follow me, I'm glad to see you. And, have a nice day.
Alt: @skyblond