So I started a company and received funding to build the next generation of #AI / #agi
I know there are a lot of fears around AI, and I share most of them. As such, a top priority for me will be to address the ethical considerations. I am still brainstorming how that should look, but I want it to be an open forum where everyone can contribute to solving them.
For now I'd love to hear input from people on how one could build a community to address and solve ethical concerns in AI / AGI
@skyblond The problem with logical systems is that they are just as biased as their axioms dictate.
In any logical system you have your axioms (antecedents, known facts), your rules of inference, and your consequents (conclusions).
If your axioms include "All black people are violent", you will reach logically valid, yet racist, conclusions.
A biased rule is just as much of a problem. For example, "if skin color is not pale and income is low, then high likelihood of violence" is a rule that would introduce racial bias as well.
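To make that concrete, here is a minimal forward-chaining sketch in Python. The facts and the rule are hypothetical examples, not anyone's real system: the inference engine is perfectly valid, but a biased rule still yields a biased conclusion.

```python
# Minimal forward-chaining sketch: the inference is valid,
# but a biased rule produces a biased conclusion.
# Facts and rules here are made-up examples, not a real system.

facts = {("skin_color", "not_pale"), ("income", "low")}

# Each rule: if every premise is in the fact set, add the conclusion.
rules = [
    # A biased rule: structurally fine, ethically broken.
    ({("skin_color", "not_pale"), ("income", "low")},
     ("risk", "high_likelihood_of_violence")),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# The engine never questions the rule itself: bias in, bias out.
```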
@freemo I think that's related to the level of detail in thinking and speaking.
While I sometimes thought HR people were stupid (from what I experienced when trying to get a job), I acknowledge that they are people just like me (they are just humans doing a job titled HR). If thinking and speaking happen at a very abstract level, then there might be all kinds of stereotypes and discrimination. But too much detail makes things impossible to think or talk about, because you would have to include everything. (For example, considering that people have different experiences before they become HR, that alone doesn't explain why I think HR people are stupid.)
And correlation does not imply causation: seeing a high likelihood of violence doesn't mean something "will cause" it. I think that's part of logic too, I guess?
@freemo
For now, we still "train" the LLM on a given set of texts and force it to learn to speak just like that text. So to remove racial bias from the model, I think we just remove the racial bias from the training text. Since an LLM basically picks words probabilistically and tries to reproduce the training text, that might be enough. Or maybe add some text stating that all races are equal.
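A very rough sketch of what "remove the bias from the training text" could look like in practice, assuming you already have some heuristic or classifier that flags problematic passages. The `looks_biased` function below is a hypothetical placeholder, not a real filter:

```python
# Hypothetical corpus filter: drop training documents flagged as biased.
# looks_biased() stands in for whatever heuristic or classifier
# you trust to detect the bias you want to remove.

def looks_biased(text: str) -> bool:
    # Toy placeholder heuristic; a real filter would need far more care.
    blocked_phrases = ["all x people are", "people like them always"]
    lowered = text.lower()
    return any(phrase in lowered for phrase in blocked_phrases)

def filter_corpus(documents):
    """Keep only documents that the (imperfect) filter does not flag."""
    return [doc for doc in documents if not looks_biased(doc)]

corpus = [
    "A neutral sentence about the weather.",
    "All X people are violent.",  # dropped by the toy filter
]
print(filter_corpus(corpus))
```

Of course, a keyword filter like this misses most real bias; the point is only to show where such a filter would sit in the training pipeline.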
If someone can add a human-understandable logic system to the LLM, i.e. not by adding more and more parameters and turning it into an even darker black box, then math/logic could help. Take racism, for example: it doesn't hold up if we look at modern society, with all kinds of people doing all sorts of things. That diversity would disprove the racist conclusion. And if the system is smart enough, it might find out that it's not race but shared culture that makes people similar, etc.
Maybe make the logical inference part an external tool, like in Q-learning? The model could check its result against the inferred result.
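A minimal sketch of that "external inference tool" idea: the model's answer is compared against whatever a symbolic rule engine derives, and a mismatch triggers a retry or a fallback. Everything here is a hypothetical stub (the `llm_answer` and `symbolic_infer` functions are placeholders, not real APIs):

```python
# Hypothetical loop: compare the model's claim with a symbolic inference
# result, and only accept the answer when the two agree.

def llm_answer(question: str) -> str:
    # Stub standing in for a real model call.
    return "yes"

def symbolic_infer(question: str, facts, rules) -> str:
    # Stub standing in for a real rule engine / theorem prover.
    return "no"

def answer_with_check(question: str, facts, rules, max_retries: int = 3) -> str:
    """Ask the model, verify against the logic engine, retry on disagreement."""
    for _ in range(max_retries):
        model_claim = llm_answer(question)
        logic_claim = symbolic_infer(question, facts, rules)
        if model_claim == logic_claim:
            return model_claim
    # Fall back to the symbolic result (or flag for human review).
    return logic_claim

print(answer_with_check("Is the conclusion justified?", facts=set(), rules=[]))
```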