I agree that the consequences of a lack of trust are different (although it seems to me that you're underestimating the ones for the ER).
What I don't understand, though, is how the process of deciding whether we trust something differs across all of these situations (in many of the others we also have adversaries intent on destroying that trust). Or do you mean that people would generally trust a system they don't understand (as long as they had no grounds[1] to believe otherwise), but the consequences of losing that trust are much worse, so the acceptable risk is smaller?
[1] as for the US example -- I don't know what to make of it, because over there an awful lot of people get persuaded by absurd claims, which I can't model (for one, I don't have a good sense of what makes some absurd claims more persuasive than others)
> For voting to be trustworthy, it is necessary that the people voting understand how the electoral system works.
I'm not sure I agree with that, because it's not a standard we apply in other circumstances.
Consider ambulance services and emergency rooms. Their patients usually have no prior experience with the people working there, nor do they know enough to understand what those people are doing. Even so, they generally trust that the institution, through the hands of its staff, is working to prevent their death or serious harm to their health.
It seems to me that this attitude is very common in our lives (though often in situations whose regularity and nature let each of us build trust from experience): most of us don't really understand how it is that the gas pedal makes the car accelerate, but we trust that it does. The same goes for safe food storage, the workings of telecommunications, or the effectiveness of protecting car passengers from the consequences of collisions.
Do you think elections are special in some way, or am I perhaps misunderstanding this mechanism (over-generalizing it)?
@isomer For the average ratio to change, you don't need to change any of the averaged ratios, only their weights. They seem to say that the average churn changed, and imply that those who use copilot have higher churn than they would otherwise have. There is at least one obvious alternative explanation (that those who use copilot had higher churn already, and are now just writing more code with the same churn) which is not obviously wrong.
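A toy illustration of that point, with made-up numbers (two hypothetical groups with fixed per-group churn; only the share of code each group writes changes):

```python
# Toy example with made-up numbers: the overall churn ratio rises even though
# neither group's churn changes, purely because the weights shift.

def overall_churn(groups):
    """groups: list of (lines_written, churn_ratio) pairs."""
    total_lines = sum(lines for lines, _ in groups)
    return sum(lines * churn for lines, churn in groups) / total_lines

before = [(80, 0.05), (20, 0.10)]  # (lines, churn): non-users vs copilot users
after = [(60, 0.05), (40, 0.10)]   # copilot users now write more, same churn

print(overall_churn(before))  # 0.06
print(overall_churn(after))   # 0.07
```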
Obviously they could be more precise somewhere in the whitepaper, but I find it very suspicious when a result's central point is implied without being stated outright.
Code churn is a weird metric, because it is strongly affected by how commit-happy developers are (if you rewrite everything you write once, churn only counts the rewrite if the earlier version was committed), which in turn depends on e.g. code review practices. So the effect on code churn (as described in the abstract; I didn't want to jump through the hoops to download the whole whitepaper) could be caused by, e.g., organizations with laxer code review standards increasing their code-writing rate.
The style of documentation that describes what the author of the library wants people to try to do using the library is sadly quite common.
You still have such computers today: e.g. microcontrollers. Ones with external memory buses are rarer than they used to be though, so the next steps are larger.
Animal fat is not an explosion danger, even though it has a similar energy density to gasoline. Once it's aflame, it will be very hard to extinguish, but it's not that easy to get it going in the first place.
That's actually not true, though sadly only in an academic sense.
Consider a block of ice. I can extract work from it by running a heat engine that moves heat from the environment into the block. So, I have a battery that doesn't really store any internal energy.
For somewhat better energy density, consider a tank of liquid nitrogen. Even more simply than before, I can just warm it up with heat from the environment and run the nitrogen through a turbine.
In both cases the rate of energy release is limited by the rate of heat flow from the environment.
Sadly, I don't think there are any practical batteries of this kind: the best I could come up with has an absolute upper bound on energy density lower than the practically achieved densities of Li-Ion batteries today.
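A back-of-the-envelope version of that bound (reversible-engine limit; the latent heats and heat capacities are rough textbook values, and the ~20 °C ambient temperature is my assumption):

```python
# Rough upper bound on the energy density of a "cold battery": the maximum
# (reversible, Carnot-limit) work extractable while the cold mass warms up
# to ambient temperature. Material constants are approximate textbook values.
import math

T0 = 293.0  # ambient temperature in K (assumed ~20 degrees C)

def work_from_latent_heat(L, T):
    """Ideal work (kJ/kg) from rejecting heat L (kJ/kg) into the cold body at T (K)."""
    return L * (T0 / T - 1.0)

def work_from_warming(c, T):
    """Ideal work (kJ/kg) from warming 1 kg with heat capacity c (kJ/kg/K) from T to T0."""
    return c * (T0 * math.log(T0 / T) - (T0 - T))

# Block of ice at 0 C: heat of fusion ~334 kJ/kg, then liquid water with c ~4.18.
ice = work_from_latent_heat(334.0, 273.0) + work_from_warming(4.18, 273.0)

# Liquid nitrogen at 77 K: heat of vaporization ~199 kJ/kg, then gas with c_p ~1.04.
ln2 = work_from_latent_heat(199.0, 77.0) + work_from_warming(1.04, 77.0)

print(f"ice: ~{ice:.0f} kJ/kg (~{ice / 3.6:.0f} Wh/kg)")  # roughly 27 kJ/kg
print(f"LN2: ~{ln2:.0f} kJ/kg (~{ln2 / 3.6:.0f} Wh/kg)")  # roughly 740 kJ/kg
# For comparison, current Li-Ion cells reach on the order of 150-250 Wh/kg.
```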
@carnage4life Similarly, the Guinness Book of World Records is issued by the brewery.
@freeschool And the easiest (as in, I would roughly know how to make an LED bulb, but I can point out many things I would have little clue about if I wanted to make an incandescent lamp).
The question of "why" is important here.
One could make an LED bulb by buying an E27 "plug" and an LED, making a PCB with a power supply for the LED, putting that together, and attaching a cover of some sort. It would be one to two orders of magnitude harder to make than a mass-produced LED bulb.
I don't know much about the processes needed to make a filament bulb (at the very least it requires working with glass, but I don't even know in what order such bulbs get assembled).
So, if you just want a working bulb, I'd probably buy one. That need not be the case if you want to figure out how to make a bulb and are fine with having to make a few of them before they are good enough (because you'll learn about random important things along the way).
(Oh, and don't even try to make fluorescent bulbs. They are nasty in every way I can think of -- they require nasty gases inside, need high voltage, and can produce UV incl. UVC that has to be filtered out lest it damage the eyes of anyone nearby pretty quickly, ...).
@freeschool What kind of bulb? Incandescent (with a glowing filament) or some other kind?
@freeschool I'm sorry, I can't understand the question. What does your spotlight need? What do you mean by making your own bulb?
Do you mean the series that starts with https://www.youtube.com/watch?v=Qz0Dg5gIjhw ?
That said, solving that problem would not necessarily be a solution for the original issue:
Imagine a hypothetical training procedure that always converges to some subspace of models, with a uniform distribution across it. Imagine that 0.01% of that subspace is malicious in some way. Then there is no difference in probability density between the (very small) malicious part and the rest of the potential outputs of training.
Figuring out that the model we have was specifically chosen to be from _that_ part of the potential output space requires some understanding of how that part is special, and if we have that understanding we can ignore the question of whether the model came from the known training process.
That said, finding a malicious model _that is also a reasonably probable output of the normal training process_ might be computationally hard, or might be impossible (I don't know if people have tried to find adversarial models under that additional constraint).
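A toy numerical sketch of the uniform-distribution point above (the discrete "model space" and all numbers are made up purely for illustration):

```python
# Toy sketch: if training outputs are uniform over some set of models, the
# training-process likelihood is identical for every model in that set, so it
# carries zero signal about whether we landed in the tiny malicious subset.
import random

N_MODELS = 1_000_000           # hypothetical discrete space of possible outputs
MALICIOUS = set(range(100))    # 0.01% of that space is malicious in some way

def training_process():
    # stand-in for "training converges to a uniform distribution over the space"
    return random.randrange(N_MODELS)

model = training_process()
likelihood = 1 / N_MODELS      # the same whether or not the model is malicious
print(model in MALICIOUS, likelihood)
```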
I enjoy things around information theory (and data compression) and complexity theory (and cryptography), read hard sci-fi, currently work on weird ML (we'll see how it goes), am somewhat literal-minded, and have approximate knowledge of random things. I like it when statements have truth values, and when things can be described simply (which is not exactly the same as briefly) and yet have interesting properties.
I live in the largest city of Switzerland (and yet have cow and sheep pastures and a swimmable lake within a few hundred meters of my place :)). I speak Polish, English, German, and can understand simple Swiss German and French.
If in doubt, please err on the side of being direct with me. I very much appreciate it when people tell me that I'm being inaccurate. I think that satisfying people's curiosity is the most important thing I could be doing (and I usually enjoy doing it). I am normally terse in my writing and would appreciate requests to verbosify.
I appreciate it if my grammar or style is corrected (in any of the languages I use here).