🔥 take: the Weibull distribution is bad. There should not be so many bespoke parameterizations for a single distribution
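To make that concrete, here's a quick sketch (just two of the many conventions, and that's before the proportional hazards / AFT re-parameterizations from survival analysis): the shape/scale density written out by hand versus scipy's weibull_min with the parameters mapped over

import numpy as np
from scipy.stats import weibull_min

k, lam = 1.5, 2.0                                      # shape, scale
t = np.linspace(0.1, 5, 50)

# the usual shape/scale form (e.g., what R's dweibull uses), written out by hand
f_by_hand = (k / lam) * (t / lam) ** (k - 1) * np.exp(-(t / lam) ** k)

# scipy's weibull_min: shape passed positionally, scale as a keyword
f_scipy = weibull_min(k, scale=lam).pdf(t)

print(np.allclose(f_by_hand, f_scipy))                 # True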

I added badges that link to my articles in the repository that has publicly available code

@stefanforfan yeah, unfortunately the residual variation makes it hard to visually see. I suspect most relevant public health examples will have this same issue...

However, you could also view that as a benefit (i.e., we should use splines because a scatterplot may not be enough)

@stefanforfan if you use NHANES, I've seen HDL cholesterol as predicted by BMI be non-linear in a few of the survey years
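If it helps, here's roughly the comparison I have in mind: HDL as a linear function of BMI versus a spline. This is only a sketch; the file name and the 'hdl'/'bmi' column names are placeholders for whatever NHANES extract you end up pulling

import pandas as pd
import statsmodels.formula.api as smf

# placeholder extract with HDL cholesterol and BMI columns
df = pd.read_csv("nhanes_subset.csv")

linear = smf.ols("hdl ~ bmi", data=df).fit()
spline = smf.ols("hdl ~ bs(bmi, df=4)", data=df).fit()  # B-spline basis via patsy

# the linear model is nested in the spline model, so an F-test (or AIC) works
print(linear.aic, spline.aic)
print(spline.compare_f_test(linear))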

I'm working on adding automatic differentiation to `delicatessen` (to compute the variance exactly instead of approximating it). Still a work in progress but if you have time to test it out, it would help me out a lot

github.com/pzivich/Delicatesse

One of the things that absolutely wrecks my brain is that when looking into space effectively we are staring into the past

Ahh yes, thank you ResearchGate. The paper I wrote is probably related to my interests, I'll be sure to read it

If I were running a large NIH/NSF funded lab, I would be FOIA'ing as many funded grants as possible, having an army of grad students convert those to correctly formatted text files, and then tuning LLaMA to help churn out grants

@willball12 yes, that is the 'cost'. Baseline variables are nice because we can have a more reliable ordering in time, but we do lose some precision if there is no A -> X2 (that's probably fine in most settings because I would be more worried about the arrow and less about the SE size)

@willball12 from a methodological perspective, you can adjust for things that are (1) not mediators and (2) later on the 'causal path'. In the figure, either {X1} or {X2} is a minimally sufficient set. So, we can adjust for things that are not baseline (as long as they are *not* mediators).

There is a benefit to this: adjusting for variables closest to the outcome (e.g., X2) results in estimators with the greatest precision (see the quick simulation sketch below).

In this particular case, I don't know if I reasonably believe that those aren't mediators
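To put a rough number on the precision point above, here's a toy simulation under an assumed DAG (X1 -> X2 -> Y, X1 -> A, A -> Y, and no A -> X2, so X2 is not a mediator). It's not the figure from the thread, just an illustration that adjusting for the covariate closest to the outcome yields the smaller SE for the coefficient on A

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2023)
n = 100_000
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(size=n)                 # X1 -> X2
a = x1 + rng.normal(size=n)                  # X1 -> A (and no A -> X2)
y = 1.0 * a + 2.0 * x2 + rng.normal(size=n)  # A -> Y, X2 -> Y

for label, cov in [("adjust X1", x1), ("adjust X2", x2)]:
    design = sm.add_constant(np.column_stack([a, cov]))
    fit = sm.OLS(y, design).fit()
    print(label, round(fit.params[1], 3), round(fit.bse[1], 4))

# both recover ~1.0 for the effect of A; the SE is smaller when adjusting for X2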

it's easy to lie with data, but it's even easier to lie with anecdotes

it's causal, you see, because it only predicts the *next* word, rather than predicting words *in* the prompt. so the prediction is unidirectional and thus causal :)


Here's a surreal (at least to me) story. So, I was messing around with the coding capabilities of GPT-3. I asked it to code TMLE in Python for me.

The weird part was that GPT started using the Python library I wrote to do TMLE. However, it got the syntax and how the functions work wildly wrong. Like, the import statements were not even correct

So if you're using GPT to code, you better be familiar with the libraries it calls (and check that it even calls them correctly)

automatic differentiation is such a cool tool / application of the chain rule.

Having code return exact derivatives (up to floating point error), rather than approximations, through recursive calls is such a neat thing to implement
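A toy illustration of the flavor (forward-mode with dual numbers; just a sketch of the recursion, not how `delicatessen` actually implements it):

class Dual:
    """Number that carries a value and its derivative with respect to the input."""
    def __init__(self, value, deriv=0.0):
        self.value = value
        self.deriv = deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule carries the chain rule through each operation
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__


def derivative(func, x):
    """Derivative of func at x, exact up to floating point error."""
    return func(Dual(x, 1.0)).deriv


print(derivative(lambda x: 3 * x * x + 2 * x + 1, 2.0))   # 6*2 + 2 = 14.0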

Lately, I've had a lot of fun trying to think about how to vectorize functions. Here is a function for applying the central difference method to multivariable functions

import numpy as np

def compute_gradient(func, x, epsilon=1e-6):
    # Central difference approximation of the gradient of `func` at `x`
    x = np.asarray(x)
    input_shape = x.shape[0]
    # each row of h perturbs one coordinate by epsilon
    h = np.identity(input_shape) * epsilon
    # transpose so each column is a perturbed copy of x; `func` is assumed to
    # index the first axis (theta[0], theta[1], ...), so it evaluates all columns at once
    u = (x + h).T
    l = (x - h).T

    # central difference for every coordinate in a single pair of function calls
    gradient = (func(u) - func(l)) / (2 * epsilon)

    return gradient
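For example, assuming `func` indexes the first axis of its argument (theta[0], theta[1], ...) so it can handle several perturbed points at once:

def f(theta):
    return theta[0] ** 2 + 3 * theta[1]

print(compute_gradient(f, [2.0, 1.0]))  # approximately [4., 3.]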
