Great to see we are doing such a good job training the next generation of noise miners. I despair.

There are only three guarantees in life: Death, Taxes, and Windows updates that make the product worse

2.5 years later I went back and answered my own question. Nice to see that I've learned things in that time lol

stats.stackexchange.com/questi

RT @eleanorapower
This summer, I'll be running a 3-week course on #social #network analysis with the excellent @tsvetkovadotme as part of the @LSEnews #SummerSchool. In short order, we'll get you working in #R with real-world network datasets! Please RT! lse.ac.uk/study-at-lse/summer-

v1.0 of `delicatessen` is now available 🎊

pypi.org/project/delicatessen/

The biggest changes are to the supported versions of Python (now 3.8-3.11) and the version dependencies on SciPy and NumPy. These changes allow for much faster computation times.

Other changes include the planned syntax change for the regression models. The legacy versions are no longer available.
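
For anyone updating, here is a rough sketch of what a regression fit looks like under the new syntax (argument names are from memory, so check the docs):

```python
import numpy as np
from delicatessen import MEstimator
from delicatessen.estimating_equations import ee_regression

# Toy data: linear model y = 1 + 2*x + noise
rng = np.random.default_rng(42)
x = rng.normal(size=100)
y = 1 + 2 * x + rng.normal(size=100)
X = np.column_stack([np.ones_like(x), x])   # design matrix with intercept

def psi(theta):
    # Stacked estimating equations for linear regression
    return ee_regression(theta=theta, X=X, y=y, model='linear')

estr = MEstimator(psi, init=[0., 0.])
estr.estimate()
print(estr.theta)     # point estimates
print(estr.variance)  # sandwich variance estimates
```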

I feel pretty dumb for not realizing that GAMs are really just penalized regression with some automatically generated splines

Like I've been doing GAMs 'by hand' for years now...
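
For concreteness, a minimal sketch of the 'by hand' version (the basis and penalty choices here are illustrative, not what any particular GAM library uses): build a spline basis for x, then fit by penalized least squares with a ridge-style penalty on the spline coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(scale=0.3, size=x.shape)

# Truncated-power cubic spline basis with interior knots at quantiles of x
knots = np.quantile(x, np.linspace(0.1, 0.9, 8))
X = np.column_stack([np.ones_like(x), x, x**2, x**3] +
                    [np.clip(x - k, 0, None)**3 for k in knots])

# Penalized least squares: solve (X'X + lam*D) beta = X'y,
# penalizing only the spline coefficients (not the polynomial terms)
lam = 1.0
D = np.diag([0., 0., 0., 0.] + [1.0] * len(knots))
beta = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
yhat = X @ beta
```

Cranking lam up shrinks the spline terms toward a plain cubic fit; lam = 0 gives an unpenalized regression spline.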

a part of my brain has slowly become devoted to thinking of funny software package names

In it, we (1) distinguish between identification and estimation (with machine learning being applicable to estimation), (2) summarize the challenges of convergence and complexity and their solutions, (3) point to various extensions, and (4) conclude with general advice for practical application

Delighted to share that my book chapter on machine learning and causal inference is now available in Wiley StatsRef

onlinelibrary.wiley.com/doi/fu

Does base R just not have a forward fill function?
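
For reference, the operation I mean, in the NumPy idiom I'd reach for in Python (carry the last non-missing value forward):

```python
import numpy as np

x = np.array([1.0, np.nan, np.nan, 4.0, np.nan])
idx = np.where(~np.isnan(x), np.arange(x.size), 0)  # index of last observed value
np.maximum.accumulate(idx, out=idx)                 # carry that index forward
print(x[idx])                                       # [1. 1. 1. 4. 4.]
```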

also because I support open-source bullshit, all the code is here: github.com/pzivich/RNN-Abstrac

so you can train it on another topic if you want (should only take a few hours)
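
Not the repo's actual code, but the generic shape of that retraining step is a character-level RNN loop like the following sketch (PyTorch; point it at a text file of abstracts from your topic):

```python
import torch
import torch.nn as nn

text = open("abstracts.txt").read()              # your training corpus
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, x, h=None):
        z, h = self.rnn(self.embed(x), h)
        return self.out(z), h

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
seq = 128
for step in range(2000):
    # Sample a random window; predict each next character
    i = torch.randint(0, len(data) - seq - 1, (1,)).item()
    x = data[i:i + seq].unsqueeze(0)
    y = data[i + 1:i + seq + 1].unsqueeze(0)
    logits, _ = model(x)
    loss = nn.functional.cross_entropy(logits.squeeze(0), y.squeeze(0))
    opt.zero_grad()
    loss.backward()
    opt.step()
```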

no penalty would be a diagonal line with a slope of 1

I made this plot for my M-estimation library, but I think the image nicely showcases how the various loss functions for robust means/regression penalize the errors (the curves are the first derivative of the log-likelihood)
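
Not the original plotting code, but a minimal sketch of the idea: plot the psi functions (the first derivative of the log-likelihood contribution) for a couple of robust estimators, with the identity line as the no-penalty reference. The tuning constants are the standard 95%-efficiency defaults.

```python
import numpy as np
import matplotlib.pyplot as plt

e = np.linspace(-4, 4, 401)

def psi_huber(e, k=1.345):
    # Linear in the middle, capped at +/- k in the tails
    return np.clip(e, -k, k)

def psi_tukey(e, c=4.685):
    # Redescending: large errors get zero influence
    return np.where(np.abs(e) <= c, e * (1 - (e / c)**2)**2, 0.0)

plt.plot(e, e, label="No penalty (slope 1)")
plt.plot(e, psi_huber(e), label="Huber")
plt.plot(e, psi_tukey(e), label="Tukey biweight")
plt.xlabel("Error")
plt.ylabel(r"$\psi$(error)")
plt.legend()
plt.show()
```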

Came across this paper, which provides a nice discussion of confidence intervals vs. confidence bands for the Kaplan-Meier

nature.com/articles/s41416-022
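
(The distinction, briefly: a pointwise 95% interval satisfies Pr{L(t) <= S(t) <= U(t)} = 0.95 separately at each fixed t, while a 95% confidence band satisfies Pr{L(t) <= S(t) <= U(t) for all t in the range} = 0.95 simultaneously, so bands are necessarily wider.)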
