one of my motivations for writing this debugging zine is that I've seen "USE THE SCIENTIFIC METHOD" as an explanation of how to debug a million times and -- while obviously that explanation works for a lot of people and that's great -- it's never been helpful to me.

"come up with a hypothesis, test it, repeat" is so rigid and what actual scientists do is so much more complicated!

does anyone else feel this way?

(please do not try to explain the scientific method to me :))

As a couple of people in the replies have said: "come up with a hypothesis you can check" is important for debugging but there are SO many steps before "I have a hypothesis" that it skips over.

To get to a reasonable hypothesis, you need to print things out, read the docs, try a debugger, reread the error message, look for suspicious recent commits, comb through the logs, read some code, talk to a rubber duck, and so much more
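(A minimal sketch of that "before I even have a hypothesis" stage, as an illustration only — the module, function, and data here are hypothetical, not from the thread. It's just the kind of print/log poking the post describes: dump intermediate values and look at them to start building a mental model, with no hypothesis yet.)

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("orders")  # hypothetical module name, for illustration

def total_price(items):
    total = 0.0
    for item in items:
        # Print what the code actually sees, not what we assume it sees.
        log.debug("item=%r price=%r qty=%r", item["name"], item["price"], item["qty"])
        total += item["price"] * item["qty"]
    log.debug("computed total=%r", total)
    return total

if __name__ == "__main__":
    # No hypothesis yet: just run it on sample data and read the output
    # to see where reality stops matching expectations.
    print(total_price([{"name": "widget", "price": 2.5, "qty": 3},
                       {"name": "gadget", "price": 0.0, "qty": 1}]))
```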

@b0rk 💯 a testable hypothesis typically requires some kind of mental model of what is going on and a lot of "early debugging" is building or refining this mental model



@d6 @b0rk

In a similar discipline: physicists did lots of hypothesis testing in 19th-century physics without having anything resembling a complete model of the objects under observation (e.g. the whole investigation of "cathode rays": someone noticed a glow next to the cathode and then started trying to formulate hypotheses about how the effect that makes stuff glow could travel from the cathode). Hypothesis testing is very useful during model building: you have a candidate part of a model, so you pick something to observe that would behave in a very specific way in that model. If it doesn't, you know the partial model is wrong.
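(Here's a minimal sketch of applying that "candidate partial model" idea to debugging — the scenario, names, and numbers are invented for illustration, not from the thread. Candidate model: "the slow requests are the ones that miss the cache." If that's right, cache misses should dominate the slow requests; if slow requests hit the cache just as often, the partial model is wrong and gets discarded.)

```python
import random
import time

cache = {}

def fetch(key):
    """Hypothetical function under investigation."""
    hit = key in cache
    start = time.perf_counter()
    if not hit:
        time.sleep(random.uniform(0.01, 0.05))  # simulate a slow backend call
        cache[key] = f"value-{key}"
    elapsed = time.perf_counter() - start
    return cache[key], hit, elapsed

if __name__ == "__main__":
    samples = [fetch(random.randint(0, 5)) for _ in range(200)]
    slow = [s for s in samples if s[2] > 0.005]            # the "slow" requests
    misses_among_slow = sum(1 for s in slow if not s[1])
    # The discriminating observation: under the candidate model this ratio
    # should be high; if it isn't, the model is wrong.
    print(f"{misses_among_slow}/{len(slow)} slow requests were cache misses")
```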
