You are a very busy, very important professor publishing very important work. Do you
a) just publish the code and data along with the paper because you know your work will survive close scrutiny and you have better things to do
b) spend your time handling individual data requests, negotiating over the scope of data shared, and re-describing individual analysis steps that are unclear in the methods
My flow chart for reading a paper is like
Abstract -> Data availability
if code & data available: -> figures -> methods -> get the data -> run the code on the data...
if no data available: paper is effectively creative writing. Ignore or begin adversarial evaluation.
Why would you pique suspicion in your reader and signal that all may not be as it seems by not publishing the data? Why would anyone believe a paper that doesn't have code & data? You claim to have gotten a bunch of data and done a bunch of things to it in code, and you had to have done that in an organized enough way to yield a paper, so even if it's not pretty, why wouldn't you just dump whatever you have on Zenodo and remove an easy avenue for someone to dismiss your work?
Edit: clearly, caveats apply, like if the data is privacy-sensitive health/PII data or data under a strict license where you'd get sued if you posted it. That's not what I'm talking about, and it's fine if you explain that and post whatever derived data you can ethically and legally post. I'm talking about most primary research in my field, which has no such limitations.
Long post about data sharing vs not
@jonny I'll try to answer, although it was probably a rhetorical question, but I think it's useful to see the point of view of experimentalists.
First, what data are we talking about? The widely-processed, very simplified tables that you use at the last step of analysis just to make your figures? Or the raw data that hasn't been spike-sorted, checked, or corrected when needed? Or something in between?
If we're talking about the first one, sure, it doesn't cost much to share it, but it's also very easy to fake, and checking that you can remake the figures from those tables is not going to catch any kind of error, since the author would very likely have noticed an error at that stage. Just looking at the figure-making code would probably be more useful for catching errors. So: does it really change anything to publish that data? I'm not sure.
If we're talking about raw data, not only is that going to be much more difficult to share (e.g. size might be a problem, storage might have a cost, reuse might be a problem without also providing an explanation of all the processing steps done on the data), it's also going to take the time of the person dealing with the sharing. There is the added problem that the author might want to analyse more aspects of the data, and sharing exposes them to being scooped. Of course, that wouldn't be a problem if people analysing your data shared authorship with you (but they generally don't), or if the system weren't stupidly geared towards encouraging high-impact, non-collaborative publications.
Sharing something in between raw and final will have problems from either stage, so once again: is it worth it?
So, you ask:
"Why would you pique suspicion in your reader and signal something all may not be as it seems by not publishing the data? Why would anyone believe a paper that doesn't have code & data?"
-> it might be worth more to the author to use their time & money for something else. Plus, do we really think that scientists who don't share data are faking it? I don't. Do we think that scientists who do share data never fake it? I don't either.
If a scientist wants to bend results some way, there are many ways to do it other than faking data (p-hacking, cheating at the experimental stage, hiding results, over-inflating results...). I think it's better to read the paper's reasoning, the methods, and what kinds of analyses were done, and to look at the individual data distributions in the figures (these should definitely be shown).
Overall: obviously, all else being equal, it is better to have as much data shared as possible. Unfortunately, once you take other parameters into account, there is a cost-benefit question, and it's not always in favour of sharing the data.
PS: it is almost never the professor who actually deals with data sharing, but I'm sure you know that.
re: Long post about data sharing vs not
@jonny @elduvelle I might have been unlucky, but every time I've asked the authors of a paper who said that data was available on request, they either ignored me or the data had issues ...
Anyway, here are two interesting reads about this:
Why don't we share data and code? Perceived barriers and benefits to public archiving practices - Gomes et al 2022
https://royalsocietypublishing.org/doi/10.1098/rspb.2022.1113
Why data is never raw - On the seductive myth of information free of human judgment 
https://www.thenewatlantis.com/publications/why-data-is-never-raw
re: Long post about data sharing vs not
@toxomat @jonny @elduvelle thanks for this, very interesting read!