I do not want to take any position on how reasonable any of the people involved are.
On the subject itself, what I think no one has questioned (please correct me if I'm wrong) is that:
- the rate of automated fetches could be lower without any significant loss of desirable outcomes,
- the mirror has no way to stop automated fetches across all sourcehut instances; a human has to be told about each instance individually.
If I'm right on both counts, this looks really odd in light of the behaviour of Google Search, which tries fairly hard to adapt its crawl rate to the site in question (https://support.google.com/webmasters/answer/48620?hl=en), obeys robots.txt, and does distinguish crawl traffic from user-initiated traffic (e.g. link previews in chat).
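For what it's worth, honouring a site's wishes isn't hard to do mechanically: the Python standard library ships a robots.txt parser that exposes both per-path rules and the (non-standard but widely used) `Crawl-delay` directive. A minimal sketch of what a polite fetcher could check, using a made-up robots.txt and bot name purely for illustration:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a small sourcehut-style instance might serve
# to slow down or exclude automated fetchers.
ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
Crawl-delay: 10
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

def allowed(url: str, agent: str = "examplebot") -> bool:
    """True if robots.txt permits this agent to fetch this URL."""
    return parser.can_fetch(agent, url)

# crawl_delay() returns the Crawl-delay value (None if absent);
# a polite fetcher would sleep this long between requests.
delay = parser.crawl_delay("examplebot") or 1.0

print(allowed("https://example.org/~user/repo"))   # not disallowed
print(allowed("https://example.org/admin/panel"))  # disallowed
print(delay)
```

This is only a sketch (the bot name, paths, and delay are invented), but it shows that both "back off" and "don't fetch at all" are signals a site can already publish per-instance, with no human in the loop on the fetcher's side.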