"I would absolutely love to discover the original code review for this and why this was chosen as a default. If the PRs from 2011 are any indication, it was probably to get unit tests to pass faster."

This was a pretty interesting read.

withinboredom.info/blog/2022/1

@nyquildotorg Here you go, from the code's author: news.ycombinator.com/item?id=3

And John Nagle commenting on the tradeoffs and problems: news.ycombinator.com/item?id=3

And the rest of the comments go, in a roundabout way, into wondering why git-lfs is making write syscalls 50 bytes at a time in the first place, because that's going to suck regardless of what the network stack does beyond that.
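For scale, a sketch in Go (not git-lfs's actual code, just the shape of the complaint): 1 MiB in 50-byte writes is roughly 20,000 write syscalls; batching in userspace first gets that down to a handful, whatever the network stack does afterwards.

```go
package sketch

import (
	"bufio"
	"net"
)

// Hypothetical illustration: 1 MiB sent 50 bytes at a time is ~20,000
// write(2) calls. Coalescing in a userspace buffer first turns that into
// ~16 calls with a 64 KiB buffer, before Nagle even enters the picture.
func upload(conn net.Conn, chunks [][]byte) error {
	w := bufio.NewWriterSize(conn, 64*1024) // 64 KiB userspace buffer
	for _, c := range chunks {              // chunks may be tiny (e.g. ~50 bytes each)
		if _, err := w.Write(c); err != nil {
			return err
		}
	}
	return w.Flush() // hand any remainder to the kernel in one last syscall
}
```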

@danderson @nyquildotorg Gotta admit, I'm team TCP_NODELAY. Nagle has surprised people for generations, and "But I want to take 20k syscalls to transfer 1MB and have it look efficient" doesn't make me very sympathetic!

@sgf @danderson @nyquildotorg Nagle gets used to "fixup" a bunch of problems (eg silly window syndrome etc).

In general, there are two types of flow: elephants (bandwidth heavy) and mice (latency sensitive). You want Nagle for the first class (keep overheads as low as possible for maximum throughput), and not for the second (keep latency as low as possible).
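In Go terms, something like this (just a sketch; worth noting that Go's net package disables Nagle by default, so SetNoDelay(false) is how an elephant opts back in):

```go
package sketch

import "net"

// Sketch: pick Nagle per flow type. Go sets TCP_NODELAY by default on TCP
// connections, so an "elephant" has to opt back into Nagle explicitly.
func tuneFlow(conn *net.TCPConn, latencySensitive bool) error {
	if latencySensitive {
		// Mouse: push small segments out immediately.
		return conn.SetNoDelay(true)
	}
	// Elephant: let the kernel coalesce small segments to keep overhead down.
	return conn.SetNoDelay(false)
}
```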

@isomer @sgf @danderson @nyquildotorg

I would expect that elephants will buffer (and when they temporarily become mice they will either reshuffle things so that the buffered writer is out of the picture, or simply keep flushing the writer at appropriate times). If that's the case, then by disabling Nagle's algorithm we're wasting at most one packet each time the buffer is emptied (pessimistically, we'll emit one one-byte packet then). So Nagle should be superfluous if we buffer with a buffer that's much larger than a packet, or one that's chosen to be a multiple of a packet's size. Am I missing some reason Nagle is useful?
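Roughly what I have in mind, as a Go sketch (the buffer size and MSS are illustrative numbers, not anything measured):

```go
package sketch

import (
	"bufio"
	"net"
)

// Sketch of "buffering makes Nagle superfluous": with Nagle off and a
// userspace buffer that's a multiple of the segment size, each flush emits
// full-sized segments plus at most one partial one. The worst case is one
// small packet per flush, not a stream of tiny packets.
const assumedMSS = 1460 // illustrative; the real MSS depends on the path

func newMessageWriter(conn *net.TCPConn) (*bufio.Writer, error) {
	if err := conn.SetNoDelay(true); err != nil { // disable Nagle
		return nil, err
	}
	// 16 segments' worth of buffer; flush only at message boundaries.
	return bufio.NewWriterSize(conn, 16*assumedMSS), nil
}
```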

@robryk @sgf @danderson @nyquildotorg that's all true, but you can't always keep your buffer full, e.g. when reading data from disk, especially on long, fast networks.

Connections can often flip between mice and elephants. It's common to say "do you want this data?" then wait for a reply, then send the entire data. The first part is latency sensitive, the second part is bandwidth heavy.
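Something like this shape, as a Go sketch (the protocol and messages here are made up for illustration):

```go
package sketch

import (
	"bufio"
	"io"
	"net"
)

// Sketch of a flow flipping from mouse to elephant: a tiny, latency-sensitive
// offer that we flush and wait on, followed by a bandwidth-heavy body.
func offerAndSend(conn net.Conn, body io.Reader) error {
	w := bufio.NewWriter(conn)

	// Mouse phase: small request, flushed now because we're about to block
	// waiting for the peer's answer.
	if _, err := w.WriteString("do you want this data?\n"); err != nil {
		return err
	}
	if err := w.Flush(); err != nil {
		return err
	}

	// Wait for the reply (one line, for the sake of the sketch).
	reply, err := bufio.NewReader(conn).ReadString('\n')
	if err != nil {
		return err
	}
	if reply != "yes\n" {
		return nil
	}

	// Elephant phase: stream the whole body; large writes, few flushes.
	if _, err := io.Copy(w, body); err != nil {
		return err
	}
	return w.Flush()
}
```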

@isomer @sgf @danderson @nyquildotorg

If the buffer is not full, the buffered writer will not write until it gets full. People who deal with buffered writers are used to flushing them at appropriate times (and there are all those funny affordances like stdio's "flush out when someone's reading in").

@robryk @isomer @danderson @nyquildotorg I've now got the dev part of my brain going "You could argue that Nagle is just a defense against badly-written programs that can't buffer properly", and the SRE part of my brain going "Yes! And we need defenses against badly written programs!".

Can we rename TCP_NODELAY to TCP_TRUSTME?

@sgf @robryk @danderson @nyquildotorg the application doesn't know what the network is doing. It doesn't know if it's slow to respond, or has high bandwidth or whatever.

The kernel doesn't know if the application is currently bandwidth driven or latency driven.

And all these factors change constantly.

The best way is for the application to be explicit about whether, after sending these bytes, it expects a timely response or not.
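On Linux, one way to be that explicit is TCP_CORK; here's a sketch (assumes Linux and golang.org/x/sys/unix, and it's just one mechanism, not the only one): cork while you're still assembling output, uncork when you're about to wait for a reply.

```go
package sketch

import (
	"net"

	"golang.org/x/sys/unix"
)

// setCork toggles TCP_CORK on a Linux TCP connection. While corked, the
// kernel holds back partial segments; uncorking says "I'm done assembling,
// push it out, I'm waiting for a reply now".
func setCork(conn *net.TCPConn, corked bool) error {
	raw, err := conn.SyscallConn()
	if err != nil {
		return err
	}
	v := 0
	if corked {
		v = 1
	}
	var sockErr error
	if err := raw.Control(func(fd uintptr) {
		sockErr = unix.SetsockoptInt(int(fd), unix.IPPROTO_TCP, unix.TCP_CORK, v)
	}); err != nil {
		return err
	}
	return sockErr
}
```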

@isomer @sgf @danderson @nyquildotorg

> The best way is for the application to be explicit about whether, after sending these bytes, it expects a timely response or not.

Precisely. And anyone who uses buffering internally already has to do that, lest the whole thing deadlock.

@robryk @sgf @danderson @nyquildotorg right.

However, due to historical reasons, userspace generally doesn't buffer network connections and instead relies on the kernel's send buffer. The kernel often has a much better idea of the network's performance, letting it tune buffer sizes better than userspace can (although it often doesn't do a great job there either, causing head-of-line blocking).
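For reference, the knob in question, as a Go sketch: SetWriteBuffer maps to SO_SNDBUF, and on Linux setting it explicitly switches off the kernel's send-buffer autotuning for that socket, which is usually an argument for leaving it alone.

```go
package sketch

import "net"

// Sketch: the kernel send buffer is what's being discussed. Go exposes it as
// SetWriteBuffer (SO_SNDBUF). On Linux, setting it explicitly disables the
// kernel's automatic send-buffer tuning for that socket, so the kernel can no
// longer size it from what it observes about the path.
func pinSendBuffer(conn *net.TCPConn, bytes int) error {
	return conn.SetWriteBuffer(bytes)
}
```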

@robryk @sgf @danderson @nyquildotorg my guesses:
* RAM was v. expensive; if you needed to buffer in kernel space anyway, why have even more buffering?
* Most applications were single threaded and you couldn't tell when to reasonably flush (other than flushing on every write).
* stdio never supported sockets.
* Originally most apps were latency sensitive (eg telnet).
* People were trying to keep a similar API surface for datagrams/streams.


@isomer @sgf @danderson @nyquildotorg

Why do you need an output buffer at all? You could block until all is sent in a send/write syscall.
