everyone keeps asking me why x86 uses an 8-bit byte, but I'm struggling to find an explanation that makes sense to me. Can any of you help?

what I've found so far:
- it looks like x86 evolved from the intel 8008 (from 1972), which was an 8-bit CPU
- the 8008 came after the 4004, which was 4-bit

some questions I have:
- was the reason to build an 8-bit CPU to increase the size of the instruction set? or something else?
- did x86 really directly evolve from the intel 8008?

would love any links!

or maybe the reason is just that the 8008 was a popular microprocessor that happened to use an 8-bit byte, so it became the foundation for all of intel’s future microprocessors? in theory, could they have built a microprocessor with a 10-bit byte instead, and that would have been fine too?

@b0rk I don't have any useful links other than the one you probably looked at first:

en.wikipedia.org/wiki/Byte

I remember (but can't find) a list of alternate historical sizes. My favorite was the 13-bit "baker's byte".

Just-so stories I can't back up with sources:

A power of two is convenient as a unit of memory. It takes exactly three bits to address the individual bits in an 8-bit byte, but it would take four to address the bits in a 10-bit byte, with six of the sixteen possible address values wasted.
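
A quick sketch of that arithmetic in Python (my own illustration, not from any source): count the address bits needed to pick out one bit inside a byte of a given width.

```python
# How many bits does it take to address a single bit inside a byte?
# (Illustrative only; byte widths chosen to match the point above.)
import math

for byte_width in (8, 10):
    addr_bits = math.ceil(math.log2(byte_width))   # bits needed to select one bit
    wasted = 2**addr_bits - byte_width             # address values that go unused
    print(f"{byte_width}-bit byte: {addr_bits} address bits, {wasted} values wasted")

# prints:
# 8-bit byte: 3 address bits, 0 values wasted
# 10-bit byte: 4 address bits, 6 values wasted
```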

If you want a character set that includes upper- and lower-case English letters, digits, and some punctuation marks, you're going to need at least 7 bits. I believe the 8th bit was originally used for error detection.
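
And a back-of-the-envelope check on the character count, using ASCII's printable repertoire as a stand-in for "letters, digits, and some punctuation" (that choice of set is my assumption, just to make the counting concrete):

```python
# Rough count of a character set with upper/lower-case letters, digits,
# and punctuation, to show why 6 bits is too few and 7 is enough.
import string

chars = (string.ascii_lowercase + string.ascii_uppercase
         + string.digits + string.punctuation)
print(len(chars))           # 94 printable characters (space and controls not counted)
print(len(chars) <= 2**6)   # False: 64 codes isn't enough
print(len(chars) <= 2**7)   # True: 128 codes fits, with room left for controls
```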

@b0rk As for the 8086 lineage, en.wikipedia.org/wiki/Intel_80… answers a lot of it. 4004 -> 8008 -> 8080 -> 8085 -> 8086 was a sequence of processor generations, each building on lessons from the previous one, but only the 8085 was binary-compatible with the previous step in the chain.