What does the “x” in “x86” represent?
I have read the following in the x86 Wikipedia page:
The term "x86" came into being because the names of several successors to Intel's 8086 processor end in "86", including the 80186, 80286, 80386 and 80486 processors.
But what does the "x" in "x86" represent? Is it a "variable" that can be something like "801" or "802" or "803" or "804"?
Tags: cpu, x86, terminology
80 _ 86 (nothing in between), 80 1 86, 80 2 86, 80 3 86, 80 4 86... notice the pattern?
– user17915, 2 days ago

"x" in IC part numbering is a common way to mark a variable ID within the same IC family. Its meaning can be anything: in CPUs it is usually the generation of the processor; in MCUs it might indicate RAM or EEPROM size; for voltage regulators it is the target voltage, etc. For TTL logic numbered XXYY, like 7474, the XX indicates the grade (from commercial to military), so to be sure, see the datasheet of the part. To get back to your question: Intel CPUs/MCUs started using shortened markings like x86 and x51, which are really shorthand for 8086... and 8051..., and they sort of stuck with the community too.
– Spektre, yesterday

@bogl Heh, I did not consider that comment an answer, rather some additional info I did not see in the other answers, and I was reluctant to create an answer of my own as there are already good answers present. Should I move it into an answer?
– Spektre, yesterday

Up to you, I have no say here. ;) But to me, it looks very much like an answer.
– bogl, yesterday

OT in Retrocomputing ... ;-)
– Peter A. Schneider, yesterday
7 Answers
The term x86 is shorthand for 80x86, which was used to refer to any member of the family 8086 (and also, incidentally, 8088), 80186, 80286, etc. Things have since gotten a bit muddled: while the 80386 had a mode that was compatible with the old architecture, it also introduced some fundamentally new ways of doing things, which were shared by the 80486 as well as "named" processors like the Pentium, Pentium Pro, etc. It is therefore sometimes ambiguous whether the name "x86" refers to the architecture that started with the 8086 or the one that debuted with the 80386.
@BrianH: Perhaps "mode" wasn't the best term. Maybe "ways of doing things" is better, though some of those new ways of doing things also included new 32-bit modes. Perhaps the most important point is that compilers targeting code for the 80386 and later processors will tend to do things fundamentally differently from those targeting the 80286 and earlier processors, so they really should be viewed as distinct architectures.
– supercat, 2 days ago

@BrianH, 32-bit protected mode with paging and all that is pretty much fundamentally different from the 16-bit protected mode in the 286.
– ilkkachu, 2 days ago

Also, from memory, the 286 had to reset to come out of protected mode, while the 386 could change modes at will, so protected mode wasn't widely used until the 386 came along.
– Joseph Rogers, yesterday

@JosephRogers: Protected mode was the only way to access more than 1MB of address space, so using storage beyond that region within a DOS program would require switching to protected mode, doing the access, setting a special "the reset handler should reload some registers and resume normal operation" flag, and then asking the keyboard chip to trigger a CPU reset. There was actually an undocumented way code could access upper memory without that rigamarole, but that wasn't discovered until the 80386 came along. It's really a shame that the designers of protected mode failed to recognize...
– supercat, yesterday

...what had been uniquely good about the way real mode segments worked. Had the 80386 protected mode incorporated the better aspects of 8086 segmentation, it would have been practical for a framework like .NET to allow programs to access many gigs of storage using 32-bit object references (which would only take half as much cache space as 64-bit ones). Make segment identifiers 32 bits, with the upper portion selecting a descriptor that contains a base and a scale factor, and the lower containing a scaled offset. That would allow every object to start at address 0 of some segment...
– supercat, yesterday
The x is meant as a wildcard, so it represents all CPUs able to run 8086-compatible code.
This answer is so far the only answer that addresses the original question about what the "x" represents.
– G. Tranter, 2 days ago

@G.Tranter I agree up to a point. However, if you write "x86" people usually assume you mean compatible with the Intel 80386, i.e. capable of running in 32-bit protected mode. For example, if you compile a C program with gcc -march=x86, the code won't run on an 8086.
– JeremyP, yesterday

@JeremyP Exactly my thoughts. Even in the nineties, I don't remember anybody using x86 to mean 80286 or earlier. The gap was too large to put 16-bit systems in the same bag as 32-bit. When x86 appeared as a term I think everyone meant "80386+" by it.
– kubanczyk, yesterday

Read the question. I find it's always best to answer the question, not what I think is the question behind the question. The "x" is a wildcard. If you read the whole question, you'd see that the author already understands the architecture part.
– G. Tranter, yesterday

@JeremyP: gcc doesn't have -march=x86. It has -march=i386. godbolt.org/z/xg19XI shows gcc -m32's help for invalid -march=... values, which lists all it supports. If you run x86 gcc with the default -m64, it leaves out arches that only support 32-bit mode. gcc -m16 exists, but still requires 386+ because it mostly just assembles its usual machine code with .code16gcc so instructions with explicit operands get an operand-size and address-size prefix. Anyway, the critical point is that gcc -march never pretended to set the target mode, just ISA extensions within it.
– Peter Cordes, yesterday
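The wildcard reading above can be sketched as a pattern match. This is an illustrative aside, not part of any answer; the part numbers are the ones listed in the question:

```python
import re

# "x" as a wildcard digit: 8086 (nothing in the middle), 80186,
# 80286, 80386 and 80486 all fit the shape 80_86.
X86_FAMILY = re.compile(r"80[1-4]?86")

parts = ["8086", "80186", "80286", "80386", "80486", "8085", "8051"]
family = [p for p in parts if X86_FAMILY.fullmatch(p)]
print(family)  # ['8086', '80186', '80286', '80386', '80486']
```

Note that the 8085 and 8051 fall outside the pattern, matching the comment above that x51 was a separate shorthand family.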
In modern usage it also means software that uses only the 32-bit architecture of the earlier 80x86 processors, to distinguish it from 64-bit applications.
Microsoft uses it that way on 64-bit versions of Windows, which have two separate directories called "Program Files" and "Program Files (x86)".
The 32-bit applications will run on 64-bit hardware, but the OS needs to provide the appropriate 32- or 64-bit interface at run time.
That doesn't mean software though; it means the hardware the software is built for. Consider a rack of fan belts labelled "Ford Focus", "Nissan Micra", etc. You're not saying the fan belt is a Nissan Micra, only that it's suitable for use on one.
– Graham, yesterday

Only MS Windows uses x86 to specifically exclude x86-64. In other contexts, like computer architecture discussion, x86 includes all CPUs that are backwards-compatible with 8086, with the usual assumption that modern CPUs are running in their best mode (x86-64 long mode). Or at least no implication of 32-bit mode specifically. e.g. "x86 has efficient unaligned loads, but ARM or MIPS doesn't always". But I'd certainly say "modern x86 has 16 integer and 16 or 32 vector registers". (Which is only true in long mode, and I'm of course talking about architectural registers, not physical.)
– Peter Cordes, yesterday

TL:DR: In other contexts (outside of MS Windows software), x86-64 is a subset of x86, not a disjoint set.
– Peter Cordes, yesterday

A term that unambiguously means 32-bit x86 is "IA-32": en.wikipedia.org/wiki/IA-32. Intel uses that in their ISA manuals. (For a while, they used IA-32e (enhanced) for x86-64-specific features / modes. I forget if they still do that or if they call it x86-64.)
– Peter Cordes, yesterday
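As a quick way to see the 32-vs-64-bit distinction from the answer above in practice, a process's pointer size reveals which flavor it was built as. This is a sketch, not from the original answer; on 64-bit Windows, the 32-bit flavor is what installs under "Program Files (x86)":

```python
import struct

# Pointer width in bits: 32 for a 32-bit build, 64 for a 64-bit one.
# struct.calcsize("P") gives the size in bytes of a native pointer.
bits = struct.calcsize("P") * 8
print(f"running as a {bits}-bit process")
```

Running this under a 32-bit interpreter prints 32 even on 64-bit hardware, which is exactly the situation the "(x86)" directory name describes.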
Intel products were numbered. For example, their first microprocessor was the 4-bit Intel 4004, which was coupled with the 4001 ROM, 4002 RAM, and 4003 shift register. The start denoted the series, and the last digit denoted the specific part.
Later, the Intel 8008 came along, which was an 8-bit microprocessor. This was succeeded by the 8080, which was then replaced by the 8085, which was in turn replaced by the 8086.
After the 8086, processors took on the format 80x86, with x being a number: 80186, 80286, 80386, etc. They were backwards compatible with one another, and modern computers still boot into 16-bit mode. As Intel continued rolling out processors, they began to be referred to as the Intel 386 or Intel 486 rather than the Intel 80386. This is how the terms 'i386' and 'i586' came into play. As they were based on the same architecture, they were called Intel x86, where x refers to a number. They also came with coprocessors whose last number was '7', such as the 80387, and as such we also have x87.
You have a typo: 8186->80186. Too small for me to edit myself.
– Martin Bonner, yesterday

@MartinBonner Thanks, fixed.
– Ender - Joshua Pritsker, yesterday

Why does "i586" refer to Pentium 1, and why does "i686" refer to Pentium Pro? - i586 is a made-up term. The CPU model numbers were 80500 through 80502 for the P5 / P54C / P55C microarchitectures. But yeah, there was a 5 in there, so i586 is semi-justified for convenience and consistency.
– Peter Cordes, yesterday
It just means any processor compatible with the same architecture.
So it includes the 8088, 8086, 80186, 80286, 80386, 80486, Pentium, etc.
The name "x86" was never 'given' or 'designed' this way. If I remember correctly, it more or less evolved as a convenient abbreviation for a whole range of compatible processors.
Back in the day when PCs became popular, it was important that your PC was "IBM compatible". This meant, among other things, that your PC had to have an Intel 8086 or an 8088. Later, when Intel released more powerful processors such as the (rare) 80186 or the (popular) 80286, it was still important that your PC was just "MS-DOS" or "IBM compatible". The 80286 was just a faster processor. It had a protected mode feature, but little software actually used or even required it.
The next step was the 80386. This was an improvement over the 80286 because it had a mode that provided full backward compatibility with 8086 programs. Operating systems such as OS/2, DESQview and MS-Windows used this mode to provide backward compatibility with existing software. Other operating systems such as Linux and the *BSDs designed for PC hardware also depended on some new features of the 80386 without actually providing direct compatibility with existing MS-DOS software. All these systems required an 80386 processor.
Then came the 80486: an even faster and more powerful processor, but largely backward compatible with the '386. So if you bought a '486 you could still run software designed for the '386. The package would say 'needs a 386 or better' or 'needs 386 or 486'.
Along came the 80586, or Pentium. And then the Pentium Pro, also known as the 80686...
By this time software developers got tired of listing all possible numbers, and since most software was still written to be able to run on a '386, the whole list of numbers was abbreviated to just "x86". This later became synonymous with "32 bit", because the 80386 was a 32-bit processor, and hence software written for 'x86' is 32-bit software.
Practically, x86 is short for "80386 or 80486 running in 32-bit mode". It comes from the 8086/186/286+ line, but Win32 cannot run on a CPU below the 386. After the 80486, the 80*86 naming scheme was changed to Pentium[N] and AMD [model].

Why does "i586" refer to Pentium 1, and why does "i686" refer to Pentium Pro? explains that in casual usage, i586 and i686 were used for somewhat justifiable reasons. x86 definitely does not exclude modern CPUs like Skylake! In most contexts other than MS Windows (e.g. CPU architecture discussion) it also doesn't mean specifically 32-bit mode.
– Peter Cordes, yesterday
7 Answers
7
active
oldest
votes
7 Answers
7
active
oldest
votes
active
oldest
votes
active
oldest
votes
The term x86
is shorthand for 80x86
, which was used to refer to any member of the family 8086 (and also, incidently, 8088), 80186, 80286, etc. Things have since gotten a bit muddled by the fact that while an 80386 had a mode that was compatible with the old architecture, it also introduced some fundamentally new ways of doing things which were shared by the 80486 as well as "named" processors like the Pentium, Pentium Pro, etc., and thus it is sometimes ambiguous whether the name "x86" is used in reference to the architecture that started with the 8086, or the one which had its debut with the 80386.
2
@BrianH: Perhaps "mode" wasn't the best term. Maybe "ways of doing things" is better, though some of those new ways of doing things also included new 32-bit modes. Perhaps the most important point is that compilers targeting code for the 80386 and later processors will tend to do things fundamentally differently from those targeting the 80286 and earlier processors, so they really should be viewed as distinct architectures.
– supercat
2 days ago
13
@BrianH, 32-bit protected mode with paging and all that is pretty much fundamentally different from the 16-bit protected mode in the 286.
– ilkkachu
2 days ago
1
also, from memory the 286 had to reset to come out of protected mode, while the 386 could change modes at will, so protected mode wasn't widely used until the 386 came along.
– Joseph Rogers
yesterday
2
@JosephRogers: Protected mode was the only way to access more than 1MB of address space, so using storage beyond that region within a DOS program would require switching to protected mode, doing the access, setting a special "the reset handler should reload some registers and resume normal operation" flag, and then asking the keyboard chip to trigger a CPU reset. There was actually an undocumented way code could access upper memory without that rigamarole, but that wasn't discovered until the 80386 came along. It's really a shame that the designers of protected mode failed to recognize...
– supercat
yesterday
1
...what had been uniquely good about the way real mode segments worked. Had the 80386 protected mode incorporated the better aspects of 8086 segmentation, it would have been practical for a framework like .NET to allow programs to access many gigs of storage using 32-bit object references (which would only take half as much cache space as 64-bit ones). Make segment identifiers 32 bits, with the upper portion selecting a descriptor that contains a base and a scale factor, and the lower containing a scaled offset. That would allow every object to start at address 0 of some segment...
– supercat
yesterday
|
show 10 more comments
The term x86
is shorthand for 80x86
, which was used to refer to any member of the family 8086 (and also, incidently, 8088), 80186, 80286, etc. Things have since gotten a bit muddled by the fact that while an 80386 had a mode that was compatible with the old architecture, it also introduced some fundamentally new ways of doing things which were shared by the 80486 as well as "named" processors like the Pentium, Pentium Pro, etc., and thus it is sometimes ambiguous whether the name "x86" is used in reference to the architecture that started with the 8086, or the one which had its debut with the 80386.
2
@BrianH: Perhaps "mode" wasn't the best term. Maybe "ways of doing things" is better, though some of those new ways of doing things also included new 32-bit modes. Perhaps the most important point is that compilers targeting code for the 80386 and later processors will tend to do things fundamentally differently from those targeting the 80286 and earlier processors, so they really should be viewed as distinct architectures.
– supercat
2 days ago
13
@BrianH, 32-bit protected mode with paging and all that is pretty much fundamentally different from the 16-bit protected mode in the 286.
– ilkkachu
2 days ago
1
also, from memory the 286 had to reset to come out of protected mode, while the 386 could change modes at will, so protected mode wasn't widely used until the 386 came along.
– Joseph Rogers
yesterday
2
@JosephRogers: Protected mode was the only way to access more than 1MB of address space, so using storage beyond that region within a DOS program would require switching to protected mode, doing the access, setting a special "the reset handler should reload some registers and resume normal operation" flag, and then asking the keyboard chip to trigger a CPU reset. There was actually an undocumented way code could access upper memory without that rigamarole, but that wasn't discovered until the 80386 came along. It's really a shame that the designers of protected mode failed to recognize...
– supercat
yesterday
1
...what had been uniquely good about the way real mode segments worked. Had the 80386 protected mode incorporated the better aspects of 8086 segmentation, it would have been practical for a framework like .NET to allow programs to access many gigs of storage using 32-bit object references (which would only take half as much cache space as 64-bit ones). Make segment identifiers 32 bits, with the upper portion selecting a descriptor that contains a base and a scale factor, and the lower containing a scaled offset. That would allow every object to start at address 0 of some segment...
– supercat
yesterday
|
show 10 more comments
The term x86
is shorthand for 80x86
, which was used to refer to any member of the family 8086 (and also, incidently, 8088), 80186, 80286, etc. Things have since gotten a bit muddled by the fact that while an 80386 had a mode that was compatible with the old architecture, it also introduced some fundamentally new ways of doing things which were shared by the 80486 as well as "named" processors like the Pentium, Pentium Pro, etc., and thus it is sometimes ambiguous whether the name "x86" is used in reference to the architecture that started with the 8086, or the one which had its debut with the 80386.
The term x86
is shorthand for 80x86
, which was used to refer to any member of the family 8086 (and also, incidently, 8088), 80186, 80286, etc. Things have since gotten a bit muddled by the fact that while an 80386 had a mode that was compatible with the old architecture, it also introduced some fundamentally new ways of doing things which were shared by the 80486 as well as "named" processors like the Pentium, Pentium Pro, etc., and thus it is sometimes ambiguous whether the name "x86" is used in reference to the architecture that started with the 8086, or the one which had its debut with the 80386.
edited 2 days ago
answered 2 days ago
supercatsupercat
7,820841
7,820841
2
@BrianH: Perhaps "mode" wasn't the best term. Maybe "ways of doing things" is better, though some of those new ways of doing things also included new 32-bit modes. Perhaps the most important point is that compilers targeting code for the 80386 and later processors will tend to do things fundamentally differently from those targeting the 80286 and earlier processors, so they really should be viewed as distinct architectures.
– supercat
2 days ago
13
@BrianH, 32-bit protected mode with paging and all that is pretty much fundamentally different from the 16-bit protected mode in the 286.
– ilkkachu
2 days ago
1
also, from memory the 286 had to reset to come out of protected mode, while the 386 could change modes at will, so protected mode wasn't widely used until the 386 came along.
– Joseph Rogers
yesterday
2
@JosephRogers: Protected mode was the only way to access more than 1MB of address space, so using storage beyond that region within a DOS program would require switching to protected mode, doing the access, setting a special "the reset handler should reload some registers and resume normal operation" flag, and then asking the keyboard chip to trigger a CPU reset. There was actually an undocumented way code could access upper memory without that rigamarole, but that wasn't discovered until the 80386 came along. It's really a shame that the designers of protected mode failed to recognize...
– supercat
yesterday
1
...what had been uniquely good about the way real mode segments worked. Had the 80386 protected mode incorporated the better aspects of 8086 segmentation, it would have been practical for a framework like .NET to allow programs to access many gigs of storage using 32-bit object references (which would only take half as much cache space as 64-bit ones). Make segment identifiers 32 bits, with the upper portion selecting a descriptor that contains a base and a scale factor, and the lower containing a scaled offset. That would allow every object to start at address 0 of some segment...
– supercat
yesterday
|
show 10 more comments
2
@BrianH: Perhaps "mode" wasn't the best term. Maybe "ways of doing things" is better, though some of those new ways of doing things also included new 32-bit modes. Perhaps the most important point is that compilers targeting code for the 80386 and later processors will tend to do things fundamentally differently from those targeting the 80286 and earlier processors, so they really should be viewed as distinct architectures.
– supercat
2 days ago
13
@BrianH, 32-bit protected mode with paging and all that is pretty much fundamentally different from the 16-bit protected mode in the 286.
– ilkkachu
2 days ago
1
also, from memory the 286 had to reset to come out of protected mode, while the 386 could change modes at will, so protected mode wasn't widely used until the 386 came along.
– Joseph Rogers
yesterday
2
@JosephRogers: Protected mode was the only way to access more than 1MB of address space, so using storage beyond that region within a DOS program would require switching to protected mode, doing the access, setting a special "the reset handler should reload some registers and resume normal operation" flag, and then asking the keyboard chip to trigger a CPU reset. There was actually an undocumented way code could access upper memory without that rigamarole, but that wasn't discovered until the 80386 came along. It's really a shame that the designers of protected mode failed to recognize...
– supercat
yesterday
1
...what had been uniquely good about the way real mode segments worked. Had the 80386 protected mode incorporated the better aspects of 8086 segmentation, it would have been practical for a framework like .NET to allow programs to access many gigs of storage using 32-bit object references (which would only take half as much cache space as 64-bit ones). Make segment identifiers 32 bits, with the upper portion selecting a descriptor that contains a base and a scale factor, and the lower containing a scaled offset. That would allow every object to start at address 0 of some segment...
– supercat
yesterday
|
show 10 more comments
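The segment-identifier scheme supercat proposes in the comments above can be sketched as a toy address computation. Everything below is invented for illustration: the field widths, the descriptor table contents, and the scale factors are assumptions, not any real CPU mode.

```python
# Hypothetical sketch of the proposed scheme: a 32-bit segment identifier
# whose upper bits select a descriptor (base address + scale factor) and
# whose lower bits hold a scaled offset. All values here are made up.

DESCRIPTORS = {
    0: (0x0010_0000, 16),   # (base, bytes per offset unit) -- assumed
    1: (0x4000_0000, 64),
}

def linear_address(segment_id: int) -> int:
    descriptor = (segment_id >> 20) & 0xFFF   # upper 12 bits select a descriptor
    offset = segment_id & 0xFFFFF             # lower 20 bits: scaled offset
    base, scale = DESCRIPTORS[descriptor]
    return base + offset * scale

# Every object can then start at offset 0 of some segment:
print(hex(linear_address(0x0000_0000)))  # 0x100000 (descriptor 0, offset 0)
print(hex(linear_address(0x0010_0003)))  # 0x400000c0 (descriptor 1, offset 3)
```

The point of the scale factor is that a 32-bit identifier can address far more than 4 GiB of objects, at the cost of coarser placement granularity.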
x is meant as a wildcard, so this represents all CPUs able to run 8086-compatible code.
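As a toy illustration of the wildcard reading, a regular expression with a digit in the "x" position matches the family's part numbers. This is purely a demo of the naming pattern, not anything Intel published.

```python
import re

# The "x" as a wildcard digit: 8086, 80186, 80286, 80386, 80486 all match.
x86_pattern = re.compile(r"80[1-4]?86$")

chips = ["8086", "80186", "80286", "80386", "80486", "8085", "68000"]
family = [chip for chip in chips if x86_pattern.match(chip)]
print(family)  # ['8086', '80186', '80286', '80386', '80486']
```

Note the 8088 would not match the literal pattern even though it runs 8086 code, which is why "able to run 8086-compatible code" is the real definition and the wildcard is just shorthand.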
answered 2 days ago
Raffzahn
56.8k6138230
6
This answer is so far the only answer that addresses the original question about what the "x" represents.
– G. Tranter
2 days ago
4
@G.Tranter I agree up to a point. However, if you write "x86" people usually assume you mean compatible with the Intel 80386, i.e. capable of running in 32-bit protected mode. For example, if you compile a C program with gcc -march=x86, the code won't run on an 8086.
– JeremyP
yesterday
4
@JeremyP Exactly my thoughts. Even in the nineties, I don't remember anybody using x86 to mean 80286 or earlier. The gap was too large to put 16-bit systems in the same bag as 32-bit. When x86 appeared as a term I think everyone meant "80386+" by it.
– kubanczyk
yesterday
2
Read the question. I find it's always best to answer the question, not what I think is the question behind the question. The "x" is a wildcard. If you read the whole question, you'd see that the author already understands the architecture part.
– G. Tranter
yesterday
1
@JeremyP: gcc doesn't have -march=x86. It has -march=i386. See godbolt.org/z/xg19XI, which shows gcc -m32's help for invalid -march=... values, which lists all it supports. If you run x86 gcc with the default -m64, it leaves out arches that only support 32-bit mode. gcc -m16 exists, but still requires 386+ because it mostly just assembles its usual machine code with .code16gcc so instructions with explicit operands get an operand-size and address-size prefix. Anyway, the critical point is that gcc -march never pretended to set the target mode, just ISA extensions within it.
– Peter Cordes
yesterday
|
show 2 more comments
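The distinction drawn in the comments above (GCC's -march= takes a specific CPU name, never the family name "x86") can be mimicked with a toy validator. The accepted-name list below is a partial, assumed subset of GCC's real -march= values, and the diagnostic wording is invented, not GCC's actual message.

```python
# Toy model: -march= accepts specific CPU/arch names (i386, i486, ...),
# never the family name "x86". Partial, assumed list of valid values.
KNOWN_MARCH_VALUES = {"i386", "i486", "i586", "i686", "pentium", "x86-64", "native"}

def check_march(value: str) -> str:
    if value in KNOWN_MARCH_VALUES:
        return f"-march={value} accepted"
    # GCC's real diagnostic is worded differently; this just shows the idea.
    return f"bad value ({value!r}) for -march="

print(check_march("x86"))   # bad value ('x86') for -march=
print(check_march("i386"))  # -march=i386 accepted
```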
In modern usage it also means software which only uses the 32-bit architecture of the earlier 80x86 processors, to distinguish it from 64-bit applications.
Microsoft uses it that way on 64-bit versions of Windows, which have two separate directories called "Program Files" and "Program Files (x86)."
The 32-bit applications will run on 64-bit hardware, but the OS needs to provide the appropriate 32- or 64-bit interface at run-time.
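Microsoft's convention can be shown with a trivial helper. The directory names are the standard 64-bit Windows defaults; the function itself is only an illustrative assumption, not any Windows API.

```python
# On 64-bit Windows, 32-bit ("x86") and 64-bit programs get separate
# default install directories.
def default_program_files(is_32_bit_app: bool) -> str:
    return r"C:\Program Files (x86)" if is_32_bit_app else r"C:\Program Files"

print(default_program_files(True))   # C:\Program Files (x86)
print(default_program_files(False))  # C:\Program Files
```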
answered 2 days ago
alephzero
2,5211816
5
That doesn't mean software though, it means the hardware the software is built for. Consider a rack of fan belts labelled "Ford Focus", "Nissan Micra", etc.. You're not saying the fan belt is a Nissan Micra, only that it's suitable for use on one.
– Graham
yesterday
Only MS Windows uses x86 to specifically exclude x86-64. In other contexts, like computer architecture discussion, x86 includes all CPUs that are backwards-compatible with 8086, with the usual assumption that modern CPUs are running in their best mode (x86-64 long mode). Or at least no implication of 32-bit mode specifically. e.g. "x86 has efficient unaligned loads, but ARM or MIPS doesn't always". But I'd certainly say "modern x86 has 16 integer and 16 or 32 vector registers". (Which is only true in long mode, and I'm of course talking about architectural registers, not physical.)
– Peter Cordes
yesterday
TL:DR: In other contexts (outside of MS Windows software), x86-64 is a subset of x86, not a disjoint set.
– Peter Cordes
yesterday
A term that unambiguously means 32-bit x86 is "IA-32". en.wikipedia.org/wiki/IA-32. Intel uses that in their ISA manuals. (For a while, they used IA-32e (enhanced) for x86-64-specific features / modes. I forget if they still do that or if they call it x86-64.)
– Peter Cordes
yesterday
Intel products were numbered. For example, their first microprocessor was the 4-bit Intel 4004, which was coupled with the 4001 ROM, 4002 RAM, and 4003 shift register. The start denoted the series, and the last digit denoted the specific part.
Later, the Intel 8008 came along, which was an 8-bit microprocessor. This was succeeded by the 8080, which was then replaced by the 8085, which was then replaced by the 8086.
After the 8086, processors started taking on the format of 80x86, with x being a number, such as 80186, 80286, 80386, etc. They were backwards compatible with one another, and modern computers still boot into 16-bit mode. As Intel continued rolling out processors, they began to be referred to as Intel 386 or Intel 486 rather than Intel 80386. This is how the terms 'i386' and 'i586' came into play. As they were based on the same architecture, they were called Intel x86, where x refers to a number. They also came with coprocessors that had a last number of '7', such as the 80387, and as such we also have x87.
New contributor
edited yesterday
answered 2 days ago
Ender - Joshua Pritsker
613
1
You have a typo: 8186->80186. Too small for me to edit myself.
– Martin Bonner
yesterday
@MartinBonner Thanks, fixed.
– Ender - Joshua Pritsker
yesterday
Why does "i586" refer to Pentium 1, and why does "i686" refer to Pentium Pro? - i586 is a made-up term. The CPU model numbers were 80500 through 80502 for the P5 / P54C / P55C microarchitectures. But yeah there was a 5 in there, so i586 is semi justified for convenience and consistency.
– Peter Cordes
yesterday
It just means any processor compatible with the same architecture.
So it includes the 8088, 8086, 80186, 80286, 80386, 80486, Pentium, etc.
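As a quick illustration of the numbering pattern behind that list, here is a short Python sketch; the regex is purely my own, for demonstration, not anything from the answers:

```python
import re

# Hypothetical sketch: the "x" in "x86" acts like a wildcard over the
# generation digit in Intel's 80x86 part numbers (8086, 80186, ...,
# 80486), matching zero or one digit between "80" and "86".
family = re.compile(r"^80(\d)?86$")

for chip in ["8086", "80186", "80286", "80386", "80486", "8088"]:
    # Note: the 8088 is part of the family (it is software-compatible
    # with the 8086) even though its part number breaks the pattern.
    print(chip, bool(family.match(chip)))
```

The 8088 line shows why "x86" is a convenience label rather than a strict pattern: membership was always about compatibility, not the part number itself.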
add a comment |
answered 2 days ago
Justme
3973
add a comment |
The name "x86" was never 'given' or 'designed' this way. If I remember correctly, it more or less evolved as a convenient abbreviation for a whole range of compatible processors.
Back in the day when PCs became popular, it was important that your PC was "IBM Compatible". This meant, among other things, that your PC had to have an Intel 8086 or an 8088. Later, when Intel released more powerful processors such as the (rare) 80186 or (popular) 80286, it was still important that your PC was just "MS-DOS" or "IBM Compatible". The 80286 was just a faster processor. It had a protected mode feature, but little software actually used or even required it.
The next step was the 80386. This was an improvement over the 80286 because it had a mode that provided full backward compatibility with 8086 programs. Operating systems such as OS/2, DESQview and MS-Windows used this mode to provide backward compatibility with existing software. Other operating systems such as Linux and the *BSDs designed for PC hardware also depended on new features of the 80386 without actually providing direct compatibility with existing MS-DOS software. All these systems required an 80386 processor.
Then came the 80486: an even faster and more powerful processor, but largely backward compatible with the '386. So if you bought a '486 you could still run software designed for the '386. The package would say 'needs a 386 or better' or 'needs 386 or 486'.
Along came the 80586, or Pentium. And then the Pentium Pro, also known as the 80686...
By this time software developers had grown tired of listing all possible numbers, and since most software was still written to be able to run on a '386, the whole list of numbers was abbreviated to just "x86". This later became synonymous with "32-bit", because the 80386 was a 32-bit processor, and hence software written for 'x86' is 32-bit software.
New contributor
add a comment |
answered 18 hours ago
Oscar
312
New contributor
add a comment |
Practically, x86 is short for "80386 or 80486 running in 32-bit mode". It comes from the 8086/186/286+ line, but Win32 cannot run on any CPU below the 386. After the 80486, the 80*86 naming scheme was changed to Pentium[N] and AMD [model].
New contributor
Why does "i586" refer to Pentium 1, and why does "i686" refer to Pentium Pro? explains that in casual usage, i586 and i686 were used for somewhat justifiable reasons. x86 definitely does not exclude modern CPUs like Skylake! In most contexts other than MS Windows (e.g. CPU architecture discussion) it also doesn't mean specifically 32-bit mode.
– Peter Cordes
yesterday
add a comment |
answered yesterday
i486
1114
New contributor
user12302 is a new contributor. Be nice, and check out our Code of Conduct.
Thanks for contributing an answer to Retrocomputing Stack Exchange!
10
80 _ 86 (nothing in between), 80 1 86, 80 2 86, 80 3 86, 80 4 86...notice the pattern?
– user17915
2 days ago
3
x in IC part numbering is a common way to declare a variable ID within the same IC family. Its meaning can be anything: in CPUs it is usually the generation of the processor, in MCUs it might indicate RAM or EEPROM size, for voltage regulators it is the target voltage, etc. For TTL logic numbered XXYY, like 7474, the XX indicates the quality grade (from commercial to military), so to be sure, see the datasheet of the part. To get back to your question: Intel CPUs/MCUs started using shortened markings like x86 and x51, which are really shortcuts for 8086... and 8051..., and it sort of stuck with the community too.
– Spektre
yesterday
@bogl Heh, I did not consider that comment an answer, rather some additional info I did not see in the other answers... and I was reluctant to create an answer of my own as there are already good answers present... Should I move it into an answer?
– Spektre
yesterday
1
Up to you, I have no say here. ;) But to me, it looks very much like an answer.
– bogl
yesterday
1
OT in Retrocomputing ... ;-)
– Peter A. Schneider
yesterday