Why did Windows 95 crash the whole system but newer Windows only crashed programs?


Example situations:

You are using a program on Windows 95, the screen goes blue, and the only way out is to restart the whole computer.

You are using a program on Windows 7; the program stops responding and is simply killed from the Task Manager.

Why was there a difference?

















operating-system windows






asked Jul 28 at 17:56 by Delta Oscar Uniform

edited Jul 29 at 5:01 by DrSheldon










  • Both scenarios are possible on both OSs. Could you be a little more specific? (Yes, I know, crashes were more common on Windows 95.) – Stephen Kitt, Jul 28 at 18:21

  • So you're asking, in general, why Windows 95 is more crash-prone than Windows 7, with more crashes affecting the whole OS? – Stephen Kitt, Jul 28 at 18:31

  • en.wikipedia.org/wiki/Blue_Screen_of_Death – Bruce Abbott, Jul 28 at 19:21

  • Keep in mind that Blue Screens of Death in Windows 9x weren't necessarily fatal. A lot of errors which nowadays are simply reported to the event log or cause an error popup would throw a BSoD on Win9x - but you could still resume the system after that happened. – Maciej Stachowski, Jul 29 at 9:55

  • Side note: your comparison isn't really valid, because it implies that Windows 7 (and other operating systems in the Windows NT family) never causes a blue-screen stop of the entire computer; this is not true. There are many reasons why NT-based operating systems will experience a STOP and crash the entire operating system, just like Win9x did - even though the two operating system families are massively different. – Caius Jard, Jul 30 at 9:33














5 Answers
































You are comparing apples to motorcycles.



Windows 95 traces its lineage back through Windows 3.x all the way to Windows 1.x and MS-DOS/PC-DOS, themselves inspired by CP/M. It was conceived and designed as a single-user, cooperatively multitasking environment in which applications have a large degree of freedom in what to do. Windows 95 moved towards a preemptive multitasking design, but still had significant cooperative elements built-in.



The fact that it was intended as a consumer OS replacement for the combination of MS-DOS and Windows 3.1/3.11, and had to work (not necessarily provide a great user experience, but boot and allow starting applications) on as low-end a system as a 386DX with 4 MB RAM and around 50 MB of hard disk space, also put huge limitations on what Microsoft could do. Not least of these was its ability to use old MS-DOS device drivers, to allow interoperability with hardware which did not have native Windows 95 drivers.



So while Windows 95 provided a hugely revamped UI compared to Windows 3.x, brought many technical improvements, and paved the way for more advanced features, much of it was constrained by compatibility choices, and by hardware limitations, dating back over a decade. (The 386 itself was introduced in 1985.)



Now compare this to modern versions of Windows, which don't trace their lineage back to MS-DOS at all. Rather, modern versions of Windows are based on Windows NT which was basically a complete redesign, originally dubbed NT OS/2 and named Windows NT prior to release.



Windows NT was basically designed and written from the beginning with such things as user isolation (multiuser support), process isolation, kernel/userspace isolation (*), and no regard for driver compatibility with MS-DOS.



For a contemporary version, Windows NT 3.51 was released three months before Windows 95, and required at a minimum a 386 at 25 MHz, 12 MB RAM, and 90 MB hard disk space. That's quite a step up from the requirements of Windows 95; three times the RAM, twice the disk space, and quite possibly a faster CPU (the 386 came in versions clocked at 12-40 MHz over its product lifetime), and again, that's just to boot the operating system.



Keep in mind that at the time, a 486 with 8-12 MB RAM and a 500 MB hard disk was a reasonably high-end system. Compare Multimedia PC level 2 (1993) and level 3 (1996), only the latter of which went beyond a minimum of 4 MB RAM. Even an MPC Level 3 PC in 1996 wouldn't meet the hardware requirements of the 1995 Windows NT 3.51, as MPC 3 only required 8 MB RAM.



From a stability point of view, even Windows NT 3.51 was vastly better than Windows 95 could ever hope to be. It achieved this, however, by sacrificing a lot of things that home users cared about: the ability to run well on what was then reasonably affordable hardware; the ability to run DOS software that accessed hardware directly (as far as I know, while basic MS-DOS application compatibility was provided, there was no way other than dual-booting to run most DOS games on a Windows NT system); plug-and-play; and the ability to use hardware that lacked dedicated Windows NT drivers.



And that's what Microsoft has been building on for roughly the last two decades to create what we now know as Windows 10, by way of Windows NT 4.0, Windows 2000, XP, Vista, 7 and 8. (The DOS/Windows lineage ended with Windows ME.)



As another-dave said in another answer, process isolation (which is a cornerstone of system stability, though on its own not sufficient to ensure it) isn't a bolt-on; it pretty much needs to be designed in from the beginning. If it isn't there, programmers (especially back in the day, when squeezing every bit of performance out of a system was basically a requirement) will take shortcuts which break if you add such isolation later on. (Compare all the trouble Apple had adding even basic protections to classic Mac OS; they, too, ended up doing a complete redesign of the OS that, among other things, added such protections.) Windows 95 didn't have it, nor did Microsoft have the desire to do the work needed to add it; Windows NT did have such isolation, and paid the cost for having it. So even though Windows NT was far from uncrashable, this difference in the level of process isolation provided by the operating system shows in their stability relative to each other, even when comparing contemporary versions.




*) The idea behind kernel/userspace isolation (usually referred to as "ring 0" and "ring 3" respectively in an Intel environment) is that while the operating system kernel has full access to the entire system (it needs to, in order to do its job properly; a possible exception could perhaps be argued for a true microkernel design, but even there, some part of the operating system needs to perform the lowest-level operations; there's just less of it), normal applications generally don't need that level of access. In a multitasking environment, allowing just any application to write to just any memory location, access any hardware device directly, and so on, comes with the completely unnecessary risk of doing harm to the operating system and/or other running applications.



This isn't anywhere near as much of a problem in a single-tasking environment such as MS-DOS, where the running application is basically assumed to be in complete control of the computer anyway.



Usually, the only code (other than the operating system kernel proper) that actually needs to have such a level of access in a multitasking environment is hardware drivers. With good design, even those can usually be restricted only to the portions of the system they actually need to work with, though that does increase complexity further, and absent separate controls, a driver can always claim to need more than it strictly speaking would need.



Windows 95 did have rudimentary kernel/userspace and process/process separation, but it was pretty much trivial to bypass if you wanted to, and drivers (even old DOS drivers) basically bypassed it by design. Windows NT fully enforced such separation right from the beginning. That makes it much easier to isolate a fault to a single process, greatly reducing the risk of an errant userspace process causing damage that extends beyond that process.



Even with Windows NT, back then as well as today, if something went/goes wrong in kernel mode, it would generally cause the OS to crash. It was just a lot harder to, in software, cause something to go sufficiently wrong in kernel mode in Windows NT than in Windows 95, and therefore, it was correspondingly harder to cause the entire operating system to crash. Not impossible, just harder.




























  • It's a rare occurrence for a newer answer to "kill" an older one by simply being a lot better. Congratulations. – Delta Oscar Uniform, Jul 29 at 18:04

  • Linux was lucky in this case because it was designed after UNIX, which had process isolation (in certain forms) from the beginning, as a result of its use on mainframes. That's one reason why Windows was considered so much less stable than *nix, since the former was far more vulnerable to fatal crashes. Nowadays, though, they both have excellent stability and a very strong multi-process isolation model. – forest, Jul 30 at 6:54

  • NT and OS/2 were different things. – OrangeDog, Jul 30 at 10:51

  • Apparently it was called "NT OS/2" internally during design, before they knew whether it was going to get IBM or Microsoft branding. – OrangeDog, Jul 30 at 11:03

  • @OrangeDog See en.wikipedia.org/wiki/Windows_NT_3.1#As_NT_OS/2 and en.wikipedia.org/wiki/Windows_NT#Development - I've edited the answer to clarify the naming issue. – a CVn, Jul 30 at 11:17

































The decision about whether to kill a process or crash the OS generally depends on whether the problem can be isolated to the process.



For example, if a running process in user mode attempts to read from an address that's not present in its address space, that's not going to affect anything else. The process can be terminated cleanly.
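
To make that concrete, here is a minimal sketch of the first case (my own illustration, not code from this answer; it uses MSVC's structured exception handling, so it is Windows/MSVC-specific). On an NT-family system, the bad read is trapped by the MMU and delivered to the faulting process alone, so at worst that one process dies:

    /* fault_demo.c - compile with MSVC: cl fault_demo.c */
    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        __try {
            volatile int *p = NULL;  /* the null page is never mapped */
            int v = *p;              /* raises an access violation */
            printf("%d\n", v);       /* never reached */
        }
        __except (GetExceptionCode() == EXCEPTION_ACCESS_VIOLATION
                      ? EXCEPTION_EXECUTE_HANDLER
                      : EXCEPTION_CONTINUE_SEARCH) {
            /* The fault was confined to this process. Without this
               handler, Windows would simply terminate the process (exit
               code 0xC0000005); nothing else on the system is touched. */
            puts("access violation caught in-process");
        }
        return 0;
    }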



At the other extreme, if the file system running in kernel mode discovers that some data structure is not as expected, then it is wise to crash the entire system immediately, because the consequence of corrupt in-memory control structures could be loss of disk data, and that's the worst thing that could happen.



With specific respect to the Windows NT (-2000, -XP, -7) family: the OS was designed with good process isolation from the beginning. For Windows 9x, the heritage of Windows up through 3.x required some compromises in the name of compatibility. In particular, the first megabyte of address space is common to all processes: corruption there can kill the whole system.



TL;DR - process isolation is a day-0 design issue.


























  • All these poor people that actually spent money on Windows ME instead of 2000... – Delta Oscar Uniform, Jul 28 at 19:25

  • What does 0-day design issue mean? – Wilson, Jul 29 at 8:05

  • @Wilson it means it's basically impossible to retrofit, and has to be considered on the first day of designing the operating system, when drawing up the one-page or even one-sentence description of what you're intending to build. – pjc50, Jul 29 at 9:21

  • @vilx Microsoft claimed (which I do believe is accurate) that third-party drivers were responsible for the vast majority of the blue screens in Windows back in the late Windows 98 SE era. This is the justification for why they made it mandatory to use signed drivers to pass WHQL tests and ship Windows as an OEM. – UnhandledExcepSean, Jul 29 at 14:17

  • @UnhandledExcepSean "third party drivers were responsible for..." - they still are; just about a month ago I had a Windows 10 PC that would BSOD daily (sometimes twice a day) due to a faulty Intel HDD driver. Not saying things haven't improved, though - the screen now has a nicer shade of blue and a QR code! – Headcrab, Jul 30 at 8:01
































Although Windows 95 introduced support for 32-bit applications with memory protection, it was still somewhat reliant on MS-DOS. For example, where native 32-bit drivers were not available, it used 16-bit DOS drivers instead. Even 32-bit applications had to be synchronized with the 16-bit DOS environment.

A fault in the DOS part of the system would bring the whole thing crashing down. 16-bit DOS applications do not have any meaningful memory protection or resource management, and a crash cannot be recovered from in most instances. And since even 32-bit applications had to interact with DOS components, they were not entirely immune either.

Another major cause of instability was that 32-bit drivers ran inside the Windows kernel (the core of the system). That reduced the amount of memory protection they had, and meant bugs would crash or corrupt the kernel too.

By the time Windows 7 came around, drivers had mostly been moved out of the kernel, and faults could be recovered from much like an application crash. There are some exceptions, such as low-level storage drivers.


























  • I'm not convinced by "drivers had been mostly moved out of the kernel". How many, what devices did they drive, etc.? – another-dave, Jul 30 at 11:47
































Addendum:



Some special memory areas (e.g. the infamous "GDI resources") that all applications needed were extremely limited in size (due to being shared with 16-bit APIs, which needed segment size limits respected) - and they very easily ran into exhaustion, with no effective safeguards present.
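
For illustration, a hedged sketch (my own, not the poster's code) of how easily that exhaustion could happen: every GDI object a program created was allocated out of that shared heap, so an ordinary leak like the one below starved every running application of GDI resources, not just the buggy one.

    /* gdi_leak.c - deliberately leaks GDI pen objects. On Windows 9x,
       the shared 16-bit GDI heap meant a leak like this degraded or
       froze the whole system; on NT-family Windows the damage stays
       with this process (which eventually hits its per-process GDI
       handle quota instead). */
    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        int i;
        for (i = 0; ; i++) {
            HPEN pen = CreatePen(PS_SOLID, 1, RGB(i & 255, 0, 0));
            if (pen == NULL) {   /* allocation failed: heap/quota exhausted */
                printf("GDI gave out after %d pens\n", i);
                break;
            }
            /* Deliberately no DeleteObject(pen): this is the leak. */
        }
        return 0;
    }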



A lot of essential system APIs did not sanity-check their parameters well - if you accidentally fed them invalid pointers, or pointers to a different type of resource than expected, all kinds of unwanted behaviour could happen - especially when something in a 16-bit shared area was involved. Getting GDI object handles in a twist... ouch.



Also, the system trusted the responses to certain messages too much. I remember you could make Windows 9x extremely hard to shut down properly simply by installing a WM_QUERYENDSESSION handler that silently returned FALSE every time...
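
A minimal reconstruction of that trick (my own sketch; the window class name is made up): returning FALSE from WM_QUERYENDSESSION tells Windows the application refuses to end the session, and Windows 9x took that answer at face value.

    /* veto_shutdown.c - complete, minimal Win32 program (build as an
       ANSI project) whose window procedure silently vetoes shutdown. */
    #include <windows.h>

    static LRESULT CALLBACK WndProc(HWND h, UINT m, WPARAM w, LPARAM l)
    {
        switch (m) {
        case WM_QUERYENDSESSION:
            return FALSE;        /* "No, you may not end the session." */
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(h, m, w, l);
    }

    int WINAPI WinMain(HINSTANCE inst, HINSTANCE prev, LPSTR cmd, int show)
    {
        WNDCLASS wc = {0};
        MSG msg;
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = inst;
        wc.lpszClassName = "VetoShutdownDemo";   /* hypothetical name */
        RegisterClass(&wc);
        CreateWindow("VetoShutdownDemo", "demo", WS_OVERLAPPEDWINDOW,
                     0, 0, 200, 100, NULL, NULL, inst, NULL);
        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        return 0;
    }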



16-bit apps were run with A LOT of gratuitous privileges for compatibility reasons - enough to directly access ... and in the worst case crash! ... some of the hardware.
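
For a flavour of what that direct access looked like, here is a hedged sketch in DOS-era C (outp() is the port-output helper that 16-bit Microsoft and Borland compilers shipped in conio.h; details varied by compiler). Nothing stopped a 16-bit program from reprogramming hardware the whole machine depended on, such as the system timer. Windows 9x frequently let 16-bit code like this through for compatibility; NT-family Windows traps the port access in the VDM instead of letting it reach the hardware.

    /* timer_mess.c - 16-bit DOS-era sketch. Reprogramming PIT channel 0
       looks harmless from inside this program, but it changes the system
       timer tick for EVERYTHING running on the machine. */
    #include <conio.h>   /* outp()/inp() on old DOS compilers */

    int main(void)
    {
        outp(0x43, 0x36);  /* PIT command: channel 0, lo/hi byte, mode 3 */
        outp(0x40, 0x00);  /* divisor low byte...                        */
        outp(0x40, 0x10);  /* ...high byte: tick jumps from ~18.2 Hz to
                              roughly 291 Hz (1193182 / 0x1000)          */
        return 0;
    }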




















































Everyone is talking about the improvements in software between Windows 95 and Windows 7, but in those 15 years there were huge advancements in hardware as well. You can run an identical copy of Linux on some consumer-grade hardware from 1996 and some hardware from 2016 and you will find a world of difference in system stability.

Older hardware simply crashed more often, and it was only around 2003-2004 that things really changed. Manufacturers of motherboards, CPUs, RAM and various other hardware upped their game significantly as businesses and home users demanded better stability.

One of the most popular motherboard manufacturers in the 90s was a company named "PC Chips", who also traded under about 20 other names. They produced shonky, poorly-soldered, barely-shielded motherboards at rock-bottom prices. A lot of the system crashes back then were due to people running those motherboards, and not Windows.

That said, Win95 was horribly crash-prone itself, and it was always a guessing game as to whether your crashes were hardware- or software-related.































  • PC Chips? Did good ole IBM computers use these ugly motherboards? – Delta Oscar Uniform, Jul 31 at 15:45

  • Also, where can I find info about these cheapskates? Any Wikipedia pages or something like that? – Delta Oscar Uniform, Jul 31 at 15:47

  • Certainly in my case, moving from Windows 9x (I think from 95 OSR2 at the time, but it might possibly have been 98) to NT 4.0 Workstation made an enormous difference in system stability, with absolutely no hardware changes. But then again, by that time I had a powerful enough system to run NT well. – a CVn, Jul 31 at 15:50

  • @DeltaOscarUniform They are still around, and almost every OEM uses something from them today. The company and brand was changed to ECS, mostly because of the reputation of the "PC Chips" brand name. Speaking of which, my first decent computer that I got on my own was a 2003-ish Socket A with an Athlon Thunderbird. The motherboard was an ECS, and... it did eat it eventually; it was like the most budget board you could imagine. I found a gutted tower in a warehouse basement, nothing mounted in it except that board and CPU. – J. M. Becker, Aug 1 at 4:32

  • Some late-90s hardware (not all, and you never knew what you got) was perfectly able to do 100s of days of uptime under either Linux/Unix (if the drivers you needed to use weren't bugged!) or Windows NT, or when running non-bugged DOS single-purpose equipment-control programs. No, it did not under most Windows 9x versions: research what the 49-day bug was if you are curious :) ... One problem with 90s hardware was still-widespread ISA hardware - easy to misconfigure, and easy to crash the system with on a hardware level if misconfigured :) – rackandboneman, Aug 1 at 21:45













    Your Answer








    StackExchange.ready(function()
    var channelOptions =
    tags: "".split(" "),
    id: "648"
    ;
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function()
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled)
    StackExchange.using("snippets", function()
    createEditor();
    );

    else
    createEditor();

    );

    function createEditor()
    StackExchange.prepareEditor(
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: false,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    imageUploader:
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    ,
    noCode: true, onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    );



    );













    draft saved

    draft discarded


















    StackExchange.ready(
    function ()
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fretrocomputing.stackexchange.com%2fquestions%2f11878%2fwhy-did-windows-95-crash-the-whole-system-but-newer-windows-only-crashed-program%23new-answer', 'question_page');

    );

    Post as a guest















    Required, but never shown

























    5 Answers
    5






    active

    oldest

    votes








    5 Answers
    5






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    67














    You are comparing apples to motorcycles.



    Windows 95 traces its lineage back through Windows 3.x all the way to Windows 1.x and MS-DOS/PC-DOS, themselves inspired by CP/M. It was conceived and designed as a single-user, cooperatively multitasking environment in which applications have a large degree of freedom in what to do. Windows 95 moved towards a preemptive multitasking design, but still had significant cooperative elements built-in.



    The fact that it was intended as a consumer OS replacement for the combination of MS-DOS and Windows 3.1/3.11, and was to work (not necessarily provide a great user experience, but boot and allow starting applications) on as low end a system as any 386DX with 4 MB RAM and around 50 MB of hard disk space, also put huge limitations on what Microsoft could do. Not least of this is its ability to use old MS-DOS device drivers to allow interoperability with hardware which did not have native Windows 95 drivers.



    So while Windows 95 provided a hugely revamped UI compared to Windows 3.x, many technical improvements and paved the way for more advanced features, a lot of it had compatibility restraints based on choices, and to support limitations in hardware, dating back over a decade. (The 386 itself was introduced in 1985.)



    Now compare this to modern versions of Windows, which don't trace their lineage back to MS-DOS at all. Rather, modern versions of Windows are based on Windows NT which was basically a complete redesign, originally dubbed NT OS/2 and named Windows NT prior to release.



    Windows NT was basically designed and written from the beginning with such things as user isolation (multiuser support), process isolation, kernel/userspace isolation (*), and no regard for driver compatibility with MS-DOS.



    For a contemporary version, Windows NT 3.51 was released three months before Windows 95, and required at a minimum a 386 at 25 MHz, 12 MB RAM, and 90 MB hard disk space. That's quite a step up from the requirements of Windows 95; three times the RAM, twice the disk space, and quite possibly a faster CPU (the 386 came in versions clocked at 12-40 MHz over its product lifetime), and again, that's just to boot the operating system.



    Keep in mind that at the time, a 486 with 8-12 MB RAM and 500 MB hard disk was a reasonably high end system. Compare Multimedia PC level 2 (1993) and level 3 (1996), only the latter of which went beyond a minimum of 4 MB RAM. Even a MPC Level 3 PC in 1996 wouldn't meet the hardware requirements of the 1995 Windows NT 3.51, as MPC 3 only required 8 MB RAM.



    From a stability point of view, even Windows NT 3.51 was vastly better than Windows 95 could ever hope to be. It achieved this, however, by sacrificing a lot of things that home users would care about; the ability to run well on at the time reasonably affordable hardware, the ability to run DOS software that accessed hardware directly (as far as I know, while basic MS-DOS application compatibility was provided, there was no way other than dual-boot to run most DOS games on a Windows NT system), plug-and-play, and the ability to use hardware that lacked dedicated Windows NT drivers.



    And that's what Microsoft has been building on for the last about two decades to create what we now know as Windows 10, by way of Windows NT 4.0, Windows 2000, XP, Vista, 7 and 8. (The DOS/Windows lineage ended with Windows ME.)



    As another-dave said in another answer, process isolation (which is a cornerstone for, but on its own not sufficient to ensure, system stability) isn't a bolt-on; it pretty much needs to be designed in from the beginning as, if it isn't there, programmers (especially back in the day, when squeezing every bit of performance out of a system was basically a requirement) will take shortcuts which will break if you add such isolation later on. (Compare all the trouble Apple had adding even basic protections to classic Mac OS; they, too, ended up doing a complete redesign of the OS that, among other things, added such protections.) Windows 95 didn't have it, nor was the desire from Microsoft to do the work needed to add it there; Windows NT did have such isolation (as well as paid the cost for having it). So even though Windows NT was far from uncrashable, this difference in the level of process isolation provided by the operating system shows in their stability when compared to each other, even when comparing contemporary versions.




    *) The idea behind kernel/userspace isolation (usually referred to as "ring 0" and "ring 3" respectively in an Intel environment) is that while the operating system kernel has full access to the entire system (it needs to, in order to do its job properly; a possible exception could perhaps be argued for a true microkernel design, but even there, some part of the operating system needs to perform the lowest-level operations; there's just less of it), normal applications generally don't need to have that level of access. In a multitasking environment, for just any application to be able to write to just any memory location, or access any hardware device directly, and so on, comes with the completely unnecessary risk of doing harm to the operating system and/or other running applications.



    This isn't anywhere near as much of a problem in a single-tasking environment such as MS-DOS, where the running application is basically assumed to be in complete control of the computer anyway.



    Usually, the only code (other than the operating system kernel proper) that actually needs to have such a level of access in a multitasking environment is hardware drivers. With good design, even those can usually be restricted only to the portions of the system they actually need to work with, though that does increase complexity further, and absent separate controls, a driver can always claim to need more than it strictly speaking would need.



    Windows 95 did have rudimentary kernel/userspace and process/process separation, but it was pretty much trivial to bypass if you wanted to, and drivers (even old DOS drivers) basically bypassed it by design. Windows NT fully enforced such separation right from the beginning. The latter makes it much easier to isolate a fault to a single process, thereby greatly reducing the risk of an errant userspace process causing damage that cannot be known to be restricted only to that process.



    Even with Windows NT, back then as well as today, if something went/goes wrong in kernel mode, it would generally cause the OS to crash. It was just a lot harder to, in software, cause something to go sufficiently wrong in kernel mode in Windows NT than in Windows 95, and therefore, it was correspondingly harder to cause the entire operating system to crash. Not impossible, just harder.






    share|improve this answer






















    • 4





      Its a rare occurence for an newer answer to "kill"" an older one by simply being lot better. Congratulations.

      – Delta Oscar Uniform
      Jul 29 at 18:04






    • 10





      Linux was lucky in this case because it was designed from UNIX which had process isolation (in certain forms) from the beginning, as a result of its use on mainframes. That's one reason why Windows was considered so much less stable than *nix since the former was far more vulnerable to fatal crashes. Nowadays though, they both have excellent stability and a very strong multi-process isolation model.

      – forest
      Jul 30 at 6:54






    • 1





      NT and OS/2 were different things.

      – OrangeDog
      Jul 30 at 10:51






    • 1





      Apparently it was called "NT OS/2" internally during design, before they knew whether it was going to get IBM or Microsoft branding.

      – OrangeDog
      Jul 30 at 11:03







    • 1





      @OrangeDog See en.wikipedia.org/wiki/Windows_NT_3.1#As_NT_OS/2 and en.wikipedia.org/wiki/Windows_NT#Development I've edited the answer to clarify the naming issue.

      – a CVn
      Jul 30 at 11:17
















    67














    You are comparing apples to motorcycles.



    Windows 95 traces its lineage back through Windows 3.x all the way to Windows 1.x and MS-DOS/PC-DOS, themselves inspired by CP/M. It was conceived and designed as a single-user, cooperatively multitasking environment in which applications have a large degree of freedom in what to do. Windows 95 moved towards a preemptive multitasking design, but still had significant cooperative elements built-in.



    The fact that it was intended as a consumer OS replacement for the combination of MS-DOS and Windows 3.1/3.11, and was to work (not necessarily provide a great user experience, but boot and allow starting applications) on as low end a system as any 386DX with 4 MB RAM and around 50 MB of hard disk space, also put huge limitations on what Microsoft could do. Not least of this is its ability to use old MS-DOS device drivers to allow interoperability with hardware which did not have native Windows 95 drivers.



    So while Windows 95 provided a hugely revamped UI compared to Windows 3.x, many technical improvements and paved the way for more advanced features, a lot of it had compatibility restraints based on choices, and to support limitations in hardware, dating back over a decade. (The 386 itself was introduced in 1985.)



    Now compare this to modern versions of Windows, which don't trace their lineage back to MS-DOS at all. Rather, modern versions of Windows are based on Windows NT which was basically a complete redesign, originally dubbed NT OS/2 and named Windows NT prior to release.



    Windows NT was basically designed and written from the beginning with such things as user isolation (multiuser support), process isolation, kernel/userspace isolation (*), and no regard for driver compatibility with MS-DOS.



    For a contemporary version, Windows NT 3.51 was released three months before Windows 95, and required at a minimum a 386 at 25 MHz, 12 MB RAM, and 90 MB hard disk space. That's quite a step up from the requirements of Windows 95; three times the RAM, twice the disk space, and quite possibly a faster CPU (the 386 came in versions clocked at 12-40 MHz over its product lifetime), and again, that's just to boot the operating system.



    Keep in mind that at the time, a 486 with 8-12 MB RAM and 500 MB hard disk was a reasonably high end system. Compare Multimedia PC level 2 (1993) and level 3 (1996), only the latter of which went beyond a minimum of 4 MB RAM. Even a MPC Level 3 PC in 1996 wouldn't meet the hardware requirements of the 1995 Windows NT 3.51, as MPC 3 only required 8 MB RAM.



    From a stability point of view, even Windows NT 3.51 was vastly better than Windows 95 could ever hope to be. It achieved this, however, by sacrificing a lot of things that home users would care about; the ability to run well on at the time reasonably affordable hardware, the ability to run DOS software that accessed hardware directly (as far as I know, while basic MS-DOS application compatibility was provided, there was no way other than dual-boot to run most DOS games on a Windows NT system), plug-and-play, and the ability to use hardware that lacked dedicated Windows NT drivers.



    And that's what Microsoft has been building on for the last about two decades to create what we now know as Windows 10, by way of Windows NT 4.0, Windows 2000, XP, Vista, 7 and 8. (The DOS/Windows lineage ended with Windows ME.)



    As another-dave said in another answer, process isolation (which is a cornerstone for, but on its own not sufficient to ensure, system stability) isn't a bolt-on; it pretty much needs to be designed in from the beginning as, if it isn't there, programmers (especially back in the day, when squeezing every bit of performance out of a system was basically a requirement) will take shortcuts which will break if you add such isolation later on. (Compare all the trouble Apple had adding even basic protections to classic Mac OS; they, too, ended up doing a complete redesign of the OS that, among other things, added such protections.) Windows 95 didn't have it, nor was the desire from Microsoft to do the work needed to add it there; Windows NT did have such isolation (as well as paid the cost for having it). So even though Windows NT was far from uncrashable, this difference in the level of process isolation provided by the operating system shows in their stability when compared to each other, even when comparing contemporary versions.




    *) The idea behind kernel/userspace isolation (usually referred to as "ring 0" and "ring 3" respectively in an Intel environment) is that while the operating system kernel has full access to the entire system (it needs to, in order to do its job properly; a possible exception could perhaps be argued for a true microkernel design, but even there, some part of the operating system needs to perform the lowest-level operations; there's just less of it), normal applications generally don't need to have that level of access. In a multitasking environment, for just any application to be able to write to just any memory location, or access any hardware device directly, and so on, comes with the completely unnecessary risk of doing harm to the operating system and/or other running applications.



    This isn't anywhere near as much of a problem in a single-tasking environment such as MS-DOS, where the running application is basically assumed to be in complete control of the computer anyway.



    Usually, the only code (other than the operating system kernel proper) that actually needs to have such a level of access in a multitasking environment is hardware drivers. With good design, even those can usually be restricted only to the portions of the system they actually need to work with, though that does increase complexity further, and absent separate controls, a driver can always claim to need more than it strictly speaking would need.



    Windows 95 did have rudimentary kernel/userspace and process/process separation, but it was pretty much trivial to bypass if you wanted to, and drivers (even old DOS drivers) basically bypassed it by design. Windows NT fully enforced such separation right from the beginning. The latter makes it much easier to isolate a fault to a single process, thereby greatly reducing the risk of an errant userspace process causing damage that cannot be known to be restricted only to that process.



    Even with Windows NT, back then as well as today, if something went/goes wrong in kernel mode, it would generally cause the OS to crash. It was just a lot harder to, in software, cause something to go sufficiently wrong in kernel mode in Windows NT than in Windows 95, and therefore, it was correspondingly harder to cause the entire operating system to crash. Not impossible, just harder.






    share|improve this answer






















    • 4





      Its a rare occurence for an newer answer to "kill"" an older one by simply being lot better. Congratulations.

      – Delta Oscar Uniform
      Jul 29 at 18:04






    • 10





      Linux was lucky in this case because it was designed from UNIX which had process isolation (in certain forms) from the beginning, as a result of its use on mainframes. That's one reason why Windows was considered so much less stable than *nix since the former was far more vulnerable to fatal crashes. Nowadays though, they both have excellent stability and a very strong multi-process isolation model.

      – forest
      Jul 30 at 6:54






    • 1





      NT and OS/2 were different things.

      – OrangeDog
      Jul 30 at 10:51






    • 1





      Apparently it was called "NT OS/2" internally during design, before they knew whether it was going to get IBM or Microsoft branding.

      – OrangeDog
      Jul 30 at 11:03







    • 1





      @OrangeDog See en.wikipedia.org/wiki/Windows_NT_3.1#As_NT_OS/2 and en.wikipedia.org/wiki/Windows_NT#Development I've edited the answer to clarify the naming issue.

      – a CVn
      Jul 30 at 11:17














    67












    67








    67







    You are comparing apples to motorcycles.



    Windows 95 traces its lineage back through Windows 3.x all the way to Windows 1.x and MS-DOS/PC-DOS, themselves inspired by CP/M. It was conceived and designed as a single-user, cooperatively multitasking environment in which applications have a large degree of freedom in what to do. Windows 95 moved towards a preemptive multitasking design, but still had significant cooperative elements built-in.



    The fact that it was intended as a consumer OS replacement for the combination of MS-DOS and Windows 3.1/3.11, and was to work (not necessarily provide a great user experience, but boot and allow starting applications) on as low end a system as any 386DX with 4 MB RAM and around 50 MB of hard disk space, also put huge limitations on what Microsoft could do. Not least of this is its ability to use old MS-DOS device drivers to allow interoperability with hardware which did not have native Windows 95 drivers.



    So while Windows 95 provided a hugely revamped UI compared to Windows 3.x, many technical improvements and paved the way for more advanced features, a lot of it had compatibility restraints based on choices, and to support limitations in hardware, dating back over a decade. (The 386 itself was introduced in 1985.)



    Now compare this to modern versions of Windows, which don't trace their lineage back to MS-DOS at all. Rather, modern versions of Windows are based on Windows NT which was basically a complete redesign, originally dubbed NT OS/2 and named Windows NT prior to release.



    Windows NT was basically designed and written from the beginning with such things as user isolation (multiuser support), process isolation, kernel/userspace isolation (*), and no regard for driver compatibility with MS-DOS.



    For a contemporary version, Windows NT 3.51 was released three months before Windows 95, and required at a minimum a 386 at 25 MHz, 12 MB RAM, and 90 MB hard disk space. That's quite a step up from the requirements of Windows 95; three times the RAM, twice the disk space, and quite possibly a faster CPU (the 386 came in versions clocked at 12-40 MHz over its product lifetime), and again, that's just to boot the operating system.



    Keep in mind that at the time, a 486 with 8-12 MB RAM and 500 MB hard disk was a reasonably high end system. Compare Multimedia PC level 2 (1993) and level 3 (1996), only the latter of which went beyond a minimum of 4 MB RAM. Even a MPC Level 3 PC in 1996 wouldn't meet the hardware requirements of the 1995 Windows NT 3.51, as MPC 3 only required 8 MB RAM.



    From a stability point of view, even Windows NT 3.51 was vastly better than Windows 95 could ever hope to be. It achieved this, however, by sacrificing a lot of things that home users would care about; the ability to run well on at the time reasonably affordable hardware, the ability to run DOS software that accessed hardware directly (as far as I know, while basic MS-DOS application compatibility was provided, there was no way other than dual-boot to run most DOS games on a Windows NT system), plug-and-play, and the ability to use hardware that lacked dedicated Windows NT drivers.



    And that's what Microsoft has been building on for the last about two decades to create what we now know as Windows 10, by way of Windows NT 4.0, Windows 2000, XP, Vista, 7 and 8. (The DOS/Windows lineage ended with Windows ME.)



    As another-dave said in another answer, process isolation (which is a cornerstone for, but on its own not sufficient to ensure, system stability) isn't a bolt-on; it pretty much needs to be designed in from the beginning as, if it isn't there, programmers (especially back in the day, when squeezing every bit of performance out of a system was basically a requirement) will take shortcuts which will break if you add such isolation later on. (Compare all the trouble Apple had adding even basic protections to classic Mac OS; they, too, ended up doing a complete redesign of the OS that, among other things, added such protections.) Windows 95 didn't have it, nor was the desire from Microsoft to do the work needed to add it there; Windows NT did have such isolation (as well as paid the cost for having it). So even though Windows NT was far from uncrashable, this difference in the level of process isolation provided by the operating system shows in their stability when compared to each other, even when comparing contemporary versions.




    *) The idea behind kernel/userspace isolation (usually referred to as "ring 0" and "ring 3" respectively in an Intel environment) is that while the operating system kernel has full access to the entire system (it needs to, in order to do its job properly; a possible exception could perhaps be argued for a true microkernel design, but even there, some part of the operating system needs to perform the lowest-level operations; there's just less of it), normal applications generally don't need to have that level of access. In a multitasking environment, for just any application to be able to write to just any memory location, or access any hardware device directly, and so on, comes with the completely unnecessary risk of doing harm to the operating system and/or other running applications.



    This isn't anywhere near as much of a problem in a single-tasking environment such as MS-DOS, where the running application is basically assumed to be in complete control of the computer anyway.



    Usually, the only code (other than the operating system kernel proper) that actually needs to have such a level of access in a multitasking environment is hardware drivers. With good design, even those can usually be restricted only to the portions of the system they actually need to work with, though that does increase complexity further, and absent separate controls, a driver can always claim to need more than it strictly speaking would need.



    Windows 95 did have rudimentary kernel/userspace and process/process separation, but it was pretty much trivial to bypass if you wanted to, and drivers (even old DOS drivers) basically bypassed it by design. Windows NT fully enforced such separation right from the beginning. The latter makes it much easier to isolate a fault to a single process, thereby greatly reducing the risk of an errant userspace process causing damage that cannot be known to be restricted only to that process.



    Even with Windows NT, back then as well as today, if something went/goes wrong in kernel mode, it would generally cause the OS to crash. It was just a lot harder to, in software, cause something to go sufficiently wrong in kernel mode in Windows NT than in Windows 95, and therefore, it was correspondingly harder to cause the entire operating system to crash. Not impossible, just harder.






    share|improve this answer















    You are comparing apples to motorcycles.



    Windows 95 traces its lineage back through Windows 3.x all the way to Windows 1.x and MS-DOS/PC-DOS, themselves inspired by CP/M. It was conceived and designed as a single-user, cooperatively multitasking environment in which applications have a large degree of freedom in what to do. Windows 95 moved towards a preemptive multitasking design, but still had significant cooperative elements built-in.



    The fact that it was intended as a consumer OS replacement for the combination of MS-DOS and Windows 3.1/3.11, and was to work (not necessarily provide a great user experience, but boot and allow starting applications) on as low end a system as any 386DX with 4 MB RAM and around 50 MB of hard disk space, also put huge limitations on what Microsoft could do. Not least of this is its ability to use old MS-DOS device drivers to allow interoperability with hardware which did not have native Windows 95 drivers.



    So while Windows 95 provided a hugely revamped UI compared to Windows 3.x, many technical improvements and paved the way for more advanced features, a lot of it had compatibility restraints based on choices, and to support limitations in hardware, dating back over a decade. (The 386 itself was introduced in 1985.)



    Now compare this to modern versions of Windows, which don't trace their lineage back to MS-DOS at all. Rather, modern versions of Windows are based on Windows NT which was basically a complete redesign, originally dubbed NT OS/2 and named Windows NT prior to release.



    Windows NT was basically designed and written from the beginning with such things as user isolation (multiuser support), process isolation, kernel/userspace isolation (*), and no regard for driver compatibility with MS-DOS.



    For a contemporary version, Windows NT 3.51 was released three months before Windows 95, and required at a minimum a 386 at 25 MHz, 12 MB RAM, and 90 MB hard disk space. That's quite a step up from the requirements of Windows 95; three times the RAM, twice the disk space, and quite possibly a faster CPU (the 386 came in versions clocked at 12-40 MHz over its product lifetime), and again, that's just to boot the operating system.



    Keep in mind that at the time, a 486 with 8-12 MB RAM and 500 MB hard disk was a reasonably high end system. Compare Multimedia PC level 2 (1993) and level 3 (1996), only the latter of which went beyond a minimum of 4 MB RAM. Even a MPC Level 3 PC in 1996 wouldn't meet the hardware requirements of the 1995 Windows NT 3.51, as MPC 3 only required 8 MB RAM.



    From a stability point of view, even Windows NT 3.51 was vastly better than Windows 95 could ever hope to be. It achieved this, however, by sacrificing a lot of things that home users would care about; the ability to run well on at the time reasonably affordable hardware, the ability to run DOS software that accessed hardware directly (as far as I know, while basic MS-DOS application compatibility was provided, there was no way other than dual-boot to run most DOS games on a Windows NT system), plug-and-play, and the ability to use hardware that lacked dedicated Windows NT drivers.



    And that's what Microsoft has been building on for roughly the last two decades to create what we now know as Windows 10, by way of Windows NT 4.0, Windows 2000, XP, Vista, 7 and 8. (The DOS/Windows lineage ended with Windows ME.)



    As another-dave said in another answer, process isolation (which is a cornerstone for, but on its own not sufficient to ensure, system stability) isn't a bolt-on; it pretty much needs to be designed in from the beginning. If it isn't there, programmers (especially back in the day, when squeezing every bit of performance out of a system was basically a requirement) will take shortcuts which will break if you add such isolation later on. (Compare all the trouble Apple had adding even basic protections to classic Mac OS; they, too, ended up doing a complete redesign of the OS that, among other things, added such protections.) Windows 95 didn't have it, nor did Microsoft have the desire to do the work needed to add it; Windows NT did have such isolation (and paid the cost for having it). So even though Windows NT was far from uncrashable, this difference in the level of process isolation provided by the operating system shows in their relative stability, even when comparing contemporary versions.




    *) The idea behind kernel/userspace isolation (usually referred to as "ring 0" and "ring 3" respectively in an Intel environment) is that while the operating system kernel has full access to the entire system (it needs to, in order to do its job properly; a possible exception could perhaps be argued for a true microkernel design, but even there, some part of the operating system needs to perform the lowest-level operations; there's just less of it), normal applications generally don't need that level of access. In a multitasking environment, letting just any application write to just any memory location, or access any hardware device directly, comes with the completely unnecessary risk of doing harm to the operating system and/or other running applications.
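
    To make the ring distinction concrete, here is a minimal sketch (my own illustration, not part of the original answer; it assumes a POSIX system, GCC-style inline assembly, and an x86 CPU) of a ring-3 process attempting the privileged cli instruction. The CPU faults, the kernel terminates only the offending process, and everything else keeps running; under real-mode MS-DOS the same instruction would simply have succeeded.

        /* ring3_fault.c - a user-mode (ring 3) process attempting a
           privileged instruction.  Build: cc ring3_fault.c -o ring3_fault */
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            pid_t pid = fork();
            if (pid < 0)
                return 1;
            if (pid == 0) {
                __asm__ volatile ("cli");   /* disable interrupts: ring 0 only */
                puts("unreachable on a protected-mode OS");
                exit(0);
            }
            int status;
            waitpid(pid, &status, 0);
            if (WIFSIGNALED(status))        /* the fault killed only the child */
                printf("child killed by signal %d; parent unaffected\n",
                       WTERMSIG(status));
            return 0;
        }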



    This isn't anywhere near as much of a problem in a single-tasking environment such as MS-DOS, where the running application is basically assumed to be in complete control of the computer anyway.



    Usually, the only code (other than the operating system kernel proper) that actually needs that level of access in a multitasking environment is hardware drivers. With good design, even those can usually be restricted to only the portions of the system they actually need to work with, though that does increase complexity further, and absent separate controls, a driver can always claim to need more access than it strictly needs.



    Windows 95 did have rudimentary kernel/userspace and process/process separation, but it was pretty much trivial to bypass if you wanted to, and drivers (even old DOS drivers) basically bypassed it by design. Windows NT fully enforced such separation right from the beginning. The latter makes it much easier to isolate a fault to a single process, greatly reducing the risk of an errant userspace process doing damage that extends beyond that process.



    Even with Windows NT, back then as well as today, something going sufficiently wrong in kernel mode will generally still crash the OS. It was just a lot harder, in software, to cause something to go that wrong in kernel mode on Windows NT than on Windows 95, and therefore correspondingly harder to crash the entire operating system. Not impossible, just harder.







    answered Jul 29 at 17:14









    a CVn

    • 4





      It's a rare occurrence for a newer answer to "kill" an older one by simply being a lot better. Congratulations.

      – Delta Oscar Uniform
      Jul 29 at 18:04






    • 10





      Linux was lucky in this case because it was modeled on UNIX, which had process isolation (in certain forms) from the beginning as a result of its use on shared multi-user systems. That's one reason why Windows was considered so much less stable than *nix: the former was far more vulnerable to fatal crashes. Nowadays, though, they both have excellent stability and a very strong multi-process isolation model.

      – forest
      Jul 30 at 6:54






    • 1





      NT and OS/2 were different things.

      – OrangeDog
      Jul 30 at 10:51






    • 1





      Apparently it was called "NT OS/2" internally during design, before they knew whether it was going to get IBM or Microsoft branding.

      – OrangeDog
      Jul 30 at 11:03







    • 1





      @OrangeDog See en.wikipedia.org/wiki/Windows_NT_3.1#As_NT_OS/2 and en.wikipedia.org/wiki/Windows_NT#Development I've edited the answer to clarify the naming issue.

      – a CVn
      Jul 30 at 11:17













    The decision about whether to kill a process or crash the OS generally depends on whether the problem can be isolated to the process.



    For example, if a running process in user mode attempts to read from an address that's not present in its address space, that's not going to affect anything else. The process can be terminated cleanly.
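
    As an illustration (a sketch of my own, not part of the answer; it assumes the MSVC compiler and its structured exception handling extension), the access violation is delivered to the faulting process itself, which may even catch it, and nothing outside that process is touched:

        /* av_demo.c - a user-mode access violation, isolated to this process.
           Build with MSVC: cl av_demo.c */
        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            __try {
                volatile int *bad = (int *)0;   /* not mapped in this process */
                *bad = 42;                      /* raises an access violation */
            }
            __except (GetExceptionCode() == EXCEPTION_ACCESS_VIOLATION
                      ? EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH) {
                puts("access violation caught; no other process was affected");
            }
            return 0;
        }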



    At the other extreme, if the file system running in kernel mode discovers that some data structure is not as expected, then it is wise to crash the entire system immediately, because the consequence of corrupt in-memory control structures could be loss of disk data, and that's the worst thing that could happen.



    With specific respect to the Windows NT (-2000, -XP, -7) family: the OS was designed with good process isolation from the beginning. For Windows 9x, the heritage of Windows up through 3.x required some compromises in the name of compatibility. In particular, the first megabyte of address space is common to all processes: corruption there can kill the whole system.



    TL;DR - process isolation is a day-0 design issue.






    answered Jul 28 at 19:24

    another-dave
    • 23





      All these poor people that actually spent money on Windows ME instead of 2000...

      – Delta Oscar Uniform
      Jul 28 at 19:25







    • 3





      What does 0-day design issue mean?

      – Wilson
      Jul 29 at 8:05






    • 23





      @Wilson it means it's basically impossible to retrofit and has to be considered on the first day of designing the operating system when drawing up the one-page or even one-sentence description of what you're intending to build.

      – pjc50
      Jul 29 at 9:21






    • 4





      @vilx Microsoft claimed (which I do believe accurate) that third-party drivers were responsible for the vast majority of the blue screens in Windows back in the late Windows 98 SE era. This is the justification for why they made it mandatory for drivers to pass WHQL tests and be signed in order to ship with OEM Windows.

      – UnhandledExcepSean
      Jul 29 at 14:17






    • 4





      @UnhandledExcepSean third party drivers were responsible for... - they still are, just about a month ago I had a Windows 10 PC that would BSOD daily (sometimes twice a day) due to a faulty Intel HDD driver. Not saying things haven't improved, though - the screen now has a nicer shade of blue and a QR-code!

      – Headcrab
      Jul 30 at 8:01















    Although Windows 95 introduced support for 32-bit applications with memory protection, it was still somewhat reliant on MS-DOS. For example, where native 32-bit drivers were not available, it used 16-bit DOS drivers instead. Even 32-bit applications had to be synchronized with the 16-bit DOS environment.



    A fault in the DOS part of the system would bring the whole thing crashing down. 16-bit DOS applications did not have any meaningful memory protection or resource management, and a crash could not be recovered from in most instances. And since even 32-bit applications had to interact with DOS components, they were not entirely immune either.



    Another major cause of instability was that 32-bit drivers ran inside the Windows kernel (the core of the system). That reduced the amount of memory protection they had, and meant their bugs would crash or corrupt the kernel too.



    By the time Windows 7 came around, many drivers had been moved out of the kernel, and faults in them could be recovered from much like an application crash. There are some exceptions, such as low-level storage drivers.






    answered Jul 29 at 9:12

    user
    • 1





      I'm not convinced by "drivers had been mostly moved out of the kernel". How many, what devices did they drive, etc?

      – another-dave
      Jul 30 at 11:47















    Addendum:



    Some special memory areas (e.g. the infamous "GDI resources") that all applications needed were extremely limited in size (due to being shared with 16-bit APIs, which needed segment size limits respected), and very easily ran into exhaustion, with no effective safeguards present.
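
    As a sketch of how little stood in the way (my own illustration, not code from the answer; Win32 C, linked against gdi32), a program only has to keep creating GDI objects and never delete them. On Windows 9x this drained the shared GDI heap and degraded every running application; on NT-family Windows the process merely exhausts its own per-process GDI handle quota (10,000 objects by default) and only it suffers.

        /* gdi_leak.c - deliberately leak GDI pens until creation fails.
           Build with MSVC: cl gdi_leak.c gdi32.lib */
        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            unsigned long n = 0;
            while (CreatePen(PS_SOLID, 1, RGB(0, 0, 0)) != NULL)
                ++n;                /* never DeleteObject(): this is the leak */
            printf("created %lu pens before GDI refused more\n", n);
            return 0;
        }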



    A lot of essential system APIs did not sanity-check their parameters well. If you accidentally fed them invalid pointers, or pointers to a different type of resource than expected, all kinds of unwanted behaviour could happen, especially when something in a 16-bit-shared area was involved. Getting GDI object handles in a twist ... ouch.



    Also, the system trusted the responses to certain messages too much. I remember you could make Windows 9x extremely hard to shut down properly simply by installing a WM_QUERYENDSESSION handler that silently returned FALSE every time...
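
    For the curious, a minimal sketch of such a handler (my own reconstruction, not code from the answer; Win32 C, linked against user32). Returning FALSE from WM_QUERYENDSESSION tells Windows the session may not end; on 9x this could silently stall shutdown, while modern Windows instead shows the user which application is blocking it.

        /* veto_shutdown.c - veto shutdown via WM_QUERYENDSESSION.
           Build with MSVC: cl veto_shutdown.c user32.lib */
        #include <windows.h>

        static LRESULT CALLBACK WndProc(HWND h, UINT m, WPARAM w, LPARAM l)
        {
            switch (m) {
            case WM_QUERYENDSESSION:
                return FALSE;               /* "no, the session may not end" */
            case WM_DESTROY:
                PostQuitMessage(0);
                return 0;
            }
            return DefWindowProc(h, m, w, l);
        }

        int WINAPI WinMain(HINSTANCE inst, HINSTANCE prev, LPSTR cmd, int show)
        {
            WNDCLASS wc = {0};
            MSG msg;

            wc.lpfnWndProc   = WndProc;
            wc.hInstance     = inst;
            wc.lpszClassName = "VetoDemo";
            RegisterClass(&wc);
            /* a hidden top-level window still receives WM_QUERYENDSESSION */
            CreateWindow("VetoDemo", "", WS_OVERLAPPEDWINDOW,
                         CW_USEDEFAULT, CW_USEDEFAULT, 200, 100,
                         NULL, NULL, inst, NULL);
            while (GetMessage(&msg, NULL, 0, 0) > 0)
                DispatchMessage(&msg);
            return 0;
        }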



    16-bit apps were run with a LOT of gratuitous privileges for compatibility reasons: enough to directly access (and, in the worst case, crash!) some of the hardware.






    answered Jul 30 at 20:44

    rackandboneman
            Everyone is talking about the improvements in software between Windows 95 and Windows 7, but in those 15 years there were huge advancements in hardware as well. Run an identical copy of Linux on consumer-grade hardware from 1996 and on hardware from 2016, and you will find a world of difference in system stability.



            Older hardware simply crashed more often, and it was only around 2003-2004 that things really changed. Manufacturers of motherboards, CPUs, RAM and various other hardware upped their game significantly as businesses and home users demanded better stability.



            One of the most popular motherboard manufacturers in the 90s was a company named "PC Chips", which also traded under about 20 other names. It produced shonky, poorly-soldered, barely-shielded motherboards at rock-bottom prices. A lot of the system crashes back then were due to people running those motherboards, not Windows.



            That said, Win95 was horribly crash-prone itself, and it was always a guessing game as to whether your crashes were hardware- or software-related.






            answered Jul 31 at 15:42

            John Eddowes
            • PC Chips? Did good ole IBM computers use these ugly motherboards?

              – Delta Oscar Uniform
              Jul 31 at 15:45











            • Also, where can I find info about these cheapskates? Any Wikipedia pages or something like that?

              – Delta Oscar Uniform
              Jul 31 at 15:47






            • 1





              Certainly in my case, moving from Windows 9x (I think from 95 OSR2 at the time, but it might possibly have been 98) to NT 4.0 Workstation made an enormous difference in system stability, with absolutely no hardware changes. But then again, by that time I had a powerful enough system to run NT well.

              – a CVn
              Jul 31 at 15:50






            • 1





              @DeltaOscarUniform They are still around, and almost every OEM uses something from them today. The company and brand was changed to ECS, mostly because of the reputation of the "PC Chips" brand name. Speaking of which, my first decent computer that I got on my own was a 2003-ish Socket A with an Athlon Thunderbird. The motherboard was an ECS, and ... it did die eventually; it was like the most budget board you could imagine. I found it in a gutted tower in a warehouse basement, nothing mounted in it except that board and CPU.

              – J. M. Becker
              Aug 1 at 4:32







            • 1





              Some late-90s hardware (not all, and you never knew what you got) was perfectly able to do hundreds of days of uptime under Linux/Unix (if the drivers you needed weren't bugged!), under Windows NT, or when running non-bugged single-purpose DOS equipment-control programs. It did not under most Windows 9x versions: research what the 49-day bug was if you are curious :) ... One problem with 90s hardware was still-widespread ISA hardware - easy to misconfigure, and easy to crash the system with on a hardware level if misconfigured :)

              – rackandboneman
              Aug 1 at 21:45














