My third home computer was an IBM PS/2 Model 30 286. I was already a systems-architecture nerd at that age, so I was excited to get this new machine. I believe it came equipped with 256 KiB of RAM and a 20 MB hard drive, among other things.
Of course I ran DOS for the most part, and DOS did absolutely nothing with the 286's special protected-mode features. Really, Windows 3.1 didn't do anything with them either. Later, I eschewed DOS, upgraded to 2 MiB of RAM, and installed Minix, which turned into a really fun exploration of protected mode and its strict limitations, like the 64 KiB code / 64 KiB data limit on each task.
For all its fascinating advanced features, 286 protected mode, with its segmented memory model, was more or less neglected by the consumer OSes of its time; they bided their time until the fully 32-bit 386, with its flat memory model and ability to address a huge amount of physical RAM. In retrospect, probably the correct decision. The 286 was a logical transition along the 8-to-32-bit path.
There was an OS made especially for the new 286 and its protected mode: Concurrent DOS 286 by Digital Research (Gary Kildall's company).
This OS would run strictly in protected mode, letting users take full advantage of it for multi-user, multitasking operation while still running 8086 software under emulation.
But, by an irony of fate, this only worked on the B-1 prototype stepping of the chip. When Intel went into production with the new C-1 stepping, it broke everything; Concurrent DOS was delayed[0] until it was too late, and the OS was eventually recycled as a point-of-sale system for cash-register software[1].
The 8086 emulation would work by trapping every segment register load, emulating the instruction's real-mode semantics, and then loading the new CPU state using the undocumented LOADALL instruction.
In principle this should work on any stepping. I'm not sure how Digital Research implemented it, but they might have relied on some detail of the exception handling that changed between steppings. For example, if early steppings left "invalid" values in segment registers instead of zeroing them, DR might have used those instead of going to the effort of emulating the faulting instruction.
Concurrent DOS/286 became FlexOS, which seems to be available here (though I haven't tested whether it actually includes the real-mode emulation; that might have been a feature only present in the prototype):
In any case, this way of running real mode code might just be too impractical. For "large model" programs using more than 64K data, every pointer dereference would have to be trapped and emulated, taking possibly thousands of clock cycles instead of just a few. You could optimize accesses to code and static data segments by defining protected-mode selectors for those, but running code that e.g. walks a linked list allocated on the heap might be intolerably slow.
Also, all of the memory allocated to the program would have to be one contiguous block, since there is no linear-to-physical address translation on the 286.
I have no idea about the actual technical details, but here is another take on this story[0] (at the end of the page, under "The true story or..."):
Did you ever own an original IBM AT? On the IBM AT, the power-up POST routine printed the words "AT Multiuser System" at the top of the screen during power up. What the heck is the AT Multiuser System??? The AT prototypes didn't boot MS-DOS, they booted a secret new OS. This new OS (I don't know its code name) was a multiuser DOS. The design was simple: you buy an AT, and it comes with one or more 8-port serial cards. You buy Wyse terminals (the ones with the PC emulation mode, as opposed to modes like VT100 emulation), put one on each desk, then run serial cables to your AT. This OS was a true multitasking OS, and the AT was a small, cheap minicomputer. THAT was what the AT was designed to be. Period.
So what the heck happened?
IBM had a successful project, the first manufacturing run filled the warehouse, the software was ready, and it was time to start packaging the retail boxes. The first of the manufactured systems used up the last of the initial supply of 286 chips, before Intel switched to a lower-cost run of the 286 chips. The new 286 chips had some minor changes to increase yields, allowing the price to be lowered. Apparently the yields were low on the original version that IBM started manufacturing with. At some point IBM found a bug in the OS that slipped through the QA tests... It was a subtle bug that only had an impact when the machine was multitasking with multiple users, and as it turns out it was only present on the systems with the new 286 chips.
There was a lot of finger pointing and hand-wringing going on over it. Something was broken in the protected mode functions. IBM sales execs were either going to have to insist Intel use the more expensive low-yield version of the 286 chip, OR IBM was going to have to scrap the multiuser system. IBM is managed by the sales department. Someone in sales did the math... we use the expensive 286 chip, and sell one AT to each customer’s office of 15 to 20 people... OR we sell them 15 or 20 machines with a cheaper chip. Which makes more money? It was decided that scrapping the multiuser system would increase system sales volume in theory. Even if the 15 to 20 machines were PC’s instead of AT’s, more money was made selling more systems than selling one.
So what happened to the multiuser system? The 8-port serial boards were ground into dust, except for a few units that escaped with the beta testers (I used to have a bunch of these). The Wyse terminals still had the PC emulation mode, but the expected sales volume of those never happened, so Wyse suffered as their R&D investment in the project was scrapped. DR got paid for their work on the OS, but they never got to see any revenue from sales. Bill Gates went from being a footnote in history to being who he is today. The first widely distributed multitasking OS never saw the light of day for several more years, and Gary went from being the grandfather of the small computer industry to a footnote. The operating system sat on a shelf for a couple years, but was dusted off and used when IBM started developing scanner cash registers. The new cash registers needed some sort of network or centralized computer, so DR got to sell a couple hundred copies of an OS that was supposed to sell millions (billions by today). The software got a new name, ConcurrentDOS.
Is there any detail as to what the chip did before the change, and no longer did after? Did Intel change the spec of the chip, or was IBM leveraging undocumented behavior?
The story may be opinionated, but Intel and DR certainly went back and forth a lot on this, up until a late E-2 stepping, as recounted in another InfoWorld article[0].
Agree! I don't have an original AT, but that string appears nowhere in the ROM images I found. There is also a fairly accurate web-based emulator, and it doesn't print this message:
I also doubt that IBM had anything to do with the development of Concurrent DOS, which AFAIK was based on earlier multitasking versions of CP/M. It's highly unlikely that IBM worked together with DR on a "secret" 16-bit protected mode OS, only to throw it away and then start over with OS/2, this time in collaboration with Microsoft.
My dad had an original IBM AT at work, back in '85 or '86, and I don't remember seeing that message. I rebooted the machine several times. It was sure built like a tank though. And I loved that keyboard, even as a kid.
And now I know why that weird version of DOS existed a few decades ago when I was working on cash registers. Back then you ran across a lot of DOS-compatible OSes and documentation. As long as all the same commands worked, you were usually fine.
There was also OS/2 which was co-created by Microsoft and IBM as the DOS replacement. At IBM’s insistence it was designed to target the 286 protected mode.
Bill Gates grumbled a lot about that, he would have preferred to wait to use the 386 instead. And he was right because no interesting applications emerged for OS/2 1.x.
>At IBM’s insistence it was designed to target the 286 protected mode.
> Bill Gates grumbled a lot about that,... And he was right because no interesting applications emerged for OS/2 1.x
286 protected mode wasn't the issue... Windows 3.0 was also largely built around 286 protected mode and it's where most of the interesting PC application development happened for years.
OS/2's early issues were really related to the cost of the OS (2-3x the price of Windows), the hardware needed to run it (at least 3 MB of RAM), and delays in both its networking and GUI capabilities. OS/2 also required a wholesale commitment, due to the lack of a decent DOS story, in contrast to Windows (which ran on DOS, more or less).
This is a good point. But there’s a nuance in that OS/2 1.x used 286 protected mode the way Intel had intended it, by booting into it and never returning to real mode (which was supposed to be impossible).
Whereas Windows 3.0 was a so-called DOS extender: it switched to protected mode for an application but could also come back. That kind of execution model wasn’t envisaged in 1985 when the OS/2 project started.
Steven Sinofsky writes about it in his personal Microsoft history:
’DanN shared with me what he was most excited about—one of the key “secrets” of Windows 3, which was how the product enabled protected mode but could also remain compatible with old MS-DOS programs. David Weise (DavidW) and Murray Sargent (MurrayS) had invented some novel uses of the Intel chipset that even Intel did not anticipate, which enabled these efforts. One programming trick called PrestoChangeoSelector was a key “hack” they developed and later became an absurd symbol of “secret” application programmable interfaces (APIs) in Windows (absurd, because it was supposedly secret, when in fact it was right there to see). Dan told this story with great Microsoft pride, as the development of Windows 3 and these techniques represented much of what Microsoft did so well in those days. Hack.’
> But there’s a nuance in that OS/2 1.x used 286 protected mode the way Intel had intended it, by booting into it and never returning to real mode (
OS/2 had the ability to run DOS programs in a DOS box. My recollection is that OS/2 had to switch back to real mode to do this (thereby suspending all the OS/2-native processes).
Windows 3.0 in standard (286) mode did something similar.
OS/2 1.x did switch from protected back to real mode by resetting the CPU, in order to run a single DOS application in the "penalty box". Device drivers even had to be able to run seamlessly in both modes!
That article is paywalled, but what you quoted reads like pure bullshit. PrestoChangoSelector was used in protected mode to make code segments writable, and had nothing to do with DOS compatibility:
Yes, it was. I evaluated OS/2 1.0 professionally, when it was new; did you?
The 286 protected mode was an issue, because it meant that OS/2 could not multitask dos applications — exactly as Concurrent DOS could not on the shipping 286 hardware.
> Windows 3.0 was also largely built around 286 protected mode
Not really, no.
1. Windows 2 had an edition built around 286 protected mode. This was nothing new for Windows 3.0. The problem is that the 286's protected mode didn't deliver anything that was very useful for dos applications, and in the 1980s, DOS applications were the sole driving factor behind the PC industry.
In fact, Windows/286 actually delivered one main benefit: you got an extra 64 kB of RAM. That was it. It wasn't really worth having.
Windows 2 also had a 386 edition, generally known as Windows/386. That delivered the core functionality that was the useful thing in the 386 in the 1980s: hardware assisted multitasking of DOS applications.
2. You cite Windows 3.0, but that's not relevant to a discussion of OS/2 1.X, because Windows 3.0 is what happened as a result of OS/2 1.X.
OS/2 1.x was a 1980s product. Windows 3.0 came out in 1990, a few months after OS/2 1.2. Windows 3.0 happened in response to the failure of OS/2 1.X, and OS/2 1.X failed because it ran on the 286 processor, when it should've run on the 386 processor — as Microsoft wanted.
Windows 2 was able to multitask dos applications, in software, but the problem with doing it that way is that they all had to fit into 640 kB of RAM. And by the late 1980s, a single DOS application barely fit into 640 kB!
Windows 3.x was able to multitask DOS applications as well, and it could do it on the 8086, the 286, and the 386. The difference is that on a 386, each DOS application got its own 640 kB. So if you had 4 MB of RAM, you could multitask a whole bunch of them.
And it sold because the single same edition ran on all three processors, so you didn't have to choose which one you needed in advance, and buy the right product. The one product did it all. And if you upgraded the hardware, you got the additional facilities, with that same copy of the software. (Remember, no product activation in those days!)
> Yes, it was. I evaluated OS/2 1.0 professionally, when it was new; did you?
I was 12, so just a user and programmer. (Dating back to Windows 2.11)
> 1. Windows 2 had an edition built around 286 protected mode. ... In fact, Windows/286 actually delivered one main benefit: you got an extra 64 kB of RAM.
Windows/286 didn't use protected mode; all it let you do was run in real mode with the A20 line unmasked (which is where the extra almost-64K came from). It probably did terrible things for the marketing of 80286-specific software for a while.
Windows/386 did use protected mode to run multiple DOS apps, but the Windows subsystem ran just like any other DOS app - in real mode.
It was Windows 3.0 that migrated the Windows subsystem over to protected mode, mostly with existing apps. Run Excel on Windows/386 and you have 640K plus whatever EMS you might have. Run Excel on Windows 3.0 (even on a 286) and you have direct, albeit segmented, access to the full installed RAM. You're right, though, that none of this helped DOS apps.
> Windows 2 was able to multitask dos applications, in software, but the problem with doing it that way is that they all had to fit into 640 kB of RAM.
I've done this with DesqView on an 8088 also. It worked well enough to run a couple BASIC interpreters, although it wouldn't have been useful at all for anything real.
It was interesting to play around with the total lack of memory protection. The two BASICs (with the same DEF SEG) could easily share data with PEEKs and POKEs to the same memory locations.
There were a couple things Windows did better regarding DOS when compared to OS/2:
* Windows ran on a DOS file system, so data portability was transparent.
* You could easily quit Windows to get back to something close to whatever DOS environment you had before.
* There was a runtime version of Windows that could be packaged with applications (Excel mainly, iirc) to make it possible to sell Windows apps to DOS users.
* Windows had support for LIM/EMS (which users might already have to run large Lotus 1-2-3 spreadsheets).
None of this was a great way to run DOS apps under Windows, but it did make it easier to transition from DOS to Windows slowly, without a wholesale jump, and the risk that entailed. It was easier (and cheaper) for a DOS user to experiment with Windows than it was with OS/2. The lower barrier to entry is worth a lot.
Given how [cough] successful the 386-specific version of OS/2 was, even after IBM put a load of money and marketing behind it...perhaps the real problem was not the 286's protected mode.
I don't know. MS and IBM wasted several years trying to make OS/2 for 286. If they'd started on a 386-only system back in 1985 when the cooperation was signed, maybe they could have shipped a compelling 32-bit GUI operating system by 1989 and avoided the acrimony that broke the alliance. (Probably not though — the technology choices were only one ingredient in the divorce.)
This is absolutely true, but the thing is, SCO didn't have to be backwards compatible with a damned thing.
Also note that the base edition of Xenix didn't support graphics, sound, or networking, or come with a C compiler or anything. It was basically a runtime for text-only apps, and most machines only ran a single app. I put in dozens of boxes running Xenix back then.
In my world, the bulk of them ran one binary: an accountancy application provided by Tetra Corporation, and we sold one called Tetraplan.
SCO Xenix was a wonderful little operating system, extremely resource efficient, and astonishingly stable in use. Some of those boxes had uptimes in years. But that's relatively easy when it's not connected to anything, not connected to the outside world in any way, and only runs a single program.
I am absolutely not denigrating Xenix.
Xenix was my first experience of UNIX of any kind, I supported it for years, right into the 21st-century for one exceptional customer, and it caused me remarkably little pain of any kind. It was great. But it was not a mass market, general-purpose product.
The funny thing is that Xenix was a Microsoft product. They originally licensed it to SCO who created the x86 port, then eventually sold it entirely to SCO in 1987.
If things had turned out slightly differently, Microsoft's 32-bit OS in the 1990s would have been Unix.
OS/2 286's backwards compatibility was miserably limited. Xenix selling the networking and C compiler separately was a business decision, not a way to make anything easier for the developers. Running "just one app" was also a business decision. And contrast that Xenix stability with umpteen million other 1980's and 90's computers which did one job, with no network...but ran crash-tastic MS-DOS.
OS/2 had both MS-DOS 5 and Windows 3.x compatibility, first by running a copy of Windows in a VDM and then later through integration. OS/2 was also very damn stable.
286 protected mode was bug-ridden and lacked features. 386 protected mode fixed most of them to make it more stable and usable.
Before I got a 286 around 1987 or so I had a TRS-80 Color Computer and then a Coco 3. Those booted into BASIC but could run OS-9 which was a miniature Unix-like operating system w/ a C compiler, bytecode based structured BASIC, and even a graphical windowing system.
I did all the data entry and analysis for a customer survey my uncle needed and spent the money to get a new 286 machine at 12MHz which utterly devastated 8-bit machines in performance. It was so fast that it could emulate the Z80 for CP/M development much faster than any real Z-80!
At that time people were jumping ship from 8-bit platforms frequently onto the PC because it had finally become superior in all respects, you could even write good games for the EGA. Some people were going to Amiga and such platforms but the 68k turned out to be a dead end.
My uncle told me later on that the information I'd gotten for him had saved his employer a vast amount of money, knowing that I should have asked for enough money to buy a 386 machine and a used car.
AIUI, another big problem with the 80286 was that it did not support returning to real mode after switching to protected mode. This made compatibility a huge issue, which was a big problem for Microsoft. The 80386, besides marking the switch to 32-bit, added virtual 8086 mode, which allowed emulating real mode after having already switched to protected mode.
This "virtual 8086" mode is what was used for the VMM kernel of Windows 9x, and later the NTVDM system on 32-bit versions of Windows NT. I remember being able to run some DOS software on Windows XP (but it wasn't perfect).
The tech side of this is right but the corporate history may be a bit off
> The fastest 80286 CPU produced was the 25 MHz variant manufactured by Harris, who were later acquired by Intersil.
I worked at Harris (post-split with Intersil) and we still used the same parking lot. Everyone at both companies said that Harris acquired Intersil, merged it into Harris Semiconductor, then split it off from Harris as Intersil once more. Intersil never acquired Harris.
My folks bought a 286-20 from Dell and it was the fastest machine I had ever worked on. Early in my career as a computer entrepreneur I incrementally upgraded it from mono/Hercules to VGA, added a '287 math-co and eventually a bigger hard drive. I think they got 10 years out of that box before jumping to a 486-DX-100.
For my honors thesis in college I splurged on an 80386-40 and ULSI math co-processor. It was a 386 running like a 486. For a brief and shining moment I had the fastest personal machine on campus. As always the moment was fleeting but I didn't replace that machine until Windows 95 came out.
> Skip forward a full 5 years to 1991 and the most budget-oriented PC was still running a green-screen 8088 CPU for the price of around £300!
This is true and the reason is that most people were using PC's to run operating systems from Microsoft, which at the time treated everything in real mode, thus effectively like an 8088. The fact that an 80386 had VAX-like virtual memory didn't mean a damn thing, unless you were trying to run some kind of Unix or Unix-like or at least a "DOS extender".
> The PC market was very good at catering to every price point.
No wonder: processors were actually cheap, but higher price points were created artificially. A great example is the Intel 486SX ($333) vs. the 486DX ($588). In fact, Intel had to disable the co-processor in a 486DX in order to produce a 486SX, so the SX's net cost per unit was higher.[1] The referenced article has many more such anecdotal examples.
It is a little like profiling the 27.5 MHz VLSI VL82C201 80286, which would out-benchmark an 80386 in DOS mode, but at the end, the 80386 could boot Linux and become a workstation and the 80286 could not, and there was no point in trying to run Xenix.
> I don't understand the fascination with obsolete commodity hardware on HN.
It's pretty simple - on the emotional side there's nostalgia, and on the rational side there's the fact that old hardware is typically much easier to understand.
A modern CPU has so much technology that there's probably no person in the world who can understand all of it: cache coherency, out-of-order execution, branch prediction, TSX instructions, the list goes on.
In contrast, a dedicated person could feasibly understand all the logic in an old CPU and keep a good mental model of how all of it works.
> old hardware is typically much easier to understand.
The old hardware is also more trustworthy -- perhaps as a corollary to said easier comprehensibility, or simply thanks to unwitting limitations-as-a-feature. There weren't computers all the way down, each with its own bugs, security issues, modes of surveillance/exfiltration, and ability to poke memory with impunity.
Old stuff is usually simple enough to be fully understood by a single person. This is very much not true of any modern hardware. As a result, many people passionate about computing machinery like to learn about and study old hardware. This is especially true of the 8 bit era, but much 16 bit stuff straddled the 8 bit line and simplicity.
It's always shocking to me that in my lifetime (programming on a VIC-20 as a kid), computers have gone from something that you could understand at all levels to a deep pile of abstractions that no one can honestly claim to understand without a lot of handwaving.
I think we also marvel at how much the early programmers/hardware guys were able to do with such limited resources. Useful applications could be written that take kilobytes of RAM, and the optimizations to get there are reasonably easy to understand.
Well, when the hardware is simpler the OS environment is (usually) simpler, and the application software will likewise end up simpler. There is no reason that one couldn't write something like VisiCalc in AMD64 ASM, and I am sure that someone somewhere has done so. The main problem is that no one today would care to use that application. We've come to want and expect far more. As a result, software is a fluid that will always expand to fill its container.
> I don't understand the fascination with obsolete commodity hardware on HN.
Older hardware had (obviously) more restrictions and little to no documentation compared to today's. That ignited creative exploration and reverse-engineering that is rarer or non-existent now. For example, GPUs today are more or less black boxes, while back then you had to study some basic linear algebra yourself to transform and display objects in 3D.
> What would be fascinating would be if modern software could run as fast as 1980s/1990s software with the same functionality...!
Interestingly enough, when you advocate for that, or even worse try it, the effort gets shunned with the "no business value" and "hardware is cheap, anyway" labels.
However, working with snappy software makes life more livable and enjoyable.
I was trying to find a way to copy the ruler in Libre Office Writer the other day, and had a flashback to how easy it was in 1990 in ClarisWorks on a Mac.
Nostalgia of having lived a unique period in human history when computer capabilities were evolving by leaps and bounds by the year. Nothing is even close to that nowadays.
Like, I brought my first 386 SX-25 MHz home from the college campus when vacation arrived, and it replaced the ZX Spectrum my younger brother was using. We arrived home late in the evening; my brother was nearly delirious with excitement but didn't have much time to play with the PC because our parents demanded he go to sleep. After all, what's the big deal, there's plenty of time to do whatever in the morning (parental sensitivity never changes).
He told me he barely closed an eye that entire night, checking the clock every half hour waiting for the dawn. When I woke up it was day outside and my brother was typing on the PC keyboard.
Today (almost 30 years later) he tells me nothing in the world can replicate that experience. You could offer him the most preposterous Windows gaming machine or a ridiculously expensive Mac; he'd check it out for like 5 minutes, say "nice", then shrug and go to sleep :)