Reasonably Priced Business Computer (IBM/360)

The IBM/360 was the first mass-market mainframe, designed as a general-purpose computer affordable for mid-sized businesses yet powerful enough for large enterprises.

Background

In 1962, IBM’s revenue was $2.5 billion. CEO Thomas Watson Jr. believed in the vision of a general-purpose computer that supported timesharing, the ability of a computer to do multiple things at once. He invested a staggering $5 billion ($42.5 billion adjusted to 2019), double the company’s annual revenue, to develop the IBM/360. More than 100,000 people scattered across 165 cities worked together to create it.

One key feature of the IBM/360 was forward and backward compatibility, along with upgradability. Before the IBM/360, businesses purchased a computer and, when they outgrew it, purchased a new one. In contrast, the IBM/360 supported add-on peripherals that increased the capacity of the computer. Additionally, a significant amount of older IBM software ran under emulation.

Prior to the IBM/360, computers were typically custom-tailored to the task at hand. Scientific computers were different from business computers, and a computer that ran an accounting system was different from one that ran inventory management. Much as Intel later created a general-purpose microchip, IBM created a general-purpose computer.

The IBM/360 is one of the few computers that both sits in the Computer History Museum and remains in use, 55 years after its introduction. Even though the vast majority of smartphones contain more computing power and memory, the 360 oftentimes does one task, does it well, and has done it for decades. Businesses should move those tasks to newer computers, but the 360 is so reliable that migration is oftentimes a low priority.

Third-Party Peripheral Market

Besides forward and backward compatibility with other computers, IBM allowed third-party companies to create certified peripherals for the 360. While this idea seems common now, it was a groundbreaking experiment when the 360 launched. “Half a million saved is half a million earned,” read one advertisement from third-party peripheral makers selling low-cost, high-quality add-ons.

Success

The IBM/360 was incredibly successful. IBM was unable to keep up with orders for years. Eventually, even the Soviet Union copied it and named their System/360 knockoff the “Ryad” computer. By 1989, the 360 and successor computers accounted for more than $130 billion in annual revenue.

Electronic Airline Reservation System (SABRE)

As the Cold War heated up during the 1950s, the United States installed an enormous number of missiles, radars, and nuclear weapons to detect and respond to a nuclear attack. WWII-era radars were good enough for propeller planes, but the delay between detection and analysis proved too slow for jet aircraft and missiles.

SAGE

As the first step in a master defense plan, the United States created a series of computer-assisted command-and-control centers. These featured an MIT-designed computer system called Semi-Automatic Ground Environment, or SAGE.

Enormous computers automatically translated input from radar stations into graphics showing airborne threats and trajectories across large parts of the world. The SAGE systems were finished in 1963, just in time to be rendered obsolete. They were replaced by better systems under the control of the North American Aerospace Defense Command (NORAD).

Mr. Smith, meet Mr. Smith

In 1953, American Airlines president C.R. Smith sat next to IBM salesman R. Blair Smith. C.R. described American Airlines’ travails handling an ever-increasing number of flights around the world. Blair thought a commercial version of SAGE, which tracked countless flights, might be the starting point of a solution.

Eventually, IBM and American Airlines worked together to build a “Passenger Name Record” (PNR) system to track all people and flights. Declassified SAGE technology formed the core of the system. Reflecting that the project was experimental, American Airlines named the system the Semi-Automated Business Research Environment (SABRE).

Sabre went live in 1964. Rather than the slow and error-prone card-based system then in use, an IBM mainframe computer tracked everything. Reservations, flight check-ins, schedules: Sabre handled it all. It took 400 person-years of effort and cost just under $40 million ($385 million adjusted to 2019) to develop. Other airlines created their own reservation systems, but Sabre went online a year earlier and proved more reliable.

Sabre Takes Over the World

In 1972, travel agents still called airlines to inquire about routes, fares, and availability. Some knew about Sabre and asked to access the system directly, adding value and lowering costs for both the travel agents and the airlines. American Airlines agreed and, during the 1970s, granted access to authorized third parties.

Eventually, other airlines joined and Sabre offered the ability for travel agents to find the lowest priced fare across all airlines, not just American Airlines flights. By the 1980s the system, in use by 130,000 travel agents worldwide, enabled basic searching through proprietary consumer computer networks.

By the 1990s it became clear that Sabre did not belong in the American Airlines IT department. In 1996 it was spun off into its own company, The Sabre Technology Group. Today, Sabre technology powers much of online airline search.

Time-Sharing/Multitasking Computer

Early Computers

Early computers stored programs and data on punch cards. Most cards held 80 characters, which is why early terminals typically displayed 80 characters per line. Punch cards are exactly what they sound like: physical cards. Each card is one line of a computer program or one piece of data. As users typed, a machine punched holes representing letters and numbers. Programs were a literal pile of cards, with the data after the program.


For example, if a user wanted to compute the average, median, high, and low figures in a set of data, they would write a program on a set of cards telling the computer to analyze the data. Next, they would physically place the data cards behind the program deck. The whole stack was then put into a special card reader that read the program cards one after another, followed by the data. Finally, the computer performed the calculations and printed the results.
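
As an illustration only, here is the same kind of job in modern Python; an actual deck would have held something like a FORTRAN program, and the numbers below are invented.

```python
# A stand-in for the "program" cards: compute average, median, high, and low.
# The values below play the role of the data cards.
from statistics import mean, median

data = [42, 17, 88, 5, 63, 29]

print("average:", mean(data))
print("median: ", median(data))
print("high:   ", max(data))
print("low:    ", min(data))
```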

Besides being clunky, loud, and wasteful of a massive number of paper cards, this system wasted an enormous amount of expensive computing power. Computers, which cost millions of dollars in the 1960s, sat idle while slowly reading a program; they could not do anything else until the card-reading process finished. The problem was especially pronounced at universities, where many students shared one computer. Students waited in line while the expensive computer did nothing. Students and the computer were both idle.

Time-Sharing

In response, companies and universities built computers and operating systems that did more than one thing at a time. To get around the card-reader issue, these new computers used terminals. While one person was typing a program, using little processing power, another could be running their program, which required more computing power.

The computer could still only do one thing at a time. However, by switching back and forth between tasks, it appeared to do multiple things at once. Rather than sitting idle, the CPU was almost always humming away doing something, whether waiting for keystrokes, compiling a program, or running a piece of software.
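
Here is a toy sketch of that switching idea in Python. The job names and step counts are invented, and real systems rely on hardware timer interrupts to preempt tasks; cooperative generators merely stand in for that here.

```python
# Each "job" yields after every small unit of work; the scheduler gives each
# job one turn on the single CPU, then moves to the next, so all of them
# appear to make progress at once.
from collections import deque

def job(name, steps):
    for i in range(steps):
        yield f"{name}: step {i + 1} of {steps}"

def round_robin(jobs):
    queue = deque(jobs)
    while queue:
        current = queue.popleft()
        try:
            print(next(current))   # run one time slice
            queue.append(current)  # send the job to the back of the line
        except StopIteration:
            pass                   # job finished; drop it

round_robin([job("compile", 3), job("edit", 2), job("payroll", 4)])
```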

IBM OS/360, Unix, VMS

This new type of operating system, which supported multiple people doing multiple things at once, is called a time-sharing system. IBM released the first modern time-sharing operating system, OS/360, in 1966. Fred Brooks led the engineering effort and later wrote a seminal project management book about building the operating system, The Mythical Man-Month.

Other systems soon followed including, notably, Multics and its successor, Unix. Later, DEC’s VMS operating system was also especially good at multitasking. Unix eventually gave rise to Linux, which powers much of today’s internet, and to BSD, which became the core of Mac OS. Eventually, Microsoft hired away many VMS engineers to create Windows NT.

An evolution of timesharing remains the reason that servers, personal computers, and even phones can do multiple things at once.

Machine Translation

Background

In 1933, Soviet scientist Peter Troyanskii presented “the machine for the selection and printing of words when translating from one language to another” to the Academy of Sciences of the USSR. Soviet apparatchiks during the Stalin era declared the invention “useless” but allowed Troyanskii to continue his work. He died of natural causes in 1950, a noteworthy accomplishment for a professor during the Stalin era, but never finished his translation machine.

Early IBM Machine Translation

In the US during the Cold War, Americans had a different problem: there were few Russian speakers. Whereas the Anglophone countries pushed out countless materials for learning English, the Soviet Union produced far less. Furthermore, spoken Russian was different from the more formalized written Russian. As the saying goes, even Tolstoy didn’t speak like Tolstoy.

In response, the US decided the burgeoning computer field might be helpful. On January 7, 1954, at IBM headquarters in New York, an IBM 701 automatically translated 60 Russian sentences into English.

“A girl who didn’t understand a word of the language of the Soviets punched out the Russian messages on IBM cards. The “brain” dashed off its English translations on an automatic printer at the breakneck speed of two and a half lines per second.

“‘Mi pyeryedayem mislyi posryedstvom ryechyi,’ the girl punched. And the 701 responded: ‘We transmit thoughts by means of speech.’

“‘Vyelyichyina ugla opryedyelyayetsya otnoshyenyiyem dlyini dugi k radyiusu,’ the punch rattled. The ‘brain’ came back: ‘Magnitude of angle is determined by the relation of length of arc to radius.'”

IBM Press Release

Georgetown’s Leon Dostert led the team that created the program.

Blyat

Even IBM noted that the computer cannot think for itself, limiting the usefulness of the program for vague sentences. Apparently, nobody at Georgetown or IBM had ever heard real Russians speak, or they’d have known that “vague” is an understatement for a language with dozens of ways to say the same word. Furthermore, the need to transliterate the Russian into Latin letters, rather than typing in Cyrillic, no doubt introduced further room for error.

In 1966, the Automatic Language Processing Advisory Committee, a group of seven scientists, released a more somber report. They found that machine translation is “expensive, inaccurate, and unpromising.” The message was clear: the best way to translate to and from Russian, or any other language, is to learn the language.

Progress continued, usually yielding abysmal results. Computers would substitute dictionary words in one language for comparable words in another, with results oftentimes more amusing than informative.
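
Below is a minimal sketch of that word-for-word approach in Python. The tiny dictionary is invented for illustration; the sample sentence is the one from IBM’s 1954 demonstration.

```python
# Early "translation": look up each word independently and substitute it.
# Word order, grammatical case, and idiom are all ignored, which is exactly
# why the results were oftentimes more amusing than informative.
dictionary = {
    "мы": "we",
    "передаем": "transmit",
    "мысли": "thoughts",
    "посредством": "by means of",
    "речи": "speech",
}

def word_for_word(sentence):
    return " ".join(dictionary.get(word, f"[{word}?]") for word in sentence.lower().split())

print(word_for_word("Мы передаем мысли посредством речи"))
# -> we transmit thoughts by means of speech
```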

Towards Less Terrible Translations

One breakthrough came from Japan in 1984. Japan favored machine translation because few Japanese people learned English. Researcher Makoto Nagao came up with the idea of searching for and substituting phrases rather than individual words. This yielded far better, but still generally terrible, results.

Eventually, in the early 1990s, IBM built on Nagao’s method, feeding in accurate manual translations and building an enormous database of word frequencies. This led to “statistical translation,” which was significantly less terrible.
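
A toy sketch of the statistical intuition follows, in Python. The miniature “corpus” of hand-translated sentence pairs is invented, and real systems (such as IBM’s statistical models) used far more sophisticated alignment; the point is only that frequency counts, not grammar rules, drive the choice of words.

```python
# Count how often each Russian word appears alongside each English word in
# hand-translated sentence pairs, then translate by picking the most
# frequent pairing for each word.
from collections import Counter, defaultdict

corpus = [
    ("мы передаем мысли", "we transmit thoughts"),
    ("мы говорим", "we speak"),
    ("передаем сигнал", "transmit signal"),
    ("мысли и слова", "thoughts and words"),
]

counts = defaultdict(Counter)
for ru_sentence, en_sentence in corpus:
    for ru_word in ru_sentence.split():
        counts[ru_word].update(en_sentence.split())

def translate(ru_word):
    if ru_word not in counts:
        return f"[{ru_word}?]"
    return counts[ru_word].most_common(1)[0][0]

print(" ".join(translate(w) for w in "мы передаем мысли".split()))
# -> we transmit thoughts
```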

As the World Wide Web shrank the world, the need for automated translation grew, and the vast majority of systems used some type of statistical translation. They continually improved to the point where Google Translate could pretty much help decipher, say, a bill.

Modern Translating

Finally, in 2016, neural networks and machine learning (artificial intelligence) started to produce vastly superior machine translations. All of a sudden, translations were actually readable. As of 2019, the best online translation engine, German-based DeepL, is entirely AI-powered.

Microcomputer Operating System (CP/M)

Operating systems tie the parts of a computer together, transforming it from silicon into something we can interact with and use.

Gary Kildall

In 1973, Gary Kildall wrote the first widely used microcomputer operating system, CP/M, which gained popularity over the years. Kildall had a Ph.D. in computer science from the University of Washington. He created a simulator for the Intel 4004 and consulted with Intel’s (then) three-person software group, creating PL/M, the first microcomputer compiler, for the Intel 8008.

It took a year, but Kildall eventually built a controller for the disk, computer, and memory (a full operating system), calling it Control Program/Monitor, or CP/M. Out of that work also came the basic input/output system, BIOS, which still exists today.

There were many knock-offs of Kildall’s operating system. One was called QDOS, short for “Quick and Dirty Operating System” (DOS stands for Disk Operating System). Kildall had a thriving business and seldom bothered hiring lawyers to shut down the small copycats. Most computer manufacturers preferred the original and were unwilling to violate copyrights.

Working on the IBM PC in stealth, IBM approached Kildall in the late 1970s to license his operating system. Dorothy, Kildall’s wife, was running the business. When IBM arrived for discussions, she refused to sign a one-sided non-disclosure agreement, concerned it would impede her ability to work with other companies that were buying their software. IBM left.

Microsoft

Kildall had been working with Microsoft, which created programming languages. He had a theory that one company should be dominant in operating systems, another should focus on programming languages, and a third or more on user software (e.g., word processors). Each should have multiple competitors, he believed, but they should not break out of their silos. Keeping them separate, he reasoned, would prevent monopolization and the stagnation of technological progress. Microsoft was aware of both Kildall and his theory.

After IBM failed to make progress on its own operating system, the lead engineer for the IBM PC suggested bringing in Microsoft’s Bill Gates, whom they’d been working with on programming languages. Lead IBM engineer Philip “Don” Estridge personally liked Gates (he had worked with Gates’ mother on a charity board), and Gates’ father was a prominent attorney. Gates had already signed IBM’s nondisclosure.

IBM asked Gates if he could produce an operating system. Gates agreed, even though neither he nor anybody else at Microsoft had ever built an operating system. He then quickly contacted Tim Paterson, the “inventor” of QDOS, and licensed it for $25,000 (later paying another $50,000 and hiring Paterson as an early Microsoft employee).

Licensing Wars

Microsoft licensed QDOS to IBM for little money but reserved the right to sell the software to other computer makers as Microsoft DOS, or MS-DOS.

IBM eventually also licensed CP/M, agreeing to let buyers choose. However, IBM sold CP/M computers at a much higher price. Since the operating systems did the same thing (QDOS was a copy of CP/M), buyers purchased the less expensive MS-DOS. More importantly, Microsoft went on to sell countless copies of MS-DOS to other computer makers, who knew it shipped with the reference computer, the IBM PC.

Microsoft eventually evolved MS-DOS into Microsoft Windows. Kildall sold his business to Novell and died, about three years later, in a bar fight.

Kildall’s children, in a memoir drawn from an autobiography he left behind when he died, wrote:

“Gary viewed computers as learning tools rather than profit engines. His career choices reflect a different definition of success, where innovation means sharing ideas, letting passion drive your work and making source code available for others to build upon. His work ethic during the 1970s resembles that of the open-source community today.”


Floppy Disk

Floppy disks allowed inexpensive, portable storage of digital information. Floppies were faster, more flexible, more convenient, and lower cost than tape drives. They made computers simpler to use and increased productivity for computer operators, who no longer had to load tapes.

David Noble, an IBM engineer, invented the floppy disk.

His first floppy disk was 8 inches across. Eventually, that evolved into the 5.25-inch disk used in the original IBM PC, which held 360 kilobytes. Over time, the more rigid 3.5-inch diskette, which held 1.44 megabytes, became more common. For comparison, there are 1024 kilobytes in a megabyte and 1024 megabytes in a gigabyte. As of 2019, a 128GB flash drive, which stores the equivalent of roughly 90,000 3.5-inch disks, costs $16.
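
A quick back-of-the-envelope check of that comparison, using the binary units defined above:

```python
# 1 MB = 1024 KB and 1 GB = 1024 MB, as stated above.
flash_drive_mb = 128 * 1024        # a 128 GB flash drive, expressed in megabytes
floppy_mb = 1.44                   # capacity of one 3.5-inch diskette
print(round(flash_drive_mb / floppy_mb))   # about 91,000 diskettes
```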

Noble’s floppy disk team reported to IBM executive Al Shugart. Shugart left IBM to join Memorex and then, in 1973, started Shugart Associates, later sold to Xerox. In 1979, he started Shugart Technology, later renamed Seagate Technology. Seagate became a large technology company specializing in storage, first in floppy disks and later in hard drives.

Relational Database Management System (RDBMS)

Relational databases simplify the storage and retrieval of related information. For example, rather than storing the name of the state a person lives in on every record, a relational database might store a number that references a single list of all states. This reduces overall storage needs and makes indexing and searching significantly easier and therefore faster.
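
Here is a minimal sketch of that idea using Python’s built-in sqlite3 module; the table and column names are illustrative, not drawn from any particular product.

```python
# Each person row stores a small state_id rather than repeating the state
# name; a join reassembles the related information at query time.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE states (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT,
                         state_id INTEGER REFERENCES states(id));
    INSERT INTO states (id, name) VALUES (1, 'California'), (2, 'Texas');
    INSERT INTO people (name, state_id) VALUES ('Ada', 1), ('Grace', 2), ('Edgar', 1);
""")

query = """
    SELECT people.name, states.name
    FROM people JOIN states ON people.state_id = states.id
"""
for person, state in conn.execute(query):
    print(person, state)
```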

Edgar Codd, working at IBM, introduced the idea of a relational database in a 1970 paper. However, IBM failed to actually implement the technology. Eventually, various universities created early RDBMS implementations, though none of them commercialized the technology.

Subsequently, a young Ampex employee named Larry Ellison became intrigued by Codd’s work. Ampex’s best days were behind it, and Ellison thought an RDBMS could produce enormous value at a lower cost than existing storage systems. In 1979, Ellison quit Ampex and founded Relational Software, later renamed Oracle.

Oracle and other RDBMS vendors battled fiercely for market share. Competitor Sybase was doing well until gutted by private equity mania; it licensed its technology to Microsoft, which renamed it SQL Server, and that product continues as a market leader. Informix was a contender for top database until its CEO committed fraud. He was imprisoned for only two months, but that was enough to scare away customers. Open-source MySQL has become a widely used RDBMS; Sun Microsystems purchased it in 2008 for roughly $1 billion, and Oracle acquired Sun in April 2009.

Codd, the inventor of the relational model, was British, but he did the bulk of his RDBMS work in the US.

Random Access Memory

Random Access Memory (RAM) is a type of fast memory that the Central Processing Unit (CPU), the brain of a computer, relies on. RAM exists to this day; every computer, including every smartphone, contains it. The “random” in RAM refers to its ability to access any location in memory instantly, unlike a disk drive that must seek out a piece of memory. This makes RAM much faster than disk drives.
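
As a toy contrast, in Python, between jumping directly to a location and scanning for it sequentially (the list and target position are arbitrary):

```python
# Both snippets fetch the value stored at position 500_000 of a million-cell store.
data = list(range(1_000_000))

# Random access (RAM-style): jump straight to the address, no searching.
value_fast = data[500_000]

# Sequential access (tape/disk-style): walk cell by cell until the target.
value_slow = None
for position, item in enumerate(data):
    if position == 500_000:
        value_slow = item
        break

assert value_fast == value_slow == 500_000
```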

Robert Dennard’s RAM improved on core memory in that the individual units of storage were on silicon, not metal. This vastly miniaturized the memory, enabling countless storage cells to be put on a single chip.

However, unlike core memory, RAM has to be continually refreshed. Because the chips are made of silicon, they are fast enough to do this.

Although Dennard invented RAM as an IBM employee, it was Intel, a startup at that time, which most successfully commercialized the innovation.

Plasma Panel

1964

Donald Bitzer
Gene Slottow
Robert Wilson

Plasma panels form text, images, and other patterns using plasma rather than cathode-ray tubes. The panels are flat, run cool, and use less power than CRTs.

Bitzer developed the plasma panel as the monitor for his teaching computer, PLATO.

PLATO is a lesser-known fountain of innovation. Along with Bell Labs, Xerox PARC, GE, and Kodak, PLATO created an enormous number of modern technologies. Networked computers, text messaging, online communities, and touch screens all came from PLATO, in addition to plasma panels.

The system was originally developed for Computer-Based Training (CBT) by Donald Bitzer and others at the University of Illinois at Urbana-Champaign. PLATO was used for training, much as the web today hosts an enormous amount of training material. But, much like the web, the system did far more than train people. PLATO is arguably the forerunner of the World Wide Web.

Unlike CRT screens, the panels are flat, which helped enable PLATO’s infrared touch-screen technology. PLATO plasma panels were orange, with 512×512 single-color pixels, 262,144 in total. In contrast, an iPhone XR has 1792×828 pixels, each with red, green, and blue subpixels: 4,451,328 subpixels in total.

Figure 7: Early plasma panel experiment

The panels worked by creating a matrix of wires. When electrical impulses from the X and Y sides of the panel intersected, the plasma at that point glowed orange. These early pixels were used to create text and even primitive graphics.
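
Below is a toy sketch of that matrix addressing in Python. The 8×8 grid and the glyph drawn on it are invented, and far smaller than PLATO’s actual 512×512 panel.

```python
# A pixel exists only where a pulsed X (column) wire crosses a pulsed Y (row) wire.
WIDTH, HEIGHT = 8, 8
lit = set()

def pulse(x, y):
    """Pulse column x and row y together; the gas at their intersection glows."""
    lit.add((x, y))

# Draw a crude letter "T" by pulsing intersections along one row and one column.
for x in range(1, 7):
    pulse(x, 1)
for y in range(2, 7):
    pulse(3, y)

for y in range(HEIGHT):
    print("".join("#" if (x, y) in lit else "." for x in range(WIDTH)))
```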

PLATO panel displaying the EMPIRE game

Control Data Corporation, IBM, and TDK Electronics all licensed plasma panel technology.

CRT technology cost less and responded faster than plasma panels. It took decades before plasma panels became less expensive and morphed into a popular consumer product in the form of televisions. Starting in 2005, plasma panels replaced projection televisions and CRTs as large televisions. Hanging on walls everywhere, the panels dropped quickly in price. They grew in popularity because their background was pure black, unlike other flat panels. However, they tended to suffer “burn-in” (ghost images left behind) and eventually fell out of favor.

Eventually, LCD, LED, and finally OLED technology replaced plasma panels.

Minicomputer

Computers were big: enormously expensive, physically giant machines. IBM’s nickname from this time was Big Blue, on account of the size of the company and its computers.

History

Ken Olsen developed comparatively small transistor-based computers at MIT. He left in 1957 to form a company, the Digital Computer Corporation. It was funded by Georges Doriot’s American Research and Development Corporation (ARD), which renamed it Digital Equipment Corporation (DEC): Doriot was concerned that the word “computer” would confuse people, since computer companies of that era suffered a high failure rate. Doriot invested $70,000 for 70 percent of the company.

DEC created a new type of computer, the minicomputer, small and inexpensive enough to be used by just one person. These minicomputers were “interactive”: people could interact with and change programs as they were running. In contrast, traditional IBM computers ran a program, took input (typically in the form of punch-card data), and delivered a result. Interactive computing enabled word processing and, the killer app, games.

The company struggled at first because buyers, expecting computers to be large and clunky, did not understand the offering. Eventually, DEC found a buyer for its first system, the PDP-1. A later release, the PDP-8, became a popular hit, and the PDP-11 went on to become one of the highest-selling non-microcomputer computers of all time.

DEC Thrives

Besides the hardware, DEC also created VAX, whose virtual memory management became a foundation for other systems, including later versions of Windows. DEC computers ran the VMS operating system.

Entrepreneurs would buy DEC computers, program them, and then sell them as single-purpose machines for games, word processing, and the like. Additionally, DEC sold dedicated computers as well as logic boards.

DEC grew 30 percent a year for 19 consecutive years. By the mid-1980s it was the second-largest computer maker, just behind IBM. Eventually, the trailblazer struggled after refusing to transition to microcomputers. In 1998, microcomputer maker Compaq purchased DEC for $9.6 billion.

Countless enterprise customers still use DEC equipment and run the VMS operating system, albeit usually on newer non-DEC hardware.