Ted Hoff’s General Purpose Microprocessor

“…even though science and technology are wonderful, what really gets them out there for people to use is to have businesses built around them. It takes savvy businessmen as well as savvy technologists to make that work.”

Ted Hoff

Background

Ted Hoff had access to then state-of-the-art vacuum tube circuits in high school. In 1954, he graduated and gained access to then-new transistors and magnetic core memory. He went on to earn a bachelor’s degree, then came to Stanford, where he earned a Ph.D. in 1962.

During that time, he talked to Rex Rice, an on-campus recruiter for Fairchild Semiconductor, the company founded by the Traitorous Eight and funded by Doriot student Arthur Rock.

Hoff believed the new field of integrated circuits could work well for memory, replacing clunky and relatively enormous core memory. Eventually, this led to a referral to Bob Noyce, who worked at Fairchild but was starting a new company, Intel. Noyce intended Intel to focus on semiconductor memory and was searching for somebody with Hoff’s background.

Intel

In 1970, while waiting for the technology to mature, Intel decided to build one-off chips for the desktop calculator market. Hoff was assigned to help build a chip for the Japanese company Busicom. At first, Japanese engineers were expected to do all the work, with Hoff acting as a liaison and coordinator.

However, Hoff noticed the Japanese design was sub-optimal. There were about a dozen chips and the entire system appeared needlessly complex. Hoff raised his concerns to Noyce who encouraged him to make a “backup” design.

Hoff’s design incorporated random access memory and programmability. It was vastly simpler yet overall more powerful by being programmable rather than single-purpose. After a meeting, the customer adopted Hoff and Intel’s design.

Federico Faggin joined Intel and refined Hoff’s idea, optimizing the general-purpose chip to take advantage of Intel technology. By January 1971, the team had a fully functional microprocessor.

The Microprocessor is Born

Their original goal was an embedded system, not a PC chip. Embedded systems are specialty chips that people never see; they make other machines work. The final chip, renamed the Intel 4004, contained between 2,100 and 2,300 transistors, depending upon how one counted. The 4004 was followed by the 8008 and then, in 1974, the 8080. That chip became the foundation of the Altair, the first microcomputer. The Altair inspired a young Bill Gates and Paul Allen to start a software company, and a young Steve Jobs and Steve Wozniak to form a computer company.

Consumer Shared Computer Network (CompuServe)

CompuServe was the first computer network targeted toward ordinary people, though it did not start out that way.

Background

Jeff Wilkins sold burglar alarms. His father-in-law ran a small insurance company and needed to buy a computer. However, the DEC model he wanted had far more computing power than his father-in-law required.

Wilkins realized he could use the computer’s modem to sell the extra capacity to other businesses that did not want to purchase or maintain an entire computer. Companies had been sharing mainframe computers for some time. However, Wilkins was the first to miniaturize the idea and sell time on a relatively low-powered computer.

In 1969, he launched the business and it quickly became popular. Wilkins quit his job selling alarms and set out full-time selling computing power.

He expanded the idea in 1978 with the introduction of personal computers, though early on there wasn’t much reason for PC owners to purchase time from him.

CompuServe Grows

In July 1980, the Columbus Dispatch became the first newspaper to publish electronically, on Wilkins’ CompuServe. Thanks to relatively low-cost personal computers, the service began to grow rapidly. In Q3 1980, CompuServe had 3,600 subscribers. By the end of Q1 1981, they’d grown to over 10,000 customers.

Interestingly, the most popular CompuServe app was text chatting. About 20 percent of total usage consisted of people chatting to one another. Reading the newspaper accounted for just 5 percent of total usage.

By 1984, CompuServe charged $5 per hour after 6 PM, but the service was mind-numbingly slow at 300 bps.

Despite the slow speed, the service continued to grow. By 1984, CompuServe had 60,000 subscribers. In 1986, tax preparation company H&R Block purchased the company, paying $68 million. By 1993, CompuServe had over 1.5 million subscribers throughout the world.

Disruption

Eventually, CompuServe was overtaken by upstart competitor America Online (AOL), which offered lower rates and more content. AOL, in turn, was soon displaced by cable companies and internet service providers. These market incumbents provided faster speeds at lower prices, often bundled with television and phone service. Additionally, they enjoyed US government-granted monopolies on the cable lines used to transmit high-speed data.

Reasonably Priced Business Computer (IBM/360)

The IBM/360 was the first mass-market computer, designed as a general-purpose machine affordable for mid-sized businesses yet powerful enough for large enterprises.

Background

In 1962, IBM’s revenue was $2.5 billion. CEO Thomas Watson Jr. believed in the vision of a general-purpose computer that supported timesharing, the ability of a computer to do multiple things at once. He invested a staggering $5 billion ($42.5 billion adjusted to 2019), double the company’s annual revenue, to develop the IBM/360. More than 100,000 people scattered across 165 cities worked together to create it.

One key feature of the IBM/360 was forward and backward compatibility, along with upgradability. Before the IBM/360, businesses purchased a computer and, when they outgrew it, purchased a new one. In contrast, the IBM/360 could be expanded with extra peripherals, increasing the capacity of the machine. Additionally, a significant amount of older IBM software ran on an emulator.

Prior to the IBM/360, computers were typically custom-tailored to the task at hand. Scientific computers were different from business computers. Likewise, a computer that ran an accounting system was different from a computer that ran inventory management. Much as Intel later created a general-purpose microchip, IBM created a general-purpose computer.

The IBM/360 is one of the few computers that both sits in the Computer History Museum and remains in use, 55 years after its introduction. Even though the vast majority of smartphones contain more computing power and memory, a 360 oftentimes does one task, does it well, and has done it for decades. Businesses could move those tasks to newer computers, but the 360 is so reliable that migration is oftentimes a low priority.

Third-Party Peripheral Market

Besides forward and backward compatibility with other computers, IBM allowed third-party companies to create certified peripherals for the 360. While this idea seems common now, it was a groundbreaking experiment when the 360 launched. “Half a million saved is half a million earned,” read advertising from third-party peripheral makers selling low-cost, high-quality add-ons.

Success

The IBM/360 was incredibly successful. IBM was unable to keep up with orders for years. Eventually, even the Soviet Union copied it and named their System/360 knockoff the “Ryad” computer. By 1989, the 360 and successor computers accounted for more than $130 billion in annual revenue.

Electronic Desktop Calculator

Desktop calculators pioneered the idea of computers small and cheap enough to sit on an individual’s desk. Eventually, they also became the impetus for the general-purpose microchip.

History

The first desktop electronic calculators were the ANITA Mark VII and ANITA Mark VIII, both launched in late 1961. The Bell Punch Co. of Britain designed the ANITA. Notably, they used vacuum tubes and cold-cathode tubes, with nixie tubes for the numerical display. Norbert (“Norman”) Kitz led the design and engineering work.

Eventually, the ANITA VII sold in continental Europe and the ANITA VIII in the UK and the rest of the world. However, soon after launch, Bell dropped the ANITA VII and consolidated the product line.

Cost was a major factor in producing the ANITA. To find a market, Bell Punch needed to sell the calculator for about 1/100th of what the least expensive electronic computers of the day cost. The ANITA went on the market for £355 (about £7,800 in 2018, roughly $10,500 USD). In contrast, the least expensive general-purpose computers in 1961 cost about £50,000 (just over £1 million adjusted to 2018). The device weighed 34 pounds (15.5 kg).

Transistor-Based Calculators

By 1964, competitors started to release calculators that used transistors rather than tubes. Sharp, Canon, Sony, Toshiba, Wang, and countless others released transistor-based calculators. These calculators were similarly priced to the ANITA, or even more expensive, but they were significantly smaller and lighter due to the lack of tubes.

The Soviet bloc literally weighed in with the Bulgarian-built T-64. Despite the use of semiconductors, the calculator weighed 8 kg (17.6 lbs.). It was, however, the first calculator to compute square roots.

Calculators continued to decrease in price and size while increasing in performance.

General-Purpose Microchip

Many calculator companies hired Intel, then a young company, to produce custom chips for their calculators. In 1970, Intel engineer Ted Hoff instead created a general-purpose chip for the Japanese company Busicom. Unlike other calculator chips, the Busicom chip was programmable to do multiple functions, not only those specific to one calculator. In 1971, Intel licensed the chip back and rebranded it the Intel 4004, Intel’s first general-purpose microprocessor.

Vacuum Tube (Diode)

Working for the Edison Electrical Light Company of England, Sir John Fleming invented the diode, a vacuum tube at the heart of all early electronics. Radios, televisions, telephones, computers – virtually every electronic device we’re familiar with today – were first built with diodes.

Diodes are typically vacuum tubes, though some contain specialized gases. They conduct electricity in one direction, from the cathode to the anode.

Early diodes evolved from lightbulbs. Electrons flow freely in a lightbulb; its purpose is to emit light, and there is no need to shepherd the energy. Diodes, by contrast, enable the controlled flow of electrons, which allows all sorts of nifty tricks when they are tied together into circuits.
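The key trick, in modern terms, is that a diode acts as a one-way valve. The sketch below is a deliberately idealized model, not a simulation of a real vacuum tube: it simply clips the reverse half of an alternating signal, which is how diodes turn alternating current into one-way (rectified) current.

```python
# A minimal sketch of the diode's "one-way valve" behavior. Idealized model
# for illustration only, not a physical simulation of a tube.
import math

def ideal_diode(voltage: float) -> float:
    """Pass forward (positive) voltage; block reverse (negative) voltage."""
    return voltage if voltage > 0 else 0.0

# Feed the diode an alternating signal and keep only the forward half.
alternating = [math.sin(2 * math.pi * t / 20) for t in range(20)]
rectified = [round(ideal_diode(v), 2) for v in alternating]
print(rectified)  # negative half-cycles are clipped to 0.0
```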

Compared to modern electronics, diodes were enormous and enormously power-hungry. Since they were tubes that typically operated at high heat, they also tended to burn out, like old-fashioned pre-LED lightbulbs. Diodes often ran at voltages as high as 100 volts; an iPhone, with billions more circuits, runs on about five volts.

Later tubes added extra electrodes, called grids, to control the flow of electricity and form circuits. A tube with one grid is called a triode, because electricity flows from the cathode to the anode but can be modulated by the grid. Tetrodes add a second grid, pentodes a third, and so on.

Although Fleming was a physicist, he was an avid anti-evolutionist. He profited from the invention of his diode and subsequent discoveries, and he left the bulk of his fortune to Christian charities serving the poor.

Transistors eventually replaced the vacuum tube.

Stepping Switch

Stepping switches route an electrical connection to one of multiple output channels, stepping through them incrementally. That sounds incredibly boring until we realize they enabled the modern phone system and powered the decryption machines that morphed into the modern computer. Stepping switches were literally a step from the industrial revolution to the modern world.

Background

Let’s step back. When you dial a phone number, each digit zeroes in on the intended recipient. Take the theoretical number +1 212 345-6789. The +1 indicates the US. The next set of numbers, 212, routes the call to the New York City area. The next three numbers, 345, route your call to an exchange that was historically near your house. Finally, the last four digits find you.

Before stepping switches, humans had to do this work manually. The +1 was implied (unless it wasn’t, in which case an overseas operator would reach the US). Dialing the area code is optional for local calls but, before stepping switches, if you wanted to call long-distance an operator had to plug your call into a long-distance line.

Finally, for the last part, an operator would always have to find you and plug the call in.

In case you’re wondering how this worked: you would pick up the phone and tell the operator the number you wished to call. She (it was always a she) would then work with other operators to connect you to the telephone you wished to reach.

If this sounds slow, clunky, annoying, and expensive, you’re right; it was. Therefore, Almon Strowger invented a device to do the work automatically. Rather than an operator routing the line, a series of stepping switches does the same work faster, cheaper, and more accurately.
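To see what the switches automate, here is a minimal sketch of digit-group routing. The routing table and the subscriber label are invented for illustration; a real network steps through banks of electromechanical selectors rather than a lookup table.

```python
# A toy model of what Strowger's stepping switches automated: each dialed
# group of digits advances the call one "step" down the routing tree until
# it reaches a specific line. All entries here are invented.

routing_tree = {
    "1": {                       # country code: US
        "212": {                 # area code: New York City
            "345": {             # local exchange
                "6789": "the called party's line",
            },
        },
    },
}

def route_call(country: str, area: str, exchange: str, line: str) -> str:
    """Step through the routing tree one dialed group at a time."""
    hop = routing_tree[country]   # first selector bank
    hop = hop[area]               # long-distance trunk
    hop = hop[exchange]           # local exchange
    return hop[line]              # the final subscriber line

print(route_call("1", "212", "345", "6789"))
```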

Stepping switches were also integral to the war effort. At Bletchley Park, the British code-breaking facility, they allowed the Allies to break Nazi encryption. Alan Turing, inventor of the modern computer, worked there as a lead scientist.

Slide Rule

Slide rules are the original mechanical calculators. They could quickly multiply and divide large numbers.

Slide rules are based on logarithms, the power to which a fixed base must be raised to produce a given number. Scales of roots do the opposite.

John Napier realized that sets of log scales placed next to one another could easily and accurately multiply and divide.

William Oughtred, a minister, took this to the next step, placing the scales on a piece of wood with a slider to align the numbers. By sliding the device to line up two numbers, the user could quickly and accurately multiply and divide large numbers.
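The trick being exploited is the identity log(a × b) = log(a) + log(b): multiplying becomes adding two lengths on logarithmic scales, and dividing becomes subtracting them. A small sketch of the arithmetic the slide rule performs mechanically:

```python
# Multiplication and division the way a slide rule does it: add or subtract
# logarithms, then convert back. Purely illustrative.
import math

def slide_rule_multiply(a: float, b: float) -> float:
    """Multiply by adding logarithms, as aligned log scales do."""
    return 10 ** (math.log10(a) + math.log10(b))

def slide_rule_divide(a: float, b: float) -> float:
    """Divide by subtracting logarithms."""
    return 10 ** (math.log10(a) - math.log10(b))

print(slide_rule_multiply(2, 8))   # ~16.0
print(slide_rule_divide(100, 4))   # ~25.0
```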

For hundreds of years, mathematicians and engineers relied on slide rules.

Newton used them to develop his rules of physics. James Watt, after joining with Boulton, used them to refine and build his condensing steam engine that kicked off the first Industrial Revolution. Centuries later, during the computer era, NASA engineers still used them while planning the Apollo missions.

Virtually every entry before 1970 on innowiki relied, to some extent, on slide rules.

Many “computer museums” feature the slide rule as the second computing device ever invented, after the ancient abacus which was more focused on addition.

After hundreds of years, computers superseded slide rules. However, the impact of slide rules cannot be overstated. These primitive yet vital calculating machines built the modern world.

eLearning / Computer Based Training, PLATO

In 1960, Prof. Donald Bitzer introduced an educational computer system, the Programmed Logic for Automatic Teaching Operations, PLATO.

In hindsight, PLATO is arguably one of the least known but most important technological advances ever. Countless elements of the world wide web were first introduced via PLATO.

Background

Bitzer was a professor of electrical engineering at the University of Illinois at Champaign-Urbana. His inspiration for PLATO was the fact that, in the 1940s, about half of US high school graduates were functionally illiterate. Bitzer theorized that a computer would have the patience some human teachers lacked, especially for challenging students.

PLATO was an early timesharing system, a new concept at the time. Users and programmers shared the computer using terminals, not punch cards. Programs were interactive: users would do something and the computer would respond immediately. In contrast, most computers at the time ran a program, read data for the program, then printed results.

As the system developed and Bitzer found faster computers, PLATO eventually supported up to 1,000 simultaneous users. PLATO featured an early modem, enabling geographic diversity. Terminals were small plasma panels, a new invention, and supported touch, also a new invention. TUTOR, the programming language, was reasonably approachable. Children could and did learn to program the system.

Today’s Computers, Yesterday

PLATO invented the notion of online community, enabling people to send text messages to one another. These messages could be read in real-time or later. Messages could be sent to a virtual “room” of users or to an individual user, a precursor of today’s short message text service. Asynchronous notes, called PLATO notes, also worked. PLATO notes evolved into Lotus Notes.

PLATO featured the first massive multiplayer online game (MMOG), Empire. It was massively popular. However, many users couldn’t play Empire because PLATO also featured the first parental control system, The Enforcer, which ensured users were learning or programming, not gaming. For incorrigibles who snuck into games not listed in The Enforcer, senior users (oh, yeah, PLATO was also the first system with interchangeable roles) could zap in via the first screen-sharing program.

Maybe the WWW would have evolved its easy interfaces, touch-screens, variable roles, online communities, group chat, forums, and SMS-style messaging even without PLATO, but that would be conjecture. What we do know, with certainty, is that PLATO was first with all these features that make up the modern online world. It also featured some teaching programs, its original purpose.

CERL

PLATO was solely a teaching computer; inventing the modern computer world, in hindsight, was an accident. In that spirit, Bitzer and colleagues created a special lab, the Computer-Based Education Research Laboratory (CERL), at UI Champaign-Urbana.

Time-Sharing/Multitasking Computer

Early Computers

Early computers stored programs and data on punch cards. Most cards held 80 characters, which is why early terminal programs typically displayed 80 characters per line. Punch cards are exactly what they sound like: physical cards. Each card holds one line of a computer program or one piece of data. As users typed, a machine punched holes representing letters or numbers. Programs were a literal pile of cards, with the data stacked after the program.


For example, if a user wanted to compute the average, median, high, and low figures in a set of data, they would write a program on a set of cards telling the computer to analyze the data. Next, they would physically add the data cards behind the program. The whole stack was then put into a special card reader that read the program, one card after another, and then the data. Finally, the computer performed the calculations and printed the results.
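For readers who prefer code to card decks, here is a modern sketch of the kind of job described above. The function stands in for the program cards, the list of numbers stands in for the data cards, and the figures are invented.

```python
# A modern sketch of a classic punch-card job: compute the average, median,
# high, and low of a data set. The data values are invented for illustration.

def analyze(data):
    ordered = sorted(data)
    n = len(ordered)
    # Median: the middle value, or the mean of the two middle values.
    if n % 2:
        median = ordered[n // 2]
    else:
        median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2
    return {
        "average": sum(ordered) / n,
        "median": median,
        "high": ordered[-1],
        "low": ordered[0],
    }

data_cards = [42, 17, 99, 23, 56]   # the "data" portion of the deck
print(analyze(data_cards))
```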

Besides being clunky, loud, and wasteful of paper cards, this system wasted an enormous amount of expensive computing power. Computers, which cost millions of dollars in the 1960s, sat mostly idle while slowly reading a program; they could not do anything else until the card-reading process finished. This problem was especially pronounced at universities, where many students shared one computer. Students waited in line while the expensive computer did little; students and the computer were both idle.

Time-Sharing

In response, companies and universities built computers and operating systems that did more than one thing at a time. To get around the card reader issue, these new computers used terminals. While one person was typing a program, using little processing power, another could be running their program, which required more computing power.

The computer could still only do one thing at a time. However, by switching back and forth between tasks, it appeared to do multiple things at once. Rather than the CPU sitting idle it was almost always humming away doing something, whether waiting for keystrokes, compiling a program, or running a piece of software.
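The switching idea can be sketched in a few lines. The toy scheduler below runs one task at a time but rotates through them so quickly that they appear simultaneous; it illustrates the concept only, not how any real operating system’s scheduler works.

```python
# A toy illustration of time-sharing: the "CPU" (this loop) runs one task
# at a time but switches between them, so to an observer all tasks appear
# to run at once. A sketch of the idea, not of a real scheduler.
from collections import deque

def task(name, steps):
    for i in range(1, steps + 1):
        # Yield control back to the scheduler after each small unit of work.
        yield f"{name}: step {i} of {steps}"

def round_robin(tasks):
    queue = deque(tasks)
    while queue:
        current = queue.popleft()
        try:
            print(next(current))       # run one time slice
            queue.append(current)      # put the task back in line
        except StopIteration:
            pass                       # task finished; drop it

round_robin([task("compile", 3), task("edit", 2), task("print", 2)])
```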

IBM OS/360, Unix, VMS

This new type of operating system, which supported multiple people doing multiple things at once, is called a time-sharing system. IBM released the first modern time-sharing operating system, the IBM OS/360, in 1966. Fred Brooks led the engineering effort and wrote a seminal project management book about building the operating system, The Mythical Man-Month.

Other systems soon followed including, notably, Multics and its successor, Unix. Later, DEC’s VMS operating system was also especially good at multitasking. Unix eventually inspired Linux, which powers today’s internet, and spawned BSD, which became the core of Mac OS. Eventually, Microsoft hired away many VMS engineers to create Windows NT.

An evolution of timesharing remains the reason that servers, personal computers, and even phones can do multiple things at once.

Speech Recognition

Speech recognition is the ability of a computer to recognize the spoken word.

“Alexa: read me something interesting from Innowiki.”

“Duh human, everything on Innowiki is interesting or it wouldn’t be there.”

Today, inexpensive pocket-sized phones connect to centralized servers and understand the spoken word in countless languages. Not so long ago, that was science fiction.

Background

Star Trek in 1966, the HAL 9000 of 1968’s 2001: A Space Odyssey, Westworld in 1973, and Star Wars in 1977 all assumed computers would understand the spoken word. What they missed is that people would become so fast at using other input devices, especially keyboards, that speaking would come to be viewed as an inefficient input method.

The first real speech recognition systems actually predate the science fiction ones. In 1952, three Bell Labs scientists created a system, “Audrey,” which recognized a voice speaking digits. A decade later, IBM researchers launched “Shoebox,” which recognized 16 English words.

In 1971, DARPA intervened with the “Speech Understanding Research” (SUR) program, aimed at a system that could understand 1,000 English words. Researchers at Carnegie Mellon created “Harpy,” which understood a vocabulary comparable to that of a three-year-old child.

Research continued. In the 1980s, the “Hidden Markov Model” (HMM) proved a major breakthrough. Computer scientists realized computers need not understand what a person was saying but, rather, could simply listen to sounds and look for patterns. By the 1990s, faster and less expensive CPUs brought speech recognition to the masses with software like Dragon Dictate. BellSouth created the voice portal phone-tree system which, unfortunately, frustrates and annoys people to this day.
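The HMM insight can be shown with a toy example: the recognizer never “understands” speech, it just asks which hidden sequence of sound categories most likely produced the observed acoustic symbols. Everything in the model below (the two states, the three observation symbols, and all the probabilities) is invented purely for illustration.

```python
# A toy Hidden Markov Model and Viterbi decoder: given a sequence of observed
# sounds, find the most likely sequence of hidden sound categories.
# All states, symbols, and probabilities are made up for illustration.

states = ["vowel", "consonant"]
start_p = {"vowel": 0.5, "consonant": 0.5}
trans_p = {  # probability of moving from one hidden state to another
    "vowel": {"vowel": 0.3, "consonant": 0.7},
    "consonant": {"vowel": 0.6, "consonant": 0.4},
}
emit_p = {  # probability each hidden state produces an observed sound
    "vowel": {"ah": 0.6, "ee": 0.3, "ss": 0.1},
    "consonant": {"ah": 0.1, "ee": 0.2, "ss": 0.7},
}

def viterbi(observations):
    """Return the most likely sequence of hidden states for the observations."""
    # best[state] = (probability, path) of the best path ending in that state
    best = {s: (start_p[s] * emit_p[s][observations[0]], [s]) for s in states}
    for obs in observations[1:]:
        best = {
            s: max(
                (prob * trans_p[prev][s] * emit_p[s][obs], path + [s])
                for prev, (prob, path) in best.items()
            )
            for s in states
        }
    return max(best.values())[1]

print(viterbi(["ss", "ah", "ss"]))  # -> ['consonant', 'vowel', 'consonant']
```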

DARPA stepped back in during the 2000s, sponsoring multi-language speech recognition systems.

Rapid Advancement

However, a major breakthrough came from the private sector. Google released a service called “Google 411,” allowing people to dial Google and look up telephone numbers for free. Callers would speak to a computer that guessed what they said; then an operator would answer, check the computer’s accuracy, and deliver the phone number. The real purpose of the system was to train computers on a myriad of voices, including difficult-to-decipher names. Eventually, this evolved into Google’s voice recognition software, still in use today.

Speech recognition continues to advance in countless languages. Especially for English, the systems are nearing perfection. They are fast, accurate, and require relatively little computer processing power.

In 2019, anybody can speak to a computer, though unless their hands are busy doing something else, most prefer not to.