
A brief history of the computer

23 December 2021

They sit on our desks, fit in our bags and pockets, drive cars and fly planes, and calculate tomorrow’s weather and digital currencies: computers are ubiquitous. To find out how these freely programmable calculating machines came into being and how they evolved to the point where they would fit into our homes, read our short history of the computer.

The relationship between man and computer is a recurring theme in the science fiction world, as it is for ethics commissions and our society as a whole. In the Middle Ages and the early modern period, this was not yet a big issue – or, you might say, the computer was both man and machine. At that time, the term “computer” was a job description for people who performed recurring calculations for astronomers or ballisticians. But because people naturally tire and make mistakes, the desire to lend mechanical support to tedious calculation processes arose very early on. In the first half of the 17th century, the German Wilhelm Schickard and the Frenchman Blaise Pascal independently developed the first calculating machines. Over the next two centuries, dozens more were conceived and built, but none of them ever reached series production. This was because they usually required thousands upon thousands of individual parts, whose production and assembly pushed precision engineers to the very limits of their knowledge and skills – and, time and time again, beyond. It was not until the late 19th century that the production of simple calculating machines advanced far enough for them to find their way into large US offices.

 

Data on punched cards

As industrialisation progressed, however, the need to perform calculations was increasingly matched by the need to process data. Take, for example, the 1890 census in the United States. Engineer Herman Hollerith developed a method for encoding, storing and, where necessary, reading out data on punched cards based on the patterns of holes punched into them. This development heralded the age of mass data processing: Hollerith machines would soon become indispensable in accounting, banking, HR departments and elections. It was not until the 1950s that they were replaced by magnetic tapes and later by floppy disks, which could hold significantly more data. Initially, however, the first computers also worked with punched cardboard or paper.

Not one inventor, but many

The invention of the computer was, as it were, in the air at the beginning of the 20th century. And, like photography, the steam engine or the internal combustion engine, the computer didn’t have just one father, but many, who in many cases developed and advanced the technology along parallel paths completely independently of one another. The German father of the computer went by the name of Konrad Zuse. By his own admission, the young civil engineer was too lazy to do calculations. So, he built a program-controlled calculating machine in his parents’ living room to relieve him of the tedious legwork. In 1937, his Z1 was finished. However, it could only solve simple arithmetic problems, was purely mechanical – and would more often than not get jammed. Even so, it was ground-breaking. This was because it was the first of its kind to use the binary system, in which all information is encoded using zeroes and ones.

The fundamental advantage of this system was that numbers of any size could be represented using just two digits. And both digits could in turn be represented by “switch on” or “switch off”. The type of switch ultimately did not matter: the resourceful Zuse used both toggle switches and telephone relays in his electromechanical successor machine. On May 12, 1941, he officially unveiled the Z3, the “first fully automatic, freely programmable, program-controlled computing machine to use the binary system”. The Z3 was the size of three refrigerators and was fed its programs on punched tape. It mastered the four basic arithmetic operations and could even calculate square roots. Compared to today’s machines, it was not much more than a programmable calculator – and yet it was the forerunner of our modern-day computers.
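To see the principle in action, here is a small illustrative Python sketch (our addition, not anything Zuse wrote) that converts a decimal number into the string of zeroes and ones a binary machine works with:

    # Illustrative sketch: write a whole number using only the digits 0 and 1,
    # i.e. as a sequence of "switch off" / "switch on" states.
    def to_binary(n: int) -> str:
        if n == 0:
            return "0"
        digits = []
        while n > 0:
            digits.append(str(n % 2))  # the remainder is the next binary digit
            n //= 2                    # carry on with the halved number
        return "".join(reversed(digits))

    print(to_binary(1941))  # -> '11110010101'

Each of those eleven digits can be held by a single on/off switch – in Zuse’s machines, a relay.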

 

Punched tapes and cables

Chronologically, Zuse stole a march on machines such as the Mark I, which the American computer pioneer Howard Aiken completed at Harvard in 1944. However, Zuse's invention had little international impact. This was because Zuse was working in Germany, in isolation from the rest of the world – and his Z3 was destroyed in a bombing raid in 1943. In the same year, the British military put its Colossus computer into operation in an attempt to crack German codes. Colossus also used the binary system, but the mechanical switches had been replaced by 1,500 vacuum tubes, making it one of the first fully electronic computers in the world.

In the USA, work began as early as 1942 on ENIAC, a fully electronic, programmable computer built from around 18,000 tubes, compared to which Colossus was no more than a toy. ENIAC was used to calculate ballistic firing tables for artillery shells. However, it did not yet use the binary system and had to be manually rewired for each new program – a complicated task that required the collective labour of a team of six.

 

Von Neumann’s new computer architecture

We have to do better than this – such was the thought of John von Neumann, who was involved in the ENIAC project and who went on in 1945 to develop an architectural concept for a program-controlled computer. In von Neumann’s design, the program’s commands were treated just like the data being processed: encoded in binary and held in the machine’s internal memory. In other words, programs actually ran inside the machine for the first time. Compared to the cumbersome programming that relied on punched tape or plugged cables, the von Neumann architecture allowed programs to be changed very quickly, or several programs to run in quick succession. It thus found its way into computer science textbooks as the blueprint for today’s computers.
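To make the stored-program idea more concrete, here is a deliberately tiny Python sketch (our illustration, not von Neumann’s actual design): instructions and data sit side by side in the same memory, and the machine simply fetches and executes whatever the program counter points to.

    # Toy illustration of the stored-program principle: code and data share
    # one memory; a simple fetch-and-execute loop works through the program.
    memory = [
        ("LOAD", 4),    # address 0: load the value stored at address 4
        ("ADD", 5),     # address 1: add the value stored at address 5
        ("STORE", 6),   # address 2: store the result at address 6
        ("HALT", 0),    # address 3: stop
        2, 3, 0,        # addresses 4-6: data sits in the same memory as code
    ]

    accumulator = 0
    pc = 0  # program counter
    while True:
        opcode, operand = memory[pc]
        pc += 1
        if opcode == "LOAD":
            accumulator = memory[operand]
        elif opcode == "ADD":
            accumulator += memory[operand]
        elif opcode == "STORE":
            memory[operand] = accumulator
        elif opcode == "HALT":
            break

    print(memory[6])  # -> 5

Because the program itself is just data in memory, changing what the machine does means nothing more than writing different values into that memory – the crucial advance over rewiring ENIAC by hand.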

 

From tube to microprocessor

The early computers still weighed many tons and filled entire rooms, had to be constantly maintained and, seen from our modern point of view, did very little. This changed in 1947 with the invention of the transistor. Compared to the tubes used in the computers of the day, transistors took up significantly less space, consumed less power, failed less frequently and opened the door to higher processing speeds. It was a technological quantum leap for which the researchers William B. Shockley, John Bardeen and Walter Brattain, who were instrumental in its invention, received the Nobel Prize in 1956.

In the following years, transistors became smaller and smaller. In 1971, the US companies Texas Instruments (TI) and Intel independently caused a sensation in the professional world by shrinking the processor to a miniature format. They succeeded in cramming the various electronic components of a processor – its transistors, resistors and capacitors – onto a small square of silicon: this was the birth of the microprocessor, the heart and control centre of every modern computer.

The Altair 8800

The miniaturisation of the computer itself was not long in coming. In 1975, the time had finally come: American electrical engineer and inventor Ed Roberts launched a home computer kit, the Altair 8800, for $397. It was controlled by toggle switches, and its output consisted of blinking LED lights. The box had neither a mouse nor a keyboard – which made it very hard to use. Even so, the Altair struck a chord with the public, and 5,000 units were sold in the first six months. As user-unfriendly as it may have been, the Altair was given a further boost in July 1975 by the appearance of the first programming language for the machine. Altair BASIC was developed by two young students named Paul Allen and Bill Gates, who had recently founded a small software company by the name of Microsoft. The following year, the two coders developed their programming language into Microsoft BASIC, which would soon be running on the first home computers from Commodore, Texas Instruments and Apple.

Apple goes solder-crazy

To start with, however, it was the possibilities – and the limits – of the Altair that attracted enthusiastic inventors and spare-time engineers, who joined forces to form the Homebrew Computer Club. At their meetings, they discussed ideas and swapped circuit diagrams and programming tricks. Also present was a certain Steve Wozniak, who was working on a computer of his own. In 1976, he went on to found the Apple Computer Company with his friend Steve Jobs and Ronald Wayne. In the same year, at a meeting of the Homebrew Computer Club, he presented the Apple I, which he had soldered together with Jobs in his office at Hewlett-Packard. Unlike the Altair, it came equipped with all the connections required for operation with a monitor and keyboard – although these had to be purchased separately. At $666.66, it was also affordable for private households and is therefore considered by some to be the world’s first true personal computer.

Its successor, launched by Wozniak and Jobs in 1977, would go on to be a genuine global hit: the Apple II came as a complete machine with a built-in keyboard and could be hooked up to a monitor or television set. Its data storage needs were initially met by a cassette recorder. Compared with competing devices such as the recently released Commodore PET 2001, it could display high-resolution graphics and even colours. And its expansion slots opened up the possibility of upgrading it. The Apple II would also soon set new standards in relation to software. In 1979, VisiCalc, the first spreadsheet program for personal computers, appeared on the machine. Thanks to this program, office clerks could for the first time use a computer to perform calculations without needing any coding skills. This was a strong selling point for the Apple – and the first killer application for a computer system.

The computer moves into the nursery

So as not to leave the growing microcomputer market to Apple, Commodore and the like, IBM launched the IBM Personal Computer Model 5150 in 1981. IBM’s first desktop computer was assembled from freely available standard components and was built around the principle of compatibility: for the first time, existing software could also be used on successor models. The brain of the system was the MS-DOS operating system, which Microsoft had developed for IBM. Despite its enormous price tag of around $5,000, the computer and its guiding principle became a huge success. “IBM-compatible” became the informal industry standard and a synonym for the PC. And while the IBM PC and its cheaper clones gradually went on to populate offices and workrooms, from 1982 it was the Commodore 64 that took up residence in living rooms and children’s bedrooms, bringing an entire generation into contact with the computer for the first time. What was truly sensational was its sound chip, which musicians could use to program songs with a real synth sound. Thanks to its graphics power, however, the “breadbin” was mainly used for gaming by millions of first-time users.

Of mice and windows

But if you wanted to play or work on a computer at this time, there was no way of avoiding the blinking cursor on the command line. Finding and launching programs without getting lost in subdirectories meant memorising cryptic commands and abbreviations. All this changed in 1983, when Apple launched the Lisa, one of the first computers to come with a mouse and a graphical user interface. “If it [...] launches in the summer, senior executives should only need 20 minutes to learn to operate Lisa, whereas, with a normal microcomputer, it would take them 20 hours,” were the prescient words of the Sunday Times as it enthused about the implications of the mouse revolution. With its price tag of 10,000 dollars, however, the Lisa turned out to be a slow seller – and Apple added the cheaper Macintosh the following year. It was intuitive mouse control, though, that would make affordable computers such as the Atari ST, introduced in 1985, truly suitable for the masses. In the same year, Microsoft released its first graphical user interface, which the company had renamed “Windows” shortly before its release. However, commercial success proved elusive until the advent of Windows 3.1, which was released in 1992 and sold around three million licences in the first two months alone. The rest is computer history.

 

The next quantum leap

Since then, our computers have become ever faster and more powerful; even in pocket format, they can store and process amounts of data that their room-filling predecessors could only have dreamed of. The next technological leap is already being worked on: the quantum computer. It operates with qubits instead of bits. A qubit can represent not only “one” and “zero” but, in theory, an infinite number of states in between – all at the same time. Researchers want to use these computers, for instance, to simulate the interactions of molecules, which might lead to breakthroughs in the development of new drugs.
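As a rough, simplified illustration (our sketch, not part of the original article), a qubit’s state can be described by two amplitudes, one for “zero” and one for “one”; the squares of those amplitudes give the probabilities of reading out each value when the qubit is measured:

    # Illustrative sketch: a single qubit modelled as a pair of amplitudes
    # (alpha, beta) with |alpha|^2 + |beta|^2 = 1. Measurement yields 0 with
    # probability |alpha|^2 and 1 with probability |beta|^2.
    import math

    alpha = 1 / math.sqrt(2)   # amplitude for "zero"
    beta = 1 / math.sqrt(2)    # amplitude for "one"

    p_zero = abs(alpha) ** 2
    p_one = abs(beta) ** 2

    print(round(p_zero, 2), round(p_one, 2))  # -> 0.5 0.5, an equal superposition

Because the two amplitudes can take any values that satisfy this condition, there are infinitely many possible states between a pure “zero” and a pure “one” – which is what the paragraph above alludes to.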

Artificial Intelligence and Big Data might also take a giant leap forward with quantum computers, and more accurate long-range weather forecasts might become possible. The development is being driven by digital heavyweights like IBM, Google and Microsoft. Depending on who you talk to, it will take anything from five to well over ten years for fully functional quantum computers to become available. However, there is no prospect, even in the future, of quantum computers for home use. This is because they require an operating temperature of minus 270 degrees Celsius. And that is hard to achieve in the living room.