24 October 2019
The Internet – infinite vistas

Never before in human history has a technology spread at such breakneck speed. And nothing has changed the way we communicate, date and shop, relax, entertain ourselves and find information as fundamentally as the World Wide Web. In our brief history of the Internet, we explain what the Cold War had to do with the genesis of the Internet, why you can’t simply switch it off and how the ways we protect computers against viruses and hackers have evolved.
4 October 1957 is a big day for the Soviet Union: Soviet engineers send the first artificial satellite into orbit from the Baikonur space centre in southern Kazakhstan. The spherical Sputnik with its four antennae is a major breakthrough for humanity – and an astounding coup for the Russians in the competition between the two systems. Not only does this technological leap forward on the part of the Soviets wound the pride of the US; it also sets nerves jangling at the Pentagon. Can the Russians now attack us using intercontinental rockets? What if our information systems are crippled at the same time?

Nor is this a completely unjustified concern. After all, since the very beginning of long-distance communication, information has always reached its recipients along set pathways. Whether they use smoke signals, pigeon post, pony express, telegraphy or telephone lines – messages always follow a linear path from A via B to Z. If the pigeon is caught by a goshawk or the rider of the pony apprehended, or if the telegraph wires or phone cables are cut, the message will no longer reach its target. The solution to this problem takes the form of a system in which every computer is connected with all the others, meaning that messages reach their destinations via different pathways and the system as a whole is less easy to disrupt.
Digital packets with noughts and ones: The birth of ARPA
It is with the aim of turning these ideas into reality that the Advanced Research Projects Agency (ARPA) is launched on 7 February 1958. Soon after its launch, ARPA’s researchers start putting packets together – packets consisting of noughts and ones. The data to be communicated is broken down into little blocks, or packets. Every packet is furnished with sender and recipient addresses and a sequence number. The communication system then sends the individual packets to the recipient, who uses the sequence number to put them back together in the correct order. If packets go missing in the process, the recipient sends a request for the lost packets to be resent. The special feature here is that the packets are free to choose different pathways through the network. Unlike a telephone connection with a constant bandwidth, this can admittedly lead to delays. On the upside, however, the network is used more efficiently, the interfaces are under less strain, and the messages can still get to their recipients even if an individual channel is blocked or taken down.
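The packet principle described above can be sketched in a few lines of Python. This is a toy model, not the original ARPA implementation: the message is split into packets carrying sender, recipient and a sequence number; the recipient reorders them and notices if any are missing.

```python
import random

def make_packets(message: str, size: int = 4):
    """Break a message into small packets with addresses and sequence numbers."""
    return [
        {"src": "A", "dst": "Z", "seq": i, "data": message[i * size:(i + 1) * size]}
        for i in range((len(message) + size - 1) // size)
    ]

def reassemble(packets):
    """Reorder packets by sequence number; report any that went missing."""
    received = {p["seq"]: p["data"] for p in packets}
    expected = range(max(received) + 1)
    missing = [i for i in expected if i not in received]
    if missing:
        return None, missing   # the recipient would ask for these to be resent
    return "".join(received[i] for i in expected), []

packets = make_packets("LO AND BEHOLD")
random.shuffle(packets)            # packets may arrive via different pathways
message, missing = reassemble(packets)
```

Because every packet carries its own addresses, the network can route each one independently – exactly the property that makes the system hard to disrupt.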
This idea of a decentralised computer network in which data are sent in packets is first put into practice in the late 1960s. On 29 October 1969, computers at the University of California in Los Angeles are connected with their counterparts at the Stanford Research Institute in Menlo Park near San Francisco via a data line with a capacity of 50 kilobits per second. The world’s first message sent through the network consists of just two letters, “lo” – the connection crashes before the intended “login” can be completed. A few weeks later, the mainframe computers of the University of California in Santa Barbara and the University of Utah are added as additional nodal points to what becomes known as ARPANET.
The first e-mail and “directory enquiries” for the Internet
In 1971, computer engineer Ray Tomlinson sends the first electronic mail through the net. In order to distinguish between the name of the user and that of the computer which sends the message, Tomlinson decides on the “@” character, inserting it between them. He uses this character because it doesn’t otherwise feature in written language. In this way, he invents e-mail.
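Tomlinson’s convention still holds today and fits in a couple of lines: everything before the “@” names the user, everything after it names the machine (nowadays the mail domain). The address below is illustrative only.

```python
def split_address(address: str):
    """Split a mail address at the '@' into (user, host), per Tomlinson's convention."""
    user, _, host = address.partition("@")
    return user, host

# An illustrative address in the early style: user @ host computer
user, host = split_address("tomlinson@bbn-tenexa")
```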
And yet, digital communication between people and the various networks is still very much in its infancy. In 1973, ARPA scientists Vint Cerf and Bob Kahn find a solution to the problem of connecting networks based on different technologies: the Transmission Control Protocol and the Internet Protocol, or TCP and IP for short, make it possible, for instance, for radio and cable networks to talk to each other. The two evolve into the combined TCP/IP protocol suite, which becomes the standard for data exchange on the Internet.
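That same protocol suite is what every networked program speaks today. As a small illustration (in Python, using the standard socket API), a client and server exchange a message over TCP/IP without either side knowing anything about the physical network underneath:

```python
import socket
import threading

# Create and bind the listening socket up front so the client can
# connect as soon as the handler thread starts.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
srv.listen(1)
host, port = srv.getsockname()

def handle():
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(1024))   # echo the payload straight back

threading.Thread(target=handle, daemon=True).start()

# TCP guarantees the bytes arrive complete and in order; IP routes them.
with socket.create_connection((host, port)) as cli:
    cli.sendall(b"hello over TCP/IP")
    reply = cli.recv(1024)
srv.close()
```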
© dpa The father of the World Wide Web: Tim Berners-Lee.
By 1982, 88 institutions are already linked to ARPANET. In 1983, computer scientist Paul Mockapetris begins developing the Domain Name System, or DNS for short – thus inventing directory enquiries for the Internet. Instead of having to remember the string of numbers that is the IP address, it’s now sufficient to know a domain name such as “example.org” to get connected by DNS to the address of the correct computer. Six years later, Tim Berners-Lee lays the foundations for the World Wide Web as we know it today.
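The “directory enquiries” idea can be captured in a toy lookup table. This is a deliberate simplification of DNS – the real system is a distributed, hierarchical database – and the addresses below come from the reserved documentation range, purely for illustration:

```python
# A toy name-to-address directory in the spirit of DNS.
# Addresses are illustrative (reserved documentation range 192.0.2.0/24).
dns_table = {
    "example.org": "192.0.2.10",
    "example.com": "192.0.2.20",
}

def resolve(domain: str) -> str:
    """Return the IP address registered for a domain name."""
    try:
        return dns_table[domain]
    except KeyError:
        # Real DNS answers with NXDOMAIN for unknown names
        raise LookupError(f"no record for {domain}")

ip = resolve("example.org")
```

The user remembers “example.org”; the directory supplies the string of numbers the network actually needs.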
Winds of change: Tim Berners-Lee and the World Wide Web
All the British physicist at the CERN research centre near Geneva actually wants to do is to develop a system to improve the exchange of information for scientists and universities. To do so, the researcher comes up with the required components of such a system. The first building block is a shared language, HTML, with which electronic documents with text, images and hyperlinks can be structured. The second is the URL, a web address that is unique to each website and can be called up via links. Added to this is a protocol that regulates the transmission of information in the Net: the Hypertext Transfer Protocol, or HTTP for short. The webpages are stored on a server. Visitors can then read this information via an application on their PC - the first browser, which goes by the name of WorldWideWeb.
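Berners-Lee’s three building blocks can be shown in miniature: an HTML document containing a hyperlink, and a URL taken apart with Python’s standard library. The URL is that of the first website at CERN; the parsing code is ours, not his.

```python
from urllib.parse import urlparse

# An HTML document: structured text with a hyperlink to another page.
html = """<html>
  <body>
    <h1>Information Management: A Proposal</h1>
    <a href="http://info.cern.ch/hypertext/WWW/TheProject.html">The project</a>
  </body>
</html>"""

# A URL names the protocol, the server, and the document on that server.
url = urlparse("http://info.cern.ch/hypertext/WWW/TheProject.html")
# url.scheme -> the transfer protocol (HTTP)
# url.netloc -> the web server holding the page
# url.path   -> the document on that server
```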
Berners-Lee first presents his concept in March 1989. By the end of 1990, the WWW prototype is ready to go into service. The first web server goes live in the physicist’s lab and is graced with a hand-written note: “This is a server. Don’t switch it off!” Berners-Lee has already built the first website on the server, which is used by the CERN researchers as a platform to swap information and also explains the principle of the World Wide Web.
© dpa The actual idea was just to improve the exchange of information in the European Organisation for Nuclear Research, CERN - and thus it was that the WWW was born.
NCSA Mosaic: The browser expands
In 1990, the commercial phase of the Internet begins with the decommissioning of ARPANET. The first commercial Internet provider, The World, is launched. From this point on, anyone with a computer and a modem can officially get onto the Internet. And the first machines go online too. The father of all online equipment is John Romkey’s toaster, which can be switched on and off over the Internet. In the following year, the first ever webcam goes into service in the local network of the computer laboratory at the University of Cambridge. It shows how full the coffee machine is outside what is known as the Trojan Room, with the aim of sparing students the bother of pointless trips to stock up on caffeine.
In the browser programmed by Berners-Lee, however, an additional programme is needed to view the coffee machine. This is because Berners-Lee’s WorldWideWeb browser can only show graphic images in a separate window. This changes in 1993 with the introduction of the NCSA Mosaic. This graphics-enabled browser with the rotating globe in its logo spreads like wildfire and soon opens a window to the Internet for people all over the world; it becomes the blueprint for the first commercial browser, Netscape, and goes on to form the basis for the code for Internet Explorer. In April 1993, CERN releases the WWW technology into the public domain. This ensures that everyone can use and further develop this system free of charge, which is the basis of the principle that the World Wide Web should have an open structure. In 1994, Tim Berners-Lee leaves CERN for MIT, where he founds the World Wide Web Consortium (W3C), whose purpose from that time on is to monitor the development of the WWW.
The net becomes searchable
As the web spreads, the number of websites grows, and, with it, the need to introduce order into the information Wild West. Search engines start to pop up like mushrooms: the year 1994 sees the birth of Lycos and Yahoo, followed in 1995 by AltaVista; in 1996, the precursor to Fireball, Germany’s first search engine, goes live. Two years later, a search engine by the name of Google exits its beta phase. Although the search engine market is already populated by various providers, the new kid on the block soon becomes extremely popular. This is because, unlike many of its competitors, the search engine’s interface is uncluttered and well laid-out. And, above all, it yields above-average results at high speed.
Alongside the content of a website, Google also factors its popularity into its search results rankings by determining the number and quality of links to a site. The logic behind this is that the more often people place links to a site, the more relevant it is to them - and to other users who are interested in similar content. This reduces the number of obscure hits with which the competitors have to contend.
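The link-counting logic described above can be sketched as a simple iterative score. This is a deliberately simplified model in the spirit of that idea, not Google’s actual algorithm; the three-page web and the damping value are illustrative assumptions.

```python
# Hypothetical mini-web: each page maps to the pages it links to.
links = {
    "a": ["c"],
    "b": ["c"],
    "c": ["a"],
}

def link_scores(links, iterations=20, damping=0.85):
    """Iteratively score pages: a page inherits score from pages linking to it."""
    pages = list(links)
    score = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Each linking page passes on its score, split across its outlinks
            inbound = sum(score[q] / len(links[q])
                          for q in pages if p in links[q])
            new[p] = (1 - damping) / len(pages) + damping * inbound
        score = new
    return score

scores = link_scores(links)
# "c" is linked to by both "a" and "b", so it ends up ranked highest
```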
Digital infections: The Brain virus
Ever larger numbers of users can soon browse through ever increasing volumes of websites, offers and information - which, however, also allows them to be found by people and programs they would rather not be in contact with. As early as 1986, the first ever computer virus, known as Brain, starts doing the rounds. Because this harmful program infects the boot sector of diskettes, which in turn frequently change hands, especially in school playgrounds, it soon spreads all over the world.
The Internet’s global network opens up a whole array of new possibilities for reaching huge numbers of users with homemade viruses and worms in the shortest possible time. The latter are especially dangerous: Whereas viruses burrow into software before becoming infectious when the host program is launched, worms activate independently. They try to spread autonomously via e-mail programs or vulnerabilities in other network services. It’s for this reason that they can go viral extremely quickly, as demonstrated by the first computer worm in history.
The Morris worm and its impact
This little computer program, written on 2 November 1988 by US IT student Robert Tappan Morris, is actually merely intended to count how many computers are connected to the Internet. Due to a programming error, however, the program isn’t content just to replicate itself once as it jumps from one computer to the next. Instead, it produces multiple copies of itself in an endless loop, infecting anything from 6,000 to 60,000 computers.
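The flaw can be illustrated with a toy simulation. The worm was supposed to skip machines it had already infected, but so that faked “already infected” replies couldn’t shut it out, it sometimes reinfected anyway – so copies piled up on each host. All figures here are illustrative, not taken from the real worm.

```python
import random

random.seed(1)   # deterministic run for illustration

def spread(hosts=100, steps=8, reinfect_prob=1 / 7):
    """Each step, every infected host probes one random host.

    New hosts always get infected; already-infected hosts accumulate an
    extra running copy with some probability - the fatal design flaw.
    """
    copies = {0: 1}                      # host id -> running worm copies
    for _ in range(steps):
        for host in list(copies):
            target = random.randrange(hosts)
            if target not in copies or random.random() < reinfect_prob:
                copies[target] = copies.get(target, 0) + 1
    return sum(copies.values()), len(copies)

total_copies, infected_hosts = spread()
# total_copies exceeds infected_hosts: duplicate copies choke each machine
```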
For the first time in the history of cybercrime, the US Department of Justice invokes the Computer Fraud and Abuse Act, which was enacted two years previously. Morris, the unwitting malware coder, is given a three-year suspended jail sentence, fined 10,050 dollars and sentenced to 400 hours of community service. In response to his worm, funds from the US defence department are diverted in November 1988 to Carnegie Mellon University for the purpose of founding the world’s first Computer Emergency Response Team (CERT). CERTs have been working ever since on means, including prevention, of addressing issues and problems of computer security.
From the Field Office for IT Security to TÜViT
How to protect sensitive data is also a question with which the IT specialists from RWTÜV in Essen are wrestling in the mid-1980s. In 1985, the IT experts start to develop a network for centralised data processing, to which the Medical-Psychological Institute (MPI) is to be connected. The particular problem is that confidential data are transferred out of the company via the same cable as other data. The computer specialists accordingly develop a technical solution to guarantee the security of the data. Armed with this experience, they will in the coming years go on to provide solutions for data security at other companies.
Starting in 1989, the newly founded Field Office for IT Security at RWTÜV examines the IT systems of the German Aerospace Center (DLR), the Dortmund-based steel giant Hoesch and Westphalian electricity provider VEW. The year 1991 sees the launch of the Institute for Information Technology (IfT), whose job it is to safeguard the security of software and electronic systems such as heart pacemakers. In the following year, the Field Office for IT Security is affiliated to the IfT, which changes its name to TÜV Informationstechnik (TÜViT) in 1995.
“Good hackers” in the test laboratory
In Germany, TÜViT advises companies like Mannesmann on modern motorway toll systems and Deutsche Telekom on security issues in network management systems. International tech giants like Microsoft, IBM and Toshiba also beat a path to the security experts’ door in Essen for advice. The experts, in turn, soon become involved in investigations into smart cards that eventually find their way into people’s wallets and bags in the form of telephone, credit or health insurance cards.
In their software and hardware labs, the specialists attack the chips with laser beams and electric coils and in cold chambers to find out whether, when manipulated, their memories disclose information that they are supposed to keep to themselves. As good hackers, they also train their digital crosshairs on the IT systems of companies and the operators of critical infrastructures. In 2012, for example, the experts simulate an attack on the IT systems of the distribution network operator Rhein-Main-Neckar (VNB). Such penetration tests are designed to detect and close potential vulnerabilities in IT infrastructure before malicious hackers or their computer worms can slip through. Starting in 2014, the experts also carry out security checks for smartphone apps.
Smartphones begin their triumphal march in 2007 with the release of the first iPhone. In combination with ever faster and cheaper data connections, the number of users and devices on the network explodes: in 2009, around six million people in Germany own a smartphone; three years later the number is 31 million. By the end of 2018, 57 million Germans are connected to the Internet via their mobiles, and according to recent studies, one in three people worldwide is now online.
Security for the fully connected future
We humans are no longer alone on the Net: John Romkey’s Internet toaster has fathered countless progeny. Across the globe, for example, around 120 million smart speakers have been installed in people’s living rooms. As the era of the Internet of Things advances, more and more machines, vehicles and household appliances are going to be permanently online to make our lives more convenient and production processes easier, faster and more efficient. An enticing prospect - for cybercriminals as well as ordinary people. After all, every additional device potentially multiplies the number of vulnerabilities that hackers can exploit.
The recipe of the TÜV security specialists for reducing the digital threat is called Security by Design. The idea behind it is that manufacturers should take full account of software and hardware security requirements during product development to prevent potential security vulnerabilities from the outset. It is a principle that the experts are putting into practice with what are known as smart meter gateways. These communication units transmit the data from intelligent electricity meters to the energy suppliers and will play a key role in the smart grid of the future. The TÜV experts are supporting the authorities and the Federal Government in their attempts to define the security requirements for the smart meter gateways. At the same time, they are putting the physical devices themselves through their electronic paces to ensure that consumer data leave the house in a manner that guards them against hackers – and that the fully networked future is as secure as possible.
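Security by Design can be made concrete in miniature: a hypothetical smart meter authenticates its readings before they leave the house, so that tampering in transit is detectable. HMAC-SHA256 here stands in for the certified cryptography of a real gateway, and the shared key handling is purely illustrative.

```python
import hashlib
import hmac
import json

SHARED_KEY = b"provisioned-during-installation"   # hypothetical secret key

def sign_reading(reading: dict) -> dict:
    """Attach an authentication tag so the supplier can verify integrity."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def verify(message: dict) -> bool:
    """Recompute the tag; any change to the payload makes it mismatch."""
    expected = hmac.new(SHARED_KEY, message["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_reading({"meter": "X-42", "kwh": 1234.5})
ok = verify(msg)

# An attacker who alters the reading in transit breaks the tag:
tampered = dict(msg, payload=msg["payload"].replace("1234.5", "0.0"))
```

Building the check into the device from the start, rather than bolting it on later, is exactly the point of the Security by Design principle.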