Showing results for tags 'the internet'.
-
A history of the Internet, part 2: The high-tech gold rush begins
Karlston posted a news item in Technology News
The Web Era arrives, the browser wars flare, and a bubble bursts.

Welcome to the second article in our three-part series on the history of the Internet. If you haven’t already, read part one here. As a refresher, here’s the story so far: The ARPANET was a project started by the Defense Department’s Advanced Research Projects Agency in 1969 to network different mainframe computers together across the country. Later, it evolved into the Internet, connecting multiple global networks together using a common TCP/IP protocol. By the late 1980s, investments from the National Science Foundation (NSF) had established an “Internet backbone” supporting hundreds of thousands of users worldwide. These users were mostly professors, researchers, and graduate students.

In the meantime, commercial online services like CompuServe were growing rapidly. These systems connected personal computer users, using dial-up modems, to a mainframe running proprietary software. Once online, people could read news articles and message other users. In 1989, CompuServe added the ability to send email to anyone on the Internet.

In 1965, Ted Nelson submitted a paper to the Association for Computing Machinery. He wrote: “Let me introduce the word ‘hypertext’ to mean a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper.” The paper was part of a grand vision he called Xanadu, after the poem by Samuel Taylor Coleridge. A decade later, in his book “Dream Machines/Computer Lib,” he described Xanadu thusly: “To give you a screen in your home from which you can see into the world’s hypertext libraries.” He admitted that the world didn’t have any hypertext libraries yet, but that wasn’t the point. One day, maybe soon, it would. And he was going to dedicate his life to making it happen.

As the Internet grew, it became more and more difficult to find things on it. There were lots of cool documents like the Hitchhiker’s Guide To The Internet, but to read them, you first had to know where they were. The community of helpful programmers on the Internet leapt to the challenge. Alan Emtage at McGill University in Montreal wrote a tool called Archie. It searched a list of public file transfer protocol (FTP) servers. You still had to know the file name you were looking for, but Archie would let you download it no matter what server it was on.

An improved search tool was Gopher, written by a team headed by Mark McCahill at the University of Minnesota. It used a text-based menu system so that users didn’t have to remember file names or locations. Gopher servers could display a customized collection of links inside nested menus, and they integrated with other services like Archie and Veronica to help users search for more resources.

Gopher is a text-based Internet search and retrieval system. It’s still running in 2025! Credit: Jeremy Reimer

Here is the multi-page result of searching for “Hitchhiker” on Gopher. Credit: Jeremy Reimer

By hitting the Enter key, you can view the document you were looking for. Credit: Jeremy Reimer
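Gopher's protocol is simple enough that a few lines of socket code can still speak it. Below is a minimal sketch of a client request, written in Python purely as an illustration; the server name gopher.floodgap.com is an assumption (a well-known public Gopher server), not something taken from the article.

```python
import socket

# Minimal Gopher client sketch: connect to port 70, send a selector string
# terminated by CRLF, and read back the plain-text response.
HOST, PORT = "gopher.floodgap.com", 70  # assumed public server

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    sock.sendall(b"\r\n")  # an empty selector asks for the server's root menu
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)

menu = b"".join(chunks).decode("utf-8", errors="replace")
# Each menu line holds an item type and display string, then the selector,
# host, and port, separated by tabs. Type "1" is a submenu, "0" is a document.
for line in menu.splitlines()[:10]:
    print(line)
```

If the server is reachable, the output is the tab-separated root menu, much like the nested menus in the screenshots above.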
A Gopher server could provide many of the things we take for granted today: search engines, personal pages that could contain links, and downloadable files. But this wasn’t enough for a British computer scientist who was working at CERN, an intergovernmental institute that operated the world’s largest particle physics lab.

The World Wide Web

Hypertext had come a long way since Ted Nelson had coined the word in 1965. Bill Atkinson, a member of the original Macintosh development team, released HyperCard in 1987. It used the Mac’s graphical interface to let anyone develop “stacks,” collections of text, graphics, and sounds that could be connected together with clickable links. There was no networking, but stacks could be shared with other users by sending the files on a floppy disk.

The home screen of HyperCard 1.0 for Macintosh. Credit: Jeremy Reimer

HyperCard came with a tutorial, written in HyperCard, explaining how it worked. Credit: Jeremy Reimer

There were also sample applications, like this address book. Credit: Jeremy Reimer

Hypertext was so big that conferences were held just to discuss it in 1987 and 1988. Even Ted Nelson had finally found a sponsor for his personal dream: Autodesk founder John Walker had agreed to spin up a subsidiary to create a commercial version of Xanadu.

It was in this environment that CERN fellow Tim Berners-Lee drew up his own proposal in March 1989 for a new hypertext environment. His goal was to make it easier for researchers at CERN to collaborate and share information about new projects. The proposal (which he called “Mesh”) had several objectives. It would provide a system for connecting information about people, projects, documents, and hardware being developed at CERN. It would be decentralized and distributed over many computers. Not all the computers at CERN were the same—there were Digital Equipment minis running VMS, some Macintoshes, and an increasing number of Unix workstations. Each of them should be able to view the information in the same way. As Berners-Lee described it, “There are few products which take Ted Nelson's idea of a wide ‘docuverse’ literally by allowing links between nodes in different databases. In order to do this, some standardization would be necessary.”

The original proposal document for the web, written in Microsoft Word for Macintosh 4.0, downloaded from Tim Berners-Lee’s website. Credit: Jeremy Reimer

The document ended by describing the project as “practical” and estimating that it might take two people six to 12 months to complete. Berners-Lee’s manager called it “vague, but exciting.” Robert Cailliau, who had independently proposed a hypertext system for CERN, joined Berners-Lee to start designing the project.

The computer Berners-Lee used was a NeXT cube, from the company Steve Jobs started after he was kicked out of Apple. NeXT workstations were expensive, but they came with a software development environment that was years ahead of its time. If you could afford one, it was like a coding accelerator. John Carmack would later write DOOM on a NeXT.

The NeXT workstation that Tim Berners-Lee used to create the World Wide Web. Please do not power down the World Wide Web. Credit: Coolcaesar (CC BY-SA 3.0)

Berners-Lee called his application “WorldWideWeb.” The software consisted of a server, which delivered pages of text over a new protocol called “Hypertext Transfer Protocol,” or HTTP, and a browser that rendered the text. The browser translated markup code like “h1” to indicate a larger header font or “a” to indicate a link. There was also a graphical webpage editor, but it didn’t work very well and was abandoned.

The very first website was published, running on the development NeXT cube, on December 20, 1990. Anyone who had a NeXT machine and access to the Internet could view the site in all its glory.

The original WorldWideWeb browser running on NeXTstep 3, browsing the world’s first webpage. Credit: Jeremy Reimer
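The request-and-response cycle that WorldWideWeb introduced still works the same way today. As a rough illustration, the Python sketch below performs an HTTP GET and then scans the returned HTML for the “a” link markup described above. The URL is an assumption: CERN keeps a restored copy of the first webpage online at info.cern.ch, and the sketch only works if that copy is still served over plain HTTP.

```python
import re
from http.client import HTTPConnection

# Fetch one page of HTML over HTTP, the way the first browser and server
# talked to each other. info.cern.ch (CERN's restored copy of the first
# website) is assumed here; any plain-HTTP server would do.
conn = HTTPConnection("info.cern.ch", 80, timeout=10)
conn.request("GET", "/hypertext/WWW/TheProject.html")
response = conn.getresponse()
html = response.read().decode("utf-8", errors="replace")
conn.close()

print(response.status, response.reason)
# Early browsers did essentially this: find the "a" markup and turn it
# into something the user can follow.
for target in re.findall(r'<a\s[^>]*href\s*=\s*"?([^"\s>]+)', html, re.IGNORECASE):
    print("link ->", target)
```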
Clicking links in WorldWideWeb would open up new windows. Nevertheless, there were options to navigate “forward,” “backward,” and “up.” Credit: Jeremy Reimer

There were no inline images. However, you could link to images that would pop up in a new window as long as they were TIFF files. Credit: Jeremy Reimer

Because NeXT only sold 50,000 computers in total, that intersection did not represent a lot of people. Eight months later, Berners-Lee posted a reply to a question about interesting projects on the alt.hypertext Usenet newsgroup. He described the World Wide Web project and included links to all the software and documentation. That one post changed the world forever.

Mosaic

On December 9, 1991, President George H.W. Bush signed into law the High Performance Computing Act, also known as the Gore Bill. The bill paid for an upgrade of the NSFNET backbone, as well as a separate funding initiative for the National Center for Supercomputing Applications (NCSA). NCSA, based out of the University of Illinois, became a dream location for computing research. “NCSA was heaven,” recalled Alex Totic, who was a student there. “They had all the toys, from Thinking Machines to Crays to Macs to beautiful networks. It was awesome.”

As is often the case in academia, the professors came up with research ideas but assigned most of the actual work to their grad students. One of those students was Marc Andreessen, who joined NCSA as a part-time programmer for $6.85 an hour. Andreessen was fascinated by the World Wide Web, especially browsers. A new browser for Unix computers, ViolaWWW, was making the rounds at NCSA. No longer confined to the NeXT workstation, the web had caught the attention of the Unix community. But that community was still too small for Andreessen. “To use the Net, you had to understand Unix,” he said in an interview with Forbes. “And the current users had no interest in making it easier. In fact, there was a definite element of not wanting to make it easier, of actually wanting to keep the riffraff out.”

Andreessen enlisted the help of his colleague, programmer Eric Bina, and started developing a new web browser in December 1992. In a little over a month, they released version 0.5 of “NCSA X Mosaic”—so called because it was designed to work with Unix’s X Window System. Ports for the Macintosh and Windows followed shortly thereafter.

It wasn’t easy to get Mosaic working on a Windows computer in 1993. You had to purchase and configure a third-party TCP/IP application, like Trumpet Winsock, before Mosaic would even start. Credit: Jeremy Reimer

But once you got it going, the web was a lot easier and more exciting to use. Mosaic added the <img> tag, which allowed images to be displayed inside webpages as long as they were GIFs. The GIF format was invented by CompuServe. Credit: Jeremy Reimer

Being available on the most popular graphical computers changed the trajectory of the web. In just 18 months, millions of copies of Mosaic were downloaded, and the rate was accelerating. The riffraff was here to stay.

Netscape

The instant popularity of Mosaic caused the management at NCSA to take a deeper interest in the project. Jon Mittelhauser, who co-wrote the Windows version, recalled that the small team “suddenly found ourselves in meetings with forty people planning our next features, as opposed to the five of us making plans at 2 am over pizzas and Cokes.” Andreessen was told to step aside and let more experienced managers take over. Instead, he left NCSA and moved to California, looking for his next opportunity.
“I thought I had missed the whole thing,” Andreessen said. “The overwhelming mood in the Valley when I arrived was that the PC was done, and by the way, the Valley was probably done because there was nothing else to do.” But his reputation had preceded him. Jim Clark, the founder of Silicon Graphics, was also looking to start something new. A friend had shown him a demo of Mosaic, and Clark reached out to meet with Andreessen.

At a meeting, Andreessen pitched the idea of building a “Mosaic killer.” He showed Clark a graph of web users doubling every five months. Excited by the possibilities, the two men founded Mosaic Communications Corporation on April 4, 1994. Andreessen quickly recruited programmers from his former team, and they got to work. They codenamed their new browser “Mozilla” since it was going to be a monster that would devour Mosaic. Beta versions were titled “Mosaic Netscape,” but the University of Illinois threatened to sue the new company. To avoid litigation, the name of the company and browser were changed to Netscape, and the programmers audited their code to ensure none of it had been copied from NCSA.

Netscape became the model for all Internet startups to follow. Programmers were given unlimited free sodas and encouraged to basically never leave the office. “Netscape Time” accelerated software development schedules, and because updates could be delivered over the Internet, old principles of quality assurance went out the window. And the business model? It was simply to “get big fast,” and profits could be figured out later.

Work proceeded quickly, and the 1.0 version of Netscape Navigator and the Netsite web server were released on December 15, 1994, for Windows, Macintosh, and Unix systems running the X Window System. The browser was priced at $39 for commercial users, but there was no charge for “academic and non-profit use, as well as for free evaluation purposes.”

Version 0.9 was called “Mosaic Netscape,” and the logo and company were still Mosaic. Credit: Jeremy Reimer

Version 1.0 was just called Netscape, although the old logo snuck into the installation screen. Credit: Jeremy Reimer

The new logo was a giant N. Images were downloaded progressively, making browsing much faster. Credit: Jeremy Reimer

Netscape quickly became the standard. Within six months, it captured over 70 percent of the market share for web browsers. On August 9, 1995, only 16 months after the founding of the company, Netscape filed for an Initial Public Offering. A last-minute decision doubled the offering price to $28 per share, and on the first day of trading, the stock soared to $75 and closed at $58.25. The Web Era had officially arrived.

The web battles proprietary solutions

The excitement over a new way to transmit text and images to the public over phone lines wasn’t confined to the World Wide Web. Commercial online systems like CompuServe were also evolving to meet the graphical age. These companies released attractive new front-ends for their services that ran on DOS, Windows, and Macintosh computers. There were also new services that were graphics-only, like Prodigy, a joint venture between IBM and Sears, and an upstart that had sprung from the ashes of a Commodore 64 service called Quantum Link. This was America Online, or AOL. Even Microsoft was getting into the act. Bill Gates believed that the “Information Superhighway” was the future of computing, and he wanted to make sure that all roads went through his company’s toll booth.
The highly anticipated Windows 95 was scheduled to ship with a bundled dial-up online service called the Microsoft Network, or MSN.

The CompuServe Information Manager added a graphical front-end to the service. It helped cut down on hourly connection fees. Credit: Jeremy Reimer

CompuServe Information Manager also came with a customized version of the Mosaic web browser, which let users surf the web while connected to CompuServe. Credit: Jeremy Reimer

America Online was a new graphical online service that, among other things, let you send email to anyone on the Internet. It was wildly popular in the US but less so in the rest of the world. Credit: Jeremy Reimer

Microsoft’s answer to services like CompuServe and AOL was the Microsoft Network, which came bundled with Windows 95. Credit: Jeremy Reimer

At first, it wasn’t clear which of these online services would emerge as the winner. But people assumed that at least one of them would beat the complicated, nerdy Internet. CompuServe was the oldest, but AOL was nimbler and found success by sending out millions of free “starter” disks (and later, CDs) to potential customers. Microsoft was sure that bundling MSN with the upcoming Windows 95 would ensure victory. Most of these services decided to hedge their bets by adding a sort of “side access” to the World Wide Web. After all, if they didn’t, their competitors would.

At the same time, smaller companies (many of them former bulletin board services) started becoming Internet service providers. These smaller “ISPs” could charge less money than the big services because they didn’t have to create any content themselves. Thousands of new websites were appearing on the Internet every day, much faster than new sections could be added to AOL or CompuServe.

The instruction manual that came with my first full Internet connection. It was a startup that had purchased my favorite local BBS, Mind Link! Credit: Jeremy Reimer

The manual did its best to introduce new users to the World Wide Web. Windows 95 came with TCP/IP built in, so it was a lot easier to get online. Credit: Jeremy Reimer

The tipping point happened very quickly. Before Windows 95 had even shipped, Bill Gates wrote his famous “Internet Tidal Wave” memo, where he assigned the Internet the “highest level of importance.” MSN was quickly changed to become more of a standard ISP and moved all of its content to the web. Microsoft rushed to release its own web browser, Internet Explorer, and bundled it with the Windows 95 Plus Pack. The hype and momentum were entirely with the web now. It was the most exciting, most transformative technology of its time. The decade-long battle to control the Internet by forcing a shift to a new OSI standards model was forgotten. The web was all anyone cared about, and the web ran on TCP/IP.

The browser wars

Netscape had never expected to make a lot of money from its browser, as it was assumed that most people would continue to download new “evaluation” versions for free. Executives were pleasantly surprised when businesses started sending Netscape huge checks. The company went from $17 million in revenue in 1995 to $346 million the following year, and the press started calling Marc Andreessen “the new Bill Gates.” The old Bill Gates wasn’t having any of that. Following his 1995 memo, Microsoft worked hard to improve Internet Explorer and made it available for free, including to business users. Netscape tried to fight back.
It added groundbreaking new features like JavaScript, which was inspired by LISP but with a syntax similar to Java, the hot new programming language from Sun Microsystems. But it was hard to compete with free, and Netscape’s market share started to fall. By 1996, both browsers had reached version 3.0 and were roughly equal in terms of features. The battle continued, but when the Apache Software Foundation released its free web server, Netscape’s other source of revenue dried up as well. The writing was on the wall. There was no better way to declare your allegiance to a web browser in 1996 than adding “Best Viewed In” above one of these icons. Credit: Jeremy Reimer The dot-com boom In 1989, the NSF lifted the restrictions on providing commercial access to the Internet, and by 1991, it had removed all barriers to commercial trade on the network. With the sudden ascent of the web, thanks to Mosaic, Netscape, and Internet Explorer, new companies jumped into this high-tech gold rush. But at first, it wasn’t clear what the best business strategy was. Users expected everything on the web to be free, so how could you make money? Many early web companies started as hobby projects. In 1994, Jerry Yang and David Filo were electrical engineering PhD students at Stanford University. After Mosaic started popping off, they began collecting and trading links to new websites. Thus, “Jerry’s Guide to the World Wide Web” was born, running on Yang’s Sun workstation. Renamed Yahoo! (Yet Another Hierarchical, Officious Oracle), the site exploded in popularity. Netscape put multiple links to Yahoo on its main navigation bar, which further accelerated growth. “We weren’t really sure if you could make a business out of it, though,” Yang told Fortune. Nevertheless, venture capital companies came calling. Sequoia, which had made millions investing in Apple, put in $1 million for 25 percent of Yahoo. Yahoo.com as it would have appeared in 1995. Credit: Jeremy Reimer Another hobby site, AuctionWeb, was started in 1995 by Pierre Omidyar. Running on his own home server using the regular $30 per month service from his ISP, the site let people buy and sell items of almost any kind. When traffic started growing, his ISP told him it was increasing his Internet fees to $250 per month, as befitting a commercial enterprise. Omidyar decided he would try to make it a real business, even though he didn’t have a merchant account for credit cards or even a way to enforce the new 5 percent or 2.5 percent royalty charges. That didn’t matter, as the checks started rolling in. He found a business partner, changed the name to eBay, and the rest was history. AuctionWeb (later eBay) as it would have appeared in 1995. Credit: Jeremy Reimer In 1993, Jeff Bezos, a senior vice president at a hedge fund company, was tasked with investigating business opportunities on the Internet. He decided to create a proof of concept for what he described as an “everything store.” He chose books as an ideal commodity to sell online, since a book in one store was identical to one in another, and a website could offer access to obscure titles that might not get stocked in physical bookstores. He left the hedge fund company, gathered investors and software development talent, and moved to Seattle. There, he started Amazon. At first, the site wasn’t much more than an online version of an existing bookseller catalog called Books In Print. But over time, Bezos added inventory data from the two major book distributors, Ingram and Baker & Taylor. 
The promise of access to every book in the world was exciting for people, and the company grew quickly.

Amazon.com as it would have appeared in 1995. Credit: Jeremy Reimer

The explosive growth of these startups fueled a self-perpetuating cycle. As publications like Wired experimented with online versions of their magazines, they invented and sold banner ads to fund their websites. The best customers for these ads were other web startups. These companies wanted more traffic, and they knew ads on sites like Yahoo were the best way to get it. Yahoo salespeople could then turn around and point to their exponential ad sales curves, which caused Yahoo stock to rise. This encouraged people to fund more web startups, which would all need to advertise on Yahoo. These new startups also needed to buy servers from companies like Sun Microsystems, causing those stocks to rise as well.

The crash

In the latter half of the 1990s, it looked like everything was going great. The economy was booming, thanks in part to the rise of the World Wide Web and the huge boost it gave to computer hardware and software companies. The NASDAQ index of tech-focused stocks painted a clear picture of the boom.

The NASDAQ composite index in the 1990s. Credit: Jeremy Reimer

Federal Reserve chairman Alan Greenspan called this phenomenon “irrational exuberance” but didn’t seem to be in a hurry to stop it. The fact that most new web startups didn’t have a realistic business model didn’t seem to bother investors. Sure, WebVan might have been paying more to deliver groceries than they earned from customers, but look at that growth curve!

The exuberance couldn’t last forever. The NASDAQ peaked at 8,843.87 in February 2000 and started to go down. In one month, it lost 34 percent of its value, and by August 2001, it was down to 3,253.38. Web companies laid off employees or went out of business completely. The party was over.

Andreessen said that the tech crash scarred him. “The overwhelming message to our generation in the early nineties was ‘You’re dirty, you’re all about grunge—you guys are fucking losers!’ Then the tech boom hit, and it was ‘We are going to do amazing things!’ And then the roof caved in, and the wisdom was that the Internet was a mirage. I 100 percent believed that because the rejection was so personal—both what everybody thought of me and what I thought of myself.”

But while some companies quietly celebrated the end of the whole Internet thing, others would rise from the ashes of the dot-com collapse. That’s the subject of our third and final article.

Source

-
In our new 3-part series, we remember the people and ideas that made the Internet. In a very real sense, the Internet, this marvelous worldwide digital communications network that you’re using right now, was created because one man was annoyed at having too many computer terminals in his office. The year was 1966. Robert Taylor was the director of the Advanced Research Projects Agency’s Information Processing Techniques Office. The agency was created in 1958 by President Eisenhower in response to the launch of Sputnik. So Taylor was in the Pentagon, a great place for acronyms like ARPA and IPTO. He had three massive terminals crammed into a room next to his office. Each one was connected to a different mainframe computer. They all worked slightly differently, and it was frustrating to remember multiple procedures to log in and retrieve information. Author’s re-creation of Bob Taylor’s office with three teletypes. Credit: Rama & Musée Bolo (Wikipedia/Creative Commons), steve lodefink (Wikipedia/Creative Commons), The Computer Museum @ System Source In those days, computers took up entire rooms, and users accessed them through teletype terminals—electric typewriters hooked up to either a serial cable or a modem and a phone line. ARPA was funding multiple research projects across the United States, but users of these different systems had no way to share their resources with each other. Wouldn’t it be great if there was a network that connected all these computers? The dream is given form Taylor’s predecessor, Joseph “J.C.R.” Licklider, had released a memo in 1963 that whimsically described an “Intergalactic Computer Network” that would allow users of different computers to collaborate and share information. The idea was mostly aspirational, and Licklider wasn’t able to turn it into a real project. But Taylor knew that he could. In a 1998 interview, Taylor explained: “In most government funding, there are committees that decide who gets what and who does what. In ARPA, that was not the way it worked. The person who was responsible for the office that was concerned with that particular technology—in my case, computer technology—was the person who made the decision about what to fund and what to do and what not to do. The decision to start the ARPANET was mine, with very little or no red tape.” Taylor marched into the office of his boss, Charles Herzfeld. He described how a network could save ARPA time and money by allowing different institutions to share resources. He suggested starting with a small network of four computers as a proof of concept. “Is it going to be hard to do?” Herzfeld asked. “Oh no. We already know how to do it,” Taylor replied. “Great idea,” Herzfeld said. “Get it going. You’ve got a million dollars more in your budget right now. Go.” Taylor wasn’t lying—at least, not completely. At the time, there were multiple people around the world thinking about computer networking. Paul Baran, working for RAND, published a paper in 1964 describing how a distributed military networking system could be made resilient even if some nodes were destroyed in a nuclear attack. Over in the UK, Donald Davies independently came up with a similar concept (minus the nukes) and invented a term for the way these types of networks would communicate. He called it “packet switching.” On a regular phone network, after some circuit switching, a caller and answerer would be connected via a dedicated wire. They had exclusive use of that wire until the call was completed. 
Computers communicated in short bursts and didn’t require pauses the way humans did. So it would be a waste for two computers to tie up a whole line for extended periods. But how could many computers talk at the same time without their messages getting mixed up?

Packet switching was the answer. Messages were divided into multiple snippets. The order and destination were included with each message packet. The network could then route the packets in any way that made sense. At the destination, all the appropriate packets were put into the correct order and reassembled. It was like moving a house across the country: It was more efficient to send all the parts in separate trucks, each taking their own route to avoid congestion.

A simplified diagram of how packet switching works. Credit: Jeremy Reimer
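To make the idea concrete, here is a toy sketch in Python. It is an illustration of the concept only, not any real protocol: a message is cut into numbered packets, the "network" hands them over in an arbitrary order, and the receiver uses the sequence numbers to put the message back together.

```python
import random

def to_packets(message: str, size: int = 8) -> list[tuple[int, str]]:
    """Chop a message into numbered chunks, like packets with sequence numbers."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return list(enumerate(chunks))

def reassemble(packets: list[tuple[int, str]]) -> str:
    """Sort by sequence number and glue the chunks back together."""
    return "".join(chunk for _, chunk in sorted(packets))

message = "Wouldn't it be great if there was a network that connected all these computers?"
packets = to_packets(message)
random.shuffle(packets)  # the network may deliver packets in any order it likes
assert reassemble(packets) == message
print(reassemble(packets))
```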
By the end of 1966, Taylor had hired a program director, Larry Roberts. Roberts sketched a diagram of a possible network on a napkin and met with his team to propose a design. One problem was that each computer on the network would need to use a big chunk of its resources to manage the packets. In a meeting, Wes Clark passed a note to Roberts saying, “You have the network inside-out.” Clark’s alternative plan was to ship a bunch of smaller computers to connect to each host. These dedicated machines would do all the hard work of creating, moving, and reassembling packets.

With the design complete, Roberts sent out a request for proposals for constructing the ARPANET. All they had to do now was pick the winning bid, and the project could begin.

BBN and the IMPs

IBM, Control Data Corporation, and AT&T were among the first to respond to the request. They all turned it down. Their reasons were the same: None of these giant companies believed the network could be built. IBM and CDC thought the dedicated computers would be too expensive, but AT&T flat-out said that packet switching wouldn’t work on its phone network.

In late 1968, ARPA announced a winner for the bid: Bolt Beranek and Newman (BBN). It seemed like an odd choice. BBN had started as a consulting firm that calculated acoustics for theaters. But the need for calculations led to the creation of a computing division, and its first manager had been none other than J.C.R. Licklider. In fact, some BBN employees had been working on a plan to build a network even before the ARPA bid was sent out.

Robert Kahn led the team that drafted BBN’s proposal. Their plan was to create a network of “Interface Message Processors,” or IMPs, out of Honeywell 516 computers. They were ruggedized versions of the DDP-516 16-bit minicomputer. Each had 24 kilobytes of core memory and no mass storage other than a paper tape reader, and each cost $80,000 (about $700,000 today). In comparison, an IBM 360 mainframe cost between $7 million and $12 million at the time.

An original IMP, the world’s first router. It was the size of a large refrigerator. Credit: Steve Jurvetson (CC BY 2.0)

The 516’s rugged appearance appealed to BBN, which didn’t want a bunch of university students tampering with its IMPs. The computer came with no operating system, but it didn’t really have enough RAM for one. The software to control the IMPs was written on bare metal using the 516’s assembly language. One of the developers was Will Crowther, who went on to create the first computer adventure game.

One other hurdle remained before the IMPs could be put to use: The Honeywell design was missing certain components needed to handle input and output. BBN employees were dismayed that the first 516, which they named IMP-0, didn’t have working versions of the hardware additions they had requested. It fell on Ben Barker, a brilliant undergrad student interning at BBN, to manually fix the machine. Barker was the best choice, even though he had slight palsy in his hands. After several stressful 16-hour days wrapping and unwrapping wires, all the changes were complete and working. IMP-0 was ready.

In the meantime, Steve Crocker at the University of California, Los Angeles, was working on a set of software specifications for the host computers. It wouldn’t matter if the IMPs were perfect at sending and receiving messages if the computers themselves didn’t know what to do with them. Because the host computers were part of important academic research, Crocker didn’t want to seem like he was a dictator telling people what to do with their machines. So he titled his draft a “Request for Comments,” or RFC. This one act of politeness forever changed the nature of computing. Every change since has been done as an RFC, and the culture of asking for comments pervades the tech industry even today.

RFC No. 1 proposed two types of host software. The first was the simplest possible interface, in which a computer pretended to be a dumb terminal. This was dubbed a “terminal emulator,” and if you’ve ever done any administration on a server, you’ve probably used one. The second was a more complex protocol that could be used to transfer large files. This became FTP, which is still used today.

A single IMP connected to one computer wasn’t much of a network. So it was very exciting in September 1969 when IMP-1 was delivered to BBN and then shipped via air freight to UCLA. The first test of the ARPANET was done with simultaneous phone support. The plan was to type “LOGIN” to start a login sequence. This was the exchange: “Did you get the L?” “I got the L!” “Did you get the O?” “I got the O!” “Did you get the G?” “Oh no, the computer crashed!”

It was an inauspicious beginning. The computer on the other end was helpfully filling in the “GIN” part of “LOGIN,” but the terminal emulator wasn’t expecting three characters at once and locked up. It was the first time that autocomplete had ruined someone’s day. The bug was fixed, and the test completed successfully.

IMP-2, IMP-3, and IMP-4 were delivered to the Stanford Research Institute (where Doug Engelbart was keen to expand his vision of connecting people), UC Santa Barbara, and the University of Utah. Now that the four-node test network was complete, the team at BBN could work with the researchers at each node to put the ARPANET through its paces. They deliberately created the first ever denial of service attack in January 1970, flooding the network with packets until it screeched to a halt.

The original ARPANET, predecessor of the Internet. Circles are IMPs, and rectangles are computers. Credit: DARPA

Surprisingly, many of the administrators of the early ARPANET nodes weren’t keen to join the network. They didn’t like the idea of anyone else being able to use resources on “their” computers. Taylor reminded them that their hardware and software projects were mostly ARPA-funded, so they couldn’t opt out. The next month, Stephen Carr, Stephen Crocker, and Vint Cerf released RFC No. 33. It described a Network Control Protocol (NCP) that standardized how the hosts would communicate with each other. After this was adopted, the network was off and running.
J.C.R. Licklider, Bob Taylor, Larry Roberts, Steve Crocker, and Vint Cerf. Credit: US National Library of Medicine, WIRED, Computer Timeline, Steve Crocker, Vint Cerf

The ARPANET grew significantly over the next few years. Important events included the first ever email between two different computers, sent by Ray Tomlinson in July 1972. Another groundbreaking demonstration involved a PDP-10 at Harvard simulating, in real-time, an aircraft landing on a carrier. The data was sent over the ARPANET to an MIT-based graphics terminal, and the wireframe graphical view was shipped back to a PDP-1 at Harvard and displayed on a screen. Although it was primitive and slow, it was technically the first gaming stream.

A big moment came in October 1972 at the International Conference on Computer Communication. This was the first time the network had been demonstrated to the public. Interest in the ARPANET was growing, and people were excited. A group of AT&T executives noticed a brief crash and laughed, confident that they were correct in thinking that packet switching would never work. Overall, however, the demonstration was a resounding success. But the ARPANET was no longer the only network out there.

The two keystrokes on a Model 33 Teletype that changed history. Credit: Marcin Wichary (CC BY 2.0)

A network of networks

The rest of the world had not been standing still. In Hawaii, Norman Abramson and Franklin Kuo created ALOHAnet, which connected computers on the islands using radio. It was the first public demonstration of a wireless packet switching network. In the UK, Donald Davies’ team developed the National Physical Laboratory (NPL) network. It seemed like a good idea to start connecting these networks together, but they all used different protocols, packet formats, and transmission rates. In 1972, the heads of several national networking projects created an International Networking Working Group. Cerf was chosen to lead it.

The first attempt to bridge this gap was SATNET, also known as the Atlantic Packet Satellite Network. Using satellite links, it connected the US-based ARPANET with networks in the UK. Unfortunately, SATNET itself used its own set of protocols. In true tech fashion, an attempt to make a universal standard had created one more standard instead.

Robert Kahn asked Vint Cerf to try and fix these problems once and for all. They came up with a new plan called the Transmission Control Protocol, or TCP. The idea was to connect different networks through specialized computers, called “gateways,” that translated and forwarded packets. TCP was like an envelope for packets, making sure they got to the right destination on the correct network. Because some networks were not guaranteed to be reliable, when one computer successfully received a complete and undamaged message, it would send an acknowledgement (ACK) back to the sender. If the ACK wasn’t received in a certain amount of time, the message was retransmitted.

In December 1974, Cerf, Yogen Dalal, and Carl Sunshine wrote a complete specification for TCP. Two years later, Cerf and Kahn, along with a dozen others, demonstrated the first three-network system. The demo connected packet radio, the ARPANET, and SATNET, all using TCP. Afterward, Cerf, Jon Postel, and Danny Cohen suggested a small but important change: They should take out all the routing information and put it into a new protocol, called the Internet Protocol (IP). All the remaining stuff, like breaking and reassembling messages, detecting errors, and retransmission, would stay in TCP. Thus, in 1978, the protocol officially became known as, and was forever thereafter, TCP/IP.
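The acknowledge-and-retransmit behavior described above is the heart of TCP's reliability, and the idea is easy to sketch. The toy simulation below, in Python, is only an illustration and not real TCP; the loss rate and retry limit are invented for the example.

```python
import random

def unreliable_send(message: str, loss_rate: float = 0.4) -> bool:
    """Pretend to transmit a message and return True only if an ACK comes back."""
    delivered = random.random() > loss_rate                   # the message may be lost
    ack_returned = delivered and random.random() > loss_rate  # so may the ACK
    return ack_returned

def send_with_retransmit(message: str, max_tries: int = 10) -> None:
    """Keep retransmitting until an acknowledgement arrives or we give up."""
    for attempt in range(1, max_tries + 1):
        if unreliable_send(message):
            print(f"ACK received after {attempt} attempt(s)")
            return
        print(f"attempt {attempt}: no ACK, retransmitting")
    raise TimeoutError("gave up waiting for an acknowledgement")

send_with_retransmit("LOGIN")
```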
A map of the Internet in 1977. White dots are IMPs, and rectangles are host computers. Jagged lines connect to other networks. Credit: The Computer History Museum

If the story of creating the Internet was a movie, the release of TCP/IP would have been the triumphant conclusion. But things weren’t so simple. The world was changing, and the path ahead was murky at best.

At the time, joining the ARPANET required leasing high-speed phone lines for $100,000 per year. This limited it to large universities, research companies, and defense contractors. The situation led the National Science Foundation (NSF) to propose a new network that would be cheaper to operate. Other educational networks arose at around the same time. While it made sense to connect these networks to the growing Internet, there was no guarantee that this would continue. And there were other, larger forces at work.

By the end of the 1970s, computers had improved significantly. The invention of the microprocessor set the stage for smaller, cheaper computers that were just beginning to enter people’s homes. Bulky teletypes were being replaced with sleek, TV-like terminals. The first commercial online service, CompuServe, was released to the public in 1979. For just $5 per hour, you could connect to a private network, get weather and financial reports, and trade gossip with other users. At first, these systems were completely separate from the Internet. But they grew quickly. By 1987, CompuServe had 380,000 subscribers.

A magazine ad for CompuServe from 1980. Credit: marbleriver

Meanwhile, the adoption of TCP/IP was not guaranteed. At the beginning of the 1980s, the Open Systems Interconnection (OSI) group at the International Organization for Standardization (ISO) decided that what the world needed was more acronyms—and also a new, global, standardized networking model. The OSI model was first drafted in 1980, but it wasn’t published until 1984. Nevertheless, many European governments, and even the US Department of Defense, planned to transition from TCP/IP to OSI. It seemed like this new standard was inevitable.

The seven-layer OSI model. If you ever thought there were too many layers, you’re not alone. Credit: BlueCat Networks

While the world waited for OSI, the Internet continued to grow and evolve. In 1981, the fourth version of the IP protocol, IPv4, was released. On January 1, 1983, the ARPANET itself fully transitioned to using TCP/IP. This date is sometimes referred to as the “birth of the Internet,” although from a user’s perspective, the network still functioned the same way it had for years.

A map of the Internet from 1982. Ovals are networks, and rectangles are gateways. Hosts are not shown, but number in the hundreds. Note the appearance of modern-looking IPv4 addresses. Credit: Jon Postel

In 1986, the NSFNET came online, running under TCP/IP and connected to the rest of the Internet. It also used a new standard, the Domain Name System (DNS). This system, still in use today, used easy-to-remember names to point to a machine’s individual IP address. Computer names were assigned “top-level” domains based on their purpose, so you could connect to “frodo.edu” at an educational institution, or “frodo.gov” at a governmental one. The NSFNET grew rapidly, dwarfing the ARPANET in size.
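DNS still does exactly the job described above: it turns a memorable name into the numeric IP address a connection actually uses. Here is a minimal sketch using Python's standard library; example.com is just a placeholder for any hostname.

```python
import socket

hostname = "example.com"  # a placeholder; any public hostname resolves the same way

# Simple forward lookup: one name in, one IPv4 address out.
print(hostname, "->", socket.gethostbyname(hostname))

# getaddrinfo shows the fuller answer the resolver gives for port 80:
# address family, socket type, and the address a connection would use.
for family, sock_type, _, _, sockaddr in socket.getaddrinfo(hostname, 80):
    print(family.name, sock_type.name, sockaddr)
```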
In 1989, the original ARPANET was decommissioned. The IMPs, long since obsolete, were retired. However, all the ARPANET hosts were successfully migrated to other Internet networks. Like a Ship of Theseus, the ARPANET lived on even after every component of it was replaced. The exponential growth of the ARPANET/Internet during its first two decades. Credit: Jeremy Reimer Still, the experts and pundits predicted that all of these systems would eventually have to transfer over to the OSI model. The people who had built the Internet were not impressed. In 1987, writing RFC No. 1,000, Crocker said, “If we had only consulted the ancient mystics, we would have seen immediately that seven layers were required.” The Internet pioneers felt they had spent many years refining and improving a working system. But now, OSI had arrived with a bunch of complicated standards and expected everyone to adopt their new design. Vint Cerf had a more pragmatic outlook. In 1982, he left ARPA for a new job at MCI, where he helped build the first commercial email system (MCI Mail) that was connected to the Internet. While at MCI, he contacted researchers at IBM, Digital, and Hewlett-Packard and convinced them to experiment with TCP/IP. Leadership at these companies still officially supported OSI, however. The debate raged on through the latter half of the 1980s and into the early 1990s. Tired of the endless arguments, Cerf contacted the head of the National Institute of Standards and Technology (NIST) and asked him to write a blue ribbon report comparing OSI and TCP/IP. Meanwhile, while planning a successor to IPv4, the Internet Advisory Board (IAB) was looking at the OSI Connectionless Network Protocol and its 128-bit addressing for inspiration. In an interview with Ars, Vint Cerf explained what happened next. “It was deliberately misunderstood by firebrands in the IETF [Internet Engineering Task Force] that we are traitors by adopting OSI,” he said. “They raised a gigantic hoo-hah. The IAB was deposed, and the authority in the system flipped. IAB used to be the decision makers, but the fight flips it, and IETF becomes the standard maker.” To calm everybody down, Cerf performed a striptease at a meeting of the IETF in 1992. He revealed a T-shirt that said “IP ON EVERYTHING.” At the same meeting, David Clark summarized the feelings of the IETF by saying, “We reject kings, presidents, and voting. We believe in rough consensus and running code.” Vint Cerf strips down to the bare essentials. Credit: Boardwatch and Light Reading The fate of the Internet The split design of TCP/IP, which was a small technical choice at the time, had long-lasting political implications. In 2001, David Clark and Marjory Blumenthal wrote a paper that looked back on the Protocol War. They noted that the Internet’s complex functions were performed at the endpoints, while the network itself ran only the IP part and was concerned simply with moving data from place to place. These “end-to-end principles” formed the basis of “… the ‘Internet Philosophy’: freedom of action, user empowerment, end-user responsibility for actions undertaken, and lack of controls ‘in’ the Net that limit or regulate what users can do,” they said. In other words, the battle between TCP/IP and OSI wasn’t just about two competing sets of acronyms. On the one hand, you had a small group of computer scientists who had spent many years building a relatively open network and wanted to see it continue under their own benevolent guidance. 
On the other hand, you had a huge collective of powerful organizations that believed they should be in charge of the future of the Internet—and maybe the behavior of everyone on it. But this impossible argument and the ultimate fate of the Internet were about to be decided, and not by governments, committees, or even the IETF. The world was changed forever by the actions of one man. He was a mild-mannered computer scientist, born in England and working for a physics research institute in Switzerland. That’s the story covered in the next article in our series.

Source
-
As Internet enshittification marches on, here are some of the worst offenders
Karlston posted a news item in General News
Ars staffers take aim at some of the web's worst predatory practices. Two years ago, a Canadian writer named Cory Doctorow coined the phrase "enshittification" to describe the decay of online platforms. The word immediately set the Internet ablaze, as it captured the growing malaise regarding how almost everything about the web seemed to be getting worse. "It’s my theory explaining how the Internet was colonized by platforms, why all those platforms are degrading so quickly and thoroughly, why it matters, and what we can do about it," Doctorow explained in a follow-up article. "We’re all living through a great enshittening, in which the services that matter to us, that we rely on, are turning into giant piles of shit. It’s frustrating. It’s demoralizing. It’s even terrifying." Doctorow believes there are four basic forces that might constrain companies from getting worse: competition, regulation, self-help, and tech workers. One by one, he says, these constraints have been eroded as large corporations squeeze the Internet and its denizens for dollars. If you want a real-world, literal example of enshittification, let's look at actual poop. When Diapers.com refused Amazon’s acquisition offer, Amazon lit $100 million on fire, selling diapers way below cost for months, until Diapers.com folded. With another competitor tossed aside, Amazon was then free to sell diapers at its price from wherever it wanted to source them. Anyway, we at Ars have covered a lot of things that have been enshittified. Here are some of the worst examples we've come across. Hopefully, you'll share some of your own experiences in the comments. We might even do a follow-up story based on those. Smart TVs Amazon can use its smart display to track streaming habits. Credit: Amazon Smart TVs have come a long way since Samsung released the first model readily available for the masses in 2008. While there have certainly been improvements in areas like image quality, sound capabilities, usability, size, and, critically, price, much of smart TVs’ evolution could be viewed as invasive and anti-consumer. Today, smart TVs are essentially digital billboards that serve as tools for companies—from advertisers to TV OEMs—to extract user data. Corporate interest in understanding what people do with and watch on their TVs and in pushing ads has dramatically worsened the user experience. For example, the remotes for LG’s 2025 TVs don’t have a dedicated input button but do have multiple ways for accessing LG webOS apps. This is all likely to get worse as TV companies target software, tracking, and ad sales as ways to monetize customers after their TV purchases—even at the cost of customer convenience and privacy. When budget brands like Roku are selling TV sets at a loss, you know something’s up. With this approach, TVs miss the opportunity to appeal to customers with more relevant and impressive upgrades. There's also a growing desire among users to disconnect their connected TVs, defeating their original purpose. Suddenly, buying a dumb TV seems smarter than buying a smart one. But smart TVs and the ongoing revenue opportunities they represent have made it extremely hard to find a TV that won't spy on you. —Scharon Harding Google’s voice assistant Doctorow has written a lot about how Google, on the whole, fits the concept of enshittification. 
I want to mention one part of Google that suffers a kind of second-order enshittification, one that people might have seen coming but which was far from inevitable: the spoken-out-loud version of Google Assistant. Every so often, an Ars reader will write in to ask why their Google Assistant devices—be they Nest Hubs or Nest Minis or just Android phones—seem to be worse than when they bought them. Someone on the r/GoogleHome subreddit will ask why something that worked for years suddenly stops working. Every so often, a reporter will try to quantify this seemingly slow rot, only to fall for the same rhetorical traps I once did. "Everybody’s setup is different," "Our expectations are different now," or "There is no real way to quantify it." And sometimes there are just outages, which get fixed but leave you with the sense that your Assistant is hard of hearing, takes a lot of days off, and knows it's due for retirement. I’m fine just saying it now: Google Assistant is worse now than it was soon after it started. Even if Google is turning its entire supertanker toward AI now, it’s not clear why "Start my morning routine," "Turn on the garage lights," and "Set an alarm for 8 pm" had to suffer. If Google's plan is to cut funding and remove features, make everybody regret surrendering their audio privacy and funds to speakers, and then wow them when its generative-AI-based stand-in shows up, I’m not sure how that plays out. After so many times repeating myself or yelling at Assistant to stop, I’ve muted my speakers, tried out open alternatives, and accepted that you can’t buy real help for $50–$100. —Kevin Purdy The Portable Document Format I'm not entirely convinced the PDF was ever really good, but it certainly performed a useful purpose once upon a time: If you could print, you could make a PDF. And if you could turn your document into a PDF, anyone on any platform could read it. It also allowed for elaborate formatting, the sort that could be nightmarish to achieve in Word or some of the page layout software of the time. And finally, unlike an image, you could copy and paste text back out of it. But Acrobat was ultimately an Adobe product, with all that came with it. It was expensive, it was prone to bloat and poor performance, and there was no end to its security issues. Features were added that greatly expanded its scope but were largely useless for most people. Eventually, you couldn't install it without also installing what felt like half a dozen seemingly unrelated Adobe products. By building PDF capabilities into its OS, Apple allowed me to go Adobe-free and avoid some of this enshittification on my computers. But the PDF has still gotten ever less useful. The vast majority of PDFs I deal with now come from academic journals, and whatever witchcraft is needed to put footnotes, formulas, and embargo details into the text wrecks the thing I care most about: copying and pasting details that I need to write articles. Instead, I often get garbled, shortened pieces of other parts of the document intermingled with the text I want—assuming I can even select it in the first place. Apple, which had given the PDF a reprieve, has now killed its main selling point. Because Apple has added OCR to the MacOS image display system, I can get more reliable results by screenshotting the PDF and then copying the text out of that. This is the true mark of its enshittification: I now wish the journals would just give me a giant PNG. 
—John Timmer

Televised sports

In some ways, the development of technology has been a godsend for watching non-mainstream sports, like professional cycling, in the United States. Back in the olden days at the turn of the century, the Outdoor Life Network carried the Tour de France on cable, and NBC Sports gradually started to cover more races. But their calendar was incomplete and riddled with commercials. To find all professional cycling races, one had to look far and wide, subscribe to some services, and maybe do a little pirating.

Nirvana arrived in 2020 when a media company called Global Cycling Network obtained the rights to stream virtually every professional cycling race in Europe. Anyone with a VPN in the United States could pay $40 a year and watch race coverage, from start to finish, without commercials. This was absolutely spectacular—until enshittification set in.

In 2023, the parent company of the cycling network, Warner Bros. Discovery, started the process of "consolidating" its services. Global Cycling Network, or GCN+, was toast. European viewers could watch most of the same races on Discovery+ for about $80 a year, so the deal wasn't terrible. US fans were hosed, however. You needed a UK credit card to sign up for Discovery+ cycling. To watch the majority of races in the United States, therefore, one needed to sign up for Max, Peacock, and a service called FloBikes. The total annual price, without ads, is about $550.

This year, it was Europe's turn. In many countries, fans must now subscribe to TNT Sports at a price of 30.99 pounds a month ($38.50). So many Europeans are now being asked to pay more than $450 a year. Even the Tour de France, which had long been broadcast on free television, is going away after next year. The bottom line? The new monthly price is the same as we used to pay for a year of the superior service, GCN+, only two years ago.

This is an incredibly stupid decision for the sport, which now has no chance of reaching new viewers under this model. And it takes advantage of fans who are left to pay outrageous sums of money or turn to dodgy pirated streams. And it's not just cycling. Formula 1 racing has largely gone behind paywalls, and viewership is down significantly over the last 15 years. Major US sports such as professional and college football had largely been exempt, but even that is now changing, with NFL games being shown on Peacock, Amazon Prime, and Netflix. None of this helps viewers. It enshittifies the experience for us in the name of corporate greed.

—Eric Berger

Google search

A screenshot of an AI Overview query, "How many rocks should I eat each day" that went viral on X. Credit: Tim Onion / X

Google's rapid spiral toward enshittification—where the "don't be evil" company went from altruistic avoider of ads that its founders knew could ruin search to dominating ad markets by monopolizing search while users grew to hate its search engine—could finally be disrupted by potential court-ordered remedies coming this year. Required to release its iron grip on global search, the search giant could face more competition than ever as rivals potentially get broader access to Google data, ideally leading to search product innovations that actually benefit Internet users. Having to care about Google search users' preferences could even potentially slow down the current wave of AI-flavored enshittification, as Google is currently losing its fight to keep AI out of discussions of search trial remedies.
Plenty of people have griped about Google's AI overviews since their rollout. A Google search today might force you to scroll through more than 200 words of AI-generated guesswork before you get to a warning that everything you just read is "experimental." Only then can you finally start scrolling real results. Ars has pointed out that these AI overviews often misunderstand why people are even using Google. As a journalist, I frequently try to locate official documents by searching quoted paragraphs of text, and that used to be a fast way to surface source material. But now Google's AI thinks I want an interpretation of the specific text I'm trying to locate, burying the document I'm seeking in even longer swaths of useless AI babble and seemingly willfully confusing the intention of the search to train me to search differently. Where sponsored posts were previously a mildly irritating roadblock to search results, AI has emerged as a forced detour you have to take before coming anywhere close to your destination. Admittedly, some AI summaries may be useful, but they can just as easily provide false, misleading, and even dangerous answers. And in a search context, placing AI content ahead of any other results elevates an undoubtedly less trustworthy secondary source over primary sources at a time when social platforms like Facebook, YouTube, and X (formerly Twitter) are increasingly relying on users to fact-check misinformation. But Google, like many big tech companies, expects AI to revolutionize search and is seemingly intent on ignoring any criticism of that idea. The tech giant has urged the judge in the monopoly trial, Amit Mehta, to carefully weigh whether the AI remedies the US seeks could hobble Google's ability to innovate in AI search markets. The remedies include allowing publishers to opt out of web crawling for AI training without impacting search rankings or banning Google from exclusive deals that could block AI rivals from licensing Google-exclusive training data. We'll know more this August, when Mehta is expected to rule on final remedies. However, in November, Mehta said that "AI and the integration of AI is only going to play a much larger role, it seems to me, in the remedy phase than it did in the liability phase." —Ashley Belanger Email AI tools No, thank you. Credit: Dan Goodin Gmail won't take no for an answer. It keeps asking me if I want to use Google's Gemini AI tool to summarize emails or draft responses. As the disclaimer at the bottom of the Gemini tool indicates, I can't count on the output being factual, so no, I definitely don't want it. The dialog box only allows me to decline by clicking the "not now" option. I still haven't found the "not ever" option, and I doubt I ever will. I still haven't found a satisfactory way to turn Gemini off completely in Gmail. Discussions in forums on Reddit and Google support came up short, so I asked Gemini. It told me to turn off smart features in Gmail settings. I did, but I still have the Gemini icon at the top of my inbox and the top of each email I send or receive. 
—Dan Goodin

Windows

I usually try to moderate my criticism of Windows 11 because most of the things that people on the Internet really like to complain about (updates breaking things, attempts at mandatory Microsoft account sign-in, apps that auto-download to your computer when you set it up whether you want them or not, telemetry data being sent to Microsoft, forceful insistence that users switch to the Edge browser and Bing search engine) all actually started during the reign of Windows 10. Windows 10 is lodged in the popular imagination as one of the "good" versions of Windows partly because it retreated from most of the changes in Windows 8 (a "bad" version). But yeah, most of the Windows 11 stuff you hate has actually been happening for a while.

With that being said, it sure is easy to resent Windows 11 these days, between the well-documented annoyances, the constant drumbeat of AI stuff (some of it gated to pricey new PCs), and a batch of weird bugs that mostly seem to be related to the under-the-hood overhauls in October's Windows 11 24H2 update. That list includes broken updates for some users, inoperable scanners, and a few unplayable games. With every release, the list of things you need to do to get rid of and turn off the most annoying stuff gets a little longer.

Microsoft has proclaimed 2025 "the year of the Windows 11 PC refresh," partly because Windows 10 support is going away in October and there are a bunch of old PCs that can't easily be upgraded to the new version. But maybe Microsoft wouldn't need to poke people quite so hard if Windows 11 were a more streamlined version of itself, one without the unasked-for cruft and one that did a better job of respecting users' preferences.

—Andrew Cunningham

Web discourse

Most media has never been that original—somebody creates something witty, clever, or popular, and others rush to mimic it; things have always been this way in my lifetime. But I still bemoan how many people or companies rush to copy nearly anything that resembles a viral moment, whether it's a trope, an aesthetic, or a word that is subsequently beaten to death by overuse. Memes can be funny until they turn into a plague. I physically cringe when "cringe" is used as a ubiquitous catch-all for anything that people don't like. Every job change posted to social media is prefaced by "personal news." I have asked colleagues what exactly is "quiet" about the verb in their headline. And the corporate jargon on LinkedIn causes me the most despair.

Look, this is mostly a rant from someone who's supposed to pick words apart, so I understand that language changes, not everyone is a professional writer, and workday constraints lead to some pet phrases. But the enshittification of social media, particularly due to its speed and virality, has led to millions vying for their moment in the sun, and all I see is a constant glare that makes everything look indistinguishable. No wonder some companies think AI is the future.

—Jacob May

Source

Hope you enjoyed this news post. Thank you for appreciating my time and effort posting news every day for many years.

News posts... 2023: 5,800+ | 2024: 5,700+ | 2025 (till end of January): 487

RIP Matrix | Farewell my friend