
Source: Thinkstock
Chances are good that you use your smartphone and your computer every day, but don’t think about how these remarkable machines made their way to you. We’re not talking about Apple’s supply chain or the design process behind the latest build of Android. This is about much more than how the movers and shakers of Silicon Valley determine what the next iPhone will look like or how much faster the next laptop will be. The question is, how did the rapid pace of change and innovation accelerate to its current breakneck speed, which we now watch with less awe and wonder than with an idle curiosity and a perpetual question on our lips: what’s next?
Most of us don’t know much about the history of computing, the annals of which catalog years of great minds who conceived of the transition from calculation to computation, set down the logical building blocks of computing, identified speed and memory as key to computers, built the hardware to make computers possible, programmed computers, wrote programming languages, built operating systems, pioneered the Internet, and created innovation after innovation after innovation, each inventor standing on the shoulders of those who came before.
Read on to learn about the great innovators who brought technology into the modern age of computing. These are the towering figures who made our age of the Internet, smartphones, laptops, and mobile apps possible, laying its foundation brick-by-brick with skill, determination, and optimism about what the future would hold.

Source: Hulton Archive/Getty Images
1. Charles Babbage (1791-1871)
According to the “Modern History of Computing” entry in the Stanford Encyclopedia of Philosophy, Charles Babbage was Lucasian Professor of Mathematics at Cambridge University from 1828 to 1839, a post formerly held by Isaac Newton. He proposed the Difference Engine, a digital computing machine for the automatic production of mathematical tables such as logarithm tables, tide tables, and astronomical tables. It consisted entirely of mechanical components: brass gear wheels, rods, ratchets, and pinions. Numbers were represented in the decimal system by the positions of 10-toothed metal wheels.
After a small working portion of the Difference Engine was completed in 1832, Babbage proposed the far more ambitious Analytical Engine, which would have been a general-purpose mechanical digital computer. The Analytical Engine would have had a memory store and a central processing unit, and would have been able to select among alternative actions based on the outcome of previous actions. It would have been controlled by a program of instructions contained on punched cards connected together with ribbons. Babbage worked closely with Ada Lovelace, who foresaw the possibility of using the Analytical Engine for non-numeric computation.

Source: Hulton Archive/Getty Images
2. Ada Lovelace (1815-1852)
Augusta Ada Byron, the only legitimate child of Annabella Milbanke and the poet George Gordon, Lord Byron, was educated on a strict curriculum of science, logic, and mathematics at the insistence of her mother, who separated from Byron just a month after Ada was born. According to the Computer History Museum, Lovelace met Babbage at a party in 1833, when she was 17 years old, and he demonstrated the working portion of the Difference Engine to her.
In 1843, she published a translation of an article on the Analytical Engine by an Italian engineer, Luigi Menabrea, to which she appended extensive notes of her own. Her notes included the first published description of a sequence of operations for solving mathematical problems. Lovelace is often referred to as the first programmer, and looked at from a modern perspective, her statements are considered visionary. She speculated that Babbage’s Engine “might act upon other things besides number … the Engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.”
Her finished version of the article was more than three times the length of the original, and contains what could be considered several early computer programs. Although Babbage and his assistants had sketched out programs for the theoretical Analytical Engine before, Lovelace’s are the most elaborate and complete, and the first to be published. The concept of a machine that could manipulate symbols in accordance with rules, and the idea that numbers could represent entities other than quantities, marked a fundamental transition from calculation to computation. Lovelace was the first to articulate the concept, and some believe that she saw even further than Babbage when it came to the idea’s potential. Her notes became one of the critical documents that inspired Alan Turing’s work on the first modern computers in the 1940s.

Source: Keystone/Hulton Archive/Getty Images
3. George Boole (1815-1864)
As Stanford reports, George Boole was a mathematician who revolutionized logic by applying methods of the emerging field of symbolic algebra to logic. Whereas traditional, or Aristotelian, logic relied on cataloging the valid syllogisms of various simple forms, Boole’s method produced general algorithms in an algebraic language, which applied to an infinite variety of arguments. He created a system of describing logical relations with mathematical symbols, which is now called Boolean logic and used as the basis for all modern computer processes.
Boolean algebra provides the basis for analyzing the validity of logical propositions because it captures the binary character of statements that may be either true or false. In the 1930s, researchers found that Boole’s two-valued logic lent itself to the description of electrical switching circuits. They demonstrated that the binary numbers — zero and one — could be used to analyze the circuits and thus design electronic computers, and computers and circuits today are now designed to implement binary arithmetic.
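To make the link from logic to arithmetic concrete, here is a minimal Python sketch (our own illustration, not something from the article or from Boole): a binary adder built entirely out of the Boolean AND, OR, and XOR operations that hardware gates implement.

```python
# A minimal sketch of how Boolean logic maps onto circuits:
# a full adder built purely from AND, OR, and XOR operations.

def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """Add three bits using only Boolean operations; returns (sum, carry_out)."""
    s = a ^ b ^ carry_in                        # XOR produces the sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))  # AND/OR produce the carry bit
    return s, carry_out

def add_binary(x: int, y: int, width: int = 8) -> int:
    """Ripple-carry addition: chain full adders bit by bit, as hardware does."""
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(add_binary(23, 42))  # 65, computed entirely from Boolean gates
```

Chaining one-bit adders in this way is essentially how processors add numbers, which is why Boole’s two-valued logic maps so directly onto switching circuits.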

Source: General Photographic Agency/Getty Images
4. Vannevar Bush (1890-1974)
As Ibiblio reports, Vannevar Bush was never directly involved with the development of the Internet, and died before the creation of the World Wide Web. But he is often considered the godfather of the wired age thanks to his proposal of a machine called the “memex,” and his conceptualization of an idea that we now know as “hypertext.” In his 1945 essay, “As We May Think,” Bush described a theoretical machine designed to enhance the human memory by enabling the user to store and retrieve documents linked by associations. This associative linking was very similar to what is now known as hypertext.
The memex was intended as a storage and retrieval device that would use microfilm. The machine would feature a desk with viewing screens, a keyboard, selection buttons and levers, and microfilm storage. Information stored on the microfilm could be retrieved quickly and projected on a screen. Bush envisioned that, just as the mind forms memories through associations, the memex would enable users to make links between documents. He called these links associative trails. Later innovators, including Ted Nelson, who coined the term “hypertext” in the 1960s, acknowledged their debt to Bush, who is regarded as an early visionary of the connected age.

Source: Theimitationgamemovie.com
5. Alan Turing (1912-1954)
In 1936, at Cambridge University, Turing invented the principle of the modern computer. As Stanford reports, he described an abstract digital computing machine consisting of a limitless memory and a scanner that moved back and forth through that memory, symbol by symbol, reading what it found and writing further symbols. The behavior of the scanner was dictated by a program of instructions stored in the memory in the form of symbols. Turing’s computing machine of 1936 is now known as the universal Turing machine.
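The toy simulator below, a hypothetical Python sketch rather than anything Turing wrote, shows the ingredients his 1936 paper describes: an unbounded tape of symbols, a scanner that reads and writes as it moves, and a table of instructions that dictates the scanner’s behavior.

```python
# A toy illustration of Turing's idea: a tape of symbols, a scanner that
# reads and writes, and an instruction table. This machine inverts a binary string.

from collections import defaultdict

# (state, symbol read) -> (symbol to write, move, next state)
program = {
    ("invert", "0"): ("1", +1, "invert"),
    ("invert", "1"): ("0", +1, "invert"),
    ("invert", " "): (" ", 0, "halt"),   # blank cell: stop
}

def run(tape_input: str) -> str:
    tape = defaultdict(lambda: " ", enumerate(tape_input))  # unbounded memory
    position, state = 0, "invert"
    while state != "halt":
        write, move, state = program[(state, tape[position])]
        tape[position] = write
        position += move
    return "".join(tape[i] for i in range(len(tape_input)))

print(run("10110"))  # -> "01001"
```

Swapping in a different instruction table changes what the machine computes; Turing’s universal machine goes a step further and reads the instruction table itself from the tape.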
From the start of the Second World War, Turing was a leading cryptanalyst at the Government Code and Cypher School at Bletchley Park, where he became familiar with Thomas Flowers’s work on large-scale, high-speed electronic switching. During the wartime years, Turing gave considerable thought to the question of machine intelligence, and to the possibility of computing machines learning from experience and solving problems by searching through the space of possible solutions, guided by rule-of-thumb principles.
In 1945, Turing joined the National Physical Laboratory to design and develop an electronic stored-program digital computer for scientific work called the Automatic Computing Engine in homage to Babbage’s Difference Engine and Analytical Engine. Turing saw that speed and memory were key to computing, and his design called for a high-speed memory of roughly the same capacity as an early Macintosh computer, which Stanford notes was enormous by the standards of his day.

Source: Cryptomuseum.com
6. Thomas Flowers (1905-1998)
With some exceptions, early digital computing machines were electromechanical, built from small, electrically driven mechanical switches called “relays.” These operated slowly, while the basic components of an electronic computer, originally vacuum tubes, have no moving parts save electrons and operate extremely quickly. The development of high-speed digital techniques using vacuum tubes made the modern computer possible, and the earliest extensive use of vacuum tubes for digital data-processing was by the engineer Thomas Flowers. He envisioned that electronic equipment would replace existing systems built from relays, and remarked that at the outbreak of war with Germany in 1939, he might have been the only person in Britain to realize that vacuum tubes could be used on a large scale for high-speed digital computation.
The first fully functioning electronic digital computer was the Colossus, used by the Bletchley Park cryptanalysts from February 1944. From early in the war, the Government Code and Cypher School was successfully deciphering German radio communications encoded with the Enigma system, and by 1942 approximately 39,000 intercepted messages were being decoded each month. During the second half of 1941, messages encoded by a different means began to be intercepted, and the new cipher machine was broken in April 1942.
The need to decipher vital intelligence as quickly as possible led Max Newman to propose in November 1942 that key parts of the decryption process be automated with high-speed electronic counting devices. The first machine built to his specification combined relays with electronic circuits, but proved unreliable and slow. Flowers recommended that an all-electronic machine be built instead, and constructed the world’s first large-scale programmable electronic digital computer. Colossus I was delivered to Bletchley Park in January 1944. While it lacked two important features of modern computers (it had no internally stored programs, and was not a general-purpose machine), to those acquainted with the universal Turing machine and the associated stored-program concept, Flowers’s equipment was proof of the feasibility of using large numbers of vacuum tubes to implement a high-speed general-purpose stored-program computer.

Source: Cacm.acm.org
7. F.C. Williams (1911-1977) and 8. Tom Kilburn (1921-2001)
F. C. Williams and Tom Kilburn built the earliest functional general-purpose stored-program electronic digital computer in Max Newman’s Computing Machine Laboratory at Manchester University. The Manchester “Baby,” as it became known, performed its first calculation in June 1948; the program, stored on the face of a cathode ray tube, was just 17 instructions long. A much-enlarged version of the machine, with a programming system designed by Turing, became the world’s first commercially available computer, the Ferranti Mark I. The first one to be completed was installed at Manchester University in February 1951, and a total of about 10 were sold, in Britain, Canada, Holland, and Italy.
While the Manchester machine is often remembered as the exclusive work of Williams and Kilburn, fundamental logico-mathematical contributions were made by Turing and Newman. In 1935, Newman introduced Turing to the concept that led directly to the Turing machine. Turing’s early input to the developments at Manchester may have been via lectures on computer design given in London during the period December 1946 to February 1947. Stanford notes that credit for the Manchester computer belongs not just to Williams and Kilburn, but also to Newman. The influence of Turing’s 1936 paper and of Flowers’s Colossus was also crucial.

Source: Keystone/Getty Images
9. J. Presper Eckert (1919-1995), 10. John Mauchly (1907-1980), 11. John von Neumann (1903-1957), and ENIAC programmers 12. Kay McNulty (1921-2006), 13. Betty Snyder (1917-2001), 14. Marlyn Wescoff (1922-2008), 15. Ruth Lichterman (1924-1986), 16. Betty Jean Jennings (1924-2011), and 17. Fran Bilas (1922-2012)
The first fully functioning general-purpose electronic digital computer built in the United States was ENIAC, constructed at the Moore School of Electrical Engineering at the University of Pennsylvania for the Army Ordnance Department by J. Presper Eckert and John Mauchly. Completed in 1945, ENIAC was in some ways similar to the earlier Colossus, though larger and more flexible; it was still far from a general-purpose machine. It was primarily designed for the calculation of tables used in aiming artillery. It was not a stored-program computer, and setting it up for a new task involved reconfiguring the machine with plugs and switches.
In 1944, John von Neumann joined the ENIAC group. At the Moore School, he emphasized the importance of the stored program concept, including the possibility of allowing the machine to modify its own program in useful ways while running. Because von Neumann was a prestigious figure who made the concept of a high-speed stored program digital computer widely known, it became customary — although historically inaccurate — to refer to electronic stored-program digital computers as “von Neumann machines.”
As Fortune reports, when the ENIAC was being constructed at Penn in 1945, it was thought that it would perform a specific set of calculations over and over. But the end of the war meant that the machine was needed for many other types of calculations, involving sonic waves, weather patterns, and the explosive power of atom bombs, which would require it to be reprogrammed often.
In 1946, six women programmed the ENIAC, learning to program without programming languages or tools, because none yet existed; they had only logical diagrams to help them. They demonstrated that the programming of a computer would become just as important as the design and construction of its hardware. They learned both the application and the machine, and could diagnose problems as well as (if not better than) the engineers, who had originally thought that the assembly of the hardware was the most important part of the project and, therefore, a man’s job.

Source: Agnesscott.edu
18. Grace Murray Hopper (1906-1992)
As PBS reports, Hopper joined the WAVES, or Women Accepted for Volunteer Emergency Service, a part of the U.S. Naval Reserve, in 1943, and a year later became Lieutenant Hopper. She was assigned to the Bureau of Ships Computation team at Harvard, which was designing a machine to make fast, difficult calculations for tasks like laying mine fields. Howard Aiken directed the work, which essentially involved creating the first programmable digital computer, called the Mark I. After removing a moth from a machine while looking for the cause of a computer failure, Hopper popularized the terms “bug” and “debug” as they relate to computer errors and how to fix them.
In 1949, she joined a startup launched by Eckert and Mauchly of ENIAC fame. The company created Univac, a computer that recorded information on high-speed magnetic tape, a significant innovation over the standard punch cards of the day. Remington Rand (later Sperry Rand) acquired the company, and Hopper stayed on, making important advances in reducing errors by creating a program that would translate programmers’ code to machine language.
She and her staff developed Flow-matic, the first programming language to use English words. It was later incorporated into COBOL, the business programming language that made computing a tool of the business world, not just the scientific world. Hopper led an effort to standardize COBOL and persuade the entire Navy to use the programming language. She was a backer of standardization and compatibility between computers, and under her direction, the Navy developed a set of programs for validating COBOL compilers. The concept of validation had a wide impact on other languages and organizations, and eventually led to standards and validation facilities for most programming languages.

Source: Ralphbaer.com
19. Ralph H. Baer (1922-2014)
As The New York Times reports, Ralph Baer patented the first home video game system, kicking off what is not only a ubiquitous pastime and a huge industry, but also a catalyst that pushed scientists and engineers to multiply computer speed, memory, and visualization to the extent we see today. In 1966, he conceived of a “game box” that would enable users to play board games, action games, sports games, and more on almost any American television set. In 1971, Baer and his employer, Sanders Associates, filed for the first video game patent, laying legal claim to any product that included a domestic television with circuits capable of producing and controlling dots on a screen. After the patent was granted, Sanders licensed the system to Magnavox, which began selling the product as the Odyssey, the first home video game console, in 1972.
Forty transistors and 40 diodes ran the entire system, including two player control units and a set of electronic program cards that each supported a different game. Over the next 20 years, Magnavox sued dozens of companies that infringed on its original patent. Baer’s invention marked the beginning of a monumental shift in humans’ relationship with machines, and a continual revolution in microprocessing followed, with the fields of computer science and television engineering growing to support it.
Along with Atari’s Pong, which had more advanced electronics and sound, the Odyssey pushed games to a faster, more complex realm, and Baer noted that if it weren’t for the eager audience of video game enthusiasts, high processor speeds and complex computer graphics would have been found only in the business and science worlds.

Source: Nap.edu
20. Edgar F. Codd (1923-2003)
As the website of the A.M. Turing Award reports, Edgar F. Codd created the relational model of data, an invention that spurred a database industry worth tens of billions of dollars. In the late 1950s, he led the team at IBM that developed the world’s first multiprogramming system. “Multiprogramming” refers to the ability of programs developed independently of one another to execute concurrently: while one program waits for an event to occur, another program can use the computer’s central processing unit. Multiprogramming is now standard on virtually all computer systems. Codd worked on high-level techniques for software specification, and then turned his attention to database issues.
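As a rough modern analogy, and not IBM’s system, the short Python sketch below shows the core multiprogramming idea: while one task waits for an event, the processor is free to run another.

```python
# A rough analogy for multiprogramming using modern Python:
# while one task waits for an event, the processor runs another.

import asyncio

async def job(name: str, wait_seconds: float) -> None:
    print(f"{name}: waiting for an event")
    await asyncio.sleep(wait_seconds)   # yields the CPU while "waiting"
    print(f"{name}: resumed and finished")

async def main() -> None:
    # Both jobs share one processor; neither blocks the other while it waits.
    await asyncio.gather(job("payroll", 1.0), job("report", 0.5))

asyncio.run(main())
```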
Though several database products existed at the time, they were difficult to use and required highly specialized technical skills. They also lacked a solid theoretical foundation, and Codd, who recognized the need for such a foundation, provided one by inventing the relational model of data, often recognized as one of the greatest technical achievements of the 20th century.
The relational model provides a method of structuring data using relations, or grid-like mathematical structures built from columns and rows. The physical manifestation of a relation in a database is popularly known as a table, and under the relational model, all data must be stored in tables. The relational model provided a theoretical framework within which a variety of database problems could be addressed. Essentially all databases used today operate on the foundation of Codd’s ideas.
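The following small sketch, with made-up data, illustrates the relational idea in Python: data lives in tables of rows and columns, and queries are operations on whole relations, such as selection and join, rather than traversals of a particular storage layout.

```python
# An illustrative sketch of Codd's relational model with made-up data:
# tables of rows and columns, queried through relational operations.

employees = [  # each dict is a row; the keys are the columns
    {"id": 1, "name": "Ada",   "dept_id": 10},
    {"id": 2, "name": "Grace", "dept_id": 20},
]
departments = [
    {"dept_id": 10, "dept": "Engineering"},
    {"dept_id": 20, "dept": "Research"},
]

def select(table, predicate):
    """Relational selection: keep only the rows satisfying a condition."""
    return [row for row in table if predicate(row)]

def join(left, right, key):
    """Natural join on a shared column, pairing rows with matching values."""
    return [{**l, **r} for l in left for r in right if l[key] == r[key]]

print(select(join(employees, departments, "dept_id"),
             lambda row: row["dept"] == "Research"))
```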

Source: Amturing.acm.org
21. John Warner Backus (1924-2007)
John Warner Backus directed the team that developed Fortran, short for “Formula Translation,” the first influential high-level programming language. The Washington Post reports that before Fortran, computers had to be meticulously hand-coded, programmed in raw strings of digits that would invoke actions within the machine. Fortran was a high-level language that abstracted that work to enable programmers to enter commands via a more intuitive system. The computer could then translate the programmer’s input into machine code on its own.
Fortran reduced the number of programming statements necessary to operate a machine by a factor of 20, and demonstrated to skeptics that machines could run efficiently without hand-coding. Afterward, programming languages and software proliferated, and Fortran is still in use today. Backus also developed a method for describing the syntax of programming languages, known as the Backus-Naur Form.

Source: Cgl.ucsf.edu
22. Seymour Cray (1925-1996)
Seymour Cray is referred to as the “father of supercomputing” and for years built the world’s fastest supercomputers. In 1957, he helped found the Control Data Corporation, where he set out to build the fastest scientific computer ever; the result was the CDC 1604, the first fully transistorized commercial computer, which replaced the vacuum tubes used in earlier machines. The CDC 6600, considered the world’s first true supercomputer, followed in 1963. It was capable of nine Mflops, or million floating-point operations per second, and was succeeded by the CDC 7600, running at 40 Mflops.
The Cray-1 vector supercomputer, introduced in 1976, replaced transistors with integrated circuits and delivered 170 Mflops. In 1985, the Cray-2 system moved supercomputing forward again, breaking the gigaflop (one thousand Mflops) barrier. Introduced in 1988, the Cray Y-MP was the world’s first supercomputer to sustain more than 1 gigaflops on many applications, and multiple 333 megaflops processors powered the system to a record 2.3 gigaflop sustained speed.
In his “tribute” to Cray, Charles Breckenridge wrote that Cray regarded every system he worked on as a stepping stone to the next. Many of them served as foundations for systems built by others using his basic designs. Much of the competition for his machines came from companies that he played a part in making successful. He dedicated his career to the design and development of large-scale, high-performance systems for science and engineering.

Source: Sri.com
23. Doug C. Engelbart (1925-2013)
As The New York Times reports, Douglas C. Engelbart spent two years in the Navy, where he read Vannevar Bush’s “As We May Think,” in which Bush described the universal retrieval system he called the memex. The idea stuck with Engelbart, who went on to establish an experimental research group at the Stanford Research Institute. The unit, the Augmentation Research Center, or ARC, had the financial backing of the Air Force, NASA, and the Advanced Research Projects Agency, an arm of the Defense Department.
In the 1960s, Engelbart developed a variety of interactive computer technologies, and at the 1968 Fall Joint Computer Conference in San Francisco, he demonstrated how a networked, interactive computing system would enable collaborating scientists to rapidly share information. He showed how a mouse, which he had invented four years earlier, could be used to control a computer, and demonstrated text editing, video conferencing, hypertext, and windowing. The idea for the mouse had first occurred to him in 1961, as he considered the challenge of making interactive computing more efficient.
The system Engelbart created, called the oNLine System, or NLS, enabled researchers to create and retrieve documents in the form of a structured electronic library. The technology would later be refined at Xerox’s Palo Alto Research Center and at the Stanford Artificial Intelligence Laboratory, and Apple and Microsoft would transform it for commercial use in the 1980s.
Engelbart was convinced that computers would quickly become more powerful, and that they would soon have enough processing power to support the memex-like Augment system he envisioned. In 1969, his Augment NLS system became the application for which the ARPAnet, the forerunner of today’s Internet, was created; SRI was home to its operations center and one of its first two nodes. Engelbart was one of the first to realize the power of computers and the impact they would have on society.

Source: Formal.stanford.edu
24. John McCarthy (1927-2011)
As Stanford reports, John McCarthy was a seminal figure in the field of artificial intelligence. He coined the term “artificial intelligence” and spent the next five decades of his career defining the field. In 1958, McCarthy invented the programming language LISP, the second-oldest high-level programming language after Fortran. LISP is still in use today, and remains a language of choice for artificial intelligence work. He also developed the concept of computer time-sharing in the late 1950s and early 1960s, an innovation that significantly improved the efficiency of distributed computing and predated the era of cloud computing by decades.
In a 1960 paper, McCarthy outlined the principles of his programming philosophy and described “a system which is to evolve intelligence of human order.” In 1966, he drew attention by hosting a series of four simultaneous computer chess matches, conducted via telegraph against rivals in Russia; the matches lasted several months. McCarthy would later refer to chess and other board games as the Drosophila of artificial intelligence, a nod to the fruit flies that proved important in early studies of genetics.
He later developed the first hand-eye computer system, in which the computer could see real 3D blocks via a video camera, and control a robotic arm to complete exercises in stacking and arranging the blocks. McCarthy co-founded the MIT Artificial Intelligence Project and what became the Stanford Artificial Intelligence Lab.

Source: Alumnae.mtholyoke.edu
25. Jean E. Sammet (1928-)
Jean E. Sammet supervised the first scientific programming group for Sperry Gyroscope Co., and joined IBM in 1961 to organize and manage the Boston Programming Center. According to the IEEE Computer Society, she originated the concept of FORMAC, the FORmula MAnipulation Compiler, and directed its development.
FORMAC was the first widely used general language and system for the symbolic manipulation of nonnumeric algebraic expressions. With it, Sammet laid the foundation for what would become an important area of research and development in computing: symbolic mathematical computation.

Source: Blogs.intel.com
26. Gordon E. Moore (1929-) and 27. Robert N. Noyce (1927-1990)
Gordon E. Moore and Robert N. Noyce co-founded Intel in 1968 with the intention of developing and producing large-scale integrated products, beginning with semiconductor memories, according to the IEEE Computer Society. Shortly thereafter, Intel produced the world’s first microprocessor. In 1965, Moore had observed that the number of electrical elements per integrated circuit chip was doubling annually; he later revised the period to 24 months. His observation became known as “Moore’s Law,” and has since enabled business and academic communities to estimate the future progress of integrated circuits.
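As a back-of-the-envelope illustration of how such estimates work (our own arithmetic, using the roughly 2,300 transistors of Intel’s 4004 microprocessor from 1971 as a starting point), Moore’s Law reduces to a simple doubling formula:

```python
# A back-of-the-envelope sketch of Moore's Law as a doubling formula:
# transistor counts doubling every 24 months.

def projected_transistors(start_count: int, start_year: int, target_year: int,
                          doubling_period_years: float = 2.0) -> float:
    """Project a chip's transistor count forward under Moore's Law."""
    doublings = (target_year - start_year) / doubling_period_years
    return start_count * 2 ** doublings

# Starting from the Intel 4004's roughly 2,300 transistors in 1971:
for year in (1981, 1991, 2001):
    print(year, f"{projected_transistors(2300, 1971, year):,.0f}")
```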
Noyce, along with Jack Kilby, is credited with the invention of the integrated circuit, or the microchip itself. In July 1959, Noyce filed for U.S. Patent 2,981,877, “Semiconductor Device and Lead Structure,” a type of integrated circuit. His independent effort was recorded just a few months after the key findings of Kilby; although Kilby’s invention came roughly six months earlier, the two men are generally recognized as co-inventors.

Source: Computinghistory.org
28. Philip Don Estridge (1937-1985)
Don Estridge led the development of the first IBM personal computer, and is known as the father of the IBM PC. According to The New York Times, it was under Estridge’s leadership that a small team of IBM employees began work in 1980 on IBM’s first microcomputer. At the time, no one in the company had any idea how the project would revolutionize the computer industry, placing millions of small computers on office desktops and kitchen tables around the world.
The engineers under Estridge had come from the world of large computers, and his biggest task was to get them to figure out how a nonspecialist could quickly use an IBM machine. They learned that how people reacted emotionally to a computer was almost more important than what they actually did with it. In just four months, Estridge and his team developed the prototype of a small office computer that was quickly dubbed the PC. The PC was on retail shelves within a year, and by late 1983 it had overtaken the Apple II as the best-selling personal computer.
The team broke a number of traditions at IBM, and Estridge was given the authority to make whatever decisions were necessary to get the company into the personal computer business quickly. He rejected IBM-built components, and instead chose inexpensive, off-the-shelf parts from other vendors. He made the computer’s design specifications public, enabling thousands of people to write programs for the machine. Several of these programmers built multimillion-dollar businesses, and the availability of a wide array of programs for the platform drove IBM’s sales.

Source: Paw.princeton.edu
29. Bob Kahn (1938-) and 30. Vint Cerf (1943-)
Bob Kahn and Vint Cerf are considered fathers of the Internet. In 1974, they published a research paper proposing a protocol called TCP, which incorporated both connection-oriented and datagram services. It soon became apparent that the design could be subdivided into two separate protocols.
Session management was difficult to implement in an application-independent way, and in practice an application could often run more efficiently, or be implemented more easily, if it managed network connections itself. The original TCP was therefore split into the Internet Protocol (IP), which supports datagrams, and the Transmission Control Protocol (TCP), which adds connection semantics as a layer on top of IP. Together, TCP/IP describes the fundamental architecture of the Internet, and it made possible later developments like Ethernet, LANs, WiFi, the World Wide Web, email, FTP, and 3G/4G, as well as all of the inventions built on top of those.
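A minimal sketch using Python’s standard socket module shows the layering in practice: the application asks for a TCP connection and gets an ordered, reliable byte stream, while the operating system carries the traffic in IP datagrams underneath. The host name here is just a placeholder.

```python
# A minimal sketch of TCP-over-IP layering using the standard socket module:
# the application sees a reliable byte stream; IP datagrams do the carrying.

import socket

HOST = "example.com"  # placeholder host; any reachable web server would do

with socket.create_connection((HOST, 80), timeout=5) as conn:  # TCP handshake
    conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = conn.recv(1024)          # bytes arrive intact and in order, thanks to TCP
    print(reply.decode(errors="replace").splitlines()[0])
```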
Cerf joined MCI Communications to lead the development of electronic mail systems for the Internet, and Kahn created the Corporation for National Research Initiatives, where he focused on managing and distributing the world’s content as a sort of nonproprietary Google. Cerf is known as an “Internet ambassador,” a strong proponent of an Internet that remains independent of state control, and a major supporter of the idea of network neutrality. The New York Times reports that Kahn has made an effort to stay out of the net neutrality debate, but has contributed efforts toward building support for a system known as Digital Object Architecture, which was made for tracking and authenticating all content distributed through the Internet.

Source: Raytheon.com
31. Ray Tomlinson (1941-)
According to The Verge, in 1971 Ray Tomlinson was a recent MIT graduate hired to help build the early components of the Advanced Research Projects Agency Network (ARPANET), the precursor to the Internet. On his own initiative, he decided to build a networked messaging program. Most computers at the time enabled their users to leave messages for one another, but because so few computers were networked, there was little reason to send messages between machines. Tomlinson’s solution used the now-ubiquitous “@” symbol to indicate networked email.
Tomlinson has noted that at the time, computers were expensive and shared by multiple users at once, with each user quickly switching attention from one job to the next. The idea of sending messages to other users had been around for several years, and in 1971, Tomlinson saw an opportunity to extend it to users on other computers, using the network connection to transfer mailbox information from one machine to another. He adapted an experimental file-transfer protocol to send mailbox files between computers, creating the first networked email system.
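Schematically, and using hypothetical machine names rather than Tomlinson’s actual systems, the addressing convention can be sketched like this: the text before the “@” names a user’s mailbox, and the text after it names the machine where that mailbox lives.

```python
# A schematic sketch of the "@" addressing convention (hypothetical host names,
# not Tomlinson's program): user before the "@", machine after it.

def route_message(address: str, local_host: str) -> str:
    user, _, host = address.partition("@")
    if not host or host == local_host:
        return f"append to local mailbox '{user}'"
    return f"transfer mailbox file for '{user}' to remote host '{host}'"

print(route_message("tomlinson@host-b", local_host="host-a"))
```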
The realization that the innovation was significant came only with later reflection, around the 25th anniversary of the ARPANET. The idea had an organic origin, and many programmers began working on it as people latched onto the notion of leaving messages for one another on the computer.

Source: Computerhistory.org
32. Ken Thompson (1943-) and 33. Dennis Ritchie (1941-2011)
The Computer History Museum reports that in 1969, Ken Thompson and Dennis Ritchie created the UNIX operating system at Bell Telephone Laboratories. UNIX began as a scaled-down version of the MIT MULTICS operating system, intended to run on the smaller minicomputers that were becoming available at the end of the 1960s. Ritchie built C because he and Thompson needed a better way to build UNIX. As Wired notes, the original UNIX kernel was written in assembly language, and the pair decided they needed a higher-level language to give them more control over the operating system’s data.
When it was re-written in the C programming language by Ritchie, UNIX became a truly portable operating system that could run on a wide array of different hardware platforms. The C language itself was widely adopted and is in wide use today. UNIX has become the backbone of the technical infrastructure of the modern world, and UNIX or one of its many variants runs on devices from supercomputers to smartphones. Almost everything on the web uses C and UNIX. Even Windows was once written in C, and UNIX underpins Mac OS X and iOS.

Source: Nwrwic.org
34. Radia Perlman (1951-)
When Radia Perlman attended MIT in the late 1960s and 1970s, she was one of just a few dozen women out of a class of one thousand. The Atlantic reports that she went on to become a leader in the field of computer science, developing the algorithm behind the Spanning Tree Protocol (STP), which helped make today’s Internet possible.
Spanning Tree is a network protocol that ensures a loop-free topology for any bridged Ethernet local area network. The essential function of STP is to prevent bridge loops and the resulting broadcast radiation, while allowing a network design to include redundant links that provide automatic backup paths if an active link fails. Perlman also made important contributions to the areas of network design and standardization, such as link-state protocols. She invented TRILL to correct the failings of Spanning Tree, and pioneered the teaching of computer programming to young children by developing TORTIS, a version of the educational robotics language LOGO.
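The sketch below is a simplified illustration of what the protocol accomplishes, not Perlman’s algorithm verbatim: given switches wired with redundant links, elect a root bridge and keep only the links that form a loop-free tree, leaving the rest as blocked backups.

```python
# A simplified illustration of the Spanning Tree idea: from a redundant
# switch topology, keep only the links forming a loop-free tree.

from collections import deque

links = {  # hypothetical bridged LAN with a redundant loop
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}

def spanning_tree(graph):
    root = min(graph)                 # STP elects the lowest-ID bridge as root
    active, visited = [], {root}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in visited:      # this link joins the tree
                visited.add(neighbor)
                active.append((node, neighbor))
                queue.append(neighbor)
    return active                     # every other link stays blocked as a backup

print(spanning_tree(links))  # e.g. [('A', 'B'), ('A', 'C'), ('B', 'D')]
```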
Perlman is often referred to as the mother of the Internet, but she told The Atlantic that she doesn’t like the title because the Internet wasn’t invented by an individual. She acknowledges that she made “some fundamental contributions to the underlying infrastructure, but no single technology really caused the Internet to succeed.” The Internet’s success hasn’t been due to the specific technologies, but rather due to the array of ways that the Internet has come to be used.

Source: Francois Guillot/AFP/Getty Images
35. Richard Stallman (1953-)
Richard Stallman worked at the Artificial Intelligence Lab at MIT from 1971 to 1984, learning operating system development, writing the first extensible Emacs text editor there in 1976, and developing the AI technique of dependency-directed backtracking, also known as truth maintenance. According to his website, in 1983 Stallman announced the project to develop the GNU operating system, a Unix-like operating system intended to be entirely free software. With that announcement, he also launched the free software movement, and in 1985, he started the Free Software Foundation.
The GNU/Linux system, a variant of GNU that also uses the Linux kernel developed by Linus Torvalds, runs on tens of millions, or even hundreds of millions, of computers. But distributors often include non-free software in those systems, so since the 1990s Stallman has advocated for free software and campaigned against both software patents and dangerous extensions of copyright law. (Correction 2/17/15: range of computers running GNU/Linux changed.)

Source: Sean Gallup/Getty Images
36. Bill Gates (1955-)
Bill Gates famously co-founded Microsoft with Paul Allen in 1975. At the time, their vision of a computer on every desktop and in every home seemed far-fetched, but it has since become a reality in many parts of the world. According to Biography.com, both had been working with MITS, a small company that made the Altair 8800 computer kit. They built a BASIC interpreter for the machine that earned them a fee and royalties, but the income didn’t cover their overhead. The software was also popular with hobbyists, who reproduced and distributed copies of it for free, and Gates saw the free distribution of software as stealing, especially when the software was created to be sold. Microsoft wrote software in different formats for a variety of computer companies, and as the computer industry began to grow with companies like Apple, Intel, and IBM, Gates was often on the road touting the merits of Microsoft software.
In November 1980, IBM was looking for software to run on its upcoming personal computer and approached Microsoft. However, Microsoft hadn’t yet developed an operating system that would run on IBM’s computers, so it bought one, making a deal with the developer to make Microsoft the exclusive licensing agent and, later, the full owner of the software. Gates adapted and licensed the software as MS-DOS. Microsoft had also released a product called the SoftCard, which enabled Microsoft BASIC to operate on Apple II machines.
In 1981, Apple invited Microsoft to develop software for Macintosh computers. Through this knowledge sharing, Microsoft came to develop Windows. Apple’s system used a mouse to drive a graphic interface, displaying text and images on the screen. It differed dramatically from the text and keyboard system of MS-DOS, and Microsoft developed Windows with a graphic interface. Microsoft launched Windows in 1985 with a system that looked visually similar to Apple’s. In 1986, Gates took Microsoft public and became an instant millionaire.

Source: Justin Sullivan/Getty Images
37. Steve Jobs (1955-2011)
In 1976, when Jobs was just 21, he and Steve Wozniak co-founded Apple Computer. They began in the Jobs family garage, and are credited with revolutionizing the computer industry by making machines smaller, cheaper, more intuitive, and more accessible to the general consumer. According to Biography.com, Wozniak conceived of a series of user-friendly computers, and Jobs took charge of marketing, initially marketing them for $666.66 each. In 1980, Apple Computer became a publicly traded company, and reached a market value of $1.2 billion by the end of its first day of trading.
The next several Apple products suffered design flaws, and IBM surpassed Apple in sales. In 1984, Apple released the Macintosh, which was still not IBM-compatible, and the company began to phase Jobs out. He left Apple in 1985 to begin NeXT, and purchased an animation company that later became Pixar and merged with Walt Disney in 2006. But NeXT struggled to sell its specialized operating system to mainstream customers, and Apple eventually bought the company in 1996. Jobs then returned to his post as Apple’s chief executive, and revitalized the company with products like the iMac. Apple’s effective marketing and appealing design began to win the favor of consumers again.
The company introduced innovations like the MacBook Air, iPod, and iPhone, which each had monumental effects on the course of modern technology. After battling pancreatic cancer for nearly a decade, Jobs passed away at the age of 56 in 2011.

Source: Carl Court/AFP/Getty Images
38. Tim Berners-Lee (1955-)
Tim Berners-Lee is best known as the inventor of the World Wide Web, which he began in 1989. He founded the World Wide Web Consortium (W3C) as a forum for the technical development of the web, and also founded the Web Foundation and co-founded the Open Data Institute. Berners-Lee invented the web while at CERN, the large particle physics laboratory near Geneva, and wrote the first web client and server in 1990.
According to the website of the World Wide Web Foundation, he noted that many scientists participating in experiments at CERN and returning to their laboratories around the world were eager to exchange data and results, but found it difficult to actually do so. He understood the unrealized potential of millions of computers connected through the Internet, and documented what would become the World Wide Web with the submission of a proposal specifying a set of technologies that would make the Internet truly accessible.
By October 1990, he had specified the three fundamental technologies that remain the foundation of today’s Web: HTML, URI, and HTTP. His specifications were refined as Web technology spread. He also wrote the first web page editor/browser and the first web server. By the end of 1990, the first web page was served, and by 1991, people outside of CERN joined the web community. In 1993, CERN announced that the technology would be available for anyone to use. Since then, the web has changed the world.
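To see how the three pieces fit together, here is a tiny illustrative server written with Python’s standard library (not Berners-Lee’s original code): a browser requests a URI, HTTP carries the request and response, and HTML describes the page that comes back.

```python
# An illustrative toy web server tying URI, HTTP, and HTML together,
# built with Python's standard library.

from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"<html><body><a href='http://info.cern.ch'>The first website</a></body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):                      # HTTP: the request/response protocol
        # self.path holds the URI path the client asked for
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)             # HTML: the document format returned

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), Handler).serve_forever()  # visit http://localhost:8080/
```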

Source: Jarno Mela/AFP/GettyImages
39. Linus Torvalds (1969-)
Linus Torvalds created the Linux kernel and oversaw the open-source development of the widely used Linux operating system. After purchasing a personal computer, he began using Minix, a Unix-inspired operating system developed by Andrew Tanenbaum. In 1991, Torvalds began work on a new kernel, which would later be called Linux, and after forming a team of volunteers, he released version 1.0 of the kernel in 1994. He continues to oversee the development of Linux, and is the ultimate authority on what new code is incorporated into the standard Linux kernel.

Source: Ralph Orlowski/Getty Images
40. Larry Page (1973-) and 41. Sergey Brin (1973-)
Larry Page and Sergey Brin met at Stanford University, where, for a research project, they developed a search engine that listed the results according to the popularity of the pages. They investigated how sites linked back to other webpages, and realized that helping people find pages with more incoming links, particularly from credible websites, would be a good way to search the Internet. They also realized that the most popular result would often be the most useful. They called the search engine Google after the mathematical term googol, which refers to the number one followed by 100 zeroes. The name, according to Biography.com, reflected their mission of organizing the massive amount of information available on the web.
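A toy sketch with made-up pages, a simplified PageRank-style iteration rather than Google’s production algorithm, captures the insight: a page’s score depends on the scores of the pages that link to it, so links from well-linked pages count for more.

```python
# A toy, PageRank-style iteration over made-up pages: rank by incoming links,
# weighting links from pages that are themselves well linked.

incoming = {            # page -> pages that link to it
    "news": ["blog", "wiki"],
    "blog": ["wiki"],
    "wiki": ["news", "blog"],
}
outdegree = {"news": 1, "blog": 2, "wiki": 2}   # links going out of each page

rank = {page: 1.0 for page in incoming}
damping = 0.85
for _ in range(50):     # iterate until the scores settle
    rank = {
        page: (1 - damping) + damping * sum(rank[src] / outdegree[src]
                                            for src in sources)
        for page, sources in incoming.items()
    }

print(sorted(rank, key=rank.get, reverse=True))  # best-linked pages first
```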
Google incorporated in 1998, and Page and Brin raised $1 million from friends and family to launch their startup, moving off of Stanford’s campus and into a rented garage. Google outgrew office after office. Since they launched Google in 1998, it’s become the most popular search engine in the world, and has undertaken a huge array of endeavors, such as launching Gmail, making Google Maps, digitizing books, creating Android, and purchasing YouTube. Google’s secretive innovation lab, Google X, gave rise to projects like Google Glass and the self-driving car, and Google continues to make investments into research in a wide variety of areas, from robotics to health.

Source: Michael Kovac/Getty Images for Vanity Fair
42. Marc Andreessen (1971-)
Marc Andreessen was a student at the National Center for Supercomputing Applications (NCSA) at the University of Illinois when the World Wide Web was beginning to take off, according to iBiblio, and his position enabled him to become very familiar with the Internet and the web. Most of the browsers available at the time were for expensive Unix machines (meaning that the web was used mostly by academics and engineers), and user interfaces were not user-friendly. Both factors stood in the way of the spread of the web, and Andreessen decided to develop a browser that was easier to use and more graphically rich.
In 1992, Andreessen recruited fellow NCSA employee Eric Bina to help with the project, a new browser called Mosaic. It was more graphically sophisticated than other browsers of its era, and included significant innovations like the “image” tag, which made it possible to include images on web pages. Earlier browsers allowed the viewing of images, but only as separate files. Mosaic also featured a graphic interface with clickable buttons that enabled users to navigate easily, and controls that let them scroll through text. Another of Mosaic’s most innovative features was the hyperlink. In earlier browsers, hypertext links had reference numbers that the user would type in to navigate to the linked document. Hyperlinks enabled users to simply click on a link to retrieve a document.
In 1993, Mosaic was posted on NCSA’s servers, and within weeks, tens of thousands of people had downloaded the software. The original version was for Unix, and Andreessen and Bina assembled a team to build PC and Mac versions. Mosaic’s popularity skyrocketed. More users meant a bigger audience for the web at large, and bigger audiences catalyzed the creation of more content.
Andreessen realized that when he graduated, NCSA would take over Mosaic, so he moved to Silicon Valley, settled in Palo Alto, and built a team with the mission of creating a product that would surpass the original Mosaic. They built Netscape, which was available in 1994, and within weeks was the browser of choice for most web users. It included new HTML tags to give designers greater control and creativity, and by 1996, was used by 75% of web users.
Netscape eventually lost its dominance to Microsoft and other later competitors, in part due to the “browser wars” and in part due to a changing landscape in which Netscape’s pricing structure became a liability, and the company was acquired by AOL in 1999. Andreessen has gone on to numerous other ventures, founding companies and serving on the boards of directors of giants like Facebook, eBay, and HP.

Source: Justin Sullivan/ Getty Images
43. Mark Zuckerberg (1984-)
Mark Zuckerberg famously co-founded Facebook out of his Harvard dorm room. Biography.com reports that at Harvard, fellow students Divya Narendra and twins Cameron and Tyler Winklevoss sought Zuckerberg out to work on an idea for a social networking site that would use information from Harvard’s student networks to create a dating site. He agreed to help, but soon began work on his own social network with Dustin Moskovitz, Chris Hughes, and Eduardo Saverin.
They created a site that enabled users to create profiles, upload photos, and communicate with others. They ran the site, originally called The Facebook, out of a dorm room until June 2004. After his sophomore year, Zuckerberg dropped out of Harvard and moved to Palo Alto to work on Facebook full-time. By the end of 2004, Facebook had 1 million users. A $12.7 million investment from Accel Partners pushed Facebook’s user base to more than 5.5 million by the end of 2005.
A 2006 legal dispute with Narendra and the Winklevosses, who claimed that Zuckerberg had stolen their idea, led to an initial settlement of $65 million. Despite criticism of Zuckerberg following a book and a film that allegedly fictionalized aspects of Facebook’s story, Zuckerberg and Facebook continued to succeed. The company announced its acquisition of Instagram in April 2012 and went public in May 2012. It has since launched a multitude of features and apps, including Home, Paper, Nearby Friends, Slingshot, Mentions, Safety Check, and Rooms, along with continual changes and improvements to the Facebook apps and desktop site. As of December 2014, Facebook had 1.39 billion monthly active users.