The Innovators: How a Group of Inventors, Hackers, Geniuses, and Geeks Created the Digital Revolution - Walter Isaacson
This reconfigured ENIAC, which became operational in April 1948, had a read-only memory, which meant that it was hard to modify programs while they were running. In addition, its mercury delay line memory was sluggish and required precision engineering. Both of these drawbacks were avoided in a small machine at Manchester University in England that was built from scratch to function as a stored-program computer. Dubbed “the Manchester Baby,” it became operational in June 1948.
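
To see why that distinction mattered, here is a minimal Python sketch of the stored-program idea: instructions and data sit in the same writable memory, so a running program can overwrite one of its own instructions. The machine and its opcodes are invented purely for illustration; they are not the Baby's actual instruction set.

# Toy illustration of the stored-program idea: instructions and data live in
# the same writable memory, so a running program can alter its own code.
# The opcodes are hypothetical, not the Manchester Baby's real instruction set.

def run(memory):
    """Execute the instructions stored in `memory` until a HALT is reached."""
    pc = 0  # program counter: index of the next instruction in memory
    while True:
        op, a, b = memory[pc]
        if op == "HALT":
            return memory
        elif op == "ADD":       # memory[a] += memory[b] (both are DATA cells)
            memory[a] = ("DATA", memory[a][1] + memory[b][1], 0)
        elif op == "STORE_OP":  # overwrite the instruction at address a with b
            memory[a] = b       # self-modification: code rewriting code
        pc += 1

# Program and data occupy one memory. Cell 4 starts out as an ADD but is
# rewritten to HALT by the instruction in cell 1 before it is ever reached.
memory = [
    ("ADD", 5, 6),                    # 0: cell5 += cell6
    ("STORE_OP", 4, ("HALT", 0, 0)),  # 1: replace instruction 4 with HALT
    ("ADD", 5, 6),                    # 2: cell5 += cell6 again
    ("ADD", 5, 6),                    # 3: and again
    ("ADD", 5, 6),                    # 4: never runs as ADD; it was overwritten
    ("DATA", 10, 0),                  # 5: data
    ("DATA", 1, 0),                   # 6: data
]

print(run(memory)[5])  # ("DATA", 13, 0): three ADDs ran, then the rewritten HALT stopped the machine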

Manchester’s computing lab was run by Max Newman, Turing’s mentor, and the primary work on the new computer was done by Frederic Calland Williams and Thomas Kilburn. Williams invented a storage mechanism using cathode-ray tubes, which made the machine faster and simpler than ones using mercury delay lines. It worked so well that it led to the more powerful Manchester Mark I, which became operational in April 1949, as well as the EDSAC, completed by Maurice Wilkes and a team at Cambridge that May.85

As these machines were being developed, Turing was also trying to develop a stored-program computer. After leaving Bletchley Park, he joined the National Physical Laboratory, a prestigious institute in London, where he designed a computer named the Automatic Computing Engine in homage to Babbage’s two engines. But progress on ACE was fitful. By 1948 Turing was fed up with the pace and frustrated that his colleagues had no interest in pushing the bounds of machine learning and artificial intelligence, so he left to join Max Newman at Manchester.86

Likewise, von Neumann embarked on developing a stored-program computer as soon as he settled at the Institute for Advanced Study in Princeton in 1946, an endeavor chronicled in George Dyson’s Turing’s Cathedral. The Institute’s director, Frank Aydelotte, and its most influential faculty trustee, Oswald Veblen, were staunch supporters of what became known as the IAS Machine, fending off criticism from other faculty that building a computing machine would demean the mission of what was supposed to be a haven for theoretical thinking. “He clearly stunned, or even horrified, some of his mathematical colleagues of the most erudite abstraction, by openly professing his great interest in other mathematical tools than the blackboard and chalk or pencil and paper,” von Neumann’s wife, Klara, recalled. “His proposal to build an electronic computing machine under the sacred dome of the Institute was not received with applause to say the least.”87

Von Neumann’s team members were stashed in an area that would have been used by the logician Kurt Gödel’s secretary, except he didn’t want one. Throughout 1946 they published detailed papers about their design, which they sent to the Library of Congress and the U.S. Patent Office, not with applications for patents but with affidavits saying they wanted the work to be in the public domain.

Their machine became fully operational in 1952, but it was slowly abandoned after von Neumann left for Washington to join the Atomic Energy Commission. “The demise of our computer group was a disaster not only for Princeton but for science as a whole,” said the physicist Freeman Dyson, a member of the Institute (and George Dyson’s father). “It meant that there did not exist at that critical period in the 1950s an academic center where computer people of all kinds could get together at the highest intellectual level.”88 Instead, beginning in the 1950s, innovation in computing shifted to the corporate realm, led by companies such as Ferranti, IBM, Remington Rand, and Honeywell.

That shift takes us back to the issue of patent protections. If von Neumann and his team had continued to pioneer innovations and put them in the public domain, would such an open-source model of development have led to faster improvements in computers? Or did marketplace competition and the financial rewards for creating intellectual property do more to spur innovation? In the cases of the Internet, the Web, and some forms of software, the open model would turn out to work better. But when it came to hardware, such as computers and microchips, a proprietary system provided incentives for a spurt of innovation in the 1950s. The reason the proprietary approach worked well, especially for computers, was that large industrial organizations, which needed to raise working capital, were best at handling the research, development, manufacturing, and marketing for such machines. In addition, until the mid-1990s, patent protection was easier to obtain for hardware than it was for software.V However, there was a downside to the patent protection given to hardware innovation: the proprietary model produced companies that were so entrenched and defensive that they would miss out on the personal computer revolution in the early 1970s.

CAN MACHINES THINK?

As he thought about the development of stored-program computers, Alan Turing turned his attention to the assertion that Ada Lovelace had made a century earlier, in her final “Note” on Babbage’s Analytical Engine: that machines could not really think. If a machine could modify its own program based on the information it processed, Turing asked, wouldn’t that be a form of learning? Might that lead to artificial intelligence?

The issues surrounding artificial intelligence go back to the ancients. So do the related questions involving human consciousness. As with most questions of this sort, Descartes was instrumental in framing them in modern terms. In his 1637 Discourse on the Method, which contains his famous assertion “I think, therefore I am,” Descartes wrote:

If there were machines that bore a resemblance to our bodies and imitated our actions as closely as possible for all practical purposes, we should still have two very certain means of recognizing that they were not real humans. The first is that . . . it is not conceivable that such a machine should produce arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do. Secondly, even though some machines might do some things as well as we do them, or perhaps even better, they would inevitably fail in others, which would reveal that they are acting not from understanding.

Turing had long been interested in the way computers might replicate the workings of a human brain, and this curiosity was furthered by his work on machines that deciphered coded language. In early 1943, as Colossus was being designed at Bletchley Park, Turing sailed across the Atlantic on a mission to Bell Laboratories in lower Manhattan, where he consulted with the group working on electronic speech encipherment, the technology that could electronically scramble and unscramble telephone conversations.

There he met the colorful genius Claude Shannon, the former MIT graduate student who wrote the seminal master’s thesis in 1937 that showed how Boolean algebra, which rendered logical propositions into equations, could be performed by electronic circuits. Shannon and Turing began meeting for tea and long conversations in the afternoons. Both were interested in brain science, and they realized that their 1937 papers had something fundamental in common: they showed how a machine, operating with simple binary instructions, could tackle not only math problems but all of logic. And since logic was the basis for how human brains reasoned, then a machine could, in theory, replicate human intelligence.
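
As an illustration of that common thread, here is a small Python sketch, with ordinary functions standing in for the relay and vacuum-tube circuits Shannon analyzed: every Boolean connective is composed from a single NAND element, and the resulting "circuit" evaluates a logical proposition over all truth assignments. The proposition and the helper names are chosen only for illustration.

# Sketch of Shannon's point: once you have a single switching element (here a
# NAND gate), all of Boolean logic can be built from it, so a machine wired
# from such elements can evaluate any logical proposition.
from itertools import product

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# Every other connective composes from NAND alone.
def NOT(a):        return nand(a, a)
def AND(a, b):     return NOT(nand(a, b))
def OR(a, b):      return nand(NOT(a), NOT(b))
def IMPLIES(a, b): return OR(NOT(a), b)

# A logical proposition rendered as a Boolean equation:
# (p AND q) IMPLIES p, a tautology that holds for every assignment.
for p, q in product([False, True], repeat=2):
    print(p, q, IMPLIES(AND(p, q), p))
# The last column prints True in all four rows.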

“Shannon wants to feed not just data to [a machine], but cultural things!” Turing told Bell Lab colleagues at lunch one day. “He wants to play music to it!” At another lunch in the Bell Labs dining room, Turing held forth in his high-pitched voice, audible to all the executives in the room: “No, I’m not interested in developing a powerful brain. All I’m after is just a mediocre brain, something like the President of the American Telephone and Telegraph Company.”89