
The Perfect Google Is Making Us Stupid

Posted 2008-08-04 10:24:32
Reposted from: http://nbweekly.oeeee.com/Print/Article/430,7,5175,0.shtml
The Net seems to be chipping away at our capacity for concentration and contemplation
  "Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave?" So the supercomputer HAL pleads with the unyielding astronaut Dave Bowman in a famous, haunting scene near the end of Stanley Kubrick's film 2001: A Space Odyssey. Having nearly been sent to his death in deep space by the malfunctioning machine, Bowman calmly, coldly disconnects the memory circuits that control the computer's artificial intelligence. "Dave, my mind is going," HAL cries. "I can feel it. I can feel it."

  I can feel it, too. Over the past few years I have had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn't going, but it is changing; I no longer think the way I used to think. I feel it most strongly when I'm reading. Immersing myself in a book or a lengthy article used to be effortless: my mind would get caught up in the narrative or the turns of the argument, and I would spend hours strolling through long stretches of prose. Not anymore. Now my concentration often starts to drift after two or three pages. I grow fidgety, lose the thread, and begin looking for something else to do. I feel as if I am constantly dragging my wayward brain back to the text; the reading that once came as a pleasure has become a battle.

  I know what has been happening. For more than a decade now, I have been spending a great deal of time online, searching and surfing, and sometimes adding to the Internet's vast databases. As a writer, I have found the Web a godsend. Research that once required days in the stacks and periodical rooms of libraries can now be done in minutes: a few Google searches, a few quick clicks on hyperlinks, and I have the telling fact or pithy quote I was after. Even when I'm not working, I am as likely as not to be foraging in the Web's thickets of information: reading and writing e-mail, scanning headlines and blogs, watching videos and listening to music, or simply clicking from link to link.

  For me, as for others, the Net is becoming a universal medium, the conduit for most of the information that flows through my eyes and ears and into my mind. The advantages of quick access to such an incredibly rich store of information are many. "The perfect recall of silicon memory," Wired's Clive Thompson has written, "can be an enormous boon to thinking." But that boon comes at a price. As the media theorist Marshall McLuhan pointed out in the 1960s, media are not merely passive channels of information: they supply the stuff of thought, but they also shape the process of thought. And what the Net seems to be doing is chipping away at my capacity for concentration and contemplation. My mind now takes in information the way the Net distributes it, in a swiftly moving stream of particles. Once I was a diver in the sea of words; now I skim along the surface like a man on a board.

The Net has changed our habits of mind
  I'm not the only one. When I mention my troubles with reading to friends, most of whom work with words, many say they are having similar experiences: the more they use the Web, the harder they must struggle to stay focused on long pieces of writing. Some of the bloggers I follow have also begun mentioning the phenomenon. Scott Karp, who recently blogged about online media, confessed that he has stopped reading books altogether. "I was a lit major in college, and used to be a voracious book reader. What happened?" he wrote. His answer to his own question: what if he does all his reading on the Web not so much because the way he reads has changed, that is, because he is merely seeking convenience, but because the way he thinks has changed?

  Anecdotes like these don't prove much; we still need long-term neurological and psychological experiments to give us a definitive picture of how the Internet exerts its influence. But a recently published study of online habits, led by scholars from the University of London, suggests that we may well be in the midst of a sea change in the way we read and think. As part of a five-year research program, the scholars examined computer logs documenting the behavior of visitors to two research sites, one operated by the British Library and one by a U.K. educational consortium, which offer visitors access to journal articles, e-books, and other written resources. They found that people using the sites exhibited "a form of skimming activity," hopping from one source to another and rarely returning to any source they had already visited. They typically read no more than a page or two of an article or book before bouncing to another site. Sometimes they saved a long article, but there was no evidence that they ever went back and actually read it. The report states: "It is clear that users are not reading online in the traditional sense; indeed there are signs that new forms of 'reading' are emerging as users skim through titles, contents pages and abstracts in search of quick wins. It almost seems that they go online to avoid reading in the traditional sense."

  This is a wholly different kind of reading, and behind it lies a wholly different kind of thinking, perhaps even a new sense of the self. Maryanne Wolf, a developmental psychologist at Tufts University, worries that the style of reading promoted by the Net may be weakening our capacity for deep reading. When we read online, she says, we become "mere decoders of information," and the rich intellectual engagement we bring to deep, undistracted reading largely evaporates.

Our intelligence will become artificial intelligence
  Reading, Wolf explains, is not an instinctive human skill. We must teach our brains how to translate the symbols we see into language we can understand, and the media and other technologies we use in learning and practicing reading play an important part in forming the neural circuits inside our brains. Experiments show that readers of ideograms, such as the Chinese, develop mental circuitry for reading that differs markedly from the circuitry formed in those of us whose written language uses an alphabet.

  In 1882, Nietzsche bought a typewriter. His vision was failing, and keeping his eyes fixed on a page left him exhausted, with splitting headaches. He was forced to curtail his writing and feared he would soon have to give it up. The typewriter rescued him: once he had mastered touch-typing, he could write with his eyes closed, using only the tips of his fingers, and words flowed once more from his mind onto the page. But the machine had a subtler effect on his work. "Our writing equipment," Nietzsche said, "takes part in the forming of our thoughts." Under the sway of the typewriter, the German media scholar Friedrich Kittler writes, Nietzsche's prose "changed from arguments to aphorisms, from thoughts to puns."

  The human brain is almost infinitely malleable. James Olds, a professor of neuroscience at George Mason University, says that even the adult mind is very plastic: nerve cells routinely break old connections and form new ones, allowing the brain to alter the way it functions.

  Thanks to the brain's plasticity, these adaptive changes occur at a biological level within us, and the Internet promises particularly far-reaching effects on our cognition. The Net delivers content laced with hyperlinks, blinking ads, and other digital gewgaws, which capture our attention even as they shape how our minds perceive what the Net delivers. Nor does the Net's influence end at the edges of the computer screen. As people's minds become attuned to the patchwork of Internet media, traditional media must adapt to the audience's new expectations: television adds scrolling news tickers and pop-up ads, while magazines and newspapers shorten their articles and add brief headlines and capsule summaries.

  Never in history has a communications system played as many roles in our lives as the Internet does today. It is changing how we read, and, even more, how we think. Google's headquarters is like the high church of the Internet age, and our lives have become nearly inseparable from it. The company has announced its intent to develop "the perfect search engine," one that "understands exactly what you mean and gives you back exactly what you want." Yet the more pieces of information we get from it, and the faster we snatch up its gifts, the more deeply our brains sink into this mode of quick access, losing not only the habit of traditional deep reading but the logical foundation of careful thought.

  I am haunted by that scene in 2001: A Space Odyssey. What makes it so poignant, and so strange, is the computer's emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike pleading with the astronaut. In the world of 2001, people have become machinelike, and most of the human qualities have passed to the machine. That is Kubrick's dark parable: when we rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into a dull artificial intelligence.
OP | Posted 2008-08-04 10:25:31

Is Google making us ever more stupid? Our capacity for concentration and contemplation is being chipped away

Reposted from: http://it.people.com.cn/GB/42891/42894/7429622.html
The Atlantic dissects how the Internet generation's brains are degenerating, arguing that the new style of reading returns us to the Middle Ages

  "Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave?"

  This famous scene comes near the end of Kubrick's film 2001: A Space Odyssey: the supercomputer HAL is begging the astronaut Dave Bowman to stay his hand and spare its life. A computer malfunction has sent Dave adrift in the vastness of outer space, his route uncertain and his destination unknown; he can only face death unflinching. In the end, he turns on HAL, calmly and coldly disconnecting its memory circuits.

  "Dave, my mind is going," HAL says in despair. "I can feel it. I can feel it."

  The Net chips away at concentration and contemplation

  When Nicholas Carr recalls HAL's lament, his skin prickles and his hands and feet go a little cold. "I can feel it, too," he says.

  Writing in the July/August 2008 issue of The Atlantic under the title "Is Google Making Us Stupid?", Carr painfully dissects his own and the Internet generation's mental decline. "Over the past few years I've had an uneasy sense that someone, or something, has been tinkering away inside my head, redrawing my brain's wiring, rewriting its memory," he writes. "My mind isn't going, so far as I can tell, but it's changing."

  He has noticed that reading a book or a long article used to take no effort at all: his mind would simply follow the narrative or the argument wherever it turned. Not anymore. "Now my concentration often starts to drift after two or three pages."

  Carr has found the cause. Over the past decade or so, he has spent a great deal of time online, surfing and searching the Internet's ocean of information. For a writer, the Web is like a cornucopia fallen from the sky: research that once took days among the book stacks can now be finished in minutes. A few Google searches, a couple of mouse clicks, and it's all there. "For me, as for others," Carr writes, "the Net is becoming a universal medium, the conduit through which information flows past my eyes and ears and into my mind."

  The information is so abundant that we enjoy it without end, and gratefully, while often overlooking the price to be paid. "The Net seems to have chipped away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net supplies it: in swiftly moving particles."

  The Net's new way of reading: power browsing

  Carr is not the only one with this problem. Bruce Friedman, who has long taught at the University of Michigan Medical School, also wrote on his blog earlier this year about how the Internet has altered his mental habits. "I have now almost totally lost the ability to read and absorb a longish article, whether on the web or in print," he wrote. He told Carr by telephone that his thinking has taken on a "staccato" quality, a product of his habit of quickly scanning short passages from many online sources. "I can't read War and Peace anymore," Friedman admitted. "I've lost the ability to do that. Even a blog post of more than three or four paragraphs is too much to absorb. I skim it and move on."

  University College London spent five years studying online reading habits. The scholars analyzed the browsing logs of two academic sites, both offering online access to e-journals, e-books, and other written material, and found that readers were always busy skimming from one piece to the next, rarely revisiting anything they had already seen. They would open an article or a book, usually read a page or two, and then "bounce" off somewhere else. The report says: "It is clear that users are not reading online in the traditional sense; on the contrary, signs of a new 'reading' style have emerged: users 'power browse' indiscriminately through titles, contents pages and abstracts in pursuit of quick results. It almost seems that they go online precisely to avoid reading in the traditional sense."

  The typewriter changed Nietzsche's writing style

  The Internet is changing not only how we read but perhaps how we think, and even our sense of self. Maryanne Wolf, the Tufts University psychologist and author of Proust and the Squid: The Story and Science of the Reading Brain, says: "We are not defined only by what we read; we are also defined by how we read." She worries that the new reading style, which places "efficiency" and "immediacy" above all else, may diminish our capacity for deep reading. Centuries ago, the printing press made reading long and complex works an everyday matter; is today's Internet technology returning us to the short, simple fare of the Middle Ages? When we read online, Wolf says, we are at best "decoders of information," and the capacity to interpret a text, and the rich mental associations formed when we read deeply and attentively, is largely lost.

  Wolf holds that reading is not a skill humans are born with; it is not woven into our genes the way speech is. We must train our brains to learn how to decode the characters we see into language we can understand.

  In 1882, Nietzsche bought a typewriter. By then his vision had deteriorated badly: staring at a page for long stretches brought on crushing headaches, and he feared he would be forced to stop writing. The typewriter saved him. Through practice he could eventually type with his eyes closed, touch-typing. Yet the new machine also wrought a subtle change in the style of his work. A composer friend wrote to him about it, adding that in his own composing, his style often shifted with the character of his pen and paper.

  "You are right," Nietzsche replied. "Our writing tools take part in the forming of our thoughts." The German media scholar Friedrich Kittler argues that after switching to the typewriter, Nietzsche's style "changed from arguments to aphorisms, from thoughts to puns, from elaborate rhetoric to telegram style."

  Citing neuroscientists, Carr argues that the adult brain remains remarkably plastic, and that the historical inventions of the mechanical clock and the map likewise show how human beings changed their thinking about time and space in response to tools. The Internet is today's clock and map.

  Under the Net's influence, traditional media fragment too

  Once people's minds adapt to the Internet's media patchwork, traditional media change as well. Television programs have added scrolling tickers and pop-up ads, while newspapers and magazines shorten their articles, introduce capsule summaries, and crowd their pages with easy-to-browse snippets. In March of this year, The New York Times decided to convert its second and third pages into content digests.

  Google's chief executive, Eric Schmidt, says the company strives to "systematize everything." Google has also declared that its mission is "to organize the world's information and make it universally accessible and useful," by developing "the perfect search engine," one that "understands exactly what you mean and gives you back exactly what you want." The question is: will it make us ever more stupid?

  "I can feel it," Carr concludes. The essence of Kubrick's dark prophecy is that when we rely on computers as the medium for understanding the world, they become our own minds.


OP | Posted 2008-08-04 10:30:50

Is Google Making Us Stupid?

Note: the original article follows.
Reposted from: http://www.theatlantic.com/doc/200807/google


“Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave?” So the supercomputer HAL pleads with the implacable astronaut Dave Bowman in a famous and weirdly poignant scene toward the end of Stanley Kubrick’s 2001: A Space Odyssey. Bowman, having nearly been sent to a deep-space death by the malfunctioning machine, is calmly, coldly disconnecting the memory circuits that control its artificial brain. “Dave, my mind is going,” HAL says, forlornly. “I can feel it. I can feel it.”
I can feel it, too. Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.
I think I know what’s going on. For more than a decade now, I’ve been spending a lot of time online, searching and surfing and sometimes adding to the great databases of the Internet. The Web has been a godsend to me as a writer. Research that once required days in the stacks or periodical rooms of libraries can now be done in minutes. A few Google searches, some quick clicks on hyperlinks, and I’ve got the telltale fact or pithy quote I was after. Even when I’m not working, I’m as likely as not to be foraging in the Web’s info-thickets—reading and writing e-mails, scanning headlines and blog posts, watching videos and listening to podcasts, or just tripping from link to link to link. (Unlike footnotes, to which they’re sometimes likened, hyperlinks don’t merely point to related works; they propel you toward them.)
For me, as for others, the Net is becoming a universal medium, the conduit for most of the information that flows through my eyes and ears and into my mind. The advantages of having immediate access to such an incredibly rich store of information are many, and they’ve been widely described and duly applauded. “The perfect recall of silicon memory,” Wired’s Clive Thompson has written, “can be an enormous boon to thinking.” But that boon comes at a price. As the media theorist Marshall McLuhan pointed out in the 1960s, media are not just passive channels of information. They supply the stuff of thought, but they also shape the process of thought. And what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.
I’m not the only one. When I mention my troubles with reading to friends and acquaintances—literary types, most of them—many say they’re having similar experiences. The more they use the Web, the more they have to fight to stay focused on long pieces of writing. Some of the bloggers I follow have also begun mentioning the phenomenon. Scott Karp, who writes a blog about online media, recently confessed that he has stopped reading books altogether. “I was a lit major in college, and used to be [a] voracious book reader,” he wrote. “What happened?” He speculates on the answer: “What if I do all my reading on the web not so much because the way I read has changed, i.e. I’m just seeking convenience, but because the way I THINK has changed?”
Bruce Friedman, who blogs regularly about the use of computers in medicine, also has described how the Internet has altered his mental habits. “I now have almost totally lost the ability to read and absorb a longish article on the web or in print,” he wrote earlier this year. A pathologist who has long been on the faculty of the University of Michigan Medical School, Friedman elaborated on his comment in a telephone conversation with me. His thinking, he said, has taken on a “staccato” quality, reflecting the way he quickly scans short passages of text from many sources online. “I can’t read War and Peace anymore,” he admitted. “I’ve lost the ability to do that. Even a blog post of more than three or four paragraphs is too much to absorb. I skim it.”
Anecdotes alone don’t prove much. And we still await the long-term neurological and psychological experiments that will provide a definitive picture of how Internet use affects cognition. But a recently published study of online research habits, conducted by scholars from University College London, suggests that we may well be in the midst of a sea change in the way we read and think. As part of the five-year research program, the scholars examined computer logs documenting the behavior of visitors to two popular research sites, one operated by the British Library and one by a U.K. educational consortium, that provide access to journal articles, e-books, and other sources of written information. They found that people using the sites exhibited “a form of skimming activity,” hopping from one source to another and rarely returning to any source they’d already visited. They typically read no more than one or two pages of an article or book before they would “bounce” out to another site. Sometimes they’d save a long article, but there’s no evidence that they ever went back and actually read it. The authors of the study report:
It is clear that users are not reading online in the traditional sense; indeed there are signs that new forms of “reading” are emerging as users “power browse” horizontally through titles, contents pages and abstracts going for quick wins. It almost seems that they go online to avoid reading in the traditional sense.

Thanks to the ubiquity of text on the Internet, not to mention the popularity of text-messaging on cell phones, we may well be reading more today than we did in the 1970s or 1980s, when television was our medium of choice. But it’s a different kind of reading, and behind it lies a different kind of thinking—perhaps even a new sense of the self. “We are not only what we read,” says Maryanne Wolf, a developmental psychologist at Tufts University and the author of Proust and the Squid: The Story and Science of the Reading Brain. “We are how we read.” Wolf worries that the style of reading promoted by the Net, a style that puts “efficiency” and “immediacy” above all else, may be weakening our capacity for the kind of deep reading that emerged when an earlier technology, the printing press, made long and complex works of prose commonplace. When we read online, she says, we tend to become “mere decoders of information.” Our ability to interpret text, to make the rich mental connections that form when we read deeply and without distraction, remains largely disengaged.
Reading, explains Wolf, is not an instinctive skill for human beings. It’s not etched into our genes the way speech is. We have to teach our minds how to translate the symbolic characters we see into the language we understand. And the media or other technologies we use in learning and practicing the craft of reading play an important part in shaping the neural circuits inside our brains. Experiments demonstrate that readers of ideograms, such as the Chinese, develop a mental circuitry for reading that is very different from the circuitry found in those of us whose written language employs an alphabet. The variations extend across many regions of the brain, including those that govern such essential cognitive functions as memory and the interpretation of visual and auditory stimuli. We can expect as well that the circuits woven by our use of the Net will be different from those woven by our reading of books and other printed works.
Sometime in 1882, Friedrich Nietzsche bought a typewriter—a Malling-Hansen Writing Ball, to be precise. His vision was failing, and keeping his eyes focused on a page had become exhausting and painful, often bringing on crushing headaches. He had been forced to curtail his writing, and he feared that he would soon have to give it up. The typewriter rescued him, at least for a time. Once he had mastered touch-typing, he was able to write with his eyes closed, using only the tips of his fingers. Words could once again flow from his mind to the page.
But the machine had a subtler effect on his work. One of Nietzsche’s friends, a composer, noticed a change in the style of his writing. His already terse prose had become even tighter, more telegraphic. “Perhaps you will through this instrument even take to a new idiom,” the friend wrote in a letter, noting that, in his own work, his “‘thoughts’ in music and language often depend on the quality of pen and paper.”

“You are right,” Nietzsche replied, “our writing equipment takes part in the forming of our thoughts.” Under the sway of the machine, writes the German media scholar Friedrich A. Kittler, Nietzsche’s prose “changed from arguments to aphorisms, from thoughts to puns, from rhetoric to telegram style.”
The human brain is almost infinitely malleable. People used to think that our mental meshwork, the dense connections formed among the 100 billion or so neurons inside our skulls, was largely fixed by the time we reached adulthood. But brain researchers have discovered that that’s not the case. James Olds, a professor of neuroscience who directs the Krasnow Institute for Advanced Study at George Mason University, says that even the adult mind “is very plastic.” Nerve cells routinely break old connections and form new ones. “The brain,” according to Olds, “has the ability to reprogram itself on the fly, altering the way it functions.”
As we use what the sociologist Daniel Bell has called our “intellectual technologies”—the tools that extend our mental rather than our physical capacities—we inevitably begin to take on the qualities of those technologies. The mechanical clock, which came into common use in the 14th century, provides a compelling example. In Technics and Civilization, the historian and cultural critic Lewis Mumford described how the clock “disassociated time from human events and helped create the belief in an independent world of mathematically measurable sequences.” The “abstract framework of divided time” became “the point of reference for both action and thought.”
The clock’s methodical ticking helped bring into being the scientific mind and the scientific man. But it also took something away. As the late MIT computer scientist Joseph Weizenbaum observed in his 1976 book, Computer Power and Human Reason: From Judgment to Calculation, the conception of the world that emerged from the widespread use of timekeeping instruments “remains an impoverished version of the older one, for it rests on a rejection of those direct experiences that formed the basis for, and indeed constituted, the old reality.” In deciding when to eat, to work, to sleep, to rise, we stopped listening to our senses and started obeying the clock.
The process of adapting to new intellectual technologies is reflected in the changing metaphors we use to explain ourselves to ourselves. When the mechanical clock arrived, people began thinking of their brains as operating “like clockwork.” Today, in the age of software, we have come to think of them as operating “like computers.” But the changes, neuroscience tells us, go much deeper than metaphor. Thanks to our brain’s plasticity, the adaptation occurs also at a biological level.
The Internet promises to have particularly far-reaching effects on cognition. In a paper published in 1936, the British mathematician Alan Turing proved that a digital computer, which at the time existed only as a theoretical machine, could be programmed to perform the function of any other information-processing device. And that’s what we’re seeing today. The Internet, an immeasurably powerful computing system, is subsuming most of our other intellectual technologies. It’s becoming our map and our clock, our printing press and our typewriter, our calculator and our telephone, and our radio and TV.
When the Net absorbs a medium, that medium is re-created in the Net’s image. It injects the medium’s content with hyperlinks, blinking ads, and other digital gewgaws, and it surrounds the content with the content of all the other media it has absorbed. A new e-mail message, for instance, may announce its arrival as we’re glancing over the latest headlines at a newspaper’s site. The result is to scatter our attention and diffuse our concentration.
The Net’s influence doesn’t end at the edges of a computer screen, either. As people’s minds become attuned to the crazy quilt of Internet media, traditional media have to adapt to the audience’s new expectations. Television programs add text crawls and pop-up ads, and magazines and newspapers shorten their articles, introduce capsule summaries, and crowd their pages with easy-to-browse info-snippets. When, in March of this year, The New York Times decided to devote the second and third pages of every edition to article abstracts, its design director, Tom Bodkin, explained that the “shortcuts” would give harried readers a quick “taste” of the day’s news, sparing them the “less efficient” method of actually turning the pages and reading the articles. Old media have little choice but to play by the new-media rules.
Never has a communications system played so many roles in our lives—or exerted such broad influence over our thoughts—as the Internet does today. Yet, for all that’s been written about the Net, there’s been little consideration of how, exactly, it’s reprogramming us. The Net’s intellectual ethic remains obscure.
About the same time that Nietzsche started using his typewriter, an earnest young man named Frederick Winslow Taylor carried a stopwatch into the Midvale Steel plant in Philadelphia and began a historic series of experiments aimed at improving the efficiency of the plant’s machinists. With the approval of Midvale’s owners, he recruited a group of factory hands, set them to work on various metalworking machines, and recorded and timed their every movement as well as the operations of the machines. By breaking down every job into a sequence of small, discrete steps and then testing different ways of performing each one, Taylor created a set of precise instructions—an “algorithm,” we might say today—for how each worker should work. Midvale’s employees grumbled about the strict new regime, claiming that it turned them into little more than automatons, but the factory’s productivity soared.
More than a hundred years after the invention of the steam engine, the Industrial Revolution had at last found its philosophy and its philosopher. Taylor’s tight industrial choreography—his “system,” as he liked to call it—was embraced by manufacturers throughout the country and, in time, around the world. Seeking maximum speed, maximum efficiency, and maximum output, factory owners used time-and-motion studies to organize their work and configure the jobs of their workers. The goal, as Taylor defined it in his celebrated 1911 treatise, The Principles of Scientific Management, was to identify and adopt, for every job, the “one best method” of work and thereby to effect “the gradual substitution of science for rule of thumb throughout the mechanic arts.” Once his system was applied to all acts of manual labor, Taylor assured his followers, it would bring about a restructuring not only of industry but of society, creating a utopia of perfect efficiency. “In the past the man has been first,” he declared; “in the future the system must be first.”
Taylor’s system is still very much with us; it remains the ethic of industrial manufacturing. And now, thanks to the growing power that computer engineers and software coders wield over our intellectual lives, Taylor’s ethic is beginning to govern the realm of the mind as well. The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the “one best method”—the perfect algorithm—to carry out every mental movement of what we’ve come to describe as “knowledge work.”
Google’s headquarters, in Mountain View, California—the Googleplex—is the Internet’s high church, and the religion practiced inside its walls is Taylorism. Google, says its chief executive, Eric Schmidt, is “a company that’s founded around the science of measurement,” and it is striving to “systematize everything” it does. Drawing on the terabytes of behavioral data it collects through its search engine and other sites, it carries out thousands of experiments a day, according to the Harvard Business Review, and it uses the results to refine the algorithms that increasingly control how people find information and extract meaning from it. What Taylor did for the work of the hand, Google is doing for the work of the mind.
The company has declared that its mission is “to organize the world’s information and make it universally accessible and useful.” It seeks to develop “the perfect search engine,” which it defines as something that “understands exactly what you mean and gives you back exactly what you want.” In Google’s view, information is a kind of commodity, a utilitarian resource that can be mined and processed with industrial efficiency. The more pieces of information we can “access” and the faster we can extract their gist, the more productive we become as thinkers.
Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our brains. “The ultimate search engine is something as smart as people—or smarter,” Page said in a speech a few years back. “For us, working on search is a way to work on artificial intelligence.” In a 2004 interview with Newsweek, Brin said, “Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.” Last year, Page told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.”
Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast quantities of cash at their disposal and a small army of computer scientists in their employ. A fundamentally scientific enterprise, Google is motivated by a desire to use technology, in Eric Schmidt’s words, “to solve problems that have never been solved before,” and artificial intelligence is the hardest problem out there. Why wouldn’t Brin and Page want to be the ones to crack it?
Still, their easy assumption that we’d all “be better off” if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.
The idea that our minds should operate as high-speed data-processing machines is not only built into the workings of the Internet, it is the network’s reigning business model as well. The faster we surf across the Web—the more links we click and pages we view—the more opportunities Google and other companies gain to collect information about us and to feed us advertisements. Most of the proprietors of the commercial Internet have a financial stake in collecting the crumbs of data we leave behind as we flit from link to link—the more crumbs, the better. The last thing these companies want is to encourage leisurely reading or slow, concentrated thought. It’s in their economic interest to drive us to distraction.
Maybe I’m just a worrywart. Just as there’s a tendency to glorify technological progress, there’s a countertendency to expect the worst of every new tool or machine. In Plato’s Phaedrus, Socrates bemoaned the development of writing. He feared that, as people came to rely on the written word as a substitute for the knowledge they used to carry inside their heads, they would, in the words of one of the dialogue’s characters, “cease to exercise their memory and become forgetful.” And because they would be able to “receive a quantity of information without proper instruction,” they would “be thought very knowledgeable when they are for the most part quite ignorant.” They would be “filled with the conceit of wisdom instead of real wisdom.” Socrates wasn’t wrong—the new technology did often have the effects he feared—but he was shortsighted. He couldn’t foresee the many ways that writing and reading would serve to spread information, spur fresh ideas, and expand human knowledge (if not wisdom).
The arrival of Gutenberg’s printing press, in the 15th century, set off another round of teeth gnashing. The Italian humanist Hieronimo Squarciafico worried that the easy availability of books would lead to intellectual laziness, making men “less studious” and weakening their minds. Others argued that cheaply printed books and broadsheets would undermine religious authority, demean the work of scholars and scribes, and spread sedition and debauchery. As New York University professor Clay Shirky notes, “Most of the arguments made against the printing press were correct, even prescient.” But, again, the doomsayers were unable to imagine the myriad blessings that the printed word would deliver.
So, yes, you should be skeptical of my skepticism. Perhaps those who dismiss critics of the Internet as Luddites or nostalgists will be proved correct, and from our hyperactive, data-stoked minds will spring a golden age of intellectual discovery and universal wisdom. Then again, the Net isn’t the alphabet, and although it may replace the printing press, it produces something altogether different. The kind of deep reading that a sequence of printed pages promotes is valuable not just for the knowledge we acquire from the author’s words but for the intellectual vibrations those words set off within our own minds. In the quiet spaces opened up by the sustained, undistracted reading of a book, or by any other act of contemplation, for that matter, we make our own associations, draw our own inferences and analogies, foster our own ideas. Deep reading, as Maryanne Wolf argues, is indistinguishable from deep thinking.
If we lose those quiet spaces, or fill them up with “content,” we will sacrifice something important not only in our selves but in our culture. In a recent essay, the playwright Richard Foreman eloquently described what’s at stake:
I come from a tradition of Western culture, in which the ideal (my ideal) was the complex, dense and “cathedral-like” structure of the highly educated and articulate personality—a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West. [But now] I see within us all (myself included) the replacement of complex inner density with a new kind of self—evolving under the pressure of information overload and the technology of the “instantly available.”

As we are drained of our “inner repertory of dense cultural inheritance,” Foreman concluded, we risk turning into “‘pancake people’—spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.”
I’m haunted by that scene in 2001. What makes it so poignant, and so weird, is the computer’s emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike pleading with the astronaut—“I can feel it. I can feel it. I’m afraid”—and its final reversion to what can only be called a state of innocence. HAL’s outpouring of feeling contrasts with the emotionlessness that characterizes the human figures in the film, who go about their business with an almost robotic efficiency. Their thoughts and actions feel scripted, as if they’re following the steps of an algorithm. In the world of 2001, people have become so machinelike that the most human character turns out to be a machine. That’s the essence of Kubrick’s dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.
