Tom Chatfield, a writer based in the U.K. and the author of Netymology: A Linguistic Celebration of the Digital World, reports: "The stories behind the digital age’s most iconic terms show the human side of technology."
It’s a little-known fact that part of Wikipedia’s name comes from a bus in Hawaii.
In 1995, six years before the famous online encyclopedia launched, a computer programmer named Ward Cunningham was at Honolulu International Airport on his first visit to the islands. He was in the middle of developing a new kind of website to help software designers collaborate, one that users themselves could rapidly edit from a web browser. It was a striking innovation at the time. But what to call it?
“I wanted an unusual word to name for what was an unusual technology,” Cunningham told a curious lexicographer in 2003. “I learned the word wiki … when I was directed to the airport shuttle, called the Wiki Wiki Bus.”
Wiki means quick, and Hawaiian words are doubled for emphasis: the very quick bus. With that, Cunningham’s software had the distinctive sound he was looking for: WikiWikiWeb.
Wikipedia, whose development Cunningham wasn’t involved with, is one among countless websites based on his work. The second half of its name comes from the word encyclopedia, with -pedia deriving from the Greek paideia (“education” or “learning”): roughly, “quick knowledge.” Yet now the site is so successful that its fame has eclipsed its origins—along with the fact that a chance visit to an island gifted the digital age one of its most iconic terms.
I love delving into the origins of new words—especially around technology. In a digital age, technology can feel like a natural order of things, arising for its own reasons. Yet every technology is embedded in a particular history and moment. For me, etymology emphasizes the contingency of things I might otherwise take for granted. Without a sense of these all-too-human stories, I’m unable to see our creations for what they really are: marvelous, imperfect extensions of human will, enmeshed within all manner of biases and unintended consequences.
I give talks about technology to teenagers, and often use Wikipedia as a prompt for discussion. Let’s find and improve an article to make Wikipedia better, I suggest, and in the process, think about what “better” means. My audience’s reaction is almost always the same. What do I mean by improving an article? Aren’t they all written by experts? No, I say. That’s the whole point of a wiki: The users themselves write it, which means no page is ever the last word. There are no final answers, and no ownership beyond the community itself...
Similarly, to speak about technology is to assume: It demands shared notions of sense and usage. Yet there are some terms that deserve more skepticism than most. Sixty years ago, a group of scientists drew up a conference agenda aimed at predicting and shaping the future—at establishing a field they believed would transform the world. Their mission was to use the young science of digital computation to recreate and exceed the workings of the human mind. Their chosen title? The Dartmouth Summer Research Project on Artificial Intelligence.
The assumptions of the Dartmouth Conference, set out in a 1955 proposal, were explicitly immodest: “[T]he study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Yet today, the very word “intelligence” continues to lie somewhere between a millstone and a straw man for this field. From self-driving vehicles to facial recognition, from mastery of chess and Go to translation based on processing billions of samples, smarter and smarter automation is a source of anxious fascination. Yet the very words that populate each headline take us further away from seeing machines as they are—not so much a mirror of human intellect as something utterly unlike us, and all the more potent for this.
As Alan Turing himself put it in his 1950 paper “Computing Machinery and Intelligence,” “we can only see a short distance ahead, but we can see plenty there that needs to be done.” If we are to face the future honestly, we need both a clear sense of where we are coming from—and an accurate description of what is unfolding under our noses. AI, such as it is, sprawls across a host of emerging disciplines for which more precise labels exist: machine learning, symbolic systems, big data, supervised learning, neural networks. Yet a 60-year-old analogy fossilized in words obfuscates the debate around most of these developments—while feeding unhelpful fantasies in place of practical knowledge.
Read more...
Source: The Atlantic