
Min(d)ing English language data on the Web: What can Google tell us?

1 Introduction

As commonly recognized, the era of modern corpus linguistics is approaching the half-century mark. During the past 50 years, we have witnessed a series of important landmark events in this field, ranging from the early attempts at mechanolinguistics by Juilland and Busa in the 1950s, to the pioneering work on computerized corpora in the 1960s and 1970s, involving first written material in the form of the Brown Corpus and the LOB Corpus, and later spoken material in connection with the London-Lund Corpus; in the 1980s and 1990s, we experienced the large-scale corpus projects of Cobuild and the British National Corpus (BNC).

Having now entered the 21st century, it is clear that there are new challenges ahead for the corpus linguist. In terms of standard corpora, for example, we know that the American National Corpus (ANC) is under development, a parallel to the BNC with 100 million words of transatlantic English (e.g. Ide et al. 2002), and there is also a great deal of work going on with sophisticated varieties of learner corpora and multilingual (parallel) corpora (e.g. Botley et al. 2000; Granger 2004). However, the biggest challenge of today is undoubtedly the growing body of text-based information available on the World Wide Web (henceforth the Web). While originally intended purely as an information source, this material in fact forms the largest store of textual data in existence, and as such it constitutes a tantalizing resource for various linguistic purposes.

Let us look at some initial figures. As regards the size of the material on the Web, a rough estimate indicates that there are currently (December 2004) about eight billion Web pages available, containing perhaps as much as 50 terabytes of text: at a generous average of 10 bytes per word (cf. Kilgarriff and Grefenstette 2003), these figures suggest a body of no less than five trillion (5 000 billion) words in one form or another. Out of this massive multilingual collection of texts and text fragments, it appears that about two thirds are written in English (e.g. Xu 2000), although the proportion of non-English texts seems to have increased in recent years (e.g. Grefenstette and Nioche 2000). This means that there is probably something in the range of 3 000 billion words of English to be found on the Web, forming a virtual English supercorpus ready for use by enterprising linguists in all manner of language research (cf. Bergh et al.).
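For readers who wish to verify these estimates, the arithmetic can be restated explicitly. The following back-of-the-envelope calculation is our own restatement, using only the figures cited above (50 terabytes of text, 10 bytes per word, a two-thirds English share) and assuming decimal terabytes (1 TB = 10^12 bytes):

\[
\frac{50 \times 10^{12}\ \text{bytes}}{10\ \text{bytes/word}} = 5 \times 10^{12}\ \text{words (five trillion)},
\qquad
\tfrac{2}{3} \times 5 \times 10^{12} \approx 3.3 \times 10^{12}\ \text{words of English},
\]

which is consistent with the figure of roughly 3 000 billion words of English quoted above.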
