So let’s say Google accomplishes this, and most of the great, and not so great works of literature are instantly available to you via Google. What really changes in the worlds of academia, research, etc. other than the availability of a giant digital library card?
And it’s not exactly ‘adding the contents of major libraries’ to the DB, either. They’re archiving older works that are out of copyright or that they can get permission from the rights holder. So it’s not a sweep.
Short Term: Little
Longer Term: The further establishment of respect for ‘virtual’ publishing as a viable medium
What worries me is a repeat of the microfilm debacle of the 1960s and '70s, when libraries microfilmed their huge collections of bound newspapers and magazines, then discarded the originals. Not only were the microfilm copies dark or blurry, but now they’re deteriorating into blank sheets of plastic, and we no longer have the source material.
Too many libraries are so fond of "de-acquisitioning" (i.e., "throwing into the Dumpster") things that take up precious shelf space that might better be used by the latest best-seller, that I fear they will scan things onto Google, then toss them.
Then, in 50 or 100 or 500 years, when current technology is no longer accessible . . .
With a bit of luck they'll have learnt from that, Eve. All right, a lot of luck.
One thing that will be easier even in the short term is "history of thought"-type scholarship. I have a mate who does this sort of thing and it's changed a great deal in the last five years. Suppose you want to know how much and how fast a particular article influenced economists. Tracing through who cited the work, who cited those works, who studied under people who cited the work, and so on, is a big task on paper. Doing a bunch of searches on scholar.google or whatever it becomes is easy.
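That kind of citation-chasing is, at bottom, a graph traversal. A minimal sketch in Python, with a small hypothetical citation map standing in for the results you would actually pull out of a service like Google Scholar:

```python
from collections import deque

# Hypothetical citation data: article -> articles that cite it.
# In practice these edges would come from repeated scholar searches.
cited_by = {
    "Coase 1960": ["Demsetz 1967", "Williamson 1975"],
    "Demsetz 1967": ["Barzel 1982"],
    "Williamson 1975": ["Hart 1995"],
    "Barzel 1982": [],
    "Hart 1995": [],
}

def influence(article):
    """Breadth-first search: every paper reachable through citation chains."""
    seen = set()
    queue = deque([article])
    while queue:
        paper = queue.popleft()
        for citer in cited_by.get(paper, []):
            if citer not in seen:
                seen.add(citer)
                queue.append(citer)
    return seen

print(sorted(influence("Coase 1960")))
```

Done by hand in a periodicals room, each hop in that loop is a trip to the stacks; done against a digitized index, the whole transitive closure is a few queries.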
Putting more and more original sources online has two rather contradictory effects.
From the point of view of good professional scholars, the advantage is not so much that they do more research as that it allows them to consult more items in the time they have available. That is why, as hawthorne points out, it becomes much easier to follow up references. This should encourage them to check whether the works cited by other scholars (past and present) say what they claim they say. That is a good thing.
The disadvantage is that it also allows amateur scholars - or rather scholars who don’t know what they’re doing, whatever their status - to cite obscure and recondite works to support whatever it is they want to believe. Expect to see a continued expansion in the field of pseudohistory, with even more bad books and journal articles padded out with impressive-looking but irrelevant footnotes. Once any old book (or, indeed, any-old-book) becomes instantly available, good scholarship becomes even more a case of knowing which sources matter most. Yet that can be one of the most difficult things for the non-specialist, whether as a writer or as a reader, to grasp.
These two processes are not independent of each other. All scholars, if they have any sense, should benefit from the first, and that can serve as a check on the second. But, as every Doper knows, you should never underestimate the power of ignorance. Especially ignorance that comes with footnotes.
Maybe I’m a little optimistic, but I don’t think that’s a real risk with the libraries involved. The space pressures on academic libraries are great, but they are seriously proud of their collections and committed to their roles as the repositories and conservators of published knowledge. I can’t see them giving books the heave-ho. What they might be able to do is put them in deeper storage. These sources might be less accessible, but I doubt the libraries would actually jettison them.
Google spends $100 million or whatever to do it. Who is going to pay Google, and how? I’m presuming an organization doesn’t just spend a large sum of money without some concept of payback.
I, for one, am delighted about this, primarily because of the documents.
I am hoping that, eventually, this will extend to smaller libraries. It probably won’t happen in my lifetime, but I’ve been saying for years that it needed to be done.
I work in a museum which has an extensive library. Not only do we have an incredible collection of priceless books, but we have amazing documents, letters, diaries, and other papers which are of great interest to scholars.
Unfortunately, we’re in a little out-of-the-way community. Few would suspect that there would be such an extraordinary collection in such a small, uninteresting place. Generally, knowledge of our collection is passed by word of mouth. It’s really a shame, because there are many scholars, authors and researchers who could greatly benefit from the items we have, but they simply don’t know about them.
What I have always dreamed of is a national database of documents and books held by museums and research libraries so that those needing them could do a search and find where they’re kept. It’s sometimes kind of odd where things turn up. Who would have suspected, for example, that a book written in 12th century France (and rumored to have belonged to Eleanor of Aquitaine’s father) would have ended up in a one-horse town in the Midwest? Or letters from George Washington and other founding fathers? Or papal bulls? One usually doesn’t think of such places when searching for items like these.
I hope that in the future, researchers will have more access to these items. Even if libraries don’t want to scan all of them into the system, just having searchable records so that scholars could find them would be wonderful.
Out of curiosity, I wonder if this will destroy the economic viability of the expensive, specialized literary databases that samclem and other language researchers use when tracking down language use and history?
I’m sure it became obvious to any computer professional years ago, as it did to me, that it is possible to store ALL the world’s data source material and keep including ALL new material like daily papers, movies, radio & TV shows, works of art, and make it available to anyone in the world at anytime, anyplace.
Just think – being able, from your personal computer, to access any work of art, book, movie, music ever performed or created. Ever. Technically this can be done now.
The only things preventing that from having happened already are:

1. **Cost of preparation and conversion.** This is becoming cheaper by leaps & bounds.
2. **Cost of storage.** Ditto.
3. **Cost and speed of access.** Double ditto.
4. **Copyright concerns.** Now this one is different. Although there are many public domain works now and there will be more in the future, recent U.S. laws and court cases have made public access to old works more difficult and maybe impossible.

My hat is off to Google for taking the lead in this noble quest.
As for the problem of data becoming unreadable as technology changes: digital data, unlike microfilm, is easy to "migrate" to newer kinds of storage without loss. And with the trend of declining costs and increasing speeds, there should be more encouragement to maintain data in a readable form.
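This is the key difference from the microfilm disaster: a digital migration can be *proven* lossless before the old copy is retired. A minimal sketch of that idea, using Python's standard hashlib to fingerprint the original and the migrated copy (the byte strings here are stand-ins for real scanned files):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Fingerprint a blob of bytes; identical contents -> identical digest."""
    return hashlib.sha256(data).hexdigest()

original = b"scanned page image bytes..."   # file on the old medium
migrated = bytes(original)                  # copy written to the new medium

# Unlike a microfilm re-photograph, the copy either matches bit-for-bit
# or the mismatch is detected immediately.
if sha256_of(original) == sha256_of(migrated):
    print("migration verified")
else:
    print("migration FAILED - keep the original")
```

Each generation of microfilm copying degraded the image; each generation of checksummed digital copying is exact, which is why migrating archives every decade or so is a maintenance chore rather than a gamble.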