Next-generation search engine development is chiefly concerned with context and meaning.
Current search engines mostly perform character-string matching: if the strings you typed don’t appear on a page, that page isn’t returned. Semantic web development aims to attach meaning to those character strings, and to search by meaning instead.
For example, if you type “Koko the monkey” today, the engine looks for sites containing those exact words. But since Koko is actually an ape, not a monkey, you might miss the best and most scientific sites, because they never use the word “monkey.” A semantic search would recognize that “monkey,” “ape,” and “gorilla” are interchangeable in the context of the query, and return a more complete list of better sites.
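The idea can be sketched as simple query expansion. This is a minimal illustration, assuming a tiny hand-built synonym table; a real semantic engine would draw on a full ontology such as WordNet rather than a hard-coded dictionary.

```python
# Hypothetical synonym table standing in for a real ontology.
SYNONYMS = {
    "monkey": {"ape", "gorilla", "primate"},
    "ape": {"gorilla", "primate"},
}

def expand_query(terms):
    """Return the original search terms plus any known synonyms,
    so pages using related words can still match."""
    expanded = set(terms)
    for term in terms:
        expanded |= SYNONYMS.get(term.lower(), set())
    return expanded
```

A query like `expand_query(["Koko", "monkey"])` would then also match pages that say “gorilla” or “primate,” which is exactly how the better sites about Koko would be found.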
Web crawlers are also getting much better at searching deep-web databases that aren’t visible to surface-web engines such as Google and Yahoo. These crawlers are designed to know where the databases are and to query them directly. They aren’t very good yet, however, and typically cover only a handful of the deep-web resources out there.

The ultimate goal is a personal web agent, or bot. You could ask for a search on the effects of second-hand smoke, but only results that include numeric data, appear in peer-reviewed papers, and were published in the last five years. The agent would then go on its way, scanning thousands of deep-web databases, knowing enough to start with medical research databases, and return exactly what you asked for. Much of the semantic web’s development is supported by universities, and that last example shows how such agents would make research vastly more efficient, so the web agents are being developed in parallel to make it happen. But there is much work still to be done.
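The filtering side of such an agent can be sketched as follows. The record fields and the filter criteria here are assumptions invented for illustration; a real agent would map each deep-web database’s own metadata onto something like this.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Record:
    """Hypothetical metadata an agent might extract from a database entry."""
    title: str
    published: date
    peer_reviewed: bool
    has_numeric_data: bool

def agent_filter(records, today, max_age_years=5):
    """Keep only peer-reviewed records containing numeric data
    and published within the last max_age_years."""
    cutoff = today.replace(year=today.year - max_age_years)
    return [
        r for r in records
        if r.peer_reviewed
        and r.has_numeric_data
        and r.published >= cutoff
    ]
```

Given a pool of results pulled from medical research databases, the agent would apply a filter like this and hand back only the records matching every criterion you set.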