The search engines don’t work on the pages that they return. This is key: *they don’t access the actual pages when you do a search.* All of that work is done beforehand, building the massive databases, indexes, and hash tables. When you ask for a search of “lake ferdale”, it may simply look for pages where the tokens “lake” and “ferdale” appear next to each other. Since the ‘,’ and ‘-’ were never tokenized, it simply doesn’t see them.
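As a rough sketch of the idea (the pages, names, and structure here are invented for illustration, not how any particular engine actually does it), a positional inverted index built ahead of time can answer a phrase query without ever touching the original pages, and punctuation that was never tokenized simply isn’t there to match against:

```python
import re
from collections import defaultdict

# Hypothetical mini-corpus; page names and text are made up for illustration.
pages = {
    "page1": "Lake Ferdale, a small lake - popular with anglers",
    "page2": "Ferdale has a lake nearby",
}

# Tokenize: keep only word characters, lowercased. Punctuation such as
# ',' and '-' never becomes a token, so the index has no record of it.
def tokenize(text):
    return re.findall(r"\w+", text.lower())

# Build the positional inverted index beforehand: token -> page -> positions.
index = defaultdict(lambda: defaultdict(list))
for page, text in pages.items():
    for pos, token in enumerate(tokenize(text)):
        index[token][page].append(pos)

# Phrase query: find pages where the query tokens appear next to each other,
# using only the precomputed positions -- no page text is read at query time.
def phrase_search(query):
    tokens = tokenize(query)
    first, rest = tokens[0], tokens[1:]
    hits = []
    for page, positions in index[first].items():
        for start in positions:
            if all(start + i + 1 in index[tok].get(page, [])
                   for i, tok in enumerate(rest)):
                hits.append(page)
                break
    return hits

print(phrase_search("lake ferdale"))  # ['page1'] -- adjacency found despite the ','
```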
Text searches are very slow; that’s why all of this other machinery was created. To search all those pages directly, the engine would have to store the pages intact, fetch each one, and then scan its text. And if it actually went out to the pages online, it would have to deal with network delays; your search might never return.
Maybe the search engines will start storing intact pages someday, but those copies would go out of date very quickly, so it’s not clear that it would pay.