Gazpacho’s method for determining an upper bound on sensory input seems to be the most promising one yet.
I don’t have any answers, but I’d like to respond to a few things in other posters’ replies.
Adam Yax seems to have confused values with data rate. It doesn’t take any more information to describe an extremely bright scene than it does any other scene; the “overload” there is not of the senses, but actual damage to the mechanism. As an example of genuinely overloading your visual system, I’d say trying to track 30 or so erratically moving objects in a room lit only by a strobe light might do it.
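To illustrate the values-versus-data-rate point: the raw information capacity of an image depends only on its resolution and bit depth, not on what the pixel values happen to be. A quick sketch (the dimensions here are arbitrary picks, not anything about the eye):

```python
def raw_image_bits(width, height, bits_per_pixel):
    # Raw data size depends only on resolution and bit depth,
    # not on the pixel values themselves.
    return width * height * bits_per_pixel

# A dim scene and a blindingly bright scene at the same resolution
# and bit depth take exactly the same number of bits to describe.
dark = raw_image_bits(640, 480, 24)
bright = raw_image_bits(640, 480, 24)
assert dark == bright
```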
While Attrayant is correct, as far as I know, that there are no existing digital formats for smell or taste, that doesn’t mean we can’t estimate the amount of information we get from those senses. We could estimate the number of distinct molecules we can detect (which is different from the information necessary to describe those molecules), then multiply that by the number of bits needed to encode each one’s concentration at our resolution (as in parts per million). I don’t have numbers, but that would give us the total number of bits for one signal. Of course, the number of signals per unit time is still up in the air; it’s hard to imagine rapidly changing scents or tastes.