I am scientifically dumb. I also have a speculative science question.

You’re off by a decade – they had them in the 1960s. They had regular performances at the 1964-1965 World’s Fair in New York. Plus, it showed up in the James Bond film Thunderball. Sean Connery’s scenes are clearly shot in front of a bluescreen, but the long shots are of a stuntman actually flying a jet-pack (not special effects, or a guy suspended on a wire by an out-of-shot crane).
The problem, as Jake noted, was its prodigious fuel consumption. One load got you an embarrassingly short run – about enough time to go up and come down. The Rocketeer it wasn’t, and ain’t.

Heh. So you get the video mobile phones where you can just yank out a photograph-sized (4x3, right?) video screen to talk to the person on, instead of the half-dollar-sized thing I was thinking of. That could be cool, but knowing me as well as I do, I’d probably end up tangling it or ripping it by accident.

Everything said about this is true and has been beaten to death over the years here, but here’s one last nail in the coffin.

There is no need for anything like this. Its utility is dubious at best. Noise, cost, and safety are stumbling blocks, and without any need beyond “because we can,” no one is going to put in the research to do it. Honestly, I never thought they were all that cool or useful-looking anyway.

But today it would be kind of expensive. Ten years from now, costs will drop to the point where setting up surveillance cameras can be done with the money you find in the couch cushions. It’s the same pattern as other “new” technology that only became widespread in the last few years: it was available long before, just very expensive and unreliable. Another big change will be the size of the camera and the size of the data storage medium. A video camera that can store 24 hours of data, weighs only a few ounces, and costs $5.00 is a different thing from today’s video camera that can store 1 hour of data, weighs a pound, and costs $200.00. Dirt-cheap tiny sensors that you can buy in bulk lots will be a big change, even if they do exactly the same thing today’s sensors do.
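To put rough numbers on that, here’s a quick back-of-envelope comparison of cost per hour of stored video, using the figures above (the future camera’s weight is just my own guess):

```python
# Cost per hour of stored video, using the figures above.
# The future camera's weight is a guess ("a few ounces" ~ 0.2 lb).
today  = {"price_usd": 200.00, "hours_stored": 1,  "weight_lb": 1.0}
future = {"price_usd": 5.00,   "hours_stored": 24, "weight_lb": 0.2}

for label, cam in (("today", today), ("in ten years", future)):
    per_hour = cam["price_usd"] / cam["hours_stored"]
    print(f"{label}: ${per_hour:.2f} per stored hour, {cam['weight_lb']} lb")
# today: $200.00 per stored hour, 1.0 lb
# in ten years: $0.21 per stored hour, 0.2 lb
```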

Right now ubiquitous surveillance would require a pretty substantial cost outlay…not gigantic, but enough that you’d have to budget for it. What about when individual citizens or impoverished not-for-profit organizations can put cameras everywhere? I’m thinking of things like environmentalist groups putting sensors all over to catch polluters, neighborhood watch groups putting cameras everywhere, parents putting cameras on their kids, stores putting cameras everywhere, scientists putting sensors every few feet to study climate change.

Even if we never decide as a society that we want total surveillance, sensors will become so cheap that a small number of people who want surveillance for their own purposes will be able to effectively blanket the earth with sensors. It’s already legal to take pictures of public areas, it’s legal to carry a video camera around with you wherever you go, it’s legal to monitor air quality, etc. etc. I don’t think we’ll ever decide to videotape everything; it’s just that one day we’ll wake up and realize that, hey, you ever notice how it’s almost impossible to avoid surveillance of one kind or another nowadays?

A quick note for Lemur866: all sociological comments aside, there are a couple of details you’re overlooking.

First, the real limiter right now isn’t directly cost and capability, but power. Batteries are too heavy, bulky, and expensive to be used in the ubiquitous manner you’ve described. A camera with the resolution to usably film live action on an ongoing basis, as true surveillance would require, would take a lot of power over long periods of time. This problem is much less likely than miniaturization and/or storage to be solved in the near future.
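To make the power problem concrete, here’s a rough budget; every number is an assumption I’m pulling out of thin air for illustration:

```python
# Rough power budget for an always-on battery camera.
# All numbers are assumptions for illustration, not measurements.
camera_draw_w = 2.0        # assumed average draw: sensor + compression + radio
battery_wh_per_kg = 150.0  # roughly lithium-ion energy density
target_days = 30           # how long it should run unattended

energy_needed_wh = camera_draw_w * 24 * target_days
battery_mass_kg = energy_needed_wh / battery_wh_per_kg
print(f"{energy_needed_wh:.0f} Wh needed -> about {battery_mass_kg:.1f} kg of battery")
# 1440 Wh needed -> about 9.6 kg of battery
```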

Secondly, your fictional world doesn’t take into account the costs involved in reviewing all this information. Time, most notably, is far too valuable to dedicate to reviewing several dozen video feeds for things like this. In isolated cases, like a crime or something, this will be a factor. But on the whole, people don’t have the time or inclination to spend even a middling amount of money and significant amounts of time to make your proposal a reality. I don’t see that happening any time soon either, as time becomes more valuable.
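And a rough sense of the review burden (again, invented numbers):

```python
# Person-hours needed just to watch the footage.
# Assumed numbers for illustration only.
feeds = 36                    # "several dozen video feeds"
hours_recorded_per_day = 24
review_speed = 8              # assume skimming at 8x real time

review_hours = feeds * hours_recorded_per_day / review_speed
print(f"{review_hours:.0f} person-hours of review per day of recording")
# 108 person-hours of review per day of recording
```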

The big brother story makes a very interesting read, no doubt, and one day the technology will be there to do it (though that power problem will put it a lot farther off than I think you realize). The fact remains, however, that the cost-benefit simply doesn’t support it.

Thanks everyone. This is fascinating. Keep it coming.

And I disagree with this. Copper wire and other metal cabling are going the way of the dinosaur, and fiber-optic is taking their place; that’s been happening at least since the 1990s. Quantum cryptography will give secure data connections to everyone who wants one, and given that banking is already taking place online, I think a lot of people will.
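For anyone curious what quantum key distribution actually buys you, the core of the BB84 protocol can be sketched with an ordinary random-number simulation. This is only a cartoon of the idea (no real quantum hardware, no eavesdropper):

```python
# Toy classical simulation of the BB84 key-distribution idea.
# No real quantum hardware and no eavesdropper; purely illustrative.
import random

n = 32
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]  # '+' rectilinear, 'x' diagonal
bob_bases   = [random.choice("+x") for _ in range(n)]

# If Bob measures in Alice's basis he gets her bit; otherwise the result is random.
bob_bits = [a if ab == bb else random.randint(0, 1)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# They publicly compare bases (never the bits) and keep the matching positions.
shared_key = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
print("shared key:", "".join(str(bit) for bit in shared_key))
```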

Even if it never penetrates the average person’s home, corporations and governments will still be unable to eavesdrop on secure communications. Tell me that won’t change how the CIA and NSA operate.

We’re only scratching the surface now, but one thing I expect to take some significant strides in the next ten years is so-called “genomic medicine”, or “pharmacogenomics” for the more wonky. The goal: robust and rational approaches to customized therapies. The present problem, to give one example focused on small-molecule pharmaceuticals: different people respond differently to the same drug. For some people, one drug in a particular class (having the same mechanism of action) doesn’t work, while another one does. Why? That’s largely mysterious, but becoming less so. Even if the precise reason why people respond so differently to the same thing isn’t known, there may still be characteristic differences between responders and non-responders that will serve as effective markers.

So the idea will be to screen patients before they start a course of therapy, to increase the level of confidence that they will be treated efficaciously. For a lot of diseases, and the drugs used to treat them, it’s a crapshoot. Doctors have few means at their disposal, beyond a certain level of artistic intuition, to know what the right pill is to prescribe to a particular patient to treat what ails them, so the protocol is essentially an algorithmized version of “keep throwing things at it and see what sticks”. One excellent example of this problem in action is psychopharmacology (of which my mother is a practitioner). A patient walks in and is diagnosed with depression. Beyond that, it’s widely accepted now that a speedy remission is largely a matter of luck. My mother regularly sees patients who fail to respond to three SSRIs, and go into complete remission with the fourth. Some don’t respond to any SSRI, but will to an SNRI. Some don’t respond to anything, and need an MAOI, or a tricyclic. Some respond to all the SSRIs, but have far fewer side-effects on one versus another. It’s a mess. She really has no concrete idea of what she should do when confronted with each new patient. Sometimes one can spare the six months or a year it might take to find an effective treatment. Sometimes, time is literally a life-or-death limiting factor.

The promise of pharmacogenomics is to remove the guesswork, and while I don’t expect psychopharmacology to come around as quickly as other specialties, we’ll start to see real advances in treating things like infectious diseases (knowing what drugs the bugs are resistant to, for instance) or cancer (what are the genetic changes in this particular variety of breast cancer that will make it responsive or unresponsive to a particular chemotherapeutic agent?). As more and more treatments become available, a particular therapy or combination of therapies will be more and more closely customized to minimize unwanted side-effects, and maximize efficacy, for each individual patient.
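To make the “screen before you prescribe” idea concrete, here’s a cartoon of what marker-based screening might look like in code. The marker names, drugs, and rules are entirely invented for illustration; real pharmacogenomics is vastly more complicated:

```python
# Cartoon of marker-based screening: map a patient's (hypothetical) markers to
# predicted responses per drug. Marker names, drugs, and rules are invented.
marker_rules = {
    "metabolizer_variant_slow": {"drug_A": "likely poor response",
                                 "drug_B": "likely responder"},
    "transporter_variant_1":    {"drug_A": "likely responder",
                                 "drug_B": "higher side-effect risk"},
}

def screen(patient_markers):
    """Collect predicted outcomes for every drug touched by the patient's markers."""
    predictions = {}
    for marker in patient_markers:
        for drug, outcome in marker_rules.get(marker, {}).items():
            predictions.setdefault(drug, []).append(outcome)
    return predictions

print(screen(["metabolizer_variant_slow", "transporter_variant_1"]))
# {'drug_A': ['likely poor response', 'likely responder'],
#  'drug_B': ['likely responder', 'higher side-effect risk']}
```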

Molecular Imaging has some interesting potential in healthcare. Combined with high-slice count CT and/or high T MR, cell-level (and beyond) biology can be non-invasively studied. This also opens up treatment options, not just detection methods.

Electronic forms - eforms - are likely uses for o-LEDs. Right now, if you want casual data entry by a random person, you have to hand them either a clipboard, paper, and pencil and then transcribe their entries, or a laptop/touchscreen computer. I can see sending an eform to their o-LED device wirelessly and receiving back a completed form.
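As a sketch of what that round trip might look like, assuming the eform travels as a simple structured document (all the field names here are made up):

```python
# A made-up eform as a simple structured document (field names invented).
import json

eform = {
    "form_id": "visitor-signin-001",
    "fields": [
        {"name": "full_name", "type": "text",   "value": None},
        {"name": "company",   "type": "text",   "value": None},
        {"name": "visiting",  "type": "choice",
         "options": ["Sales", "Engineering"], "value": None},
    ],
}

# On the o-LED device, the person fills it in...
eform["fields"][0]["value"] = "Jane Doe"
eform["fields"][1]["value"] = "Acme"
eform["fields"][2]["value"] = "Engineering"

# ...and the completed form comes back as data: nothing to transcribe.
print(json.dumps(eform, indent=2))
```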

I’m still waiting for my picture-phone, BTW. I can have a phone with a wireless headset, and it’s not a picture-phone? How lame is that?

Geek - Barak Obama is the junior senator from Illinois who made a widely covered speech at the Democratic convention. Not that I’d expect people to know all 100 senators by name, but this guy’s been extremely prominent. Time to read something other than computer magazines.

to the OP - I’m guessing that in 10 years, there will be the equivalent of wrist-radios that have TV and computer/internet capabilities, as well as phones. I guess we’re about there now, but I think in 10 years it will be widespread: folks everywhere will have “phones” that show movies, TV channels, and video conferencing, and get you to the internet. That will totally unplug us from even laptops. Also, while you’re at it, Google “space elevators.” Way cool.

xo, C.

[picking nits]
It’s Barack Obama, not Barak.
[/picking nits]

I was going to say this: we’re definitely going to see flexible displays, and, within 15 years maybe, reasonably disposable displays (on cereal boxes, etc.).

I’d be very surprised if we saw fusion anytime soon. Too huge, expensive, slow-moving a field.

I think we will see advances in photovoltaics soon. They’ll make solar power cheaper, but still not dirt cheap, and they’ll only put a dent in our energy problems. We may also see more use of flexible/sprayable photovoltaics, so you could see PV technology stuck onto many, many surfaces (they currently have PV roofing that replaces standard asphalt roofing), though again this will only put a ding in our energy issues.
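Some rough payback arithmetic shows why it’s only a dent; every number here is an assumption, not a quote:

```python
# Rough solar payback arithmetic; every number is an assumption.
install_cost_per_watt = 8.00   # assumed installed cost, $/W
system_watts = 3000            # a modest rooftop system
capacity_factor = 0.18         # fraction of nameplate actually produced, on average
electricity_price = 0.10       # $/kWh

annual_kwh = system_watts * capacity_factor * 8760 / 1000
annual_savings = annual_kwh * electricity_price
payback_years = install_cost_per_watt * system_watts / annual_savings
print(f"{annual_kwh:.0f} kWh/yr saved, payback ~{payback_years:.0f} years")
# 4730 kWh/yr saved, payback ~51 years
```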

That’ll take longer than ten years, I think… First, we need to develop a practical method for producing nanofiber in bulk, and even once we have that, the first things built with it won’t be nearly as big as a space elevator. We might see nanofiber fishing line and golf clubs, for the ultra-top-end, money-is-no-object market, within ten years. Maybe another decade after that, it’ll become cheap enough for more mainstream items, and we’ll start seeing some large-scale things like suspension bridges. And perhaps another decade after that for a space elevator. I hope to see it in my lifetime, but no sooner than 30 years at best, and quite possibly not for 50 or 60.

'K, this sentence alone is a good example of how cybernetic brain augmentation might be a really handy thing to have for a guy like me, but likely I’ll be too far gone by the time it’s mass-marketed to benefit.

That said, it appears “cyborgs” are going to be more and more common as time passes, and we’ll all probably live to see them doing some pretty amazing things. I’ve often wondered if some of the advances being made by researchers and engineers in the fields of prosthetics and fully artificial organs will make advances in stem cell research redundant, or even an inferior alternative. We’re seeing some pretty rapid developments in the “meatware”-to-hardware interface, such that I think some prostheses are going to be better than serviceable. Take synthetic retinas, for instance, for things like macular degeneration or retinitis pigmentosa. Basically you stick a light-sensitive chip in the back of someone’s eye, interface it with the optic nerves, and a rudimentary kind of sight can be restored. Like every other device of the sort, chances are good the technology will improve exponentially. Why couldn’t it improve to the point that the limiting portion of the system isn’t the chip/implant, but the person it interfaces with? Lenses and retinas could possibly be constructed to provide focus, contrast, magnification, and light-sensitivity that exceed the best a human eye can provide, as well as receptivity to wavelengths just to either side of the visible band of the EM spectrum. Night-vision eyes with 5x magnification! I don’t think it’s impossible, nor even improbable. It’s a “when”, not “if” question, AFAICT.
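As a cartoon of the front end of such a system: downsample a camera frame onto a coarse electrode grid and turn brightness into stimulation levels. The grid size, frame, and scaling are invented; real devices are far more involved:

```python
# Cartoon retinal-implant front end: downsample a camera frame onto a coarse
# electrode grid and turn average brightness into a stimulation level.
# Grid size, frame, and scaling are invented; real devices are far more involved.
def image_to_electrodes(image, grid=4):
    h, w = len(image), len(image[0])
    step_y, step_x = h // grid, w // grid
    levels = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            block = [image[y][x]
                     for y in range(gy * step_y, (gy + 1) * step_y)
                     for x in range(gx * step_x, (gx + 1) * step_x)]
            row.append(sum(block) // len(block))  # average brightness, 0-255
        levels.append(row)
    return levels

fake_frame = [[(x * y) % 256 for x in range(16)] for y in range(16)]
for electrode_row in image_to_electrodes(fake_frame):
    print(electrode_row)
```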

In ten years? Maybe we’ll see some paraplegics walking haltingly on powered prosthetic legs (we need better batteries!..fuel-cells in self-powered prosthetics?) that respond to mental commands. Some forms of deafness have already been well-treated with cochlear implants, and that should improve. Retinal implants will also just be hitting the market, perhaps. Ditto for more dextrous synthetic hands that, like the “bionic” legs, respond to thought commands transmitted somehow by nerves distal to the stump. Physical therapists will also be prosthetic trainers, helping amputees, etc. to learn to work with their prostheses to walk, or touch, or see again.

They’ve been around for a couple of years in Japan. Frankly, I doubt they’d catch on very well in the US because everyone drives everywhere. That’s one of the reasons text messaging isn’t as widespread in the US as in Japan. The tech for text is even simpler than for voice communication, but no one used it much when it came out several years ago.

Wearable computers are a possibility. Ever notice how the biggest parts of the computer are the human interface elements? Take away the need for a screen, keyboard, and mouse or trackpad, and how big is the computer? The main problem is the display. As far as I know, no one has anything better than those big ol’ bulky display goggles that came out a few years ago and disappeared almost as quickly as they appeared. With good software for gesture recognition, fiber-optic gloves could stand in for a keyboard and mouse.
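A toy sketch of how glove gestures might stand in for mouse clicks, with made-up gesture encodings:

```python
# Toy gesture-to-input mapping for a glove controller.
# Finger encodings (thumb..pinky, 1 = flexed) and gesture names are made up.
GESTURE_MAP = {
    (1, 0, 0, 0, 0): "left_click",
    (1, 1, 0, 0, 0): "right_click",
    (0, 0, 0, 0, 1): "scroll_down",
    (1, 1, 1, 1, 1): "open_virtual_keyboard",
}

def interpret(finger_flex):
    """Map a tuple/list of per-finger flex flags to an input action."""
    return GESTURE_MAP.get(tuple(finger_flex), "no_action")

print(interpret([1, 0, 0, 0, 0]))  # left_click
print(interpret([0, 1, 0, 1, 0]))  # no_action
```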

Something weird I ran across several years ago when I worked as a clerk for a head-injury study was that scientists had tested a mind-controlled computer interface on a disabled person. With a little practice, he was able to move a pointer around and select objects just by thinking about it. Wish I had a copy of that journal for reference.

Fuel cell cars will be around in less than 10 years. They won’t make much of a difference for energy problems, but they will make huge differences in air quality and noise pollution. The entire auto industry and associated industries will have to change, and one of the problems they’ll run into is that electric motors, having fewer moving parts than internal combustion engines, will not break down very much. They’ll have to find other ways of getting people to buy new cars than reliability issues. This will also impact industries like steel fabrication, since the new cars will probably use much less traditional material than present cars do.

I’ll throw in on the wired society idea. I thought that we’d be further along in our integration of entertainment and communication technologies than we are even now. This integration will eventually extend to things like surveillance technology. David Brin talks about the side-effects of cheap and ubiquitous surveillance tech in his book The Transparent Society. I’m not sure I agree with his conclusions or hopes, but he does bring up a lot of relevant points about how society might have to change to deal with it.

Organic LEDs will do more than make cheap displays; they may change lighting technology significantly too. Power needs would be reduced a lot, even over fluorescent lighting in some cases, and the light will be of better quality than fluorescent light.

Nanotech is, unfortunately, something that I just don’t see making a big difference for quite a while, maybe 20-50 years or more. Unless we figure out better ways of working with stuff that small, fabrication techniques will continue to keep nanomachines expensive, simple, and therefore extremely limited.

I have this idea that, eventually (~50-100 years), humans will be able to have the internet integrated in their bodies, with the display being projected (transparently, as an option) on the eye somehow. You’ll move the mouse with your brain, as you mention, but I’m not sure yet how you’ll type. Maybe a GUI keyboard or something, or maybe voice-commands will catch on.

My friend, who is a 7th-year med student, says it’s medically impossible. His opinion comes from scientific knowledge of how the body works though, while mine comes from fantastical daydreaming. So, naturally, I believe I’m right :D.

I can’t go into much detail, as there are potential IP issues involved, but to fill you in on some new stuff that’s being developed in labs right now: expect automatic content generation to explode in the next couple of years. All those CG actors you see in movies, all those digital objects you see in computer games, all the architectural walkthroughs, etc.: the bulk of them are painstakingly made by humans assembling bits piece by piece and manually applying texturing and lighting. This is an immensely costly process, and it gets more expensive the more realistic you want to make it. It’s part of the reason why blockbuster games now routinely cost more than movies.

There’s been some really amazing work done in just the last couple of years to automate this process and be able to generate huge, high resolution, realistic data in a very fast manner. Check out this face-tracking demo (movie file) for an idea of what’s possible now.
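For a tiny taste of what “automatic content generation” means, here’s a toy value-noise texture generator: the texture comes out of a seed and a formula instead of being painted by hand. This is only a classroom-style sketch, not what the labs above are doing:

```python
# Toy value-noise texture generator: procedural content from a seed,
# instead of hand-painted pixels. Purely illustrative.
import random

def value_noise(width, height, cell=8, seed=42):
    rng = random.Random(seed)
    gw, gh = width // cell + 2, height // cell + 2
    grid = [[rng.random() for _ in range(gw)] for _ in range(gh)]

    def lerp(a, b, t):
        return a + (b - a) * t

    tex = []
    for y in range(height):
        gy, fy = divmod(y, cell)
        row = []
        for x in range(width):
            gx, fx = divmod(x, cell)
            tx, ty = fx / cell, fy / cell
            top = lerp(grid[gy][gx], grid[gy][gx + 1], tx)
            bot = lerp(grid[gy + 1][gx], grid[gy + 1][gx + 1], tx)
            row.append(lerp(top, bot, ty))
        tex.append(row)
    return tex

# Print a coarse ASCII preview of the generated texture.
shades = " .:-=+*#%@"
for row in value_noise(48, 16):
    print("".join(shades[int(v * (len(shades) - 1))] for v in row))
```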

What?

Intellectual Property. I’m a bit unclear about exactly how much I’m allowed to reveal, but everything I’m talking about now is in the public domain, so it should be okay.