Where does technology go next?

Digital technology has arrived remarkably quickly. Eighty years ago there were no electronic computers, nor were most futurists expecting a computer revolution. They promised us vacations on the moon and that sort of thing, but not iPhones. The first computers were invented in the 1940s and were the size of furniture, or of buildings in some cases. Since then, size has steadily decreased and convenience has increased. We’ve gone from primitive IBM machines in the 1970s to desktops, laptops, tablets, and finally smartphones.

Thinking about it, though, it doesn’t seem that we can go much further in that direction. Presumably a useful device needs both output and input. The screen can’t get much smaller or it becomes useless for humans. One can imagine a device where the input comes solely by voice, but it seems there are some applications where a keyboard of some sort will always be more useful, and a keyboard requires some reasonable size.

So if there doesn’t seem to be much chance of greatly improving the hardware, the obvious question is the software. We now have devices that do things one couldn’t have imagined a short while ago, such as streaming video. Those devices also incorporate many other appliances that were formerly separate: phone, camera, video camera, sound recorder, alarm clock, and even flashlight. But is there that much more a hand-held device can do as far as software is concerned?

Interesting OP. A hand-held device can integrate with more and more devices so that it becomes a universal remote control. It can control what is on the TV, home appliances, and your car. Voice recognition can still be improved a lot, so you can have a conversation with the phone and get back useful results other than “there are 5 coffee shops within a few minutes’ drive of you.”

Generally, though, things seem to be stagnating until the next big innovation comes along.

It seems likely that future input will involve a personal network of sensors and future output will use some kind of augmented reality overlay.

We’re now starting to see small devices that use low-energy Bluetooth to communicate with a smartphone and monitor your movement. Think Fitbit and Nike+ for fitness. There will be many more. And they can keep getting smaller and lower powered, because the I/O will all be wireless networking, and you can make antennas quite small. I expect that at some point they’ll replace the keyboard as input, too. You could wear (or have implanted) small devices on your fingers that act as a chording keyboard without any physical keys to touch.
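The chording idea is that you press several "keys" at once and the combination, not the individual key, selects the character. A minimal sketch in Python of how a finger-sensor decoder might work; the chord table here is invented for illustration (real chording layouts, like stenotype, use different mappings):

```python
# Sketch of a chording keyboard decoder: a set of simultaneously
# pressed "fingers" (sensors) maps to one character.
# The chord table below is made up for illustration.

CHORDS = {
    frozenset(["index"]): "e",
    frozenset(["middle"]): "t",
    frozenset(["index", "middle"]): "a",
    frozenset(["index", "ring"]): "o",
    frozenset(["index", "middle", "ring"]): "n",
}

def decode_chord(pressed):
    """Return the character for a set of pressed fingers, or None
    if that combination isn't a defined chord."""
    return CHORDS.get(frozenset(pressed))

# Example: a stream of chord events from finger-worn sensors.
events = [{"index"}, {"index", "middle"}, {"middle"}]
text = "".join(decode_chord(e) or "?" for e in events)
print(text)  # -> eat
```

With five fingers you get 31 possible non-empty chords per hand, which is why chording needs no physical keyboard to cover the alphabet.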

Google Glass is the first iteration of future output. Yeah, it’s clunky and heavy and runs out of battery and makes you look like a Star Trek convention attendee, but it’s v1.0. Screens can keep getting smaller and lower powered if they keep getting closer to your eye. The end result is probably something like a contact lens with a screen embedded in it.

I think direct integration with humans is the next step. You can see the hints of this in things like Google Glass and the research being done on neural integration and augmented reality. Not only is there a lot further we can go, but IMHO we are merely at the crawling stage with this technology…it won’t fully mature for decades yet, and when it does it will be incredible.

Pedant

Computers are not “technology”. They’re a subset of “technology”.

/Pedant

To be honest, when I read the thread title, my first thought was “graphene”, then “outer space.” :slight_smile:

Xt, nailed it.
Integration and direct human contact.
Saw the glasses, and what a “wink” of the human eye will be able to do in the future.

The big improvement, which we are already beginning to see, is PDAs that are actually PDAs. Ten years or so ago, trade magazines talked of convergence, since the average techie carried around a cellphone, a music player, a PDA, a digital camera, a GPS in the car, etc. Now they have all converged into smartphones. The next things to converge are all those lists and notes you have at home. While you can put them on your phone, your phone should understand context.
Here is the scenario. Your phone tells you five minutes before your next meeting. Your phone tells you that traffic is now good on your way home, or it is bad and you can use an alternate route. If you are in a store your phone can remind you of what is on your list in this store, prices, and prices at other places.
The basic change from better battery life and more processing power is that your phone will be active, not passive, it will know exactly where you are, and it can talk to the computers of the places where you are.
We have a lot of progress to make.
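The "active, not passive" phone described above boils down to continuously matching context (location, time) against stored tasks. A toy sketch of the store-reminder case; the store names, coordinates, and geofence radii are all invented for illustration:

```python
import math

# Toy context-aware reminder: when the phone's location falls inside a
# store's geofence, surface the shopping-list items for that store.
# All stores, coordinates, and radii below are hypothetical.

STORES = {
    "Grocery":  {"lat": 40.7128, "lon": -74.0060, "radius_m": 100},
    "Hardware": {"lat": 40.7200, "lon": -74.0000, "radius_m": 80},
}

SHOPPING_LISTS = {
    "Grocery": ["milk", "eggs"],
    "Hardware": ["screws"],
}

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in meters (equirectangular
    projection -- fine at city scale)."""
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians(lat1))
    return 6371000 * math.hypot(dlat, dlon)

def reminders_for(lat, lon):
    """Return list items for every store whose geofence contains
    the given position."""
    items = []
    for name, store in STORES.items():
        if distance_m(lat, lon, store["lat"], store["lon"]) <= store["radius_m"]:
            items.extend(SHOPPING_LISTS.get(name, []))
    return items

print(reminders_for(40.7128, -74.0060))  # standing in the grocery store
```

A real implementation would get the position from the OS geofencing API rather than polling, precisely so the phone can stay "active" without draining the battery.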

As for hardware, we’re at 20 nanometers already, and we have only a few more process nodes to go before we run out of room in Moore’s Law. I know of all the successor technologies, but none of them seems even close to productization, and time is running out. We’ll do more with parallel computing on-chip and with stacking chips into smaller footprints. The big push now is 3D hardware, where you put the memory chip on top of the processor. There was a startup, which died, making a chip with 100 cores, but standard processors will get there before too long.

I tell new PhDs I interview or hire that they had better be flexible, since the world will change a lot by the time they get 20 years of work in.

I’ve (seriously) always been interested in technology in which you can “download” or learn information in much the same way you can install or move information to a computer.

I hadn’t heard of that before, but I just looked it up. Oddly enough, when I was in high school (circa 1996) we saw a video about a group that was developing a similar type of thing. Their headbands were much larger and clunkier, but the basic idea was the same. Still, I wonder whether it can catch on. Perhaps even the most geeky will find it a bit annoying to have a screen always hovering half an inch from their eyeball.

I agree with all of what’s been said and I’ll only add on marginally by offering:

  • Internet of things (read this term somewhere and like it a lot). This means the biometrics, the traffic data, the household functions, books, appliances, etc. are all hooked to the web and have features derived from that. Deliver milk when your bottle gets low. Location-sensitive climate control. Brew the coffee and prep the shower according to your REM cycle. Everything talks to everything else, and does so without any buffering.

  • Automation. Automated driving/piloting/sailing. Automated delivery. Automated farming. Automated mining. Automated policing (speed cameras?). It’s going to be an uncomfortable transition but the safety statistics will be too much to argue against.

  • “Cybertronic” health care. To expand off the internet of things, having personal, constant, non-invasive monitors keeping our body as healthy as possible is going to have shades of cyborgian dystopia to it but really it’s just to tell you how much fat is in your poop and how much sleep you’re getting.
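The "everything talks to everything else" idea in the first bullet is, in practice, devices publishing readings onto a shared message bus (MQTT is the common IoT protocol for this). A minimal in-process sketch of the publish/subscribe pattern; the topic names and the milk-bottle rule are invented examples:

```python
from collections import defaultdict

# Minimal in-process publish/subscribe bus -- the pattern underlying
# IoT messaging protocols such as MQTT. Devices and topics are
# hypothetical illustrations.

class Bus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to run on every message for a topic."""
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        """Deliver a payload to every subscriber of the topic."""
        for callback in self.subscribers[topic]:
            callback(payload)

bus = Bus()
actions = []

# A rule subscribes to the fridge's milk sensor: reorder when low.
bus.subscribe("home/fridge/milk_level",
              lambda level: actions.append("order milk") if level < 0.2 else None)

bus.publish("home/fridge/milk_level", 0.8)  # plenty left, no action
bus.publish("home/fridge/milk_level", 0.1)  # low -> triggers the reorder
print(actions)  # -> ['order milk']
```

The appeal of the pattern is exactly the bullet's point: the sensor does not know or care who is listening, so new devices and rules can join the house without rewiring anything.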

I’d like to see wireless energy transfer, along with wireless data. Removing batteries and cords from things would be fantastic. As the technology progressed, electric cars powered by wireless energy would also be nice to have.

It’s not going to be merely for the geeky…it’s going to be a technology that has every bit as much penetration as the smartphone. Maybe more. There are huge waiting lists for the thing already…and there is already a backlash against the tech, with folks freaking out about how pervasive it’s GOING to be, as well as the legal aspects (and safety aspects).

All of these technologies are converging right now, and wearable tech is going to be the next big thing. But that’s just the beginning, as we are going to integrate the tech more tightly with ourselves. Already there is tech to enhance hearing that integrates with the brain…tech that enhances sight and other senses, or expands your memory, is further down the road, but will probably be here in my own lifetime. Certainly we will all see more tightly interfaced tech (like Google Glass for a start, but you’ve also got new VR tech coming out soon) in the next decade. We are far, far from the apex of the current run-up of tech…in fact, as I said, I think we are only at the beginning of the next big wave, like the PC was in the early ’80s. Going to be quite a ride IMHO.

(The other thing I see is bio-tech, which is another of these converging technologies, but not specifically what the OP was asking about, so I’ll leave that one for another discussion.)

Like, say, kung fu? Via a large headphone jack in the back of your skull?

I saw a show where they were using certain brain wave patterns to put novice archers in the same ‘zone’ as an expert, and it enhanced their abilities to hit a target with a bow by large amounts. No headphone jack required. :stuck_out_tongue:

I’m sure such tech is way down the road, if it ever happens at all, but it seems like we are seeing glimmers that it COULD be possible. You’d still have to train muscle memory and such, but it’s possible that there will be tech that enhances the ability to at least learn a skill more rapidly than you can the old-fashioned way.

As far as input/output, yeah your phone can only have a screen so big before it becomes a tablet, and your tablet can only have a screen so big before it becomes a laptop, and your laptop can only have a screen so big before it becomes a desktop.

But the screen isn’t the device. Yes, the device you carry around with you will have a built-in screen. But that’s only for when you’re walking around. When you’re anywhere civilized, you wirelessly connect your device to the nearest, biggest, most convenient screen, like the tabletop at the Starbucks, or the wall of your living room.

Or your display is a projector–you point the thing at a surface, it maps the angle and reflectivity with a handy-dandy laser scanner and projects a clean image for you. Or it projects the image directly into your eyeball, no screen at all.

And you either have on hand, or carry around with you, or find the nearest input device that is most convenient for you. If you want to send a text, a small virtual keyboard on a tiny device is fine. Or you use voice, or gesture. Or you have a keyboard you carry around with you.

At a certain point the “device” that you carry around consists of your preferred input mechanism, and your preferred output mechanism. The processor and memory and so on are small enough and cheap enough as to be nearly irrelevant. So the input and output mechanisms can also be cheap and ubiquitous enough that they can be built into everything, or carried around in your pocket.

“Type faster–ZZZZZZZT!”

“Aaagh, I’m typing…”

“Type faster–ZZZZZZZT!”

“Aagh!”

“Type faster–ZZZZZZZT!”

“Ah!”

“Type faster–ZZZZZZZT!”

PoP (package on package)? I’m not sure that’s a big push. It was in phones I worked on going back to the 2005 era. I see more multi-chip package options than there used to be, but that’s not really the same thing. Then again, I’m historically terrible at telling what the upcoming trends are, so maybe there is a push coming there.

This is available within limits, now. Distance is an issue. Inductive charging transfers power without a cable, but is mostly used to charge phone batteries when sitting on a mat or cradle. RFID scavenges power from an RF signal, and uses it to respond.

Still anyone’s guess when we’ll see something that truly deserves to be called AI, but incremental progress is being made on expert systems. Programs “smart” enough to exhibit quasi-intelligent behavior will shrink from occupying customized mainframes to apps on user devices.

What do you mean they were “using” brain wave patterns? Are you just talking about monitoring them and telling the archers to relax more, or actively forcing the proper waves? Cuz the latter would have to do Matrix-style learning, and the former less so.

Self-improving, non-True AI software?