Why do we need USB, Firewire, VGA, MI, etc., etc.?
Why can’t we just use a form of fiber for data? (For power you’d need a separate wire or wires.)
With cat 5 you can push power and data down the same cable.
Why the need for all the different specialty cables?
VGA uses 15 wires (well, at least it uses a 15-pin connector, so 15 wires is assumed). In this case, Cat5 does not have as many wires as required.
USB carries power and data, but only uses half the number of wires in Cat5 - here, Cat5 has twice as many wires as needed, so half of it is wasted. Firewire also uses fewer wires than what’s in Cat5, so again, it’d be a waste of wire. Not to mention that Cat5 is thicker, heavier and bulkier than USB/Firewire.
A physical world analogy would be “Why not use 12-passenger vans for all driving?” It would be a waste when two people are driving to work, and all they need is a small car, and insufficient if you had 13 people that need to go somewhere.
With the cabling, the wires have been engineered to be exactly suitable for that particular use - no less and no more.
Thanks for the info.
The OP needs to be cleared up a little.
My main concern is that Cat 5 is cheap. It’s also everywhere.
Why not standardise all connections (until impractical) so that you can use cat 5 to connect all your peripherals?
Wouldn’t it be nice to be able to plug the monitor, the mouse and keyboard into a hub or switch and just have one cable go back to the computer?
Same with external drives and such. For example, you could put your remote (USB) drive in the garage, and if there was a fire you could yank it out on the way out of the house.
Well I think your question was already answered: it would be a waste of cable/materials and not efficient for the manufacturers. Why use a cable that costs them .03 cents rather than a cheaper one with less materials for .02 cents?
I don’t think that it would work at the driver level, to be honest. USB devices and network devices (for instance) are treated very differently by the Linux kernel, the only kernel I’m familiar with. It seems to me that it would be a huge pain in the ass to code the drivers and driver support layer in the kernel so that it can handle both a USB device and a network device on the same physical port.
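To give a feel for what I mean, here’s a rough sketch (not a working driver - the vendor/product IDs and all the demo_* names are made up) of the two very different kernel-facing interfaces involved: a USB peripheral gets claimed by a usb_driver matched on device IDs, while a network device is registered as a net_device that the networking stack hands packets to.

```c
/* Schematic sketch only: two completely different kernel interfaces. */
#include <linux/module.h>
#include <linux/usb.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>

/* --- USB side: probe/disconnect callbacks, data moves over endpoints --- */
static const struct usb_device_id demo_usb_ids[] = {
    { USB_DEVICE(0x1234, 0x5678) },   /* hypothetical vendor/product IDs */
    { }
};
MODULE_DEVICE_TABLE(usb, demo_usb_ids);

static int demo_usb_probe(struct usb_interface *intf,
                          const struct usb_device_id *id)
{
    return 0;                         /* claim the device, set up endpoints here */
}

static void demo_usb_disconnect(struct usb_interface *intf)
{
}

static struct usb_driver demo_usb_driver = {
    .name       = "demo_usb",
    .id_table   = demo_usb_ids,
    .probe      = demo_usb_probe,
    .disconnect = demo_usb_disconnect,
};

/* --- network side: the stack calls ndo_start_xmit one packet at a time --- */
static netdev_tx_t demo_net_xmit(struct sk_buff *skb, struct net_device *dev)
{
    dev_kfree_skb(skb);               /* pretend the frame went out on the wire */
    return NETDEV_TX_OK;
}

static const struct net_device_ops demo_net_ops = {
    .ndo_start_xmit = demo_net_xmit,
};

static struct net_device *demo_netdev;

static int __init demo_init(void)
{
    int err = usb_register(&demo_usb_driver);
    if (err)
        return err;

    demo_netdev = alloc_etherdev(0);
    if (!demo_netdev) {
        usb_deregister(&demo_usb_driver);
        return -ENOMEM;
    }
    demo_netdev->netdev_ops = &demo_net_ops;

    err = register_netdev(demo_netdev);
    if (err) {
        free_netdev(demo_netdev);
        usb_deregister(&demo_usb_driver);
    }
    return err;
}

static void __exit demo_exit(void)
{
    unregister_netdev(demo_netdev);
    free_netdev(demo_netdev);
    usb_deregister(&demo_usb_driver);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```

Even in this toy form you need two separate registration paths, two sets of callbacks, and two teardown orders; a combined “anything on this port” driver layer would have to cope with both at once.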
Doing honest-to-god networking of peripherals would be a gigantic pain in the ass and would be less efficient, as you’d have the overhead of sending a network packet every time you wanted to talk to a peripheral. IIRC, monitors usually refresh at about 60Hz, so that’s 60 packets/second on your network for each monitor. Plus, you’d have to have networking firmware in every device to deal with network contention and collisions. Input devices like keyboards and mice can just “fire and forget” under the current system, but that wouldn’t be advisable on a real network.
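Just to put rough numbers on the monitor case (the resolution and color depth below are my own assumptions, not anything from a spec):

```c
/* Back-of-the-envelope: bandwidth needed to push uncompressed video frames
 * over a network.  1280x1024, 24-bit color and 60 Hz are assumed values
 * picked for illustration. */
#include <stdio.h>

int main(void)
{
    const double width = 1280, height = 1024;
    const double bits_per_pixel = 24, refresh_hz = 60;

    double bits_per_second = width * height * bits_per_pixel * refresh_hz;

    printf("Raw video stream: %.2f Gbit/s\n", bits_per_second / 1e9);
    printf("Saturated 100BaseT links needed: %.0f\n", bits_per_second / 100e6);
    return 0;
}
```

That works out to roughly 1.9 Gbit/s for a single uncompressed monitor, i.e. about nineteen fully saturated 100BaseT links, before any protocol overhead.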
gotpasswords, regarding your other argument that the cables are designed for an exact purpose…
Which devices use all four pairs on a cat5 cable anyway? Phones certainly don’t, yet most commercial points of use call for a cat5 cable. I don’t believe that your typical network uses all four pairs either (correct me if I’m wrong). And even if you wasted a pair or two, who cares? The stuff is so cheap, and this way you’d be ready for the next device that you may want to place in that location (by having extra pairs).
VGA: does that really need to have 15 pins? Couldn’t that technology be simplified? Or the data transfer from the video card to the monitor be altered in such a way as to be digitised and then re-assembled at the monitor?
I guess my point is: why can’t we standardise using the most common and cheapest method available?
Many devices have the capability of using cat5 (such as remote security cameras instead of coax and a twisted pair, and other smart devices you may find in a smarthome type application). They do make usb extenders that use cat5 cabling, so why not just go with straight cat5?
I ran into a situation where we have all this video and audio equipment connected to the amps and video switches with about two fistfuls of cables. My thinking is that there should be some way to transfer data to the amps from the mics, the PC, the VCR, the DVD player, to the AV rack and then to the projector without having to use all these specialised cables and 70 different kinds of connectors. I think you could dumb this stuff down substantially if you could just transfer the data streams to the proper locations with a universal type of cable.
Is that an unreasonable request?
I know, lots of technical questions, but I think you guys are up to the challenge.
You really wouldn’t have to have this stuff on the network, just a network. IOW, connect the devices to each other with the cabling (mouse, keyboard, monitor, camera, phone, external drive, etc.) and then connect the network card of the computer to the network (internet), thereby keeping the peripherals off the main network - the PC handles all the data streams within the ‘peripheral’ network.
Same thing with my AV example in the above post.
Sorry, I didn’t finish my thought here. So now we’d have to put memory in all of our peripherals to buffer input until the computer confirms that it’s seen it. Furthermore, input devices like keyboards and mice don’t generate much data, meaning that you’ll have a lot of overhead from the networking protocols.
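To put a number on that overhead, here’s a back-of-the-envelope comparison of a keystroke’s payload against the headers and padding it would ride in on if it were sent as a TCP/IP packet over Ethernet (the 2-byte payload is my own assumption; the header and frame sizes are the standard minimums):

```c
/* Rough overhead estimate for one keystroke sent as a TCP/IP packet over
 * Ethernet.  The 2-byte payload is an illustrative assumption. */
#include <stdio.h>

int main(void)
{
    const int payload      = 2;       /* key code + press/release flag */
    const int tcp_header   = 20;      /* minimum TCP header */
    const int ip_header    = 20;      /* minimum IPv4 header */
    const int eth_overhead = 14 + 4;  /* Ethernet header + CRC */
    const int min_frame    = 64;      /* minimum Ethernet frame size */

    int frame = payload + tcp_header + ip_header + eth_overhead;
    if (frame < min_frame)
        frame = min_frame;            /* short frames get padded up */

    printf("Payload: %d bytes, on the wire: %d bytes (%.0f%% overhead)\n",
           payload, frame, 100.0 * (frame - payload) / frame);
    return 0;
}
```

So a 2-byte keystroke goes out as a 64-byte frame, about 97% overhead, and that’s before any acknowledgements coming back the other way.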
On a moderately large network, this would be completely unworkable – can you imagine the traffic that would be generated by 100 keyboards, monitors and mice? And surely buying a switch for each computer to make a separate network would cost way more than the amount it costs to have different kinds of cables and connectors.
I think you need cat 6 (maybe 5e works) for gigabit ethernet, so I’ve lost interest already…
I bet 1000’ of cat5 costs waaayyyyy less than 1000’ of USB. And as I already pointed out, they make usb extenders that go out over cat5 cable, so the technology is there.
Plus users can install their own ends (cat5 is easy to terminate) and the manufacturers would save by standardising terminals and connectors.
Fine, we’ll go with cat7.
See post 7
Pretty much correct for anything under gigabit.
10 Mb Ethernet only uses 2 pairs. 100 Mb came in 100BaseT, which only used 2 pairs, and 100BaseT4, which used all 4 pairs, but 1000BaseT is the widely expected format going forward, and 1000 Mb does use all 4.
When I worked in consumer electronics repair, I dealt with quite a lot of problems caused by plugging in things where they weren’t supposed to be plugged in. Lots of headphone drivers blown-up because somebody plugged in the DC power adapter instead (yes, some manufacturers used 1/8 inch phono jacks for their power connections).
You can’t standardize to the extent of the OP because of the wide variety of pin-out arrangements and the types of signals and power that could be present on any given RJ-45 plug. Taking your suggestion to its logical conclusion, shouldn’t we use Cat-5 for AC power cords too?
We need a way to differentiate network and signal lines from peripheral and power lines. Designing the cable to fit the need seems to be the best answer.
No it’s not, it’s expensive to implement. External HDDs with Ethernet are much more expensive than USB HDDs. My network webcam cost about $180, while a comparable USB webcam is about $130.
We don’t. I think we use different cables because we are stupid, not because of any technical limitations.
Witness:
DVI over Cat5
USB over Cat5
VGA over Cat5
Firewire over Cat5
Conclusion:
You could put all of these devices inside your computer case (or even integrate them onto the motherboard) and do the same thing with each peripheral. Just because it’s Ethernet cable doesn’t mean that it has to all be networked together, or know how to talk over TCP/IP. Everything could use the same protocols, but connect to your computer via ordinary, cheap, and noise-resistant Cat5. I would buy a computer like that. But the existing standards have so much inertia, it would be hard to get everyone to change over.
Just these two add up to $800.
And a VCR cost that much back in '78.
Imagine how much cheaper everything would be if it all had the same type of cabling and end terminations. It may take a few years, or the way things are going these days with technology - a few months, but wouldn’t the payoff for everyone involved be huge?
I think Emerald Hawk caught the spirit of the OP, thanks.
Maybe the OP should have been worded:
How could we convert all our peripheral connections to cat5?
That way we could ignore the costs for now and discuss how such a change could actually occur.
scr4, you have to figure the cost of installation into the cost of a camera, not just the initial cost of the camera. Running 350 feet of cat5 is way cheaper than running 350 feet of coax and a twisted pair.
Pati O’Furniture, I realise you can’t push much power over a cat5 cable (maybe 48 volts, but that would be enough for lots of devices), but the data stream is what I’m concerned with. You can always plug in the peripheral at the point of use for the power source (like wireless speakers).
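For what it’s worth, that’s roughly what standard Power-over-Ethernet (IEEE 802.3af) already does - about 48 V over the Cat5 pairs. A rough sketch of its power budget, using the nominal figures from that standard as I understand them (so treat them as approximate):

```c
/* Approximate Power-over-Ethernet (IEEE 802.3af) power budget.
 * Figures are the nominal standard limits as I understand them. */
#include <stdio.h>

int main(void)
{
    const double min_volts = 44.0;   /* minimum supply voltage at the source */
    const double max_amps  = 0.35;   /* per-port current limit */
    const double dev_watts = 12.95;  /* guaranteed at the powered device */

    printf("Power at the source: about %.1f W\n", min_volts * max_amps);
    printf("Power at the device: about %.1f W\n", dev_watts);
    return 0;
}
```

Call it 13-15 W per port: plenty for a phone, IP camera or access point, nowhere near enough for a monitor or powered speakers.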
This deserves repeating. A cable form factor that denotes use and proper connection is worth quite a bit in usability and fewer broken parts. It’s awfully nice to be able to plug cables in by feel - because there’s only one place for them to go. All USB cables plug into any USB slot. Monitor cables plug into the VGA or DVI socket, and there’s nowhere else they’ll fit.
Making all the cables the same would inevitably lead to people plugging their speakers into their network card and their power cable into their video card.
Form factor is also an issue. Try putting an RJ-45 jack on a sleek little media player and watch how cool it stops being.
It would always be more expensive than USB because USB is an inherently simpler protocol, requiring less processing power and simpler software.
Look at SCSI. It’s a standard, used in all sorts of devices. Its price did come down dramatically over the past couple of decades. But for hard drives, IDE is and always has been cheaper than SCSI, which made it far more popular.
Also, are we talking about using Cat-5 cables, or TCP/IP networks? The ???-over-Cat5 devices Emerald Hawk linked to don’t use TCP/IP, they use proprietary protocols over dedicated Cat-5 cables. So you can’t use a hub to multiplex those. And you run the risk of breaking something (or at least things not working properly) if you plug a cable into the wrong Cat-5 socket. If we’re talking TCP/IP, all the technical hurdles mentioned by Rysto apply, even if you’re using a separate network which only connects several peripherals to one computer.