I see this term tossed around a lot, often seemingly interchangeably with megabyte. What’s the deal? Shouldn’t a megabit be 1,000,000 bits, or 122 kilobytes? (8 bits in a byte, 1,024 bytes in a KB. 1,000,000 divided by 8, divided by 1,024.)
Nope, a megabit and a megabyte are never interchangeable. Two different things. When talking about RAM, etc., you are almost always talking megabytes, whereas communication speeds are more likely quoted in megabits (per second).
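To see the OP’s arithmetic spelled out, here’s a quick sketch in Python (the function name is just made up for illustration):
[code]
# Convert a line speed in megabits per second to binary kilobytes per second,
# using 8 bits per byte and 1,024 bytes per kilobyte (the OP's arithmetic).
def megabits_to_kilobytes_per_sec(mbits):
    bits = mbits * 1_000_000   # "mega" is decimal when talking line speeds
    return bits / 8 / 1024     # bytes first, then binary kilobytes

print(megabits_to_kilobytes_per_sec(1))  # 122.0703125 -- the OP's 122 kilobytes
[/code]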
A true megabyte is 2^20 = 1,048,576 bytes, but hard disk manufacturers, in their quest to confuse the voters, normally use 1 MB = 10^6 = 1,000,000 bytes.
I believe there are 8 bits per byte.
[slight hijack]
And while on the subject, why is it that 1024 bytes = 1 kilobyte, 1024 kilobytes = 1 megabyte, but 1000 megabytes = 1 gigabyte?
[/slight hijack]
As sailor alluded to above, it’s basically the computer storage industry trying to confuse people. Take the lowly 3.5" floppy disk, for example: it’s 1.44 megabytes, right? It says so right on the package! But what does that “megabyte” mean? You’d think it would be either 2[sup]20[/sup] bytes or 10[sup]6[/sup] bytes, but in fact it’s neither. Their “mega” consists of two "kilo"s, one of which is 2[sup]10[/sup], and the other 10[sup]3[/sup]. So a floppy disk contains 1474560 bytes, which is actually 1.40 mega-(2[sup]20[/sup])-bytes, or 1.47 mega-(10[sup]6[/sup])-bytes. There is no standard definition of kilobyte, megabyte, gigabyte, etc. Well, there is, but nobody uses it: kilo, mega, etc. are supposed to be powers of 10, and their power-of-two equivalents are supposed to be kibi-, mebi-, gibi-, etc. But “mebibyte” sounds too stupid for anyone to use, so we’re stuck with confusion for now.
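To make those mixed-up prefixes concrete, here’s a small Python sketch using the numbers above (illustrative only):
[code]
KILO_DECIMAL = 10 ** 3   # the SI kilo
KILO_BINARY = 2 ** 10    # the "computer" kilo (officially "kibi")

floppy_bytes = 1440 * KILO_BINARY   # "1.44 MB" = 1,440 binary kilobytes
print(floppy_bytes)                 # 1474560
print(floppy_bytes / 2 ** 20)       # 1.40625  mega-(2^20)-bytes
print(floppy_bytes / 10 ** 6)       # 1.47456  mega-(10^6)-bytes
[/code]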
As for the OP, when something is measured in megabits or kilobits it’s usually because it can’t be measured in bytes. For instance, a modem might run at 56 kilobits per second, but the number of bits per byte varies depending on modem settings, so you can’t say it’s equivalent to X kilobytes per second.
Strictly speaking, there aren’t usually 8 bits in a byte on a hard disk or CD-ROM either, but the number of bits per byte can’t be changed so they report size in bytes. For example, a CD-ROM can hold 650MB of data but if you examined it with a microscope you’d find something like 16 billion bits.
Bobort, I was confused by your post. AFAIK, a byte always has eight bits. I think you’re confusing the raw number of bits on a CD with the number of bits it presents to the computer after all the error-checking and correcting has been done.
With older modems that were measured in baud (I thought I was hot stuff when I got my 2400 baud modem), the baud rate was different from the rate you could get data bits through because of the overhead. I think that modems nowadays (since 28.8 kbps anyway) specify rates in bits per second, and there are eight bits per byte, so a 56 kbps modem can deliver 7 kBytes/sec. However, there’s also a layer of data compression that is usually used, so you could get more data through than this number if the data is compressible.
When stored uncompressed in memory, on a hard disk or floppy, yes, a byte has 8 bits. However, bytes are manipulated in different ways when moved around.
For example, as CurtC alluded to regarding his 2400 baud modem, the actual number of bytes per second would have been 240, because there is a start and stop bit for each byte. As modem speeds increased and modem manufacturers started to use different tricks to cram more bytes into the same noisy phone line (such as compression, removing start and stop bits, etc) it’s not as easy to determine.
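The start/stop-bit arithmetic looks like this (a Python sketch assuming the classic 8N1 framing of 8 data bits plus one start and one stop bit):
[code]
# Async serial framing: 8 data bits + 1 start bit + 1 stop bit = 10 bits per byte
def serial_bytes_per_sec(bits_per_sec, framed_bits_per_byte=10):
    return bits_per_sec / framed_bits_per_byte

print(serial_bytes_per_sec(2400))    # 240.0  -- the 2400 baud modem above
print(serial_bytes_per_sec(56000))   # 5600.0 -- before any compression tricks
[/code]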
Another example is network speed. Standard 10BT ethernet is rated at 10 Mbit per second. That does not mean that you can send actual data at 1.25 MBytes per second, because that depends on the network protocol you use. Each protocol has a bit of overhead. A decent rule of thumb for network speeds is Max theoretical Bytes/sec = 1/10 * Max theoretical bits/sec.
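In code form, the rule of thumb is just (Python again, purely illustrative):
[code]
# Rule of thumb: usable bytes/sec is roughly bits/sec divided by 10
# (8 bits per byte plus roughly 20% protocol overhead).
def rough_usable_bytes_per_sec(link_bits_per_sec):
    return link_bits_per_sec / 10

print(rough_usable_bytes_per_sec(10_000_000))  # 1,000,000 B/s on 10BT Ethernet,
                                               # versus the 1.25 MBytes/sec raw figure
[/code]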
BTW, if you’re confused by abbreviations:
k=1,000, or 10^3
K=1,024, or 2^10
m=1,000,000, or 10^6, or 1000k
M=1,048,576, or 2^20, or 1024K
b=bit
B=byte
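Those definitions are easy to play with in Python (note that case matters, just as in the list above):
[code]
k = 1_000        # 10**3
K = 1_024        # 2**10
m = 1_000_000    # 10**6
M = 1_048_576    # 2**20
# b = bit, B = byte, so a "56 kb/s" modem moves at most:
print(56 * k / 8)      # 7000.0 bytes/sec of raw bits
print(56 * k / 8 / K)  # ~6.84 binary kilobytes/sec
[/code]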
A byte is always 8 bits. When a modem gives a speed rating, this is how many bits per second can be transmitted. A byte is still 8 bits, but there is other communication overhead involved in the transmission: stop bits, start bits, parity bits, error checking, etc. Therefore at 56 kbps you do not transmit 7 kilobytes per second; more like 80% of that, depending on the communication protocol used.
Thanks for the answers, y’all. Now I’m slightly less confused than I was when I posted.
They’re not trying to confuse anybody. Memory space is addressed in binary, and as such, sizes are in base 2. 2[sup]10[/sup] = 1024. That’s close to 1000, so the word they use is “kilo.” It’s not an exact match, but they didn’t feel like inventing new words. 1024 “kilo” bytes is one megabyte, or 1,048,576 bytes. See? Close to a million.
First of all, I don’t know how you got your numbers. 1,048,576 x 1.44 = 1,509,949 bytes and change. Secondly, it’s accepted in computer science that “kilo,” “mega,” “giga,” and “tera” are powers of 2, not powers of 10. I agree that it would have been less confusing to invent new words, but that’s the way it is.
As such, the powers of two that are used (kilo = 2[sup]10[/sup], mega = 2[sup]20[/sup], giga = 2[sup]30[/sup], etc.) are close enough to the base-ten equivalents that it’s hardly confusing. If someone says “gigabyte,” you know the person means “about a billion bytes.”
If you don’t believe me, check your floppy drive. A “1.44MB” floppy contains 1440 x 1024 = 1474560 bytes, which is neither 1.44 x 2[sup]20[/sup] nor 1.44 x 10[sup]6[/sup]. On a Unix system, I can put a floppy in the drive and count the number of characters on the raw device:
638 [bobort@abacus ~]$ wc -c /dev/fd0
1474560 /dev/fd0
Like I said, 1474560 bytes.
Similarly, if you check on the actual capacity of hard disks you’ll find that the numbers used on the box are also a weird mixup of power-of-two and power-of-ten. It might be somewhat of a stretch to say they’re deliberately trying to confuse people, but I can’t think of any good reason to use such bogus nomenclature.
Yes, the real sizes of hard disks are in binary, and binary is the standard in the computer world. The problem is that manufacturers will call a 1000 megabyte disk a 1 gigabyte disk, when it’s actually smaller than one (computer) gigabyte.
I think that rather than saying that they’re trying to confuse us, a simpler explanation is that they’re confused, too. Of course, the technical people know better, but the marketing folks probably don’t.
A couple months ago, I bought a Toshiba laptop that was advertised as having a “6 billion byte hard drive.” And that was the number they kept quoting on the box and all through the owner’s manual. Not once did they say what it really was: a 5.6 GB drive.
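The conversion is easy to check (a quick Python sketch, taking 2[sup]30[/sup] as the binary gigabyte):
[code]
advertised_bytes = 6_000_000_000   # "6 billion byte hard drive"
print(advertised_bytes / 10 ** 9)  # 6.0     -- marketing gigabytes
print(advertised_bytes / 2 ** 30)  # 5.58... -- binary gigabytes
[/code]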
Wasn’t MegaByte the bad guy on ReBoot? I don’t want to mess with him; he was one bad dude.
friedo wrote:
Now that’s obfuscated. And who’s he calling lazy?
Bobort and Chronos, you are correct; I sit corrected.
I think that one falls under the “hubris” category.
A byte does not always have 8 bits. The number of bits in a byte is an arbitrary choice made by computer and operating system designers. Eight is widely used because it is convenient.
In the early days of computers, there were many models that used 6-bit bytes. They got away with it by not making certain special characters printable, and no doubt by other obscure methods. One of the big savings was simply not using lowercase letters, only uppercase, like old teletypes.
There are still vestiges of this in modern systems. Sometimes when files are transferred over the internet, say as email attachments, they are converted before sending using a program called UUEncode and then UUDecoded at the receiving end. The purpose is to convert the 8-bit bytes of the sending operating system into a form that can be handled by any 6-bit byte machines that the file might pass through on its way to the destination. I doubt whether there are actually any such machines still in operation, but the practice continues because the old programs still work.
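Python’s standard library still carries this around, so you can watch uuencoding turn ordinary 8-bit bytes into printable ASCII (a small sketch; the exact output line is abbreviated in the comment):
[code]
import binascii

data = b"Hello, world!"        # ordinary 8-bit bytes
line = binascii.b2a_uu(data)   # one uuencoded line of printable characters
print(line)                    # something like b'-2&5L...\n'
print(binascii.a2b_uu(line))   # b'Hello, world!' -- the round trip back
[/code]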
Well, I do not think their ultimate aim is to confuse anybody. Their ultimate aim is to sell you stuff, and larger numbers look more impressive. There is really no excuse for this, because disk sector clusters are addressed in binary just like RAM. They should stick with binary megabytes, but they didn’t.
I am talking from memory since I have not messed with disks at the byte level for some time now but let me see if I remember correctly:
A floppy has two sides, 80 tracks per side, and 18 sectors per track (of 512 bytes each): 2880 sectors. 33 sectors are used by the FATs, root directory, etc., and this leaves 2847 sectors free = 1457664 bytes = 1.39 MB.
MS came out with disks that had more sectors and could fit up to 1.6 MB but they never caught on.
To find out the capacity of a disk just multiply the number of heads by number of tracks per head by number of clusters per track by number of sectors per cluster by number of bytes per sector. (2 x 80 x 18 x 1 x 512)
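In Python, with the geometry numbers from above (illustrative, and assuming one sector per cluster as in the formula):
[code]
heads, tracks, sectors_per_track, bytes_per_sector = 2, 80, 18, 512

total = heads * tracks * sectors_per_track * bytes_per_sector
print(total)                                      # 1474560 bytes raw
print(total - 33 * bytes_per_sector)              # 1457664 bytes after FATs, root dir, etc.
print((total - 33 * bytes_per_sector) / 2 ** 20)  # ~1.39 binary MB free
[/code]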
Of course, newer hard drives, to get more space, do not give the same number of sectors to every track, but they have an internal translation that presents the same geometry parameters to the outside.
Aramis, I think you are mistaken. While there may be many cases where registers, codes, etc., use something other than 8 bits, the definition of a byte is “eight bits”.
What you are saying is equivalent to saying kilometers in America are longer (1609 m) than in Europe. I disagree. Kilometers in America are the same as in Europe; it is just that they use another unit of measurement (the mile).
I have never seen anything called a byte but 8 bits.