"NEW YORK (Reuters) - Time Warner Cable Inc said on Wednesday it is planning a trial to bill high-speed Internet subscribers based on their amount of usage rather than a flat fee, the standard industry practice. "
Umm. Is this history repeating itself? Didn’t AOL start out with something like this? And soon enough move to flat fees?
AOL billed for time connected, not bandwidth used.
Last time I checked, most phone companies do charge by usage. Especially for long distance.
Couch potatoes don’t use any more network infrastructure than the occasional TV watcher. (Exception: on-demand video.) So it doesn’t make much sense to charge by usage for television.
Sorry, I was referring to local calls; I’ve always paid a flat monthly rate for unlimited local calling. If you’re saying you pay by the number of local calls, I’ve never heard of that.
Actually, I do. Since I rarely use my home phone, instead of a flat rate of about $20 a month, I pay about $8 a month for local service plus about 5 cents or so for each call, regardless of length. I’m pretty sure those kinds of plans are offered by most telephone companies, and I have no idea why people tend to get the ripoff unlimited usage plans.
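A quick back-of-the-envelope check (using the rough numbers quoted above, not any actual tariff) shows where the measured plan stops being a win:

```python
# Rough break-even between the measured plan and the flat-rate plan,
# using the approximate figures quoted above (assumptions, not exact tariffs).
# Work in cents to keep the arithmetic exact.
flat_monthly = 2000   # flat-rate unlimited local plan, ~$20/month
base_monthly = 800    # measured-service base charge, ~$8/month
per_call = 5          # ~5 cents per local call, regardless of length

# Calls per month at which the two plans cost the same
break_even_calls = (flat_monthly - base_monthly) // per_call
print(break_even_calls)  # 240 calls a month, roughly 8 a day
```

So unless you make more than about eight local calls a day, the measured plan comes out ahead.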
And of course cell phones have different tiers of service for lesser and greater usage.
But in reference to the OP, I do recall that AOL used to have a pay per minute pricing structure, and at some point converted to flat rates.
While AOL had such a structure, keep in mind that it existed at a time when there were very few choices for which portal to use to access the internet. The change to flat fees occurred at the point (well slightly PAST the point) where there were so many services offering access (usually at a flat fee) that paying as you went made no sense competitively.
I bring this up, not only as an old AOL member who on occasion managed the $500 monthly bill :eek: but to point out that your cable company has you by the balls because it’s usually a monopoly, or pretty close to it, for the content it provides. So pay-as-you-go pricing will work only so long as the same content isn’t available through multiple outlets charging flat fees that are substantially lower.
AOL charged per minute (after a certain limit) because that was their cost structure: each connection needed its own phone line, and that cost them money. The bandwidth from AOL to the internet was not a big cost, since all the incoming traffic arrived over 28.8 kb/s modems.
Now, with large bandwidths to the home, TW needs to pay for the connection from TW to the internet; that is their cost structure. Large users are the ones driving their costs up.
So there are similarities between AOL and TW: both are incurring higher costs due to a certain type of user, and both want to shift that cost onto that user.
I could be wrong, but I think pay-per-byte schemes are common in Europe. And, of course, it was how it worked for nearly everyone back when most network (Internet and non-Internet) traffic went over plain phone lines.
TW’s pricing structure does not necessarily have anything to do with their costs. They aren’t a heavily regulated telephone company (remember those?) that has to base its rates on the actual costs of providing service. As with many things, the price may have only a tenuous connection to the cost of providing service. Many of their costs are fixed, and if you follow the old telephone-company model, others depend on the capacity needed to provide acceptable service during peak usage periods.

Beware of people who argue that usage-sensitive pricing is “just being fair”. It’s often used as a cover for maximizing profits via market segmentation. There are other ways of giving everyone their “fair share” of network bandwidth that don’t involve the costs and distortions caused by usage-sensitive pricing. What are TW’s costs if a user transfers large amounts of data during off-peak periods, using otherwise idle network capacity? Packet switches and data lines do not “wear out” with usage.
In fact they kind of do:
[ul]
[li]Computer parts do physically degrade over time, and especially with use. Hard drives especially (MAKE BACKUPS), because they’re always moving (MAKE BACKUPS) and usually somewhat warm (HAVE YOU MADE BACKUPS YET?).[/li]
[li]Software must be upgraded, especially if upgrades are mandated as part of a service contract. Old versions of software are progressively more difficult to secure and less likely to handle increased loads gracefully.[/li]
[li]Old hardware and software require more on-the-job training. Imagine how difficult it would be for an all-MS-DOS shop to get new college grads up to speed. Imagine how much more difficult it would be for an all-TOPS-10 shop or an all-RSX-11M shop. Upgrading means the cheap employees know what’s going on after one week instead of three months. (Well, for a given value of “know”.)[/li]
[/ul]
None of which are affected by load. If the network has an operating capacity of 1E9 packets per second, it doesn’t matter whether it is operating at 10% of capacity or 90% of capacity. A DNS server may experience more disk failures under high load, but a packet switch or data line doesn’t have components that wear out more quickly under load. Reliability is almost entirely a function of time, not load. Assuming competent engineering, any increase in heat due to load will have a minimal effect on failure rates.
mks57: OK, packet switches in and of themselves aren’t affected by load. Which would be great and relevant if packet switches were the only kind of computer TW had to own in order to run its ISP. As long as load does increase wear on something essential, TW will have to pay more due to load.
It seems like charging bandwidth hogs is motivated by TW’s costs, however, as it costs them money to have a larger pipe to the internet. In a strict cost analysis they might see something like 10% of the subscribers using 90% of the bandwidth.
But I do also see that it can be used as an excuse for charging more just because “we want more money”.
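If that hypothetical 10%/90% split were accurate, the gap between individual users would be enormous; a quick calculation (my own illustration, using the figures from the post, not real TW data):

```python
# If 10% of subscribers generate 90% of the traffic (the hypothetical
# split above, not real data), compare per-user traffic in each group.
heavy_frac_users = 0.10
heavy_frac_traffic = 0.90

# Traffic per user in each group, relative to the overall per-user average
per_heavy = heavy_frac_traffic / heavy_frac_users              # 9x the average
per_light = (1 - heavy_frac_traffic) / (1 - heavy_frac_users)  # ~0.11x the average

print(per_heavy / per_light)  # a heavy user moves about 81x a light user's data
```

Under those assumed numbers, the average heavy user moves roughly eighty times the data of the average light user, which is why the heavy tail looks so tempting to bill separately.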
The real question is, if they are trying to address the “inequity” of having 90% of the bandwidth consumed by 10% of the users… Are 90% of the customers going to see a corresponding discount in their internet connection fees? Or are they going to have a flat minimum fee that is likely only a teeny bit lower than the current flat fee, but have an escalating schedule of higher fees based on bandwidth?
I would counter that the pricing of broadband came from the all-you-can-eat dialup model it replaced (or is replacing in some areas). At that time there was not much people could do with that bandwidth except view web pages faster. With the faster services came music and video applications, which some people embraced, causing them to use a high percentage of the bandwidth.
I don’t know if this is a buffet model where the users can actually end up costing the ISP money by overconsumption of an all-you-can-eat deal, but if it is, I think it is fair for the ISP to either refund these people’s money and have them leave, or go forward at a higher rate for them.
In the dialup era, ISP capacity requirements and costs were largely driven by peak usage. You could consider it the size of the dining room and kitchen needed to feed M diners at a rate of N plates/hour, where the values of M and N are determined by the busiest hour of the day. The uncooked food used by the kitchen is effectively free. It is staffed by robots that, while expensive to purchase, work 24 hours a day for free. The restaurant’s costs are primarily for the building, furnishings and staff. For any particular customer, the issue isn’t how much they eat during a month, it’s how often they show up during the busiest hour of the day, and how much food they order during that hour. If J. Random Glutton eats four times as much as the average customer, it doesn’t necessarily follow that he should be charged four times as much.
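The restaurant analogy can be put in numbers. Here’s a toy model (my own illustrative figures, not anything from TW) in which the ISP’s capacity cost is set entirely by the busiest hour, so two users with the same peak-hour demand cost the ISP the same regardless of their monthly totals:

```python
# Toy peak-provisioning model (illustrative numbers only): the ISP must
# buy enough capacity for the busiest hour of the day, so its cost is set
# by users' combined peak-hour demand, not by anyone's monthly total.

# Hourly demand profiles (arbitrary units) over a 24-hour day for two
# hypothetical users: an off-peak "glutton" and a typical light user.
glutton = [4] * 20 + [5] * 4   # busy all day, 5 units in each of 4 peak hours
typical = [0] * 20 + [5] * 4   # idle off-peak, same 5 units at the peak

monthly_glutton = sum(glutton) * 30   # 30-day month
monthly_typical = sum(typical) * 30

# Capacity the ISP must provision = combined demand in the busiest hour
peak_demand = max(g + t for g, t in zip(glutton, typical))

print(monthly_glutton, monthly_typical)  # 3000 vs 600: 5x the monthly data
print(peak_demand)  # 10: each contributes 5 at the peak, so equal capacity cost
```

Under those made-up numbers, the glutton transfers five times the data per month but drives exactly the same peak-capacity requirement, which is the point about J. Random Glutton above: per-byte billing would charge him 5x for imposing the same cost.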