You’re not getting it, I’m afraid. USPS doesn’t do that. They report that Amazon sent them the electronic shipping data, and to that extent they’re synchronized, but USPS doesn’t do package scans, at least not for that level of service. Their website isn’t going to show jack, so there’s nothing for Amazon to update you on.
I don’t think you get what I’m saying - maybe I’m explaining poorly.
I don’t want minute-by-minute updates; I just want accurate ones. If there’s an update, let’s make sure it’s accurate. Having two electronic information sources reporting different information seems wrong to me.
You quoted me saying that I think Amazon should just re-post whatever information the USPS is giving.
I don’t see how it would cost half a billion dollars for them to insert a line of code which copies and pastes what USPS reports on their own site - in fact, it seems like it would cost less.
Someone else in this thread mentioned that doing so would overload someone’s servers. I don’t see how - people re-post information from other sources all the time without overloading servers. People can embed YouTube videos on a Blog page, for instance, and no one gets overloaded by that. RSS readers re-post data that’s far more complex than a one-sentence “Your package was picked up at 4:22pm” without overloading servers.
I’m basically talking about Amazon.com using an RSS reader to feed whatever data USPS posted, so that if a customer looks in both places, they see the same information.
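To make it concrete, here’s a rough sketch of the kind of thing I mean. The feed URL is made up - as far as I know USPS doesn’t publish a per-package feed for this service level, which is sort of the point of contention - but mechanically it’s just “pull the feed, repost the latest line”:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical per-package feed URL -- USPS doesn't actually publish
# one for this service level; it's invented for illustration.
FEED_URL = "https://tools.usps.com/feeds/tracking/9400100000000000000000.rss"

with urllib.request.urlopen(FEED_URL) as resp:
    tree = ET.parse(resp)

# Repost the newest <item> verbatim -- no rewriting, no second system.
latest = tree.find("channel/item")
print(latest.findtext("pubDate"), "-", latest.findtext("title"))
```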
The way it is now, it seems that both Amazon and USPS have independent tracking systems, which seems more expensive than one of them running a single tracking system and the other re-posting the data through a reader.
I’m not suggesting we make this process more complex and expensive - I’m suggesting it’s less complex, less expensive, and provides better information to the customer (because it’s the same and accurate).
And yet it happens constantly all around the world, and it’s not going away. If it were easy, it would have been done already.
If that’s all it were, it would be easy. But it’s not, trust me. It might seem like that, but you haven’t considered the dozens and dozens of cases and variables.
Both systems have huge value to their respective companies - value they could never afford to give up or outsource. They’re the lifeblood of the organizations, and you only see the tippity tip of the system.
No, you are talking about combining two very large and critical systems, and you don’t know how they are being used. Adding just a few links between the two is possible, but it’s probably a lot more complex than you are thinking, and it certainly wouldn’t be just copying a line of text from one to the other.
No offense, but this sort of thing is infuriating. You admit to not having any idea how this works, but clearly it’s “simply a line of code to do a copy and paste”. If I don’t understand it, it must be simple, right?
There is considerable expense involved in getting existing systems to work together. Most of it comes from the fact that both systems are probably not designed to interoperate easily. I have no direct knowledge of how either one was written, but I do know that banking systems are notorious for this. All of your bank account information has been electronic for decades, yet getting two systems within the same bank (let alone other banks) to talk to each other about it can be the most hilariously expensive project you can imagine, because you are trying to shoehorn together two systems that were never meant to work together.
Regarding YouTube allowing the embedding of your media “without being overloaded”: of course that puts massive strain on their servers. The reason they don’t appear overloaded is that they have accounted for it as part of their business and have the hardware to handle it. It’s central to their business model, and it is very expensive.
RSS feeds for blogs are another poor example. For anyone who maintains a webserver and is considering adding an RSS feed for content, the very first question should be: can I handle the increased load? For high-volume sites, you’re definitely going to need more hardware.
Which raises another, more fundamental issue. Even if these systems could be magically made to interoperate today, why should the USPS expose data so that you never have to go to their site? They would fund the server, Amazon would read the data from it and present it as their own, and you would never visit the USPS site at all. The USPS gets no advertising exposure out of it. All the expense, and none of the benefit. Not a great deal for them.
No, I get what you’re saying, you’re just refusing to believe that I’m saying what I’m saying.
There are no updates from USPS. There is no tracking from USPS. There is no information from USPS. There are not “two electronic information systems.” There is just Amazon’s information.
Amazon has a package. They tell USPS, USPS puts it on their website. The package gets picked up. Amazon puts that on their website. All the information is from Amazon.
Read the USPS message again: “The U.S. Postal Service was electronically notified by the shipper on November 13, 2008 to expect your package for mailing. This does not indicate receipt by the USPS or the actual mailing date. Delivery status information will be provided if / when available. Information, **if available**, is updated every evening. Please check again later.”
Hint: information is never available, because USPS doesn’t track packages. They track Express Mail.
If I embed a YouTube video in a Blog, how does that add to the Blog’s bandwidth? The Blog is just serving the HTML code (or whatever is used to embed the video); it does not take on the bandwidth of serving the video to the viewer. That’s still on YouTube.
When someone posts an RSS feed in a reader, all the reader is serving is the code which pulls the RSS. The company posting the RSS data is the one that serves it every time it’s accessed; they absorb the bandwidth consumption.
When a visitor to the Blog sees both a YouTube video and an RSS feed, they’re putting load on YouTube’s servers for the video and on the content provider’s servers for the RSS data, not on the Blog. The Blog just serves the code.
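Here’s a toy sketch of what I mean by “they just serve the code” - a stand-in blog server whose entire output is the embed markup (the video ID is a placeholder, not a real video):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# A toy blog page. VIDEO_ID is a placeholder, not a real video.
PAGE = (b"<html><body>"
        b'<iframe width="560" height="315" '
        b'src="https://www.youtube.com/embed/VIDEO_ID"></iframe>'
        b"</body></html>")

class BlogPage(BaseHTTPRequestHandler):
    def do_GET(self):
        # Everything this server sends is the ~150 bytes above. The
        # visitor's browser fetches the actual video stream straight
        # from YouTube; none of those bytes pass through this server.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)

HTTPServer(("", 8000), BlogPage).serve_forever()
```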
If Amazon posted an RSS link to USPS’s data, all they’re serving is the code which pulls the data. USPS doesn’t have to serve an entire page to provide that data (which they would have to if someone went to their site); they just serve a few lines of code.
Actually, when it comes down to it, the line of code Amazon uses to pull their own shipping data works much like the line of code they would use to pull it from USPS’s servers. The only difference is that the data would come from an outside server instead of a local one.
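Something like this is what I’m picturing - both URLs are invented for illustration, since I obviously have no idea what Amazon’s internal endpoints look like:

```python
import urllib.request

# Both URLs are made up for illustration; neither endpoint exists.
own_status = urllib.request.urlopen(
    "https://internal.amazon.example/shipments/123/status").read()
usps_status = urllib.request.urlopen(
    "https://tools.usps.example/tracking/123").read()
# Same call either way; only the host name changed.
```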
If I ran a server and had the option of serving up one paragraph of text or an entire page, I’d rather serve up one paragraph of text. That’s looking at it from USPS’s end - I can share just the data: “The U.S. Postal Service was electronically notified by the shipper on November 13, 2008 to expect your package for mailing. This does not indicate receipt by the USPS or the actual mailing date. Delivery status information will be provided if / when available. Information, if available, is updated every evening. Please check again later.”
Or … I can share that exact same data, surrounded by a Web page, and maybe also have to serve up 2-3 other pages before you get to it. We’re not talking about increasing USPS’s load if they share that data with Amazon, but decreasing it. And we’re not talking about releasing special data, but the exact same data a visitor would see if they came to USPS’s site.
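Rough byte math, with sizes that are guesses rather than measurements:

```python
# Sizes are guesses, not measurements.
status = (
    "The U.S. Postal Service was electronically notified by the shipper "
    "on November 13, 2008 to expect your package for mailing."
)
print(len(status.encode()))        # about 120 bytes of raw text

full_page_bytes = 150_000          # a full tracking page: HTML, CSS, images
print(full_page_bytes // len(status.encode()))  # on the order of 1,000x more
```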
And if the shopper knew that they were going to see the same info on both sites, they wouldn’t feel tempted to look at both sites, but only one, further decreasing the server load.
This continues to seem like a win-win.
There may be other obstacles here, such as legal departments getting in the way, or the USPS not wanting to show favoritism to Amazon, but I don’t think the idea that this would increase someone’s server load is the correct objection.
The big problem with RSS feeds and similar technology is that whenever you expose some sort of interface to be consumed by another service, you have to account for massively increased usage. As an example: when people check slashdot for their news using a browser, they might hit it a few times a day. With an RSS feed, people set up readers that tend to hit the server repeatedly, depending on how frequently they refresh. Or other sites will forward requests (from RSS feed sidebars), which means slashdot now gets their own traffic plus the other site’s traffic. This is a well-known drawback of the whole “remote feed” approach: it drives up server load substantially.
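To put rough numbers on it (these are made up for illustration, not anyone’s measured traffic):

```python
# Illustrative numbers only -- not anyone's real traffic.
browser_visitors = 100_000        # people who check the site by hand
visits_per_day = 3                # a few visits each per day

feed_subscribers = 100_000        # the same audience, now via readers
polls_per_day = 24 * 4            # a reader refreshing every 15 minutes

print(browser_visitors * visits_per_day)   # 300,000 hits/day
print(feed_subscribers * polls_per_day)    # 9,600,000 hits/day
```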
The reality is that people who worry about this for a living really do have to contend with massive load as a result of exposing a feed. It’s a nice feature, if you can afford it.
The USPS currently has a certain number of people who bother to go to their website and track status. If they decide to provide an interface to outside vendors, there will be a decrease in load from people going to their site, but a massive increase from all of the people using the vendor systems who never would have bothered to visit the USPS site in the first place. All of this to improve Amazon.com’s tracking system. If the USPS is counting on page views for advertisements, or at least brand exposure, then losing those visits is a hit they have to take. Personally, I would make Amazon pay for the privilege of being able to easily scrape data.