in other words, if we remove or destroy a chip that the scanner is aware of, how long will it take for the scanner to realize it disappeared? (while all relevant info is welcome, my interest is naturally biased towards cheaper, low end gadgets)
OK, Google searching suggests that the metric closest to what I am asking about is “tags per second”, for which I am seeing numbers quoted anywhere from 50 to 400. It’s not clear whether this metric is affected by the actual number of tags in the vicinity of the scanner.
There is also another metric, “transactions per second”, and I am making a (possibly inaccurate) assumption that it differs from “tags per second” only when transactions involve high-latency database access or other such computation. Or maybe in reality “tags per second” is always greater than “transactions per second” by some constant factor, because of how the interaction between the reader and the tags works?
You are asking these bizarre questions again and diving straight into the minutiae without even making the question coherent to begin with. It is your job to give some background on what problem you are actually interested in, and you didn’t do that. RFID tags are used in everything from toll-booth transponders to products moving through a distribution center. All of those applications have their own specs and different answers. In many of them, RFID scanners only do reads when they need to; they don’t scan for everything all the time. That is just based on the purpose of the application. I do some experimental work with RFID tags.
Please rephrase the question so that it means something. You could design a system that never notices a missing tag, or one that scans for a specific tag every 10 milliseconds, if you wanted to.
My question implies that the application I am considering requires detecting the “disappearance” of a chip as quickly as possible. I agree that I was unclear as to the identity of the chip I care about. In fact, I am interested in quickly detecting the appearance and disappearance of any of the chips in my working set of several hundred (i.e. I know their IDs ahead of time, if that matters), and not, say, only detecting the appearance/disappearance of one particular chip.
Ok, that’s better and makes more sense. I don’t know the full answer because it depends on a lot of things; if this were a homework or theoretical exercise, I couldn’t tell you the answer. However, it is still a question of practical design. In the real world, database latency would introduce a few milliseconds of lag at each read to look up what it is supposed to see. For any practical use you would also want redundant checks to make sure there is really no signal before anything is triggered in the database. The RFID tag also has to be physically moved some distance before it stops reading, which takes some time as well. I am having a hard time figuring out why very small time scales are a factor in this question, just based on the common practical uses of RFID tags.
if we only have a working set of 200 chips, then we can have the “database” as a list in memory, right?
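Just to make that concrete, here is the sort of thing I have in mind (a Python sketch with made-up tag IDs): the whole “database” is a plain in-memory set, so a lookup is a sub-microsecond membership test rather than a database round trip.

```python
# Hypothetical sketch: the "database" of ~200 known tag IDs kept as an in-memory set.
# The IDs below are made up; real ones would be loaded ahead of time.
KNOWN_TAGS = {f"TAG{i:04d}" for i in range(200)}

def is_ours(tag_id: str) -> bool:
    # Set membership is O(1); no millisecond-scale database lookup involved.
    return tag_id in KNOWN_TAGS

print(is_ours("TAG0042"))   # True
print(is_ours("TAG9999"))   # False
```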
Do you agree with my guess upthread that “tags per second” would be the essential characteristic in choosing a reader for this sort of application? Does the price curve get steep for high values of this metric? Or is the price of a reader determined by other parameters, so that there may be very “fast” inexpensive readers that really suck on some other standard performance metric?
So what are the typical false-negative and false-positive rates in low-end readers nowadays? Or is the error rate only low in applications where the business cost of failing to detect such an error is high?
“Tags per second” is what you want, but there’s always fine print. That spec may be assuming just a few tags in the RFID reader’s range. That is, a particular reader may be able to do 400 tags per second, as long as only a few tags are in range. It may not even be capable of reading 400 tags if they’re all in range at the same time, no matter how long you give it. You have to read the datasheets of the reader and tags to figure out the details, especially if your usage doesn’t match the normal RFID use cases.
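To see why that fine print matters for your “how fast do I notice a missing chip” question, here is the back-of-the-envelope arithmetic I would do. All the numbers below are placeholders, not from any datasheet: one full sweep of the field takes roughly population divided by effective read rate, and you will want several empty sweeps before believing a tag is really gone.

```python
# Illustrative numbers only -- substitute whatever the datasheet actually promises.
tags_in_range = 200        # the whole working set sits inside the read field
tags_per_second = 100      # effective rate *with that many tags present* (the fine print)
misses_required = 3        # consecutive sweeps a tag must be absent before you trust it

sweep_time = tags_in_range / tags_per_second        # one full pass over the field: 2.0 s
detection_latency = misses_required * sweep_time    # ~6 s before a removed chip looks gone
print(f"sweep {sweep_time:.1f} s, worst-case detection {detection_latency:.1f} s")
```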
It has been a few years since I did anything with RFID, and certainly the state of the art has moved well beyond what I was familiar with. So apply a suitable discount to what follows.
The “reader” is not a totally stupid device, but close. Almost all the questions you’ve asked about these things are really questions about the software which will be consuming the output of the reader & reacting to it. And that software is something you have to write. They may have an SDK, but you do the work, which includes deciding which metrics you care about & coding something to achieve them.
Certainly the limitations of the reader’s capability put an outer bound on what your software can be made to do. My bet is that trying to keep tabs on multi-hundred RFID chips all within range of your scanner will be the longest pole in your design tent. Ultra-cheap scanners won’t be able to do it at all, and more expensive ones won’t be able to do it both quickly & reliably. Fairly quickly OR fairly reliably has a chance of success.
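As a rough sketch of the kind of consuming software I mean (Python; read_visible_tags() is a made-up stand-in for whatever your reader’s SDK actually returns, not a real API):

```python
import time

# Made-up working set and a stand-in for the reader's SDK call.
KNOWN_TAGS = {f"TAG{i:04d}" for i in range(200)}
MISSES_REQUIRED = 3                        # debounce: don't trust a single empty sweep

def read_visible_tags():
    """Stand-in: return the set of tag IDs the reader currently sees.
    Replace with the actual SDK/serial call for your hardware."""
    return set()

misses = {tag: 0 for tag in KNOWN_TAGS}    # consecutive sweeps each tag has been unseen

while True:
    visible = read_visible_tags()
    for tag in KNOWN_TAGS:
        if tag in visible:
            if misses[tag] >= MISSES_REQUIRED:
                print("reappeared:", tag)  # it had previously been declared gone
            misses[tag] = 0
        else:
            misses[tag] += 1
            if misses[tag] == MISSES_REQUIRED:
                print("disappeared:", tag)
    time.sleep(0.01)  # pacing only; the real loop rate is bounded by the reader's sweep speed
```

The point is that the detection policy, how many misses to tolerate and how fast you sweep, lives entirely in this loop; the reader just hands you sightings.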
You might like to read this. Not a brilliant paper, but it does discuss the basics of a CDMA-based RFID system that has been simulated to manage 511 tags.
Chasing the papers it references, and then finding other papers that cite those, is a typical tactic for getting a spread of information.
what is the significance of “frequency” in the description of an RFID reader? E.g. from this manufacturer’s page http://www.feig.de/index.php?option=com_content&task=view&id=5&Itemid=193
So apparently this company thought the frequency spec was pretty important. Is that because the passive chips are made for a specific frequency range, and higher-frequency ones are cheaper to make?
(BTW, the company did not bother to publish any other numeric metric about these gadgets in the nicely formatted PDF product description…)
OK, here is a partial answer to my question in the previous post: http://www.hobbyengineering.com/specs/PARALLAX-RFIDReader1.pdf
Unfortunately this does not explain what the “faster data transfer” actually means. When it comes to just reading off ID numbers from chips, it naively seems to me that there is not much data being transferred. So are high frequencies only important when there is a lot of data to read off the chip, or do they also help increase the tags-per-second metric?
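My rough arithmetic, using link data rates I’ve seen quoted for each band (not figures from either datasheet, so treat them as assumptions), suggests the carrier frequency does cap how fast each ID can be clocked out, and therefore the tags-per-second figure too:

```python
# Back-of-the-envelope only; the link rates below are rough figures I've seen quoted,
# not values from the Parallax or FEIG datasheets.
id_bits = 64 * 1.5   # a 64-bit ID plus ~50% framing/overhead
for label, rate_bps in [("125 kHz LF   (~2 kbit/s link)", 2_000),
                        ("13.56 MHz HF (~26 kbit/s link)", 26_000),
                        ("UHF Gen2     (~100 kbit/s link)", 100_000)]:
    per_tag = id_bits / rate_bps
    print(f"{label}: {per_tag*1000:5.1f} ms per ID -> at most ~{1/per_tag:4.0f} IDs/s")
```

If those numbers are anywhere near right, a 125 kHz reader tops out at a few tens of IDs per second even before worrying about many tags in the field at once, which would make “several hundred chips, quickly” a hard ask at that frequency.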