How does the VTech Bugsby reading system work?

If you don’t have young kids (or even if you do) you may not be familiar with this new learn-to-read system from VTech. Basically, each printed book comes with a chip that you insert into a reader (shaped like a caterpillar, named “Bugsby”). When you touch the reader to the page it reads the word that you touch. It can also read the whole story or identify the pictures and characters that the story revolves around. Here’s the webpage with a demo for the product:

What I can’t figure out is how the technology works. The pen does not need to “scan” across the word, so it’s not optically recognizing the printed word. It doesn’t seem positionally based, since it works even if you fold the pages up. There doesn’t seem to be anything inserted into the paper stock itself, since it’s relatively thin and you can “see” through the paper, and any wires or circuits would be visible.

I can’t find any source on the web discussing how it works so I’m hoping someone here has some insights before I go nutty from being stumped…

If I had to guess I’d say it probably has an RFID reader in the wand and the pages have RFID chips, which can be incredibly small and passively powered (so no wires). The cartridge for that book would simply map from each tag’s ID to a piece of data associated with that page.
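To illustrate the guess, the cartridge would basically be a lookup table from tag IDs to content. Here’s a minimal sketch, with all the IDs and phrases invented for the example:

```python
# Hypothetical sketch: the book's cartridge as a lookup table from
# RFID tag IDs (one tag per page or region) to the content the reader
# should speak. All IDs and phrases here are made up.
CARTRIDGE_DATA = {
    0x01A3: "Once upon a time...",          # tag embedded near page 1
    0x01A4: "a caterpillar named Bugsby",   # tag near an illustration
}

def on_tag_read(tag_id):
    """Return the phrase to speak for a detected tag, or None."""
    return CARTRIDGE_DATA.get(tag_id)

print(on_tag_read(0x01A3))  # -> "Once upon a time..."
```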

If you look really closely at the page, does it appear that the words are printed on top of a field of microscopic, almost randomly-scattered dots?

If so, it’s probably using the same technology behind the Fly and Livescribe pens. These pens use a tiny camera near the pen tip to recognize this dot pattern (which is actually not random at all) and thereby determine not only where they are on a given page, but even which page of a given book is currently in use. The pens use this positional information to do all kinds of tricks similar to what you’ve described.

It works even if the page is folded, as you’ve described, because the position information isn’t relative to the physical page itself, but rather to the entire universe of possible pages. That is, every word on every page can be encoded with dots that distinctly identify its position relative to the book, sentence, paragraph, etc. that it’s in.
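The encoding idea behind those dot patterns can be sketched roughly like this. This is a simplified toy version (the real Anoto-style scheme used by Livescribe is more elaborate), but the core trick is the same: each dot sits slightly offset from an ideal grid point in one of four directions, encoding two bits, and a small window of dots packs into a number big enough to be unique across every page ever printed.

```python
# Toy sketch of a dot-pattern position code. Each dot's displacement
# from its nominal grid point encodes 2 bits; a window of dots seen by
# the pen's camera then decodes to one large position number.
OFFSET_BITS = {"up": 0b00, "right": 0b01, "down": 0b10, "left": 0b11}

def decode_window(offsets):
    """Pack a window of dot offsets into a single position code."""
    code = 0
    for direction in offsets:
        code = (code << 2) | OFFSET_BITS[direction]
    return code

# A 6x6 window (36 dots, 72 bits) could address 2**72 distinct
# positions -- far more than any library of books needs.
window = ["up", "right", "down", "left"] * 9  # 36 dots
print(decode_window(window))
```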

Bugsby most likely uses this same technology, coupled with lookup info encoded in the chip, to produce the text-to-speech translation.

Oh, I just noticed that the OP says every word is uniquely touchable. It’s very unlikely they’re using RFID tags for every word on every page. That would be ridiculous. It’s probably something more like what sco3tt suggests.

sco3tt- I’ll have to check tonight, but I looked pretty closely before and didn’t notice anything. Would the dots you describe be visible to the naked eye or would I need a magnifying glass or something more powerful? The terminal point of the reader is a hollow tube, which is only 3mm or so wide, so I suppose it would be possible for a very small camera to be at play here…


That overview says the tech is the same as the Tags

Astro- Nice! Shoots and scores.

It may be that the dots are not very discernible to the naked human eye, especially if they are designed to work with an IR camera.

I believe there is a copy-protection scheme built into many banknotes that operates something like this: a pattern of specifically-arranged dots is printed in various places on the note, designed to be recognisable in any orientation, and I believe with some redundancy built in to cope with damage, dirt, etc. Digital photocopiers recognise the dot pattern and refuse to make a copy of the money.

A magnifier might help. I checked one of my Livescribe notebooks under normal room lighting and I could just barely discern the dots on the page, but then again it’s probably a case of “I’m only seeing them because I know they are there.” If I wasn’t already aware of the technology I would just think that the page had a faint gray halftone applied to it.

As Mangetout points out, the dots only need to be visible to the IR camera, so it’s possible that they’ve used an ink that’s invisible to the naked eye but shows up in IR. However, given that Livescribe allows you to create your own paper using a standard 600dpi printer, there’s obviously a lot of leeway in what inks those cameras will respond to. I doubt that VTech went to the expense of using an IR-only ink when microscopic gray dots will do just as well.

Also, in case I was unclear in my earlier post… the dots themselves don’t encode the text in any way. They just tell the computer in the pen that the tip is pointed at a given location, but the chip contains all the lookup info to tell the computer what word is found there. You may have already gleaned this from my and/or astro’s post, but I just wanted to be certain I didn’t mislead you.
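That division of labor (dots give position, chip gives meaning) could be sketched like this, with every page number, coordinate, and word invented for the example:

```python
# Hypothetical split of responsibilities: the dot pattern yields a raw
# (page, x, y) position; the book's chip supplies a table mapping
# regions of each page to the words printed there. All values invented.
WORD_REGIONS = {
    # (page, (x_min, x_max), (y_min, y_max)): word
    (3, (10, 40), (100, 112)): "caterpillar",
    (3, (45, 70), (100, 112)): "crawled",
}

def word_at(page, x, y):
    """Look up which word (if any) the pen tip is touching."""
    for (pg, (x0, x1), (y0, y1)), word in WORD_REGIONS.items():
        if pg == page and x0 <= x <= x1 and y0 <= y <= y1:
            return word
    return None

print(word_at(3, 25, 105))  # -> "caterpillar"
```

So the same dot paper could ship with any book; it’s the per-book chip that tells the pen what to say at each spot.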

I expect there is quite coincidentally a bit of variance between different, completely ordinary inks when viewed in IR, so it may have only required a little planning, rather than any expense.