Ultrasonic Detection Technology

I am interested in a device capable of determining the density of surrounding objects. The transmitter and receiver must be located in the same place, so we can't put the material in between the two. Using commercial, off-the-shelf technology, how small and lightweight could such a device be? Disregard things like the display or user interface. I just need it to send a density signal via BLE or something to another device. Could I make this sensor tiny, wearable and lightweight? Beyond what is commercially available off the shelf, what are the theoretical limits to size, power consumption, weight, etc.? Thanks.

ETA: Also, how close would we have to be to the objects? Can I wear a device that tells me I am standing next to a wall? Next to a bush? Or a car, etc.

Do ultrasonic density detectors work on arbitrary solids, not just liquids and slurries?

Wait, are you talking about ultrasound proximity sensors (like car parking aids)?

Yes… you can make a small device that does this, but what does that have to do with density? Your phone can probably do this with a camera and a computer vision model, for example. What are you actually trying to measure, and does it necessarily have to use ultrasound in particular?

IIRC there was a stud sensor that included the capability to find/ID things like pipes and wiring as well. Lemme see if I can find it.

ETA: it was Walabot, but I don't think it was ultrasonic. If that's okay, it appears they sell it in a format designed to allow you to add it to your own projects instead of using it for its intended purpose. They mention it being compatible with a Raspberry Pi.

Maybe I’m misunderstanding the term “density” here? Originally I thought you were trying to measure the physical density of a specific substance, like its mass divided by volume. Do you instead mean density like a dense forest, the quantity of objects you’re surrounded by?

If so, what is the application? Self-driving cars, for example, will use a variety of sensors (ultrasound, radar, lidar, cameras) to do that work, combined with sophisticated machine learning algorithms. The physical limits basically come down to wavelengths and resolution (in time and space) of what you're trying to measure vs. what you're measuring with… if I understood your size question correctly at all, which I'm not sure I did.
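To put rough numbers on the wavelength point, here's a back-of-envelope sketch. I'm assuming a 40 kHz transducer (what the cheap hobbyist modules run at) and a 10 mm aperture; both numbers are my assumptions, just to show the tradeoff.

```python
import math

# Diffraction-limited beamwidth for a small airborne ultrasonic sensor.
c = 343.0   # speed of sound in air at ~20 C, m/s
f = 40e3    # assumed transducer frequency, Hz
D = 0.010   # assumed aperture diameter, m

wavelength = c / f                 # ~8.6 mm
beamwidth = 1.22 * wavelength / D  # radians, circular-aperture rule of thumb

print(f"wavelength: {wavelength * 1000:.1f} mm")
print(f"beamwidth: {math.degrees(beamwidth):.0f} degrees")
```

A ~60° beam from a centimetre-scale sensor is why shrinking the device costs you resolution: narrowing the beam needs a bigger aperture or a shorter wavelength.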

There are a variety of in-wall imagers using various sensors: 4 Tools That Can See Through Walls | DoItYourself.com

But they're quite different from solutions that measure either physical density or 360° surroundings.

I edited my post before I saw this. I agree, but OP may want to check out their Maker section. I'm not digging into it, but it mentions "you can develop a wide range of programs to see through walls, track objects, monitor breathing patterns, and so much more", so that, to me at least, suggests it can do more than see what's a few inches away from it.

I need it to differentiate between a bush, a car, a brick wall or a thin wood wall, etc. Not just proximity detection, but "proximity to what", basically. Doesn't need to be super specific. Greater than or less than a determined density is fine.

Density isn’t going to work easily, if at all.

Mass per unit volume. There isn't an obvious physical property that any sort of sound measurement can make that is characterised by just this alone. Bouncing sound off an object might, with enough effort, manage to measure the admittance of the object, but this almost always needs a contact measurement. Similarly (and related) you might measure the speed of sound in an object, so long as you know its size, assuming you can get a reflection off the far boundary and the reflection is reasonably clear.

An ultrasonic sound wave travelling in air is either going to reflect off, be absorbed by, or travel into the object, or a mixture of the three. The dominant factor is the impedance difference between the object and the air (which relates back to the admittance). None of these are directly dependent on the density of the object. The speed of sound in the material is related to the density, but you need to know the elastic modulus of the object as well. Same for admittance/impedance. A bucket of water and a solid lump of plastic have about the same density, but wildly different acoustic properties.
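To put ballpark numbers on the water-vs-plastic point (textbook-ish values I'm assuming, not measurements): impedance is Z = ρc, so two materials with nearly the same density can have very different impedances, and from air nearly everything reflects almost totally anyway.

```python
def reflection(z1, z2):
    """Fraction of normal-incidence sound intensity reflected at a boundary."""
    return ((z2 - z1) / (z2 + z1)) ** 2

# Z = rho * c. Ballpark textbook values, not measurements.
AIR     = 1.2  * 343    # ~410 rayl
WATER   = 1000 * 1480   # ~1.5 Mrayl
ACRYLic = 1180 * 2750   # placeholder name below uses uppercase
ACRYLIC = 1180 * 2750   # ~3.2 Mrayl: density close to water's, impedance ~2x

print(f"air->water reflection:   {reflection(AIR, WATER):.5f}")
print(f"air->acrylic reflection: {reflection(AIR, ACRYLIC):.5f}")
```

Both come out at ~0.999: the air mismatch dominates, which is why an airborne echo tells you almost nothing about the density on the other side of the boundary.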

If this is the use case, you can step back quite a way. You don’t want density. It is the wrong metric. An ultrasonic sensor with a solid bit of signals processing could probably provide a metric of surface characteristics that might provide guidance.

A wall is flat, and typically somewhat rough, to the point where the high-frequency absorption spectrum might be used to characterise it. Even a painted drywall surface absorbs at very high frequencies, and any sort of porosity will have a characteristic curve. Metal objects will be different again. Cars are smooth and curved. Bushes will absorb a lot and generally have a messy signature.
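If you wanted to prototype that characterisation idea, the shape might be something like the following. Every number here, the sweep frequencies and the signature curves, is an invented placeholder.

```python
import numpy as np

FREQS_KHZ = [25, 40, 60, 80]   # the sweep points the signatures are sampled at
SIGNATURES = {                 # relative echo strength per frequency (invented)
    "painted drywall": [0.90, 0.80, 0.60, 0.40],
    "car panel":       [0.95, 0.95, 0.90, 0.85],
    "bush":            [0.30, 0.20, 0.10, 0.05],
}

def classify(echo):
    """Nearest stored signature by squared error."""
    e = np.asarray(echo)
    return min(SIGNATURES,
               key=lambda k: float(np.sum((np.asarray(SIGNATURES[k]) - e) ** 2)))

print(classify([0.92, 0.93, 0.88, 0.80]))  # matches the "car panel" curve
```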

A single ultrasonic transceiver might provide a starting point. However, cheap ones typically only work at a single frequency. You really want to sweep the frequency. Better would be a phased array that you can actively scan. However, that requires increased size, as you need the elements to be spaced apart in order for the beam steering to work.
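The size constraint falls straight out of the beam-steering maths: element pitch needs to be about λ/2 or less to avoid grating lobes, which puts a floor on the array's physical length. A toy sketch, with the element count assumed:

```python
import math

c = 343.0               # speed of sound in air, m/s
f = 40e3                # assumed operating frequency, Hz
wavelength = c / f      # ~8.6 mm
pitch = wavelength / 2  # max element spacing without grating lobes
n = 8                   # assumed element count

def steering_delays(theta_deg):
    """Per-element firing delays (s) to steer a linear array to theta."""
    theta = math.radians(theta_deg)
    return [i * pitch * math.sin(theta) / c for i in range(n)]

print(f"minimum array length: {(n - 1) * pitch * 1000:.0f} mm")
print("delays at 30 deg:", [f"{t * 1e6:.2f} us" for t in steering_delays(30)])
```

Eight elements at 40 kHz already means a ~30 mm array, before electronics, so "tiny and wearable" fights the physics directly.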

In the modern world, the same, but better, capability might be provided by LIDAR arrays, such as those used in phones or robotic vacuum cleaners.

Would help to know the use case. The question seems to assume the technology.

Like a collision sensor that can tell what it’s about to collide into? How confident does it have to be? Can you share what the actual intended application is (some sort of robotic vehicle?) or is it a secret? Not trying to steal your idea, just trying to understand what the parameters are.

At low confidence you could use a tiny cheap camera that can tell you whether something is a car, bush, or wall. But it would not be able to tell you how physically dense (or strong) that thing is, if you’re about to collide with it.

For something like that, I think you’d need a wavelength able to penetrate arbitrary common materials, like x-rays or terahertz imagers or some form of radar. You wouldn’t necessarily need a receiver on the other end, but that would affect the data or image you get back (if you get any at all) depending on the specific materials it’s aimed at. You’d need a computer model or a person to interpret the results, so it’s not a simple hardware device only.

The use case is important because something that works for "should the car emergency brake when it notices X" is quite different from "this space rover needs to measure unknown material X and send back scientific data", which is again different from "finding buried fossils in the ground" or "finding fish in the ocean". Various sensors and technologies can be combined, but they all have tradeoffs. Differentiating a metal wall from drywall from brick, for example, is a different problem than differentiating a thin wall from a thin wall with brick wallpaper, which is again different from differentiating a tree from a bush or a tree-like woody bush.

It would really help to understand the use case better, especially since something like a car isn’t uniformly dense… you also don’t necessarily know what’s behind a wall (if it matters).

Can’t a dolphin do this? Can’t they tell the difference between a rubber fin and a human limb, for instance? Is it all just a surface measurement?

Thanks for the replies so far. I will read them all later today.

Use case is more complicated. Basically, it's for war games. Right now the military uses lasers to simulate shooting someone. It's worked well enough for 60 years but has always had limitations, like not being able to shoot through a bush. Lots of companies are exploring other methods, but they all end up having more limitations than solutions. They solve the one problem but create a dozen more. For example, a camera AI solution right now doesn't work at night. A geopairing solution could work: knowing the location of the shooter and the target, and with a complicated set of magnetometers and accelerometers on the weapon, it can determine a shot. But now the shot hits the target regardless of where it is. A soldier behind a concrete wall still gets hit. So now, instead of the laser that goes through nothing, we have a shot that goes through everything. Millimeter-wave technology could be used instead of the laser because it can go through some materials but not others. However, it doesn't go through foliage (which is the main thing it must go through), there are range limitations, and the aperture becomes huge.

I am thinking of a geopairing solution to determine if the target is hit. But then, when the target gets hit, it triggers a sensor on the target that immediately determines if he/she is behind cover (rather than just concealment), and based on that info, their vest decides whether to process the hit or not.
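Roughly, the flow I'm imagining looks like this. Every name and threshold below is made up; it's just to pin the idea down:

```python
from dataclasses import dataclass

@dataclass
class ShotEvent:
    weapon_id: str
    shooter_pos: tuple   # (lat, lon, alt) from the geopairing system
    bearing_deg: float   # direction the shot arrived from

def behind_cover(cover_score: float) -> bool:
    # cover_score: whatever the target-worn sensor (ultrasonic or
    # otherwise) rates the material in the shot's direction.
    # The 0.7 threshold is arbitrary, just for the sketch.
    return cover_score > 0.7

def on_shot(shot: ShotEvent, cover_score: float) -> str:
    if behind_cover(cover_score):
        return "discard: target behind cover"
    return "process: run lethality assessment on the vest"

print(on_shot(ShotEvent("rifle", (0.0, 0.0, 0.0), 270.0), cover_score=0.9))
```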

Not an expert at all, but I thiiiink aquatic animals use echolocation movement patterns to help differentiate potential prey from static objects. I don’t know that they could distinguish two static object densities from each other in still water…? Happy to be proven wrong though…

They could also combine other senses though, like scent, electrical fields (sharks and platypuses, not sure about dolphins), etc.

Would a less-than-lethal round with similar flight characteristics not work?

For military and similar uses, especially, where there’s a hostile opposing force trying to actively confound your detector and measurements (like hiding behind a car door might give cover against a laser but not an actual projectile), I’d think it would be doubly important to use a mix of sensors for greater reliability. I don’t think there is a single physical “what is this thing over there” sensor either in the natural or built world… you have to combine a bunch of different measurements and interpret them intelligently, subject to different error levels under different conditions, to make an educated guess.

War games are extra hard because there are so many different projectiles and ballistics and spreads and types of cover and different penetration levels and angles of penetration and armors and such, along with shrapnel and airbursts and a million other variables…

You could use a lidar point cloud up to a certain distance, for example, to get a better idea of the 3D shape of an object (not really its density, but its surface reflections). You could maybe combine that with a visual camera tuned to that laser's wavelength to get a better idea of whether it "hit" based on its illumination pattern (the actual dots). I don't believe there is a commonly available "x-ray laser" for penetrative measurements at a distance, but I could be wrong there.
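As a sketch of what I mean by surface reflections: fit a plane to a local patch of the point cloud and score the out-of-plane spread. A flat wall scores low, a bush scores high. The data below is synthetic, purely for illustration:

```python
import numpy as np

def roughness(points):
    """RMS out-of-plane spread of an (N, 3) patch of lidar returns."""
    centered = points - points.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    return s[-1] / np.sqrt(len(points))  # smallest singular value = plane residual

rng = np.random.default_rng(0)
wall = np.c_[rng.uniform(size=(200, 2)), rng.normal(0, 0.002, 200)]  # ~2 mm texture
bush = rng.uniform(-0.3, 0.3, size=(200, 3))                         # scattered returns

print(f"wall roughness: {roughness(wall):.4f} m")
print(f"bush roughness: {roughness(bush):.4f} m")
```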

IIRC from my ancient encounters with MILES, there were different shooter / sensor combos for different weapons. Shooting a tank with an M-16 even point blank center of mass wasn’t a kill. But if you had a TOW, it sure was.

A complete solution to your current problem requires the shooting side to know the “ordnance permeability” of every obstacle between it and the target for the type of ordnance it shoots. And how soft that target is. Or equivalently, it requires the recipient of an on-target shot to know what it was shot with from what range and the ordnance permeability of all intervening obstacles. Since ordnance permeability is also a function of range, you need to know the location of all those obstacles.
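To make concrete how much bookkeeping that implies, here's a hypothetical data model; every class and number is invented:

```python
from dataclasses import dataclass

@dataclass
class Ordnance:
    name: str
    base_penetration: float   # arbitrary units at the muzzle
    falloff_per_m: float      # penetration lost per metre of range

@dataclass
class Obstacle:
    material: str
    penetration_cost: float   # units consumed passing through it

def shot_connects(o: Ordnance, range_m: float, obstacles: list) -> bool:
    remaining = o.base_penetration - o.falloff_per_m * range_m
    for obs in obstacles:
        remaining -= obs.penetration_cost
    return remaining > 0

rifle = Ordnance("5.56mm", base_penetration=10.0, falloff_per_m=0.01)
print(shot_connects(rifle, 300, [Obstacle("bush", 0.5)]))            # True
print(shot_connects(rifle, 300, [Obstacle("concrete wall", 20.0)]))  # False
```

And that's before you know where the obstacles even are, which is the hard part.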

It’s not obvious to me that any half-solutions to this problem will produce more realistic training as opposed to just introducing new artificialities.

But that’s a much higher level concern than the low level hardware ideas you’re asking about.

Good luck. Seriously, not snarkily.

Also, for the geo part of this, you’re not limited to GPS. You can get more accurate positioning with other technologies. Combine that with a map of the training ground and cover at any location and you have more data to feed into the model.
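As a sketch of how positions plus a cover map could combine (grid contents and resolution made up): walk the line between shooter and target and ask what it crosses.

```python
import numpy as np

def cells_on_line(a, b, steps=200):
    """Grid cells sampled along the segment from a to b (row, col)."""
    ts = np.linspace(0.0, 1.0, steps)
    pts = np.outer(1 - ts, a) + np.outer(ts, b)
    return {tuple(p) for p in np.round(pts).astype(int)}

cover_map = np.zeros((100, 100))   # 0 = open ground
cover_map[40:45, 20:80] = 1.0      # a wall drawn across the map

shooter, target = (10, 50), (90, 50)
blocked = any(cover_map[r, c] > 0 for r, c in cells_on_line(shooter, target))
print("line of fire blocked:", blocked)
```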

I think there are already real-world examples of this, e.g. https://ubisense.com/training-real-time-tracking-military/ (think AirTags on each participant and sensors everywhere, so you know where everyone is).

Just to be clear, the geopairing solutions already exist. They just cannot determine if someone is behind cover or not. LIDAR scanning an entire training area is not feasible. Areas are the size of entire counties or larger, and training areas can be anywhere. It also wouldn't account for things that move, like hiding behind a vehicle that wasn't there when the area was scanned.

The ballistic data and protection values for different ammo and different vehicles already exist. The laser systems do this already. Adjusting signal strength, beam shape and encoded messages allows for all this. And the equipment worn by the target runs its own lethality effect assessment routine (does the dice roll) to determine damage. With retroreflectors, the laser systems even allow for time of flight, elevation, leading targets, etc. But lasers don't work when the target is behind a bush…
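Loosely, the target-side dice roll has this shape; my table values and fields are invented, not the real encoding:

```python
import random

# Invented lookup: the code word in the shot maps to a base kill
# probability, reduced by the target's protection value.
WEAPON_TABLE = {"small_arms": 0.6, "atgm": 0.95}

def lethality_roll(weapon_code: str, protection: float) -> str:
    """Target-side dice roll once a valid hit message is decoded."""
    p_kill = WEAPON_TABLE.get(weapon_code, 0.1) * (1.0 - protection)
    return "casualty" if random.random() < p_kill else "near miss"

print(lethality_roll("small_arms", protection=0.3))
```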

No solution right now is capable of determining if a target moves behind cover versus concealment.

Less-lethal projectiles that can fly accurately out to 500 m from a rifle do not exist; they would be lethal if they could do that. Plus, solutions need to be electronic because it's not enough that the target knows it is dead. Everyone else analyzing the battle needs to know. Again, lasers do this fine as long as the target is not behind a bush.

Cameras and AI seem promising. But multiple cameras at different zooms and also night cameras would be needed. That is currently in development.

I'm just curious if there is another way that hasn't been explored. The ultrasonic detector idea occurred to me recently. Yeah, it wouldn't be able to know everything in the path between shooter and target, but if it could even just know what is around the immediate vicinity of the target, it would be a huge improvement.