When comparing it to humans, the real question is whether it makes mistakes that humans wouldn't. Mistaking a kid for his father is one that humans rarely make, which suggests the Facebook algorithm is worse than humans on this point.
Eyewitness testimony is not as bad as people like to make it out to be. It's less reliable than was once assumed, but it's not so bad as to be useless. Barring specific circumstances, it tends to work, and corroborating accounts from multiple people make the pattern clearer.
As for the racial problem: it's a known issue with facial recognition that black faces are sometimes not detected at all, so it's reasonable to expect that distinguishing their features is harder too. We also know that photography handles darker faces less well, and that common digital formats squeeze dark tones into fewer distinct values.
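To put a rough number on that last claim: here's a quick back-of-the-envelope Python sketch counting how many distinct 8-bit sRGB code values cover a one-stop exposure range around two skin reflectances. The 0.35 and 0.10 reflectance figures are my own illustrative guesses, not measured skin tones.

```python
# Rough sketch: count the distinct 8-bit sRGB code values spanning a
# one-stop range (half to double) around a given skin reflectance.
# The 0.35 / 0.10 reflectances are illustrative assumptions.

def srgb_encode(linear):
    """Standard sRGB transfer function, linear light in [0, 1] -> [0, 1]."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def codes_in_range(lo, hi):
    """Number of 8-bit code values spanning a linear-light interval."""
    return round(255 * srgb_encode(hi)) - round(255 * srgb_encode(lo))

for label, refl in (("lighter skin", 0.35), ("darker skin", 0.10)):
    print(f"{label}: ~{codes_in_range(refl / 2, refl * 2)} code values")
```

That works out to roughly 100 code values for the lighter range versus roughly 60 for the darker one, and that's before JPEG quantization discards anything, so there's simply less tonal information in a dark face for a recognizer to work with.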
None of this means the software is racist; there are plenty of reasons to expect these problems besides a possible imbalance in the training data. Plus, I would expect that as the number of training inputs increases, the imbalance would matter less, since you'd still get sufficient diversity overall. So I'm not sure how strong an effect a larger share of black faces in the training data would have on identifying black people.
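That intuition can at least be sanity-checked with a toy model. The numpy sketch below is not real face recognition: each identity's stored template is just the average of k noisy "enrollment images" (a crude stand-in for how much data the system has for that person), group A gets four times as many images per identity as group B, and every number in it is invented.

```python
# Toy sketch of the "does imbalance wash out with more data?" question.
# Templates average k noisy images each; group A gets 4x as many images
# per identity as group B. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_ids, dim, trials = 400, 32, 2000
sample_noise, probe_noise = 1.5, 1.0

ids = rng.normal(size=(n_ids, dim))  # "true" face vectors, both groups alike

def error_rate(k):
    """Misidentification rate when each template averages k noisy images."""
    # averaging k images shrinks template noise by sqrt(k)
    templates = ids + rng.normal(scale=sample_noise / np.sqrt(k),
                                 size=(n_ids, dim))
    errors = 0
    for _ in range(trials):
        true_id = rng.integers(n_ids)
        probe = ids[true_id] + rng.normal(scale=probe_noise, size=dim)
        if np.argmin(np.linalg.norm(templates - probe, axis=1)) != true_id:
            errors += 1
    return errors / trials

for k_b in (1, 4, 16):  # group B images per identity; group A has 4x as many
    print(f"k_B={k_b:2d}: group A error {error_rate(4 * k_b):.1%}, "
          f"group B error {error_rate(k_b):.1%}")
```

Both groups improve as the data grows, and the gap between them narrows toward the floor set by probe noise, which is at least consistent with the volume-dilutes-imbalance intuition; whether training a real network behaves the same way is a separate question.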
On the other hand, if the software handles black faces less well to begin with, and the input images are lower quality due to photography and digital compression, those factors could compound and greatly exacerbate the problem: more black faces would look the same to the program.
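The compounding effect is easy to illustrate with an equally crude sketch: treat each identity as a point in feature space, add input noise as a stand-in for dark, heavily compressed photos, and watch how often a probe's nearest neighbor is somebody else. All the dimensions, counts, and noise levels here are invented.

```python
# Toy illustration of "more faces would look the same": as input noise
# grows, a probe increasingly matches the wrong gallery identity.
import numpy as np

rng = np.random.default_rng(2)
n_ids, dim = 500, 16
gallery = rng.normal(size=(n_ids, dim))  # one clean vector per identity

for noise in (0.5, 1.0, 1.5):  # clean photo -> dark, heavily compressed photo
    probes = gallery + rng.normal(scale=noise, size=gallery.shape)
    # distance from every probe to every gallery identity
    dists = np.linalg.norm(probes[:, None, :] - gallery[None, :, :], axis=2)
    wrong = (dists.argmin(axis=1) != np.arange(n_ids)).mean()
    print(f"noise {noise}: {wrong:.1%} of probes matched to the wrong person")
```

The exact percentages mean nothing, but the direction does: noisier inputs collapse distinct identities together, which is precisely the "they all look the same to the program" failure mode.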