There’s a more efficient way to do this.
Get a genetic database of all major genes across thousands of people about whom you have reliable information (how long they lived, every illness they ever had, their IQ, success, physical stats, etc.).
First, correlate each gene with all the other examples of it across the population. Analyze as follows:
If a gene has three variant alleles with frequencies of 60%, 40%, and 0.1%, you can safely eliminate the 0.1% allele right off the bat. Almost certainly (statistically, this is correct), the 0.1% minority allele is a mutation that is not helpful. If it were a beneficial mutation, it wouldn’t still be sitting at 0.1%; it’s most likely a mistake that is either neutral or deleterious. (The odds of it being beneficial are very low, so this is a safe bet.)
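The first filtering pass above is simple enough to sketch. This is a minimal illustration, not a real bioinformatics pipeline; the 1% cutoff and the allele names are assumptions chosen to match the 60/40/0.1% example.

```python
RARE_THRESHOLD = 0.01  # assumption: treat alleles under 1% frequency as likely mutations

def filter_rare_alleles(allele_freqs, threshold=RARE_THRESHOLD):
    """Keep only alleles common enough to have plausibly survived selection.

    allele_freqs: dict mapping allele label -> population frequency.
    """
    return {allele: freq for allele, freq in allele_freqs.items()
            if freq >= threshold}

# The example from the text: the 0.1% allele gets dropped immediately.
gene_x = {"A": 0.60, "B": 0.40, "C": 0.001}
print(filter_rare_alleles(gene_x))  # {'A': 0.6, 'B': 0.4}
```

A real screen would also have to handle linkage between nearby genes and recent beneficial mutations that haven’t had time to spread, which this cutoff deliberately ignores.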
Now, for things like 60/40 splits, look for commonalities across the population to try to figure out what the gene does. In some cases there won’t be anything you can measure with statistical validity; if so, just leave the gene alone.
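The kind of check described here is an association test between allele and trait. A rough sketch, assuming each person carries one of two alleles and we have one numeric trait per person (a permutation test stands in for whatever statistics a real study would use; all names and data are illustrative):

```python
import random
import statistics

def allele_trait_association(genotypes, trait_values, n_perm=2000, seed=0):
    """Does carrying allele 'B' rather than 'A' shift the trait mean?

    genotypes: list of 'A'/'B' labels; trait_values: matched list of floats.
    Returns (observed mean difference B-A, permutation p-value).
    """
    def mean_diff(gts):
        a = [t for g, t in zip(gts, trait_values) if g == "A"]
        b = [t for g, t in zip(gts, trait_values) if g == "B"]
        return statistics.mean(b) - statistics.mean(a)

    observed = mean_diff(genotypes)
    rng = random.Random(seed)
    shuffled = list(genotypes)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)  # break any real allele-trait link
        if abs(mean_diff(shuffled)) >= abs(observed):
            extreme += 1
    return observed, extreme / n_perm

# Toy data with an obvious effect: B-carriers score ~10 points higher.
genos = ["A"] * 8 + ["B"] * 8
trait = [70, 72, 71, 69, 73, 70, 72, 71, 80, 82, 81, 79, 83, 80, 82, 81]
diff, p = allele_trait_association(genos, trait)
```

The “leave it alone” rule in the text corresponds to a non-significant p-value: if shuffled labels produce differences as large as the real one, you have no measurable signal to act on.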
For other genes, we know what they do from other studies. For example, we actually know how certain immune marker genes work, and we know that variation in them is good. Giving each ‘designer baby’ a unique MHC combo - one we know is valid because it isn’t correlated with problems like allergies - would make sure our uber-babies aren’t vulnerable to disease.
We can readily guess a gene’s function if it translates to a known peptide chain or is a promoter for one - so if it codes for a different variant of myelin, that might be a good thing.
So after you do this first cleanup pass (eliminating errors and picking the better version of each allele when the benefits are measurable), you start making crops of designer-baby clones. You make them in batches: you want 100 or 1000 or so genetically almost identical babies for the next phase. The ‘almost’ is because you do shuffle those immune system genes so a single disease can’t kill them all.
Now, in the next phase, these uber-baby clones should already have a noticeable advantage over “naturals” in life. But the batches of 1000 mean you can now narrow down the specific function of those unknown 60/40 genes: different batches carry the 60% or the 40% allele according to a patterning scheme. TLDR, you wait 60 years, and now you have the data you need to determine what the next phase of improvements should be. Clone batches eliminate a lot of confounding variables, leaving you with the information you need to pick the “superior” choice for these unknown alleles. (Sometimes there wouldn’t be a clearly superior choice; nature has to make tradeoffs. But sometimes there would be.)
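The “patterning scheme” could be something like a factorial experimental design: one clone batch per combination of candidate alleles, so any two batches differing in only one gene isolate that gene’s effect. A minimal sketch with hypothetical gene names; note that a full factorial design explodes combinatorially, so a real program would need fractional designs for more than a handful of genes:

```python
from itertools import product

def assign_batch_genotypes(unknown_genes):
    """Full factorial design over candidate alleles.

    unknown_genes: dict mapping gene name -> list of candidate alleles.
    Returns one genotype dict per clone batch, covering every combination.
    """
    genes = sorted(unknown_genes)  # fix an order so batches are reproducible
    return [dict(zip(genes, combo))
            for combo in product(*(unknown_genes[g] for g in genes))]

# Two unknown 60/40 genes -> 2 x 2 = 4 batches.
batches = assign_batch_genotypes({"geneA": ["a_60", "a_40"],
                                  "geneB": ["b_60", "b_40"]})
```

Comparing the batches `{geneA: a_60, geneB: b_60}` and `{geneA: a_40, geneB: b_60}` then attributes any outcome difference to geneA alone, which is exactly the confound-elimination the paragraph describes.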
And then you do another round of editing, another batch of clones, and so on. In 1000 years, you’d have some seriously uber individuals to whom “naturals” wouldn’t hold a candle.
Not that I think any of this will happen. AI or robot cyborgs have so much more potential that this is basically a waste of time. Reasonable napkin calculations suggest an AI should be capable of near-flawless rational thought at a million times or more the thinking speed of humans. They should be so far above us in brainpower that the gap is almost the difference between us and insects. Genetically engineered humans would be smarter, but we’re talking “every one of them is as smart as a top-0.1% genius in today’s world,” which means each of them is still held back by a biological brain.