It’s good enough to beat humans at limited-domain tasks. It’s a simple and logical progression to expect “bottom up” simulations to eventually scale up by factors of thousands. We’ve done it before - the search engine Google uses today started as a single computer holding a small list of text it had found crawling the known web. A “bottom up” simulation that uses a set of interconnected nodes to model a subsystem of the brain can scale, because every other simulation in the network runs the same source code, just with a different set of data governing how it operates.
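A toy sketch of what I mean by “same source code, different data” - every node is the same class, only its loaded parameters differ, and scaling out is just instantiating more of them (the update rule and sizes here are made up for illustration):

```python
# Toy illustration: every "node" runs identical code; only its parameters differ.
import numpy as np

class Node:
    def __init__(self, weights):
        self.weights = weights          # the per-node "data"

    def step(self, inputs):
        # same update rule everywhere: weighted sum through a squashing function
        return np.tanh(self.weights @ inputs)

# A thousand subsystems, one code path, a thousand different weight sets.
rng = np.random.default_rng(0)
network = [Node(rng.normal(size=(16, 16))) for _ in range(1000)]
signal = rng.normal(size=16)
outputs = [node.step(signal) for node in network]
```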
I think that ultimately “it” will, at a minimum, let us run a system equal to human intelligence but at least a million times faster. This assumes an interconnected set of chips running at 5 GHz (for round numbers) with hollow-core optical fiber interconnects; biological neurons spike at a few hundred hertz to a few kilohertz at most, so the clock-rate ratio alone is roughly a factor of a million.
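Back-of-envelope for where that “million times” comes from - both numbers are round-figure assumptions, and the generous kilohertz-scale firing rate is mine, not a measured value:

```python
# Rough speed-up estimate: silicon clock rate vs. a generous biological firing rate.
chip_clock_hz = 5e9        # 5 GHz, per the assumption above
neuron_rate_hz = 5e3       # ~kHz-scale spiking, being generous to biology

speedup = chip_clock_hz / neuron_rate_hz
print(f"clock-rate ratio: {speedup:,.0f}x")   # -> 1,000,000x
```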
I think a human being of above-average intelligence - say a currently living aerospace engineer, doctor, etc. - who thinks the same way but a million times faster would be a being we would commonly agree is “super-intelligent”. Ask it a question, and you get a 100-page PDF with a detailed, researched answer almost instantly. Ask it to design a new wide-body jetliner or jet aircraft, and it has a preliminary design in a few hours. (The limitation would become how fast you can construct physical prototypes - obviously, even a being that thinks a million times faster can only do so much without empirical testing.)
And yeah, for such a being, putting together a starship, if there is any way at all to do so, would be a straightforward sequence of steps. Personally, I think the obvious, non-crackpot, albeit tricky-to-execute way is to produce anti-protons and anti-electrons via pair production from colliding photons (a big honking laser in space crosses some beams).
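For a sense of the threshold involved, here is the textbook two-photon (Breit-Wheeler) condition for head-on beams - the rest energies are standard values, and the idealized head-on geometry is my simplification, not a design:

```python
# Two head-on photons can make a particle/antiparticle pair when E1 * E2 >= (m*c^2)^2.
electron_rest_MeV = 0.511
proton_rest_MeV = 938.272

def partner_energy_MeV(rest_MeV, photon_MeV):
    """Minimum energy of the second photon, given the first, for pair creation."""
    return rest_MeV**2 / photon_MeV

# e.g. if one beam delivers 1 MeV photons:
print(f"{partner_energy_MeV(electron_rest_MeV, 1.0):.2f} MeV for e+/e-")      # ~0.26 MeV
print(f"{partner_energy_MeV(proton_rest_MeV, 1.0):,.0f} MeV for p/anti-p")    # ~880,000 MeV
```

The asymmetry is the point: anti-electrons are the easy part, anti-protons are where the big honking laser earns its name.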
You then fuse the anti-hydrogen in a series of steps until you reach anti-beryllium or some other solid, superconducting element.
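Rough energy bookkeeping for that step, using standard atomic masses (the actual reaction chain and losses are hand-waved here; by symmetry the antimatter numbers should match):

```python
# Mass defect of building (anti-)beryllium-9 out of nine (anti-)hydrogen atoms.
u_to_MeV = 931.494      # energy equivalent of one atomic mass unit
m_H1 = 1.007825         # u, hydrogen-1
m_Be9 = 9.012183        # u, beryllium-9

mass_defect_u = 9 * m_H1 - m_Be9
released_MeV = mass_defect_u * u_to_MeV
fraction = mass_defect_u / (9 * m_H1)
print(f"{released_MeV:.1f} MeV released per nucleus (~{fraction:.2%} of the fuel mass)")
# ~54 MeV, ~0.6% - energy you have to radiate away while building the fuel
```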
Without ever touching it, you cool it down with lasers (which is also how you manipulate it) and compress it into fuel pellets with magnetic fields. A solid superconductor is trivial to contain, as it expels external magnetic fields (the Meissner effect) and just levitates there.
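How hard is that levitation? A rough sketch: the magnetic pressure at the pellet surface just has to beat its weight per unit area. The pellet size and density below are illustrative assumptions (roughly beryllium, 1 cm tall), and Earth gravity is the worst case:

```python
import math

mu0 = 4 * math.pi * 1e-7    # vacuum permeability, T*m/A
g = 9.81                    # m/s^2
density = 1850.0            # kg/m^3, roughly beryllium
height = 0.01               # m, a 1 cm tall pellet

pressure_needed = density * g * height            # ~180 Pa
field_needed = math.sqrt(2 * mu0 * pressure_needed)
print(f"{field_needed * 1000:.0f} mT")            # ~20 mT, a very modest field
```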
So your starship is a bunch of fuel canisters with magnets in the walls. It reacts the antimatter in a big honking engine that gives very low thrust but absurd Isp. Easy.
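To put a number on “absurd Isp”, here is the plain Tsiolkovsky rocket equation with an assumed effective exhaust velocity of 0.1 c - that figure is an illustrative assumption, not a specific engine design, and relativistic corrections are ignored:

```python
import math

c = 2.998e8                     # m/s
v_exhaust = 0.1 * c             # assumed effective exhaust velocity
g0 = 9.81

print(f"Isp ~ {v_exhaust / g0:,.0f} s")           # ~3,000,000 s

def delta_v(mass_ratio):
    return v_exhaust * math.log(mass_ratio)

# A mass ratio of 3 already buys ~0.11c of delta-v (classically).
print(f"delta-v at mass ratio 3: {delta_v(3.0) / c:.2f} c")
```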
Or, we could just do fission fragment propulsion. That is a solid, almost-certain-to-work engine design that present-day humans could construct.
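A rough sense of why fission fragments are attractive as exhaust - the fragment energy and mass below are approximate, typical values for a light fission fragment:

```python
import math

MeV = 1.602e-13        # joules per MeV
u = 1.6605e-27         # kg per atomic mass unit
g0 = 9.81

fragment_energy_J = 100 * MeV      # ~100 MeV kinetic energy, light fragment
fragment_mass_kg = 95 * u          # ~95 u

v_fragment = math.sqrt(2 * fragment_energy_J / fragment_mass_kg)
print(f"fragment speed ~ {v_fragment:.2e} m/s (~{v_fragment / 3e8:.1%} of c)")
print(f"Isp ~ {v_fragment / g0:,.0f} s")    # on the order of a million seconds
```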
Either way, the limiting factor for starships is the fragile apes who have to ride them. This is why a form of AI is the only practical way to do it. A centuries-long journey is not a big deal if the crew is solid state and can drop into low-power mode for the boring part of the trip. Not to mention, the risks of interstellar travel are a lot easier to mitigate if the beings are digital. You would just launch a fleet of ships, and they would beam memory-state changes to each other by radio. As ships are destroyed by hits from interstellar dust, the surviving vehicles continue the mission. Done this way, even if only 1% of the ships survived the trip on average, it’s no more than a minor inconvenience, because copies of the beings riding them end up crowded onto the surviving vehicles.
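The redundancy math is what makes that tolerable. Treating the 1% figure as an independent per-ship survival probability (an assumption for illustration), the mission only fails if every ship is lost:

```python
# Fleet redundancy: chance that at least one ship (carrying everyone's state) arrives.
p_survive = 0.01        # the 1% per-ship figure from above
for fleet_size in (10, 100, 1000):
    p_mission_ok = 1 - (1 - p_survive) ** fleet_size
    print(f"{fleet_size:5d} ships -> {p_mission_ok:.1%} chance at least one arrives")
# 10 ships: ~10%, 100 ships: ~63%, 1000 ships: ~99.996%
```

With a thousand ships you also expect around ten survivors, so the crowding is real but survivable.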