I’ve been writing test cases for my Go code, and the test runner reports how long each run took once it finishes. On macOS and Linux the times are usually around 0.002-0.009 seconds; on Windows they range from 0.19s to 0.3s (usually closer to 0.19s). Occasionally there’s a spike past 1 second (and in one case 10 seconds), but those are rare and probably just task-scheduler SNAFUs.
The thing is, no matter how many tests I add, the Windows run time hovers around the same value, which makes me think the tests themselves aren’t actually slow; there’s just a large fixed startup overhead for some reason. What could be causing this? Is this common when benchmarking, say, C code on Windows as well (i.e., is it just Go)?
My WAG is .dll files. Do Mac/Linux generally statically link code rather than use .dylib/.so files? I thought .so files and the like were pretty common nowadays, but maybe I’m wrong and they still primarily link statically. (Go does, ultimately, rely on the C standard library, IIRC.) If so, it’s probably just the dynamic linker spinning up that makes the difference. I just want to know what could be causing it, because the difference really is striking.
All of my programs in all three environments are run from the command line, so there’s no IDE overhead in any case. A similar number of background programs are running in all three environments. The hardware for the Windows and Linux machines is identical (dual boot); the Mac is a laptop, so its hardware is a little weaker; if anything it shouldn’t be faster on hardware alone.