First, I want to emphasize that this is just my opinion. There are plenty of people (and groups) in the company that use Git, and seem to like it (some are very evangelical about it). They have different needs and priorities.
Another thing is that some of Git’s limitations are fixed/worked around with third-party tools. GitHub is great. But it’s not Git, it’s Git plus something else. Or if you need large binaries, there’s Git LFS. Again: not Git. And you still need an external server like GitHub, too.
Nothing wrong with that per se, but after a while it seems one might be better off with an all-in-one system. Git is no longer really distributed if you end up dependent on a central server for daily work, so you’ve lost its entire reason for existence. And all these systems become harder for IT to manage, especially if every group in the company uses a different subset of them.
I would say the primary thing that Git enthusiasts seem to love is the easy way to enable/disable various sets of changes (sketched in Git commands after this list). For example:
- You’re working on a few independent changes at once and want to switch between them
- You have some debug code that you don’t want to check in, and need to occasionally enable/disable
- Someone else gave you a change to try out, which you want to temporarily integrate into your tree
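In Git terms, those three map onto everyday commands; the branch, stash, and patch names below are made up:

```sh
git switch -c feature-a          # independent change #1 on its own branch
git switch -c feature-b          # independent change #2; hop between them with git switch
git stash push -m "debug hacks"  # park debug code you never want to check in
git stash pop                    # ...and bring it back when you need it
git apply ~/colleague.patch      # temporarily fold someone else's change into your tree
```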
All fine things (and all doable with more traditional systems, though maybe with a tad more friction). But there’s a little problem: with big projects, you end up spending a ton of time just recompiling things. Change a common header, recompile the whole codebase. Intolerable. At least for C/C++ projects, recompilation is probably the single biggest contributor to wasted time.
So what’s the solution? Multiple local repos (“clients” in Perforce parlance). Totally independent; they don’t step on each other at all. Because each repo keeps only the latest version of each file, and things like tools can be shared, there’s not really much hardware cost. And there’s zero developer cost, because switching changesets is literally as easy as bringing up the IDE window for the change you want to work in. No recompilation, no syncing, no reloading, no anything.
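Concretely, in Perforce that’s just one client spec per in-flight change; the workspace names here are hypothetical:

```sh
p4 client feature-a-ws           # define a workspace for change A
p4 client bugfix-b-ws            # another, with its own mapping and file tree

P4CLIENT=feature-a-ws p4 sync    # populate/refresh workspace A
P4CLIENT=bugfix-b-ws  p4 sync    # ...and B; each keeps its own build output warm
```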
You can do this in Git too, of course. But then you’ve lost the advantages of Git, and paid a higher cost. So why not just use a centralized system?
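(For completeness: Git’s closest built-in analogue is probably git worktree, which gives you several checkouts sharing one object store. You still carry the full history locally, just only once. Branch names below are made up:)

```sh
git worktree add ../feature-a feature-a   # second checkout, on branch feature-a
git worktree add ../bugfix-b  bugfix-b    # third checkout, same underlying repo
```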
> Few people have a legit need for a 1GB repo.
LOL. I mean, maybe it’s true, but for me it’s a laughable statement. I looked at the source code size for just the piece of the codebase I work on, and it comes to 12 GB. A fair amount of that is generated files and third-party files, but even without those it’s multiple gigabytes. And that’s not counting tools (compiler binaries, etc.). It’s also counting only the latest version of each file, not the entire stored history. I don’t have a good way of estimating the real size of the history, but many files have several thousand changes.
If I include the code I regularly need to look at, including code I need to touch while changing common code, it goes up to 45 GB. It’s >12 GB even if I count just .cpp and .h files. And that still doesn’t include everything.
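For what it’s worth, Perforce can put a rough number on the history size; assuming I have the flags right (and with a hypothetical depot path), something like:

```sh
p4 sizes -s //depot/mycomponent/...      # summed size of head revisions only
p4 sizes -s -a //depot/mycomponent/...   # summed size of every stored revision
```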
> Use more repos
Why? More repos are bad practice. More stuff to keep track of, more things to go wrong. A single unified view of the universe is good.
> use dependency managers
Not acceptable. Yet another point of failure, particularly if the source repo is external. Hell, I’d consider that unacceptable for security reasons alone. We need very particular versions of third-party code that stay static for long periods until a new version is needed, at which point it goes through a long qualification process. That code should be checked in alongside everything else.
> don’t store large binaries
Absurd. Binaries need just as much version control love as anything else. Perforce works great with large binaries. We have a large set of test binaries used in automated testing. They get updated regularly, because the underlying source data changed or the test generator changed. And sometimes that process itself introduces bugs, so going back in time to figure out when a test binary broke is crucial.
I just checked and the largest of these test binaries is 11 GB. All are >1 GB, and there are hundreds total. They get revved on a regular basis, generally at least annually but often more like monthly.
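That kind of archaeology is routine in Perforce; the depot path below is hypothetical:

```sh
p4 filelog //depot/tests/big_input.bin   # every revision, with change number, date, user
p4 sync //depot/tests/big_input.bin#7    # pull revision 7 back into the workspace to retest
```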
And although a full history is crucial, it would be utterly stupid to require everyone to store the complete history of each one. As I mentioned, there’s Git LFS, which I haven’t used, but AFAIK it stores the full history only on the server. It’s a fine example of how Git is insufficient on its own and requires third-party support to be useful in many cases.
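Going by its documentation (again, I haven’t used it), the LFS setup is roughly this; the pattern and file names are made up:

```sh
git lfs install                    # one-time: wire up the LFS hooks
git lfs track "*.bin"              # route matching files through LFS pointer files
git add .gitattributes tests/big_input.bin
git commit -m "track test binaries via LFS"
```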
> I can probably find a lot of sub-optimal practices in your organization
This is where I start to rant. In short: fuck that noise. That’s a lot of Steve Jobs “you’re holding it wrong” bullshit.
You already mentioned a couple of examples of things you apparently consider bad practice, but they aren’t bad practice at all. They’re just bad practice for Git, because Git has some limitations. For many devs those limitations aren’t relevant; for some, the workarounds aren’t too costly; but for others they are. Again, going back to large binary support: the very fact that tools exist to make it kinda work illustrates that it’s a completely valid use case.
Microsoft uses Git internally. Except they don’t, really: they use VFS for Git (aka GVFS, since superseded by Scalar). To be honest I haven’t tried it, so I can’t really give my impression of it, beyond pointing out that plain Git was so totally unsuitable that they had to virtualize the entire filesystem to make it work, and keep a whole team of people supporting it.
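(For the curious: Scalar ships with recent Git releases, and the advertised entry point is a one-liner that sets up a partial clone with sparse checkout. URL hypothetical, and again, I haven’t tried it:)

```sh
scalar clone https://example.com/big/repo.git
```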
At any rate, this is a bit of a scattershot response; I don’t have the time or inclination to write a whole essay about it. But in summary: Git needs extra support to make it suitable for large and diverse codebases, which both makes it not-really-Git and eliminates much of its reason for existence. What’s left is a mildly confusing command line interface with the advantage that a lot of newbie devs are nevertheless familiar with it, which to be fair isn’t a small thing.