My previous company went this direction. I wouldn't recommend it.
Say you version each module and pull in specified versions. It works fine right up until two modules each pull in a different version of a third module. In practice, you end up updating multiple modules at once to avoid conflicts, which in turn can mean updating other teams' code.
It also turns out some tools like Maven don't prevent these conflicts by default. You can end up digging through pom.xml files in Eclipse, trying to add exclusions or figure out which repo is dragging them in.
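For reference, an exclusion on a transitive dependency in Maven looks roughly like this (the artifact names are made up for illustration; `mvn dependency:tree -Dverbose` is the usual way to find who's dragging the conflict in):

```xml
<dependency>
  <groupId>com.example</groupId>
  <artifactId>module-a</artifactId>
  <version>2.3.0</version>
  <exclusions>
    <!-- module-a drags in its own copy of shared-lib; exclude it here so
         the version declared elsewhere in the pom wins -->
    <exclusion>
      <groupId>com.example</groupId>
      <artifactId>shared-lib</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```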
Yeah, it sure seems like Maven, by design, avoids dependency locking and version-resolution configuration, but in turn it becomes kind of a bear to manage once you get fairly large.
I've switched to Gradle, and one of the first things I usually do is flip on dependency locking, and then go so far as to reject and/or flag anything that doesn't follow a clear semantic versioning scheme.
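A minimal sketch of that setup in a Gradle Kotlin DSL build script — `dependencyLocking` and `componentSelection` are real Gradle APIs, but the semver regex is my own illustration, not a Gradle built-in:

```kotlin
// build.gradle.kts
dependencyLocking {
    // Opt every configuration into lock files; run
    // `./gradlew dependencies --write-locks` to generate them.
    lockAllConfigurations()
}

configurations.all {
    resolutionStrategy {
        componentSelection {
            all {
                // Reject anything that isn't plain MAJOR.MINOR.PATCH
                // (illustrative check; tighten or loosen to taste).
                if (!candidate.version.matches(Regex("""\d+\.\d+\.\d+"""))) {
                    reject("non-semver version: ${candidate.version}")
                }
            }
        }
    }
}
```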
Gradle's documentation can be a little overwhelming, but there's a lot more to these topics than developers usually realize:
Many companies would benefit from real dependency locking and from making sure they have reproducible builds. It's tricky, but it can be a lot easier than "containerization", which I've often heard touted as the solution. (Containers are useful, but you should fix your CI separately.)
Also, the way I usually see containerization implemented (just a Dockerfile, rather than deploying the same image everywhere) essentially kills the reproducibility part.
That's what semver is for, right?
Breaking changes go in a major, same major means that the latest is always compatible.
You'll have the same issue with microservices if you introduce breaking changes.
That would have let you know when to expect the impact, but not eliminate the impact itself.
Occasionally people versioned their module's APIs, which seemed like a cleaner way to handle module updates, since you don't have to update everything at once. They only went to that effort once they realized they'd otherwise have to update thousands of callers.
Yeah, but even with semver library owners have problems. If you need to make a critical change (e.g. a security update), you can either (a) wait until everyone does a version bump on their own schedule, or (b) do the version bump yourself, which means you're deploying every app that relies on your library. (a) might not be feasible for critical things, and (b) means that you might be deploying changes that the app isn't ready for yet.
For example, if master is v1.5.0, and I'm an app that uses v1.1.0, then if the library owner bumps to 1.5.1 for a critical update, I need to go from 1.1.0 -> 1.5.1, which might involve changes I'm not ready for yet. I better have phenomenal integration testing to make sure the update is safe to do.
> For example, if master is v1.5.0, and I'm an app that uses v1.1.0, then if the library owner bumps to 1.5.1 for a critical update, I need to go from 1.1.0 -> 1.5.1, which might involve changes I'm not ready for yet. I better have phenomenal integration testing to make sure the update is safe to do.
That could be solved by backporting security fixes to version 1.1.
Of course at a certain point you should deprecate older versions of your package. At which point it's the client's responsibility to upgrade (like for any third party library they would be using).