# Guidance for Testing Dependent Modules

Whenever you're dealing with multiple modules, potentially developed by different teams (and especially in a [[Microservice Architecture]]), the number of integration problems you'll experience depends largely on your deployment and testing strategies. Juggling incompatible versions and cross-team communication are only some of the challenges that will plague your project unless you wrangle them from the beginning. Here are some guidelines.

- Use [[Behavior-Driven Development]]. Each consumer service should specify its expectations to its producers in the form of executable specifications.
    - A set of modules is only considered ready for production if all expectations of all modules are satisfied by all other modules
    - These tests also future-proof old APIs – if a new feature (or a fix) breaks an old API, at least one specification test should fail
    - This also makes your module more reusable, as it has two clients from the beginning (the actual client and a test suite)
- Avoid API versioning
    - As soon as you have multiple versions in the field, you potentially have to apply your bug fixes multiple times (once for each supported version, if the bugs are at the API level)
    - Try to follow [[Postel's Law]] instead by extending old APIs. This requires you to be wary of the deserializer you pick, as many fail when confronted with an unexpected field
    - If you absolutely have to use versioning, use [Semantic Versioning](https://semver.org) to better communicate breaking changes
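A consumer's executable specification can be sketched as a small contract check the producer runs in its own test suite. This is only an illustration, not a real contract-testing framework: the endpoint shape, the field names, and `check_user_contract` are all hypothetical.

```python
# Sketch of a consumer-side executable specification for a hypothetical
# producer endpoint returning user data. All names here are illustrative.

def check_user_contract(response: dict) -> list[str]:
    """Return a list of violated expectations (empty means the contract holds)."""
    violations = []
    # The consumer declares only the fields it actually relies on;
    # extra fields in the response are deliberately tolerated.
    expected = {"id": int, "name": str, "email": str}
    for field, field_type in expected.items():
        if field not in response:
            violations.append(f"missing field: {field}")
        elif not isinstance(response[field], field_type):
            violations.append(f"wrong type for {field}: {type(response[field]).__name__}")
    return violations

# Simulated producer response; in practice this would come from the real service.
producer_response = {"id": 42, "name": "Ada", "email": "ada@example.com", "role": "admin"}

assert check_user_contract(producer_response) == []   # contract satisfied
assert check_user_contract({"id": "42"}) != []        # violations detected
```

The producer runs this check against every deployment candidate; if a change breaks a consumer's declared expectations, the build fails before the change reaches production.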
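The deserializer concern above can be shown with a small sketch: a strict parser breaks as soon as the producer extends the API with a new field, while a tolerant reader (in the spirit of Postel's Law) keeps working. The `User` type and field names are assumptions for illustration.

```python
import json
from dataclasses import dataclass, fields

@dataclass
class User:
    id: int
    name: str

def strict_parse(payload: str) -> User:
    # Fails with TypeError as soon as the payload contains an unknown field.
    return User(**json.loads(payload))

def tolerant_parse(payload: str) -> User:
    # Tolerant reader: keep only the fields we know about, ignore the rest.
    data = json.loads(payload)
    known = {f.name for f in fields(User)}
    return User(**{k: v for k, v in data.items() if k in known})

# The old API extended with a new field, as Postel's Law suggests:
extended = '{"id": 1, "name": "Ada", "team": "payments"}'

try:
    strict_parse(extended)
    broke = False
except TypeError:
    broke = True  # the strict consumer broke on a backwards-compatible change

assert broke
assert tolerant_parse(extended) == User(id=1, name="Ada")  # still works
```

Many serialization libraries offer a switch for this behavior (e.g. ignoring unknown properties); make sure it is enabled for consumers that should survive API extensions.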
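If you do end up versioning, Semantic Versioning makes breaking changes mechanically detectable: only a major-version bump signals one. A minimal sketch, assuming three-part `MAJOR.MINOR.PATCH` versions:

```python
# Under Semantic Versioning (https://semver.org), an upgrade is breaking
# only if the major version increases. Helper name is illustrative.

def is_breaking_upgrade(old: str, new: str) -> bool:
    old_major = int(old.split(".")[0])
    new_major = int(new.split(".")[0])
    return new_major > old_major

assert is_breaking_upgrade("1.4.2", "2.0.0") is True   # major bump: breaking
assert is_breaking_upgrade("1.4.2", "1.5.0") is False  # minor bump: additive
```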