In C++, I follow a playbook for keeping all hell from breaking loose:
1) Write a googletest
2) Write a googlebenchmark
3) Run all unit tests under AddressSanitizer, ThreadSanitizer, and UndefinedBehaviorSanitizer
4) Tidy up with clang-format
5) Run cppcheck
So I feel pretty confident I'm not doing something braindead if I can get this stuff through CI.
But for Python, I don't really have a good idea of when I'm doing something that'll cause me agonizing pain in the future. The only tool I use is flake8, which is awesome, but it can't catch memory leaks or show me performance profiles.
What strategies do you adopt (and what tools do you use) to keep all hell from breaking loose in large Python projects?
Not exactly the 'microservices' approach, but similar ideas.
One of the most useful things related to this is to focus on interface design. It's easy to scrap a bad implementation and re-implement the same interface later; it's much harder to fix a bad interface that you're using all over the place. Making some implementations pluggable up-front will also make it easier to swap things out later.
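As a rough sketch of what that looks like in Python (the `Storage`/`InMemoryStorage` names here are made up for illustration): callers depend only on an abstract interface, so the implementation behind it can be thrown away and rewritten without touching the call sites.

```python
from abc import ABC, abstractmethod


class Storage(ABC):
    """The interface callers depend on; implementations are swappable."""

    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

    @abstractmethod
    def load(self, key: str) -> str: ...


class InMemoryStorage(Storage):
    """Throwaway implementation -- easy to replace without touching callers."""

    def __init__(self) -> None:
        self._data = {}

    def save(self, key: str, value: str) -> None:
        self._data[key] = value

    def load(self, key: str) -> str:
        return self._data[key]


def process(storage: Storage) -> str:
    # Application code only ever sees the Storage interface, so swapping
    # InMemoryStorage for, say, a database-backed class later is painless.
    storage.save("greeting", "hello")
    return storage.load("greeting")
```

If you later need a Redis- or SQL-backed version, it's one new subclass and zero changes to `process`.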
Another thing: what ended up causing the most pain in the long run was building too much functionality in directly instead of leaving it up to plugins. Plugins can easily be enabled or disabled to compose exactly the functionality you need; the alternative is tons of code and tons of configuration options to handle every little corner case.
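A minimal version of that idea in Python (registry, decorator, and plugin names here are all hypothetical): features register themselves into a table, and each deployment enables only the ones it wants, instead of the core growing a config flag per corner case.

```python
# Minimal plugin registry: each feature registers itself under a name,
# and behavior is composed from whichever plugins the config enables.
PLUGINS = {}


def plugin(name):
    """Decorator that registers a text-processing step under a name."""
    def register(func):
        PLUGINS[name] = func
        return func
    return register


@plugin("strip")
def strip_whitespace(text: str) -> str:
    return text.strip()


@plugin("upper")
def uppercase(text: str) -> str:
    return text.upper()


def run_pipeline(text: str, enabled: list) -> str:
    # Core stays small: it just chains the enabled plugins in order.
    for name in enabled:
        text = PLUGINS[name](text)
    return text
```

Disabling a feature is then a config change ("drop 'upper' from the list"), not a code change.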
As far as lower-level tools go, the 'coverage' tool integrated with your test suite is a must-have.
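For example (the `add`/`test_add` names are just for illustration), coverage.py wraps your existing test run and reports which lines the suite never executed:

```python
# calc.py -- hypothetical module under test
def add(a: int, b: int) -> int:
    return a + b


# test_calc.py -- a plain test function; run the suite under coverage with:
#   coverage run -m pytest
#   coverage report --show-missing   # lists lines your tests never touched
def test_add() -> None:
    assert add(2, 3) == 5
```

Wiring that `coverage report` step into CI makes untested new code visible before it merges.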