As software moves into maturity, it becomes a delicate art to add features and fix bugs without screwing up stuff that already works.
I’ve worked on three massive, 10+ year-old projects (CorelDRAW, Microsoft Access, and ERDAS Imagine) and let me tell you – the older the code base, the tougher it gets.
In very mature projects, maintainability starts to become more important than optimization, and avoiding new bugs can trump fixing old ones. You’ll even see libraries that deliberately keep old bugs in place, because consumers up the chain rely on the buggy behavior.
If you’re a programmer, before you change any function that is used elsewhere in the system (a “core” function), check each and every place it is called and understand the implications of your change. If there’s any doubt, either step through the code or ask someone more familiar with it.
A safer alternative is to write a new function that caters to your specific needs. This may seem wasteful at first, but it pays off in the long run: the new function usually has simpler logic and a shorter dependency chain.
Adding parameters and conditional branches to an existing function can introduce subtle behavioral changes that cascade to other callers, causing bugs that are difficult to track down and even harder to fix.
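To make the contrast concrete, here’s a small sketch (the function names and the tax example are made up, not from any of the projects above) of extending a shared “core” function with a flag versus writing a purpose-built one:

```python
# Risky: every existing caller of format_price now depends on the
# default behavior of the new include_tax flag. A later tweak to
# that branch silently affects all of them.
def format_price(amount, currency="USD", include_tax=False, tax_rate=0.0):
    if include_tax:
        amount = amount * (1 + tax_rate)
    return f"{currency} {amount:.2f}"

# Safer: a new function with its own narrow contract, built on top
# of the old one. Existing callers of format_price are untouched.
def format_price_with_tax(amount, tax_rate, currency="USD"):
    return format_price(amount * (1 + tax_rate), currency)
```

The new function costs a few extra lines, but its behavior can change without anyone auditing the call sites of the original.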
That said, there is a time and place for refactoring, but it needs to be done carefully: independently of other changes, and extremely well tested.
Automated testing and regression tests help fight this war, but coverage is never 100%.
I’ve found it’s often better to err on the cautious side and try not to break things in the first place :)