In the mid-90s I was privileged to be on a team building a service-oriented middleware architecture (Datagate), which I have mentioned before. We used an underlying library that implemented XDR (External Data Representation), for which we had no unit tests. Since much of our software was built on this and a couple of other core technologies, it was important that they be as solid as possible: make the foundation solid, and the rest will follow. Suspecting there were bugs lurking in the XDR library, I decided to build a suite of unit tests against it. Using a coverage tool, I wrote tests that exercised the entire library, and we discovered that the suite would not run on one of our twenty or so supported platforms. We fixed that bug in the library.
Soon after, one of the nagging bugs in our Directory Service went away; it turned out the two were related. Building the test suite took some time, but not that much, and it saved us time supporting the Directory Service. It probably saved application and service developers time as well, though we never measured that.
From then on, when it came to the value of testing, I told everyone who would listen, "We don't have time to skip that step." And it's still true today. Can you afford the extra time it takes to skip testing?