"Premature optimization is the root of all evil" is something almost all of us have heard or read. What I am curious about is what kinds of optimization are not premature, i.e. at every stage of software development (high-level design, detailed design, high-level implementation, detailed implementation, etc.), what is the extent of optimization we can consider without it crossing over to the dark side?
When you're basing it off of experience? Not evil. "Every time we've done X, we've suffered a brutal performance hit. Let's plan on either optimizing or avoiding X entirely this time."
When it's relatively painless? Not evil. "Implementing this as either Foo or Bar will take just as much work, but in theory, Bar should be a lot more efficient. Let's Bar it."
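To make the "painless" case concrete (my illustration, not from the answer): picking a hash-based container over a list for membership checks costs the same effort to write, but one of them scales far better.

```python
# Hypothetical illustration: both versions take the same effort to write,
# but set membership is an O(1) hash lookup while list membership is an
# O(n) linear scan. Same work to implement, "Bar" is just more efficient.
banned_list = ["alice", "bob", "carol"]
banned_set = {"alice", "bob", "carol"}

def is_banned_slow(user):
    return user in banned_list   # scans the whole list in the worst case

def is_banned_fast(user):
    return user in banned_set    # single hash lookup

print(is_banned_slow("bob"), is_banned_fast("bob"))  # True True
```

Nothing about the fast version is harder to read or maintain, which is exactly what keeps this kind of optimization out of "evil" territory.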
When you're avoiding crappy algorithms that will scale terribly? Not evil. "Our tech lead says our proposed path selection algorithm runs in factorial time; I'm not sure what that means, but she suggests we commit seppuku for even considering it. Let's consider something else."
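For a sense of why factorial time earns that reaction (a sketch of my own, not the tech lead's actual algorithm): a brute-force path selection that tries every ordering of the stops has n! candidates to check.

```python
# Hypothetical sketch of a factorial-time path selection: brute force
# over every permutation of the points. The search space is n!, which
# is hopeless beyond roughly a dozen points.
from itertools import permutations
from math import factorial

def brute_force_order(points, dist):
    # Try all n! orderings and keep the one with the smallest total distance.
    return min(
        permutations(points),
        key=lambda order: sum(dist(a, b) for a, b in zip(order, order[1:])),
    )

# How fast the search space grows:
for n in (5, 10, 15):
    print(n, factorial(n))  # 120, then ~3.6 million, then ~1.3 trillion
```

Ruling this out at design time is not premature optimization; it is just not choosing an algorithm that can never scale.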
The evil comes from spending a whole lot of time and energy solving problems that you don't know actually exist. When the problems definitely exist, or when the phantom pseudo-problems can be solved cheaply, the evil goes away.
Edit: Steve314 and Matthieu M. raise points in the comments that ought to be considered. Basically, some varieties of "painless" optimizations simply aren't worth it, either because the trivial performance upgrade they offer isn't worth the code obfuscation, because they duplicate enhancements the compiler is already performing, or both. See the comments for some nice examples of too-clever-by-half non-improvements.
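A classic specimen of that kind of non-improvement (my example, not one from those comments): replacing a multiplication with a bit shift. Compilers and JITs do this strength reduction themselves, so all you buy is harder-to-read code.

```python
# Hypothetical "too-clever" micro-optimization: the shift computes the
# same value as the multiplication, but obscures intent, and decent
# compilers already perform this rewrite where it actually matters.
def double_clear(x):
    return x * 2

def double_clever(x):
    return x << 1   # same result, harder to read, no real win

print(double_clear(21), double_clever(21))  # 42 42
```

This is the "painless" optimization that isn't: it costs readability and delivers nothing the toolchain wasn't already doing.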