This is a tricky question. The most basic estimate is to plug the input limits into the O-notation and compute the resulting number of operations: if it's up to about 10^7, the solution will probably run within 1 second (the old rule of thumb of 10^6 is outdated, since modern machines are faster). For example, with n = 10^5, an O(n log n) algorithm does roughly 1.7*10^6 operations, which is comfortable, while O(n^2) means 10^10 and has no chance. That estimate doesn't always work, though, because there's also a constant factor.
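If you want to calibrate that rule of thumb on your own machine, a quick throwaway micro-benchmark like the sketch below (my own code, not something from the original discussion) does the job: time a loop of simple operations and see how far the real speed is from the estimate.

```cpp
#include <chrono>
#include <cstdio>

int main() {
    // Hypothetical micro-benchmark: time 10^7 simple additions to sanity-check
    // the "about 10^7 basic operations per second is safe" estimate.
    const long long N = 10000000;
    volatile long long sum = 0; // volatile so the compiler keeps the loop
    auto start = std::chrono::steady_clock::now();
    for (long long i = 0; i < N; ++i) sum += i;
    auto finish = std::chrono::steady_clock::now();
    std::printf("%.3f s for %lld additions\n",
                std::chrono::duration<double>(finish - start).count(), N);
    return 0;
}
```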
Different algorithms have different constants; dynamic programming, in general, has a really small one. Some operations are slow: modulo (which slows down the mentioned DP significantly), if-else branches, the standard minimum/maximum functions, etc. Others, like bitwise operations, are really fast and a good way to optimize a program (they can be used for faster min/max, for example). Data structures can also have a large constant. Limiting ourselves to trees: red-black trees (STL set<>/map<>) are the heaviest, segment trees come next (first with lazy propagation, then plain ones), and BITs (Fenwick trees) have the best constant. The difference is so large that you can sometimes replace a map<> with a BIT, accept an additional log factor in the complexity, and still improve the actual running time!
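To make those two remarks concrete, here is a minimal sketch of a Fenwick tree for prefix sums, together with the kind of branchless minimum that bitwise operations make possible. The struct and function names (Fenwick, update, query, fastMin) are mine, chosen purely for illustration.

```cpp
#include <vector>

// Minimal Fenwick tree (BIT) for prefix sums over positions 1..n.
struct Fenwick {
    std::vector<long long> bit; // bit[i] covers a range of length (i & -i) ending at i
    Fenwick(int n) : bit(n + 1, 0) {}

    // add v at position i (1-indexed)
    void update(int i, long long v) {
        for (; i < (int)bit.size(); i += i & -i) bit[i] += v;
    }

    // sum of positions 1..i
    long long query(int i) {
        long long s = 0;
        for (; i > 0; i -= i & -i) s += bit[i];
        return s;
    }
};

// Branchless minimum via bitwise operations: if a < b the mask is all ones
// and the result is a, otherwise the mask is zero and the result is b.
inline int fastMin(int a, int b) { return b ^ ((a ^ b) & -(a < b)); }
```

The whole structure is just a flat array and two tiny loops, which is exactly why its constant is so much smaller than a balanced tree's.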
Then there are small tricks, like adding a condition that lets the code skip redundant operations on most test data, or noticing that it's hard to construct test data for which some assumption doesn't hold. But that's another topic… the point is that sometimes these tricks (they can be in your code even if you don't know about it :D) can noticeably improve your constant factor, or even the effective complexity.
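As a sketch of the first kind of trick, consider a brute-force pair search over a sorted array (a made-up mini-problem, not from the original post): two early-exit conditions let it skip most of the work on typical inputs, even though the worst case is still O(n^2).

```cpp
#include <vector>

// Made-up example: does a sorted array contain two elements summing to target?
bool hasPairWithSum(const std::vector<int>& a, long long target) {
    for (size_t i = 0; i < a.size(); ++i) {
        // The array is sorted, so once 2*a[i] already exceeds the target,
        // no pair using this or any later index can work - stop the whole search.
        if (2LL * a[i] > target) return false;
        for (size_t j = i + 1; j < a.size(); ++j) {
            long long s = (long long)a[i] + a[j];
            if (s == target) return true;
            if (s > target) break; // larger j only makes the sum bigger
        }
    }
    return false;
}
```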
In short, it all comes down to experience. The more algorithms you code and the more you note how fast they actually ran, the better you'll be able to estimate what will pass, even before writing any code.