While you can't determine the running time of an algorithm without doing mathematical analysis, empirical measurements can give you a reasonable idea of how the running time of the algorithm (or rather of the program) behaves.
For example, if you have n measurements (x1, y1), (x2, y2), ..., (xn, yn), where xi is the size of the input and yi is the running time of the program on an input of that size, then you can plot the points to see whether the running time looks polynomial. In practice it often does. However, it's hard to read the exponent off such a plot.
To find the exponent, you can compute the slope of the line that best fits the points (log xi, log yi). This works because if y = C*x^k + lower-order terms, then, since the term C*x^k dominates, we expect log y ≈ k*log x + log C; that is, the log-log relation is linear whenever the "original" relation is polynomial. (Whenever you see a straight line in a log-log plot, your running time is polynomial, and the slope of the line tells you the degree of the polynomial.)
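As an illustration of how such measurements might be collected, here's a rough sketch in Octave/MATLAB (the input sizes and the use of sort as the program under test are just assumptions made for the example):

sizes = round(logspace(1, 4, 10));   % input sizes from 10 up to 10000
times = zeros(size(sizes));
for i = 1:length(sizes)
  data = rand(1, sizes(i));          % hypothetical input of size sizes(i)
  tic;
  sort(data);                        % stand-in for the program being measured
  times(i) = toc;                    % elapsed time in seconds
end
plot(log(sizes), log(times));        % log-log plot of the measurements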
Here's a plot of the quadratic function y(x)=x^2:

And here's the corresponding log-log plot:

We can see that it's a line with slope 2 (in practice you would compute this using, for example, linear least squares). This is expected because log y(x) = 2 * log(x).
The code I used:
x = 1:1:100;                     % input sizes
y = x.^2;                        % "measured" running times (exact quadratic here)
figure; plot(x, y);              % plot on linear axes
figure; plot(log(x), log(y));    % log-log plot: a straight line with slope 2
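And here's a sketch of the least-squares slope estimate mentioned above, using polyfit on the log-transformed data (it assumes x and y are the vectors from the snippet just above):

p = polyfit(log(x), log(y), 1);   % fit log(y) approx. p(1)*log(x) + p(2)
slope = p(1)                      % estimated degree of the polynomial (2 here)

Under the model above, the intercept p(2) is an estimate of log C, though in practice the constant is even noisier than the slope.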
In practice the data looks messier, and the fitted slope can (or should) only be used as a rule of thumb when nothing else is available.
I imagine there are many other tricks for learning about a program's behavior from running-time measurements. I'll give others a chance to share their experience.