I despise significant figures and that style of error reporting. If an experiment gives a value of 1.673 [unit] with a Gaussian error of 0.439 [unit], then that is what you report! Rounding the value to the decimal place of the (itself usually rounded!) error introduces a random bias to the mean on the order of the error. Rounding the error itself introduces a random bias to the variance! And then what happens when you collect the results of those experiments to produce aggregate data? The statistics on the mean and variance are off.
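A quick Monte Carlo sketch of the effect (assumptions mine: the 1.673 ± 0.439 numbers from above taken as the true parameters, sig-fig convention of rounding the error to one significant figure and the value to the matching decimal place):

```python
import random
import statistics

random.seed(0)

TRUE_MEAN, TRUE_SIGMA = 1.673, 0.439   # hypothetical experiment from the text
N = 100_000                            # many independent repetitions

# Raw, unrounded results each experiment would actually obtain.
raw = [random.gauss(TRUE_MEAN, TRUE_SIGMA) for _ in range(N)]

# Significant-figures style: error 0.439 rounds to 0.4 (one sig fig),
# so each reported value is rounded to one decimal place.
reported_sigma = 0.4
reported = [round(x, 1) for x in raw]

# Aggregating the rounded reports: the variance picks up an extra
# quantization term (~ h^2/12 for grid spacing h = 0.1), and the
# pooled per-experiment error is quoted as 0.4 instead of 0.439.
print(f"raw mean      {statistics.mean(raw):.5f}")
print(f"rounded mean  {statistics.mean(reported):.5f}")
print(f"raw stdev     {statistics.stdev(raw):.5f}")
print(f"rounded stdev {statistics.stdev(reported):.5f}")
print(f"quoted error  {reported_sigma}  vs true {TRUE_SIGMA}")
```

The per-experiment quantization is small here because the rounding grid (0.1) is finer than the error, but it is a systematic inflation of the aggregate variance, and the quoted 0.4 understates the true 0.439 by about 9% in every downstream weighted average.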
Pretty much the only time I'll accept decimal-style error reporting is if it's a direct readout from a digital measurement device. Even then, the device usually has a rated error that is larger than its minimum readout resolution, and that rated error should be used instead of the readout decimal place.
/rant