Dear Ed (if I may),
Thanks for reading my essay and for your comment and question! I will certainly return the favour some time next week, as I'll have a bit more time by then. Your worry about cumulative errors is very relevant, and I have a long story to tell about that! In fact, I came to the philosophical argument in my essay mostly by thinking about accumulating numerical error. I tried to keep the essay as nontechnical as possible, but I may have gone a bit too far in that direction.
I take it that there are two dimensions to accumulating errors that are relevant here. The first is, as you suggest, essentially numerical. These days, numerical solvers have become quite good at controlling accumulating numerical error. Part of it is that we use higher-order methods (say, for ODEs and PDEs) now that we have more computational power, and part of it is that we've become better at designing automated adaptive discretization methods that lead to more accurate results. But numerical interpolation methods have become really good too, so we can interpolate discrete solutions to find what I call the residual in the paper, and we can do this "live" to gauge the numerical error as we go. If you'd like technical details, please email me (nfillion@sfu.ca) and I'll be happy to send you a textbook chapter I wrote on this!
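To give a flavour of what I mean by the residual, here's a small Python sketch (my own illustration, not from the essay; the forward Euler solver, the cubic-spline interpolant, and the test problem x' = -x are all assumptions I've picked for simplicity). We interpolate the discrete solution, differentiate the interpolant, and plug it back into the ODE: the mismatch is the residual, i.e., the defect by which our computed solution fails to satisfy the equation we set out to solve.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative test problem: x'(t) = -x(t), x(0) = 1 (exact solution exp(-t)).
f = lambda t, x: -x

# Crude solver (forward Euler) on [0, 1], chosen only to keep the sketch short.
h = 0.01
ts = np.arange(0.0, 1.0 + h, h)
xs = np.empty_like(ts)
xs[0] = 1.0
for i in range(len(ts) - 1):
    xs[i + 1] = xs[i] + h * f(ts[i], xs[i])

# Interpolate the discrete solution and differentiate the interpolant.
s = CubicSpline(ts, xs)
ds = s.derivative()

# Residual: how badly the interpolated solution fails to satisfy the ODE.
# The computed solution exactly solves the nearby problem x' = -x + residual.
tt = np.linspace(0.0, 1.0, 500)
residual = ds(tt) - f(tt, s(tt))
max_resid = float(np.max(np.abs(residual)))
```

The point of the backward-error reading is that `max_resid` tells us our computed trajectory is the exact solution of a slightly perturbed equation, and we can monitor that perturbation as we go.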
The second dimension is whether accumulating numerical error will have a big impact on the quality of computed solutions, and that has to do with sensitivity under perturbations. Here there's no reason not to treat numerical error as a perturbation, since very stable systems will quickly damp it while sensitive systems will magnify it. In the approach to numerical analysis I favour, the so-called backward-error analysis, sensitivity is measured by a quantity called the condition number. And here we can address your question about the composition f(f(f(x))). There is a nice theorem by Deuflhard about the submultiplicativity of condition numbers under composition: if each function is only moderately sensitive, the composition will be too. Of course, when there's high sensitivity (as, e.g., in chaotic systems or near singularities), no algorithm will save you, however accurate it is. But this, at least, is something we can reliably detect thanks to residual analysis. And if the stability of a problem is such that it amplifies numerical errors, then it would also amplify any other kind of perturbation. So, as long as our numerical error is less than what comes from our modelling practice or from the system's environment, the algorithm isn't to blame for predictive error!
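Here's a tiny numerical illustration of that composition point (again my own toy example, not Deuflhard's formulation). For a smooth scalar function, the relative condition number is kappa(x) = |x f'(x) / f(x)|, and by the chain rule the condition number of a composition is (at most) the product of the condition numbers of the stages, evaluated along the trajectory of intermediate values. With f(x) = x^2, each stage has kappa = 2, so f(f(f(x))) = x^8 has kappa = 8 = 2 * 2 * 2: three well-conditioned stages compose into a well-conditioned function.

```python
# Relative condition number of a scalar function: |x f'(x) / f(x)|.
def cond(f, df, x):
    return abs(x * df(x) / f(x))

f = lambda x: x ** 2
df = lambda x: 2 * x

x0 = 1.5
k1 = cond(f, df, x0)  # condition of one stage: 2 for any x != 0

# Threefold composition f(f(f(x))) = x ** 8.
comp = lambda x: x ** 8
dcomp = lambda x: 8 * x ** 7
k3 = cond(comp, dcomp, x0)

# Product of stage-wise condition numbers along the intermediate values.
product = cond(f, df, x0) * cond(f, df, f(x0)) * cond(f, df, f(f(x0)))
```

For scalar smooth functions the bound holds with equality (`k3 == product`); in higher dimensions, with norms involved, one only gets the inequality, which is what "submultiplicativity" refers to.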
Finally, to address your last point, I think usefulness and accuracy are closely related. And I think there's a principled mathematical reason for that, which goes far beyond pragmatic reassurance. I'll comment on that later in your thread, after I read your paper!
Cheers, and talk to you next week!
Nic