Dear Ian T. Durham,
I loved your discussion of infinitesimals. Indeed, those exotic objects evaded mathematicians for three centuries, fueling a long controversy which is still open!
You cite Robinson's work on "Non-standard analysis"; but, as you must know, this modern analysis has received criticism from other mathematicians. For instance, Connes is pursuing a rigorous account of infinitesimals through non-commutative geometry, motivated by certain limitations of non-standard analysis.
For lack of space I did not discuss those interesting issues in my Essay. I first met the problem of infinitesimals when trying to obtain a rigorous justification for the neglect of second-order corrections in the quanta n^(plusminus) in the canonical form when deriving the results of classical physics. For instance, consider an elementary process describing the transport of energy between systems A and B (page 4 in my Essay). If the transport of energy is infinitesimal, then the terms quadratic in epsilon vanish and one recovers the classical laws.
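To make the rule explicit (this is a generic sketch in my own notation, not the exact expression on page 4 of my Essay): for any smooth function f of the energy, a transfer epsilon gives

\[ f(E + \epsilon) = f(E) + f'(E)\,\epsilon + \tfrac{1}{2}\,f''(E)\,\epsilon^{2} + O(\epsilon^{3}), \]

and the classical law is recovered when the quadratic and higher terms are taken to vanish identically, not merely to be small.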
Some time ago I named this the "epsilon-calculus", although currently it is only a rule of thumb for our scientific applications and nothing that mathematicians would endorse. Somewhat as Max Planck used the concept of the infinitesimal in his books on theoretical mechanics, although mathematicians considered his concept devoid of mathematical meaning.
The problem lies in finding an object epsilon different from zero but with (epsilon)^2 equal to zero. There is no real or complex number with those properties! Robinson characterizes infinitesimals using the new category of non-standard numbers, but if epsilon is a non-standard infinitesimal, then (epsilon)^2 is not zero, only a higher-order infinitesimal. Another possibility could be the dual numbers and the Grassmann numbers, whose squares are zero, but I have not studied this enough.
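As a small illustration of why the dual numbers look attractive here, the following sketch (purely illustrative, with names of my own choosing) implements numbers of the form a + b*eps with the multiplication rule that forces (eps)^2 = 0 exactly:

    # Minimal sketch of dual numbers a + b*eps, where eps^2 = 0 by construction.
    class Dual:
        def __init__(self, real, eps=0.0):
            self.real = real  # ordinary real part
            self.eps = eps    # coefficient of the nilpotent unit eps

        def __add__(self, other):
            return Dual(self.real + other.real, self.eps + other.eps)

        def __mul__(self, other):
            # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
            return Dual(self.real * other.real,
                        self.real * other.eps + self.eps * other.real)

        def __repr__(self):
            return f"{self.real} + {self.eps}*eps"

    eps = Dual(0.0, 1.0)
    print(eps * eps)  # prints "0.0 + 0.0*eps": the square vanishes identically

Dual numbers underlie forward-mode automatic differentiation, precisely because the eps coefficient of f(x + eps) equals f'(x), with the second-order term killed exactly rather than approximately.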
In practice, I merely take 'infinitesimals' to be very small real or complex numbers whose squares are so small that they cannot be measured in the lab. This is enough for practical applications and, on this point, I agree with you. However, time (fundamental time) is a different concept, and a correct understanding of (t --> t + dt) will require a careful consideration of those mathematical issues.
I would like to comment on the part where you discuss uncertainty relationships for light. You say that when Dt --> 0, "the time-energy uncertainty relation prevents us [...] from measuring the velocity of the object". But for a photon Dx = c Dt, so the ratio Dx/Dt is well defined in the limit Dt --> 0, giving the instantaneous speed of the photon. It is true that the relativistic uncertainty relations introduce a lower limit for Dt as a function of the uncertainty in momentum Dp, but an analogous limit is also introduced for Dx, and since the speed of the photon is a constant, the average speed and the instantaneous speed coincide. In the classical limit the uncertainties go to zero (h --> 0), but we obtain the same speed: c.
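In symbols, this is just your own relation Dx = c Dt carried to the limit:

\[ \lim_{\Delta t \to 0} \frac{\Delta x}{\Delta t} = \lim_{\Delta t \to 0} \frac{c\,\Delta t}{\Delta t} = c. \]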
Of course, we do not really measure what you call the "truly" instantaneous velocities, but neither do we measure "truly" temperatures, "truly" electric currents, "truly" masses... For instance, suppose that the temperature of an object is T; when we place a thermometer in thermal contact, the temperature of the system changes as (T --> T' = T + DT), where DT is the perturbation introduced by the thermometer. The point is that if the thermometer is small enough compared with the system, then T' will be close enough to T that we can take T' as the temperature of the system. Indeed, one of the design goals of thermometers is to achieve the smallest possible size.
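A textbook heat-balance sketch makes this quantitative (the notation is mine, not from either Essay): if C and c are the heat capacities of the system and the thermometer, and T_th is the thermometer's initial temperature, equilibration gives

\[ T' = \frac{C\,T + c\,T_{\mathrm{th}}}{C + c} = T + \frac{c}{C + c}\,\bigl(T_{\mathrm{th}} - T\bigr), \]

so the perturbation DT vanishes as c/C --> 0, which is exactly why small thermometers are preferred.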
I fail to see why you consider that limitations in measurements imply that "our knowledge of the universe is discontinuous". Those limitations of our laboratories have been with us since the very start of science, and all of classical physics, including its experimental branch, has always been a science of the continuum.
You write "In fact it is doubtful, despite de Broglie's contention, that anyone prior to the twentieth century truly believed in a discontinuous universe, though they may have pondered the possibility". It is very difficult to accept that the chemists who developed the atomic theory of matter in the 19th century truly believed in a continuum universe. In a letter to Berzelius of 1812, Dalton Wrote: "The doctrine of definite proportions appears to me mysterious unless we adopt the atomic hypothesis".
You also write that the "results from the Wilkinson Microwave Anisotropy Probe (WMAP) have demonstrated that the geometry of the universe must be flat to better than 1%" and that "we of course have long known that it is locally curved". Well, we have also long known that spacetime is curved only in (geo)metric theories. In the so-called flat-spacetime theories, e.g. the field theory of gravity (see ref. 19 in my Essay), gravitation has a non-geometrical interpretation.
Moreover, this small deviation from flatness, below 1%, is the crux of the famous flatness problem in cosmology. Indeed, the non-geometrical approaches to gravity promise to solve this problem in a natural way (see, e.g., [Nikolic]).
It seems that your "intuitive notion that causality is somehow related to continuity" is related to my emphasis on the fact that fundamental time is a continuous quantity, unlike dimensional time, which can be discrete.
You write: "By quantizing fields we have seemingly turned something inherently continuous and non-localized into something discrete and localized". Precisely the quantum field theory suffers from the problem of localization, which obligated to physicists to introduce the concept of dummy spacetime. In quantum field theory, we no more can say where a particle "is" in spacetime. As emphasized by Sakurai in his well-known textbook: "It is important to note that the x and t that appear in the quantized field A(x,t) are not quantum-mechanical variables but just parameters on which the field operator depends. In particular, x and t should not be regarded as the space-time coordinates of the photon". See references 4-6 cited in my Essay for more technical details.
You continue with "To be clear, quantum electrodynamics, which is a quantum field theory, is the most accurate scientific theory ever developed, agreeing with experiment to within ten parts in a billion (10^−8)". The experimental support for quantum electrodynamics is excellent, but it must be put in the right context. In reference 6 of my Essay, I wrote: "Four main remarks may be done about the relativistic experiments and observations: (i) Precision tests of relativistic quantum electrodynamics are not normally carried out by directly comparing observations and experimental results to its theoretical predictions; (ii) the same tests are satisfied by formulations of relativistic quantum electrodynamics that are mutually incompatible between them; (iii) the experiments and observations only consider a very limited subset of phenomena; and (iv) both relativistic quantum electrodynamics and the relativistic quantum field theory are involved, at least indirectly, in some puzzling observations and glaring discrepancies". I then analyzed each remark separately in the following two pages.
And then you add "But, ultimately, quantum field theory is built on quantum mechanics just as classical field theory is naturally consistent with classical mechanics". Well, I opened the second section of my Essay with a quote from Paul Dirac stating his dissatisfaction that quantum electrodynamics is not compatible with quantum mechanics. Several textbooks on quantum field theory emphasize some of its differences with quantum mechanics. In my Essay I cited the standard textbook by Mandl and Shaw, but there are more.
You also write "Our only other recourse, then, is to assume that mathematical 'objects' have some kind of ontological status. The problem with this view is that there is no way to prove the ontological status of a mathematical object (one could always argue it is simply a representation of a physical object and is thus of a wholly different nature)". I think that would be good to emphasize here Feynman views in his celebrated course in Physics with Leighton, and Sands. They illustrated, in a marvelous form, the difference between physical reality and the mathematical objects used to represent them under certain conditions/approximations. One of their examples was about the difference between the physics of light and Euclidean geometry, which is very relevant to your own discussion of Euclidean geometry and radar guns.
In the last part of your Essay you write: "Classical physics, with its inherent continuity, is nothing more than a convenient myth. It's a nice approximation that works just fine when we don't look too closely". I think this reflects the traditional epistemological approach to physical reality, where science is perceived as a sequence of approximations to some supposed fundamental truth.
Classical physics is not a myth, but a genuine branch of physics. From a purely theoretical point of view, classical physics would be considered a limiting case (h --> 0) of the underlying quantum physics. From an experimental point of view, classical physics is equivalent to quantum physics in those cases where the difference between the two is smaller than the experimental error. In this modern epistemology, the word "approximation" would be reserved for the cases where the difference is detectable.
[Nikolic] Nikolic, Hrvoje (1999). "Some Remarks on a Nongeometrical Interpretation of Gravity and the Flatness Problem". Gen. Rel. Grav. 31(8), 1211-1217.