People contributing to Sphinx are sometimes surprised when their PRs fail because of spurious Windows test failures.
When we read documents, we store a timestamp indicating when each one was read. We use `time.time_ns() // 1000`, which gives us a lower bound on the real time since the epoch, in microseconds. In particular, the actual read time of the document is either the value we stored or something slightly later.
When we want to check the last modification time, we round the last modification time reported by the OS (in nanoseconds) up to the nearest microsecond. So we do `-(ns // -1000)`, i.e. ceiling division by 1000.
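The two rounding directions above can be written as small helpers (the function names here are just for illustration, not Sphinx's actual API):

```python
def floor_us(ns: int) -> int:
    """Nanoseconds -> microseconds, rounding down (a lower bound).

    This is what gets stored when a document is read: time.time_ns() // 1000.
    """
    return ns // 1000


def ceil_us(ns: int) -> int:
    """Nanoseconds -> microseconds, rounding up (an upper bound).

    This is applied to os.stat(file).st_mtime_ns via the -(a // -b)
    ceiling-division idiom.
    """
    return -(ns // -1000)
```

Note that for any `ns` that is not an exact multiple of 1000, `ceil_us(ns) == floor_us(ns) + 1`, which is exactly the one-microsecond discrepancy discussed below.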
Now, while experimenting in #11940, at some point `time.time_ns()` returned T = 1707062261168486400. So we store `T // 1000` = 1707062261168486 (the exact microsecond value would be 1707062261168486.4, but the fractional part is discarded). If we then compare against a document for which `os.stat(file).st_mtime_ns` also returns T, we actually compute T' = `-(T // -1000)` = 1707062261168487. Usually we don't end up comparing a file stamped at time T against a file whose rounded mtime is T' > T for the very same instant, but whether that happens boils down entirely to whether the clocks behind T and T' have sufficiently fine resolution.
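The off-by-one-microsecond mismatch with this observed value is pure integer arithmetic:

```python
T = 1707062261168486400  # a time.time_ns() value observed in #11940

stored = T // 1000      # floor: what was stored at read time
mtime = -(T // -1000)   # ceiling: the rounded-up st_mtime_ns

# Same nanosecond instant, yet the rounded-up mtime is 1 microsecond
# later than the stored read time, so the document looks modified
# *after* it was read.
print(stored)  # 1707062261168486
print(mtime)   # 1707062261168487
```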
On Linux, this doesn't happen because the documents we are comparing our files against are "older" (the dependencies are created before the files that depend on them). On Windows, however, both the dependency and the dependent can observe the same `time.time_ns()` value. This is due to the resolution of `time.time_ns()`, which is about 318 microseconds on Windows versus about 84 ns on Unix-based systems. And I assume that `os.stat(...).st_mtime_ns` has similarly coarse precision on Windows.
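For reference, the resolution of the clock backing `time.time_ns()` can be inspected on any platform with `time.get_clock_info` (the printed value will of course differ between Windows and Unix-based systems):

```python
import time

# Metadata for the clock behind time.time() / time.time_ns().
info = time.get_clock_info("time")

# On Windows this is typically a few hundred microseconds;
# on Linux it is on the order of tens of nanoseconds.
print(f"resolution: {info.resolution * 1e9:.0f} ns")
```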
Erring on the side of rebuilding is fine for release builds: better safe than sorry. But since these failures appear quite often in the tests, we constantly get notifications that the workflow failed, and contributors can be confused about whether their change actually affected this component. So I suggest that, for the tests only, we mock `time.time_ns` for the specific tests, and only on Windows, so that it has a better resolution. I'll try something in 10 days or so, unless someone wants to do it.
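A minimal sketch of what such a mock could look like, assuming we simply want successive `time.time_ns()` calls to never return the same value (the helper name and the exact patching strategy are my own illustration, not a committed design):

```python
import itertools
import time
from unittest import mock


def make_fake_time_ns():
    """Return a replacement for time.time_ns whose value strictly
    increases by 1 microsecond on every call, giving the tests a
    deterministic clock with better-than-Windows resolution."""
    start = time.time_ns()
    ticks = itertools.count()
    return lambda: start + next(ticks) * 1000


with mock.patch("time.time_ns", make_fake_time_ns()):
    a = time.time_ns()
    b = time.time_ns()

# Within the patch, two consecutive reads can never be equal,
# so the floor/ceiling rounding mismatch cannot trigger.
assert b == a + 1000
```

In the real test suite this would presumably be gated on `sys.platform == "win32"` and applied only to the affected tests, e.g. via a pytest fixture.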
Reformulation of #11940 (comment)