Improve offset calculation and application when normalizing to SDK clock #34
Goal
We should do a better job of normalizing a timestamp produced by OkHttp to fit the SDK clock baseline. Currently, we can erroneously introduce a small offset simply because one clock reading comes after the other. Even though that difference should only be on the order of a handful of microseconds, rounding to milliseconds means we can "tick over" the boundary and report a 1 ms offset when there was no real difference at all.
There are different approaches to doing the conversion, but since they all involve taking two clock readings, any naive implementation will have the issue we have now. The approach I've chosen isn't perfect, but it accomplishes the primary goal: eliminating the erroneous "tick over" when there's no actual difference. It likely means sub-millisecond offsets will be ignored, but that level of precision matters less, so the trade-off is worth it.
Basically, we take two diff readings and check that their absolute difference is at most 1 ms. Normally the readings should be identical, so we return their sum divided by 2, which is the same as either reading. If the difference is 1 ms, it most likely means one of the readings "ticked over". If the difference is greater than 1 ms, it means either the clock was changed while these readings were being taken, or one reading took more than a millisecond longer than the other, which leaves us in a state where we don't know which reading to trust. In that case, we simply return 0 for the offset.
For example, if one reading is 0 and the other is 1, taking the sum and dividing by 2 gives us 0 due to how long division truncates, eliminating the erroneous offset. If one is 100 and the other is 101, we'll return 100, conservatively assuming the larger value is due to tick over even though it could be the other way around. If one is 100 and the other is 150, something odd or incredibly rare is happening, so we don't return an offset at all.
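
A minimal sketch of the idea, with illustrative names (`combineOffsetReadings`, `currentOffsetMs`, and the clock lambdas are not the SDK's actual API):

```kotlin
import kotlin.math.abs

/**
 * Combines two offset readings between an external clock and the SDK clock.
 *
 * If the readings agree to within 1 ms, their truncated average is used,
 * which discards a spurious 1 ms "tick over"; otherwise no offset is applied.
 */
fun combineOffsetReadings(diff1: Long, diff2: Long): Long =
    if (abs(diff1 - diff2) <= 1L) {
        // Long division truncates, so 0 and 1 combine to 0, and 100 and 101 to 100.
        (diff1 + diff2) / 2
    } else {
        // The clock changed mid-measurement, or one reading took more than a
        // millisecond longer than the other; we can't tell which to trust.
        0L
    }

// Illustrative call site: each diff is (external clock reading - SDK clock reading).
fun currentOffsetMs(externalClockMs: () -> Long, sdkClockMs: () -> Long): Long {
    val first = externalClockMs() - sdkClockMs()
    val second = externalClockMs() - sdkClockMs()
    return combineOffsetReadings(first, second)
}
```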
Testing
Added unit tests
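
A rough idea of what the tests cover, using the hypothetical names from the sketch above rather than the real test code:

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test

class ClockOffsetTest {
    @Test
    fun `identical readings produce that offset`() {
        assertEquals(100L, combineOffsetReadings(100L, 100L))
    }

    @Test
    fun `one millisecond tick over is discarded`() {
        assertEquals(0L, combineOffsetReadings(0L, 1L))
        assertEquals(100L, combineOffsetReadings(100L, 101L))
    }

    @Test
    fun `divergent readings produce no offset`() {
        assertEquals(0L, combineOffsetReadings(100L, 150L))
    }
}
```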