author | Arnd Bergmann <arnd@arndb.de> | 2017-11-07 11:38:32 +0100
committer | David S. Miller <davem@davemloft.net> | 2017-11-08 15:56:12 +0900
commit | 7f5d3f2721b07ab5896526c5992edd2ab1665561 (patch)
tree | 9ae8a9cbe0c06afbaa629fbe831f1f9b2d39e280 /net/core/dst.c
parent | 2eb3ed33e55d003d721d4d1a5e72fe323c12b4c0 (diff)
pktgen: document 32-bit timestamp overflow
Timestamps in pktgen are currently retrieved using the deprecated
do_gettimeofday() function, whose signed 32-bit seconds value wraps in
2038 (on 32-bit architectures) and which requires a division operation
to calculate microseconds.
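For reference, the pre-patch pattern looks roughly like the sketch
below (not the verbatim diff; pgh is the struct pktgen_hdr pointer
used in pktgen's packet fill path):

```c
/* Deprecated pattern: struct timeval carries a signed 32-bit
 * tv_sec on 32-bit architectures, which overflows in 2038.
 */
struct timeval timestamp;

do_gettimeofday(&timestamp);
pgh->tv_sec  = htonl(timestamp.tv_sec);
pgh->tv_usec = htonl(timestamp.tv_usec);
```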
The pktgen header is also defined with the same limitations, hardcoding
a 32-bit seconds field that can be interpreted as unsigned to produce
times that only wrap in 2106. Whatever code reads the timestamps should
be aware of that problem in general, but probably doesn't care too
much, as we are mostly interested in the time passing between packets,
and that interval is represented correctly.
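For context, the header in question is the 16-byte structure at the
front of every pktgen payload, as defined in net/core/pktgen.c; both
timestamp fields are 32-bit big-endian values:

```c
/* pktgen payload header: tv_sec wraps in 2038 if read as
 * signed, or in 2106 if read as unsigned.
 */
struct pktgen_hdr {
	__be32 pgh_magic;	/* PKTGEN_MAGIC */
	__be32 seq_num;
	__be32 tv_sec;
	__be32 tv_usec;
};
```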
Using 64-bit nanoseconds would be cheaper and good for 584 years. Using
monotonic times would also make this unambiguous by avoiding the
overflow, but would make it harder to correlate the timestamps with
those on remote machines. Either approach would require adding a new
runtime flag and implementing the same thing on the remote side, which
we probably don't want to do unless someone sees it as a real problem.
Also, this would have to be coordinated with other pktgen
implementations and might need a new magic number.
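Neither alternative is implemented here, but for illustration the
64-bit nanosecond variant would reduce the send path to a single store
with no division (a hypothetical field, not part of the pktgen wire
format):

```c
/* Hypothetical alternative, NOT implemented: one 64-bit
 * big-endian nanosecond timestamp, good for 584 years and
 * needing no division when filling the packet.
 */
__be64 tv_ns = cpu_to_be64(ktime_get_real_ns());
```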
For the moment, I'm documenting the overflow in the source code, and
changing the implementation over to an open-coded ktime_get_real_ts64()
plus division, so we don't have to look at it again while scanning for
deprecated time interfaces.
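Concretely, the open-coded replacement amounts to the following in the
fill path (a sketch matching the description above, not the verbatim
diff):

```c
struct timespec64 ts;

ktime_get_real_ts64(&ts);
/* The seconds value is truncated to 32 bits on the wire (see
 * the overflow note above); the division converts the
 * nanoseconds remainder to microseconds.
 */
pgh->tv_sec  = htonl(ts.tv_sec);
pgh->tv_usec = htonl(ts.tv_nsec / NSEC_PER_USEC);
```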
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/core/dst.c')
0 files changed, 0 insertions, 0 deletions