wowczarek 2 days ago

The problem with these types of posts is that this is an area many are unfamiliar with (at least not in depth), so a pile of authoritative statements becomes believable at face value. There are so many variables in network time sync that you have to design specifically to minimise them - for example, no multipath, and no asymmetry unless you have PTP P2P transparent clocks everywhere.

The author also mixes up precision with accuracy and relies on self-reported figures from NTP (chrony says xxx ns jitter). Every media speed change introduces asymmetry, which affects accuracy (though not always precision). So your 100M->1G link, for example, will already introduce over 1 us of error (to accuracy!), but NTP will never show you this - nothing will unless you measure both ends with 1PPS - and the only way around it is a PTP BC or TC.

There is a very long list of similar clarifications that can be made. For example, nothing is mentioned about message rates / intervals, which are crucial for increasing the number of samples the filters work with - and the fact that ptp4l and especially phc2sys aren't great at filtering.

Finally, getting time into the OS clock relies on a PCIe transaction with unknown delays and asymmetries, unless you use PCIe PTM, which practically limits you to newer Intel CPUs and newer Intel NICs. Without PTM (excluding a few NICs) your OS clock is nearly always 500+ ns away from the PHC; you don't know by how much, and you can't measure it. It's just a complex topic, and it requires an end-to-end, leave-no-stone-unturned, semi-scientific approach to really present things correctly.
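The speed-change asymmetry can be sanity-checked with a back-of-the-envelope sketch (the ~90-byte frame size and the link speeds are illustrative assumptions, not measured numbers):

```python
def serialization_us(frame_bytes: int, link_bps: float) -> float:
    """Time to clock a frame onto the wire, in microseconds."""
    return frame_bytes * 8 / link_bps * 1e6

# Hypothetical ~90-byte timing packet crossing a store-and-forward hop
# where the media speed changes from 100 Mb/s to 1 Gb/s:
slow = serialization_us(90, 100e6)  # ~7.2 us on the 100M side
fast = serialization_us(90, 1e9)    # ~0.72 us on the 1G side

# If the extra store-and-forward delay shows up in only one direction,
# a two-way exchange mis-attributes half of it as clock offset -
# error that NTP's own statistics will never show:
accuracy_error_us = (slow - fast) / 2
```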

  • mlichvar 2 days ago

    I agree with most of what you said.

    The author has other posts in the series where he tried to measure the accuracy relative to the PHC (not system clock) using PPS: https://scottstuff.net/posts/2025/06/02/measuring-ntp-accura...

    Steering, via phc2sys, the same PHC that chronyd is using for HW timestamping is not the best approach, as that creates a feedback loop (instability). It would be better to leave the PHC free-running and just compare the sys<->PHC and PHC<->PPS offsets.
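    In that feed-forward arrangement the two offsets are just measurements that chain together; nothing ever steers the PHC. A trivial sketch with made-up numbers (ns):

```python
# Free-running PHC: nothing in the loop steers it, so the two measured
# offsets are independent and simply add.
sys_minus_phc_ns = 311_450.0  # e.g. from PTP_SYS_OFFSET-style sampling
phc_minus_pps_ns = -87.0      # PHC timestamp of the PPS edge vs. top of second

# System clock error against the PPS reference - no feedback, no instability:
sys_minus_pps_ns = sys_minus_phc_ns + phc_minus_pps_ns
```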

    > So your 100M->1G link for example will already introduce over 1 us of error (to accuracy!), but NTP will never show you this

    That doesn't apply to NTP to the same extent as PTP, because NTP timestamps the end of reception (HW RX timestamps are transposed to the end of the packet), so the asymmetries in transmission times due to different link speeds should cancel out (unless the switches are cutting through in the faster->slower direction, but that seems to be rare).

    • wowczarek 2 days ago

      Yes, I saw those PTP posts, and I see where the methodology falls a bit short.

      Re. asymmetries canceling out: OK, I oversimplified, and this is true in theory and often in practice. But having done this with nearly all generations of enterprise-type Broadcom ASICs from roughly 2008 onwards, I know there are so many variations in this behaviour that the only way to know is to precisely measure latencies in each direction for a variety of speeds, CT (cut-through) vs. S&F (store-and-forward), frame sizes, and even bandwidth combinations, and see. I used to characterise switches for this - build test harnesses, measurement tools, etc. - and I saw everything: CT one way and S&F the other, but not for all speed combinations; CT behaviour regardless of enabling or disabling it; even things like latency having quantised step characteristics in increments of X bytes, because the switching fabric internally used X-byte cells; and CT only behaving like CT above certain frame sizes. There's just a lot to take into account. There are even cases where a certain level of background traffic _improves_ latency fairness and symmetry - an equivalent of keeping the caches hot.

      The author's best bet for reliable numbers would be to get a LattePanda Mu or another platform with TGPIO and measure against 1PPS straight from the CPU. That would be the ultimate answer. Failing that, at least a PTM NIC synced to the OS clock, though that will alter the noise profile of the OS clock.

      But you and I know all this because we've been digging into the hardware and software guts of these things for years, and have done it for a job - what's a home-lab user to do? It's a never-ending learning exercise, and the key is to acknowledge the possible unknowns. By that I don't mean scientific unknowns, but that we don't know what we don't know - and bloggers sometimes don't acknowledge this.

    • eqvinox 2 days ago

      > Steering the same PHC with phc2sys as chronyd is using for HW timestamping is not the best approach as that creates a feedback loop (instability).

      This is standard practice, though, for most PTP slave clocks. The feedback is just factored into the math. (Why? No idea. I just know how the code works.)

      Although… it's standard practice in PTP setups that are designed for it. Not NTP… if only there were a specification… :)

      I do have to wonder though. Of what use are timestamps from an unsynchronized PHC to chrony? Is it continuously taking twin sys+PHC timestamps to line up things?

      • RossBencina a day ago

        > I do have to wonder though. Of what use are timestamps from an unsynchronized PHC to chrony? Is it continuously taking twin sys+PHC timestamps to line up things?

        That would be the logical way to do it. You want the lowest jitter timestamps you can get on the incoming ethernet frames. If conditions are stable enough you can compute a timebase translation between your MAC local clock and sys clock using the best available method, potentially using many samples over a (relatively) long time frame. And as the GP says, this gives a feed-forward structure without any need for stabilising a feedback loop.

        gPTP full-duplex ethernet peer synchronisation uses timestamps from free-running local clocks at each end of the link.

        • eqvinox a day ago

          > gPTP full-duplex ethernet peer synchronisation uses timestamps from free-running local clocks at each end of the link.

          Do you mean free-running as opposed to VCXO? Most implementations I know indeed use free-running clocks, but there's an increment/rate register in hardware that specifies what value to add to the time counter per crystal tick, which gets updated by the PTP layer — and timestamps use that. So even though the crystal is physically free running, the feedback loop is still there, it just doesn't include the crystal itself.

          • RossBencina 20 hours ago

            I know that ethernet MACs typically provide a dynamic mechanism to change the fractional phase increment of the timestamp counter, but it is clear from the gPTP specification that what is referred to as the "Local Clock" at each station is free running with respect to the PTP time base and with respect to connected peers. "Local Clock" is the time used to timestamp packet arrival/transmit times. "PTP Time" (i.e. TAI if we're GPS synchronised) is an independent logical time, separate from Local Time at each station.

            It is very clear that there is no feedback loop in 802.1AS (gPTP). I can't speak to other PTP versions. Local clocks (used, among other things for peer delay estimation) are specified to be free running with respect to the PTP time base and to connected peers, not disciplined to PTP Time, asynchronous, not synchronised, not syntonised. The peer delay mechanism equations compensate for both time offset and rate difference in peer local clocks. Furthermore, in order to average estimates over multiple measurements, time offset and rate difference are assumed to be stable (i.e. messing with the rate of a local clock violates invariant assumptions of the peer delay algorithm).

            A few more qualifications: I am talking here about non-master peers. I think it might be compliant for the grand master to discipline the local clock to the time source (e.g. GPS PPS), provided it appears stable to connected peers, but it is not at all required by the protocol. Similarly, you might discipline Local Clock to some other stable clock, or to perform temperature compensation. In principle you could synchronise Local Clock to system clock (both free-running with respect to PTP time) so that your packet timestamps are automatically in sys-clock timebase. Once again, there is nothing in the PTP spec that requires this, but the potential utility is clear on a general purpose OS (not necessarily so on an embedded device).

            • eqvinox 9 hours ago

              The local clock [in this HW] is synced to PTP, cf. https://ww1.microchip.com/downloads/aemDocuments/documents/O...

              There's only one clock in HW, not two. And you really want PTP time in HW for PPS/timestamp IO. (And gPTP uses 1588 HW, there are no special 802.1AS HW implementations that I'm aware of.)

              Whether this matches the spec — no idea. My knowledge is from implementations… there could of course be ones that have two clocks. Can you link one?

  • Dylan16807 2 days ago

    It's not that the author is mixing accuracy and precision, it's that they only care about precision.

    Any asymmetry that is consistent is irrelevant.

  • petesoper 2 days ago

    The OP says early on that he only needs 10us.

  • eqvinox 2 days ago

    ConnectX-6 has PTM, though I have not tested it.

    • wowczarek 2 days ago

      Do it, that should make a significant difference. See https://www.opencompute.org/wiki/PTM_Readiness for other hardware that supports it, i225/226 are the most common these days, but also a system with TGPIO 1PPS will show the real picture.

      • RossBencina a day ago

        Is the i225/226 datasheet public? I have only ever been able to locate the product brief.

      • eqvinox a day ago

        > Do it, that should make a significant difference.

        Need the cards for that first ;). Still on cx5 here.

        (Also some nVidia docs say you need cx7, but it's listed for cx6, not sure which is true…)

diarmuidc 3 days ago

Why is there no mention of PTP here? If you want accurate time synchronisation in a network just use the correct tool, https://en.wikipedia.org/wiki/Precision_Time_Protocol

Linux PTP (https://linuxptp.sourceforge.net/) and hardware timestamping in the network card will get you in the sub 100ns range

  • jacob2161 3 days ago

    Chrony over NTP is capable of incredible accuracy, as shown in the post. Most users who think they need PTP actually just need Chrony and high quality NICs.

    Chrony is also much better software than any of the PTP daemons I tested a few years ago (for an onboard autonomous vehicle system).

    • eqvinox 2 days ago

      NTP fundamentally cannot reach the same accuracy as PTP, because Ethernet switches introduce jitter through queueing delays - and they can report that delay in PTP (as transparent clocks) but not in NTP.

      • erincandescent 2 days ago

        Chrony can do NTP encapsulated inside PTP packets so as to combine the best parts of both protocols

        • eqvinox 2 days ago

          That's not exactly NTP though ;)

          I'll also say PTP is superior since it syncs TAI rather than NTP's UTC. Which probably isn't going to change even with NTPv5.

      • mlichvar 2 days ago

        chrony can be configured to encapsulate NTP messages in PTP messages (NTP over PTP) in order to get the delay corrections from switches working as one-step PTP transparent clocks. The current NTPv5 draft specifies an NTP-specific correction field, which switches could support in future if there was a demand for it.
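        For reference, a minimal chrony.conf sketch of NTP-over-PTP (chrony 4.2+; the server address and interface wildcard are placeholders - check the chrony documentation for your version):

```
ptpport 319          # exchange NTP messages encapsulated in PTP packets
hwtimestamp *        # hardware timestamping on all capable interfaces

# Client side: NTP-over-PTP to a server that also has ptpport enabled
server 192.168.123.1 port 319 xleave
```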

        The switches could also implement a proper HW-timestamping NTP server and client to provide an equivalent to a PTP boundary clock.

        PTP was based on a broadcast/multicast model to reduce the message rate in order to simplify and reduce the cost of HW support. But that is no longer a concern with modern HW that can timestamp packets at very high rates, so the simpler unicast protocols like NTP and client-server PTP (CSPTP) currently developed by IEEE might be preferable to classic PTP for better security and other advantages.

      • anon6362 2 days ago

        With retail hardware, definitely, but there is boundary PTP support with enterprise gear.

        For telco gear, there is PTP + SyncE.

      • cozzyd 2 days ago

        Some NICs support hardware timestamping (though some only for PTP packets, looking at you, XL710).

    • ainiriand 2 days ago

      Correct me if I am wrong, but wouldn't that be true only when testing across comparable hardware? Would it hold in scenarios like the one the author describes, where he uses 3 different systems (Threadripper CPU, Raspberry Pi, and a LeoNTP GPS-backed NTP server) and architectures?

    • secondcoming 2 days ago

      On our GCP cloud VMs, cloud-init installs chrony and uninstalls ntp automatically.

  • michaelt 3 days ago

    The blog's next post is about PTP, if that's what you're interested in.

    The Linux PTP stack is great for the price, but as an open-source project it's hamstrung by the fact that the PTP standard (IEEE 1588) is paywalled, and by the fact that it doesn't work on wifi or USB-ethernet converters (meaning it also doesn't work on laptop docking stations or the Raspberry Pi 3 and earlier).

    This limits people developing/using it for fun. And it's the people using it for fun who actually write all the documentation; the 'serious users' at high-frequency trading firms and cell phone networks aren't blogging about their exploits.

    • RossBencina 2 days ago

      > it doesn't work on wifi

      802.1AS-2020 (gPTP) includes 802.11-2016 (wifi) support.

      The IEEE's gatekeeping is indeed odious.

      The biggest limitation is that many ethernet MACs do not support hardware timestamping. Nor do many entry-level ethernet switches.

      For what it's worth, I'm interested in TSN for fun (music, actually), and I'm prepared to buy compatible networking hardware to do it. No difference to gamers spending money on a GPU.

      • eqvinox 2 days ago

        Most new MACs do it, (cheap) switches are still a problem though.

  • rendaw 3 days ago

    There's a discussion on that in the comments at the bottom of the article, where the author explains why it wasn't analyzed.

eqvinox 2 days ago

GPS modules need to be put in a special stationary mode (and ideally measured-in to a location for a day or two) to get accurate timing. I'm consistently achieving ca. 10ns of deviation. Hope the author didn't forget this. (But it might also just be crappy GPS modules, I'm using u-blox M8T which are specifically intended for timing.)

  • mwpmaybe 2 days ago

    Interesting. The serial GPS module currently wired up to my Raspberry Pi doesn't have a stationary mode per se but there is a feature called AlwaysLocate that seems related. I can choose between Periodic Backup/Standby and AlwaysLocate Backup/Standby modes. I'll need to look into this...

    ETA: I can also increase the nav speed threshold to 2m/s.

    • eqvinox 2 days ago

      Not all modules have this feature; and it's also locked behind feature/license bits sometimes. It's obviously not needed for normal GPS use… u-blox timing targeted modules definitely have it. Some have a "measure-in" mode where you let it sit for a while (days) and it does all the setup automatically. Other cases you actually have to feed things into the module (annoying and error prone…)

      It's simply that if you know your location, you can remove that as a free variable from the equations and instead constrain the time further.

  • RossBencina 2 days ago

    What method do you use to measure 10ns deviation?

    • eqvinox 2 days ago

      Delta between modules on a scope

      (offset values on the hardware timestamp on the immediately connected PTP clock also line up with this)

      [Caveat: everything is in the same room with the same ambient temperature drifts…]

azalemeth 3 days ago

My experience with rt Linux is that it can be exceptionally good at keeping time, if you give up the rest of the multitasking micro sleeping architecture. What do you need this accurate time for? I'm equally sure, as acknowledged, the multipath routing isn't helping either.

  • michaelt 2 days ago

    > What do you need this accurate time for?

    Some major uses of high-precision timing, albeit not with NTP, include:

    * Synchronising cell phone towers, the system partly relies on precise timing to avoid them interfering with one another.

    * Timestamping required by regulators, in industries like high-frequency trading.

    * Certain video production systems, where a ten-parts-per-million framerate error would build up into an unacceptable 1.7 second error over 48 hours.

    * Certain databases (most famously Google Spanner) that use precise timing to ensure events are ordered correctly

    * As a debugging luxury in distributed systems.
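    The framerate figure in the video-production bullet is easy to check:

```python
# 10 parts-per-million of frequency error, accumulated over 48 hours:
ppm_error = 10e-6
elapsed_s = 48 * 3600
drift_s = ppm_error * elapsed_s  # ~1.73 s of accumulated error
```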

    In some of these applications you could get away with a precise-but-free-floating time, as you only need things precisely synchronised relative to one another, not globally. But if you're building an entire data centre, a sub-$1000 GPS-controlled clock is barely noticeable.

    • rkomorn 2 days ago

      > But if you're building an entire data centre, a sub-$1000 GPS-controlled clock is barely noticeable.

      Dumb personal and useless anecdote: one of those appliances made my life more difficult for months (at a FAANG company that built its own data centers, no less), for the almost comical reason that we needed to move it but somehow couldn't rewire the GPS antenna, and the delays kept retriggering alerting that we kept disabling until the expected "it'll be moved by then" time.

      So, I guess to make the anecdote more useful: if you're gonna get one, make sure it doesn't hamstring you with wires...

      • michaelt 2 days ago

        The secret, I'm told, is to make friends with the CCTV/access control team.

        They always know the paperwork and contractors needed to get a guy on a cherrypicker drilling holes and installing data cables without upsetting the building owners.

  • JdeBP 3 days ago

    Bear in mind that the author specifically reminds us, halfway down, that the goal is consistency, not accuracy per se. Making all of the systems accurate to GNSS is merely a means of achieving the consistency goal so that event logs from multiple systems can be aligned.

  • aa-jv 2 days ago

    > What do you need this accurate time for?

    Scientific and consistent analysis of streaming realtime sensor data.

    Been there, done that, shipped the package. Took quite a bit of fun to get it working consistently, which was the main thing.

  • RossBencina a day ago

    > What do you need this accurate time for?

    Synchronising the clocks on network connected audio devices (ADCs, DACs, DSP processors) on a LAN (https://en.wikipedia.org/wiki/Audio_Video_Bridging), or over the internet (broadcast-grade live streaming). This, and related standards, are more or less the norm in live sound and high-channel-count digital recording setups.

  • mschuster91 2 days ago

    > What do you need this accurate time for?

    Say you are running a few geographically separated radio receivers to triangulate signals; you want all of them as closely synchronized as possible for better accuracy.

  • stinkbeetle 2 days ago

    What is the rest of the multitasking micro sleeping architecture, and how do you give it up to improve time keeping?

RossBencina 2 days ago

There was some related discussion a couple of weeks ago here:

Graham: Synchronizing Clocks by Leveraging Local Clock Properties (2022) [pdf] (usenix.org) https://news.ycombinator.com/item?id=44860832

In particular the podcast about Jane Street's NTP setup was discussed.

Bender 2 days ago

They state there is a problem, but then say they are happy with what chrony is doing - so what exactly is the problem they are trying to solve? What on their network requires better than 200ns, or even 400ns for that matter? Not in theory, but in reality? Also, there are optimizations they are missing that are covered in this document [1], such as disabling EEE.

On a more taboo note: while RasPis can be great little time servers, they have more drift and higher jitter, but that should not matter for a home setup and should not be surprising. If jitter is their concern, they should consider using mini-PCs, disable cpuspeed and all power management, confine/lock the min/max speed to half the CPU's capability, and disable all services other than chrony. It will use more power but would address their concerns. They could also try different models of layer 2 switches. Consumer switches add some artificial jitter that varies wildly by make, model and even batch, but again, for a home network that should not matter. I think they are nitpicking. Perfect is the enemy of good, especially in a day and age when people prefer power saving over accuracy.

[Edit] As a side note, the aggressive min/max poll settings they are using can amplify the inefficiencies of consumer switches and NICs regardless of filter settings, and that can make the graphs more chaotic. They should consider re-testing on data-center-class servers, server NICs and enterprise-class switches, or just reduce the polling to something reasonable for a home network: minpoll 1 maxpoll 2 for clients, minpoll 5 maxpoll 7 for the edge talking to a dozen stratum 1s, with a high combinelimit. Presend should not be required even with default ARP neighbor GC times and intervals. Oh, and if you want to try something fun with the graphs, run chronyc makestep every minute from cron on every node. Yeah, yeah, I know why one would not do that, and it's just cheating.

[1] - https://chrony-project.org/examples.html
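As a sketch of that suggested polling layout (server names are placeholders, not a drop-in config):

```
# Client, polling the local edge server every 2-4 s:
server edge.lan iburst minpoll 1 maxpoll 2

# Edge server, polling public stratum 1s at 32-128 s (repeat per server):
# server stratum1-N.example.net iburst minpoll 5 maxpoll 7
combinelimit 10    # allow more sources to be combined (default is 3)
```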

  • wowczarek 2 days ago

    In addition, for the purposes of characterising the system using NTP, one should ideally avoid any ensembling / combining of sources, because that just pulls in multiple sources of noise - or it should be proven that doing so does not affect the final results, or if it does, then by how much.

    There's so much more that can be picked apart here, because it's an absolute rabbit hole of a topic - for example, saturate the links a little or a little more, especially with bursty traffic in both directions (or do an 80-20 cycle), and watch those measurements go out the window; only with PTP-capable switches at every hop will you survive this. The telecom industry has done this ad nauseam for years, with appropriately standardised measurements, test masks and requirements.

    And this whole business is fundamentally not PTP vs. NTP, because the principles are exactly the same; it's that PTP was designed with hardware timestamping in mind, and it would serve no purpose beyond NTP's had NTP gained support for one-step operation, hardware timestamping - and network assistance. But the default PTP profile uses known multicast groups, and thus known destination MACs, so it was the easiest entry into hardware packet matching - early "PTP-enabled" NICs only timestamped PTP packets (and most only multicast); only more modern ones can timestamp all packets, and that includes NTP.

    And as far as the RasPi goes - for time sync, at least in terms of COTS equipment, Intel is king, but that's because they had smart people working hard for years to purposefully integrate time-aware functionality into their architectures (hey Kevin and team!) - invariant TSC, ART, culminating with PCIe PTM. But this only matters when you're aiming for the tens-of-ns to single-digit-ns region.

    You can easily deliver sub-10 ns sync to a NIC, but a huge source of uncertainty is time transfer from your hardware-timestamping NIC to the OS clock. PTM is the only way to do this in hardware; otherwise - with Solarflare being the only non-PTM exception I've worked with - comparing NIC time to OS time is literally reading the time register on the NIC and the kernel time in quick succession, in batches (granted, with local interrupts disabled), and then picking the pair of reads that seems to have taken the least time. Unknowns on top of unknowns.
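    A sketch of that "read in batches, keep the tightest pair" idea - roughly what the kernel's PTP_SYS_OFFSET ioctl does internally; `read_nic_time_ns` is a hypothetical stand-in for the PHC register read:

```python
import time

def sample_phc_offset_ns(read_nic_time_ns, batch=10):
    """Sandwich the NIC register read between two system-clock reads
    and keep the sample with the smallest window, i.e. the least
    unknown PCIe delay. The symmetric-read-cost assumption is exactly
    the unknown being discussed above."""
    best_window, best_offset = None, None
    for _ in range(batch):
        t1 = time.clock_gettime_ns(time.CLOCK_REALTIME)
        nic = read_nic_time_ns()  # register read over PCIe in real life
        t2 = time.clock_gettime_ns(time.CLOCK_REALTIME)
        if best_window is None or t2 - t1 < best_window:
            best_window = t2 - t1
            best_offset = nic - (t1 + t2) // 2  # assumes symmetric read cost
    return best_offset
```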

    • Bender 2 days ago

      > There's so much more that can be picked apart here because it's an absolute rabbit hole of a topic

      That pretty much sums it up, and I agree with everything you stated. There are countless variables that one could spend a lifetime trying to understand, tune and compensate for, and all of that changes with each combination of hardware - and refreshing hardware is inevitable. It can be a never-ending game. I just tune for good enough for my needs, that being slightly better than defaults.

nullc 3 days ago

GPS timing modules should provide a sawtooth correction value that tells you the error between the upcoming pulse and GPS time; the issue is that the PPS pulse has to be aligned to the receiver's clock. Using that correction removes the main source of jitter.
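A sketch of applying such a correction - the field name and sign convention here are assumptions (e.g. u-blox reports a quantisation error, qErr, in picoseconds in TIM-TP; verify against your receiver's datasheet):

```python
def corrected_pps_ns(raw_pps_ns: float, qerr_ps: float) -> float:
    """Shift the raw PPS timestamp by the receiver-reported sawtooth
    (quantisation) error so it lines up with true GPS time. The sign
    convention varies per receiver - check the datasheet."""
    return raw_pps_ns + qerr_ps / 1000.0

# Hypothetical: the receiver says the upcoming edge is 8.3 ns off:
t = corrected_pps_ns(1_000_000_000.0, -8_300.0)
```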

  • wowczarek 2 days ago

    Applying sawtooth correction will remove _some_ of the jitter, but at the time-server level, not the network level, and quantisation error is not the main source of jitter here (I'm talking long time constants) - packet delay variation (PDV) and internal time comparisons are. Plus, any decent loop should average the sawtooth and transform it into fluctuations slow enough that they will not have much effect on what is being measured in the blog post - the output of the time server looks nothing like the raw 1PPS input, at least not in the short term. Of course the sawtooth should be removed, and let's hope his time servers do it, especially the RPi ones.

    • nullc 2 days ago

      > any decent loop should average the sawtooth

      You can't really - depending on their relative phases and the resulting aliasing products, the average of the sawtooth error can still have an arbitrary offset which lasts for an arbitrarily long time.

      > that they will not have that much effect

      Okay, fine - for some definition of 'not much' that's true. But failing to account for it can result in a bigger error than many people expect, and in an annoying way: when you test, it might be in a state where it averages out okay, but then later shift into a state where it produces an offset that doesn't average out.

      Assuming your receiver outputs the correction it's pretty easy to handle, so long as you know it's a thing.

  • RossBencina 2 days ago

    Only the expensive ones have the correction capability (e.g. the u-blox LEA-M8T). Hat tip to tverbeure:

    https://news.ycombinator.com/item?id=44147523

    Aligning the PPS pulse with an asynchronous local clock is going to require a very large number of measurements, or a high-resolution timer (e.g. a time-to-digital converter, a TDOA chip, etc. - there are a few options).

    • wowczarek 2 days ago

      To an extent. You can get previous-generation GNSS receivers with sawtooth correction for cheap - eBay is full of them: say, an old Trimble Resolution whatever, lots of LEA-6T carrier boards going for around $20, and bare modules for less. I would trust those carrier boards more, though - less chance of getting a fake module.

      • RossBencina a day ago

        Great, thanks. For anyone following along at home, LEA-6T reports the time pulse error in picoseconds in the TIM-TP message. See "11 Timepulse" on page 26, and/or search for "TIM-TP" in the product spec:

        https://content.u-blox.com/sites/default/files/products/docu...

        There are cheap LEA-M8T modules on eBay too, especially if you have a method to pull the module off a scrap board fragment.

      • nullc 2 days ago

        The timing receivers often have other advantages beyond just sawtooth, including stuff like being able to produce a time pulse with a single working satellite in view once their position is learned.

        • wowczarek 2 days ago

          Should have clarified: previous generation timing receivers. Yes, self-survey + stationary mode + O/D mode where you can track a single SV and such, are essential for stable time sync, not just sawtooth.

          There is an exception re. sawtooth, but only a recent one: the Furuno GT-100, priced between the ZED-F9T and the Mosaic-T, has 200 ps clock resolution and doesn't even provide a quantisation error output.

watersb 3 days ago

Segal's Law:

"A man with a watch knows what time it is. A man with two watches is never sure."

https://en.m.wikipedia.org/wiki/Segal's_law

  • ofalkaed 2 days ago

    If you actually care about what time it is, you need at least three, so you can average them and knock out the error. The Beagle carried 22 chronometers when it also carried Darwin; over the nearly 5-year expedition they lost only 30-odd seconds.

  • nullc 2 days ago

    A person with three or more watches knows what time it is in proportion to the square root of the number of watches.

    • stinkbeetle 2 days ago

      A person with four watches knows what time it is in proportion to 2?

      • dspillett 2 days ago

        Sort of. With two watches your confidence is ~1.4 of an arbitrary measurement, with three it is ~1.7, with 4 ~2, etc. Though this is an ever-growing sequence.

        A better model might be to measure the confidence with something like (x-1)/x as this grows, more slowly with each step, towards 1, without really getting there until infinity. With two watches you are 50% of maximum confidence in your time, with three 66%, with four 75%, five->80%, and so on.

  • JdeBP 3 days ago

    A person with two watches finds xyrself suddenly in the messy business of doing full NTP, rather than the much simpler model of SNTP. (-:

myrandomcomment 2 days ago

The paths on the network with the MLAG are NOT likely the issue. There is a serialization delay difference between the NICs in the NTP servers and the switches: 100M takes more time than 1G, which takes more time than 10G, which takes more time than 40G. Also, the Ubiquiti kit is store-and-forward switching (not sure about the 10G, but the desktop one is) and the Arista kit is cut-through (at around ~500-byte packets, IIRC). The end-to-end paths are not the same given the link speed differences, and that is the source of the variations. The MLAG hashing is done in hardware and will not have an effect; also, IIRC you can set the LAG hash to be SRC/DST on both L2 and L3, even on an L2 link.

contingencies 2 days ago

When it comes to realtime guarantees, bare-metal code on a dedicated IC or MCU is by definition better than anything running on a general-purpose OS on a general-purpose CPU. Even if you tune the hell out of it, the latter will have more room for bugs, edge cases, drift, delayed powerup, supply chain idiosyncrasies, etc. FYI, GNSS processing chips cost $1.30 these days. https://www.lcsc.com/datasheet/C3037611.pdf

klaas- 3 days ago

Maybe I missed it, but why not just combine PTP and NTP within chrony? It does support that.

sugarpimpdorsey 3 days ago

There are so many inaccurate technical details here I wouldn't know where to begin, let alone write a blog post. Sigh.

  • ainiriand 3 days ago

    Unfortunately I think the same as you. The details provided in the blog post are in no way a proper basis for any sort of time benchmark or network I/O benchmark. For starters, he is comparing times from TSC-enabled hardware (x86_64) with Raspberry Pis, which are ARM. Network I/O benchmarking on Linux should be done with system calls to the network cards or input devices, not through the kernel drivers etc...

    • Avamander 2 days ago

      > For starters, he is comparing times from tsc enabled hardware (x86_i64), with raspberry pi which are arm.

      Well, that TSC-enabled hardware also has other peripherals (like SMBus, as mentioned in the article) that, on the other hand, introduce errors into the system.

      I personally use an RPi4 with its external oscillators replaced with a TCXO. Some sellers on AliExpress even have kits for "audiophiles" that let you do this. It significantly improved clock stability and holdover, so much so that "chronyc tracking" doesn't show enough decimal places to display the frequency error or skew. It's unfortunate, though, that the NIC does not do timestamping. (My modifications are similar to these: https://raspberrypi.stackexchange.com/a/109074)

      I'd love to find an alternative cheap (microcontroller-based) implementation that could beat it.

      • mwpmaybe 2 days ago

        The CM5's NIC has timestamping, but I'm not sure if there's a TCXO hack for it.

  • jmpman 3 days ago

    I would be extremely interested in reading your blog post. Fascinating topic.

  • pastage 3 days ago

    Saying this is actually the only sane thing to do.

    I personally will not care about sub-200-microsecond accuracy, and I think it was a good article if read critically. I think it does describe why you should not attempt that at the moment if you have lots of nodes that need to sync consistently.

    Having a shared 10 MHz reference clock is great, and that gives you a pretty good, consistent beat. I never managed to sync other physical sensors to it, so the technical gotchas are too much for me.

    There is madness in time.

    Edit: changed some orders of magnitude; honestly, I feel happy if my systems are within 10 ms.

    • ainiriand 3 days ago

      In my opinion, when you want such precision you need to establish strict constraints on the measurements, for example memory fences: https://www.kernel.org/doc/Documentation/memory-barriers.txt

      If you do not do this, the times will never be consistent.

      The author produced a faulty benchmark.

      • Dylan16807 2 days ago

        What benchmark? The only numbers he's measuring himself are on the oscilloscope. Everything else is being measured by chrony. Unless you're talking about a different post on the blog?

        • ainiriand 2 days ago

          He uses chrony, which uses system time, and compares those times across different machines. Unless a proper setup is done, the benchmark is faulty.

          • Dylan16807 2 days ago

            Chrony is what's comparing the times. Zero code written by the author is running except to log the statistics chrony created. Are you accusing chrony of failing at one of its core purposes, comparing clocks between computers? What could the author do differently, assuming the author isn't expected to alter chrony's code?

            • ainiriand 2 days ago

              If those times are produced on different architectures, then yes, the comparison can never be accurate enough, since the underlying measurement mechanisms differ fundamentally. While the author goes to great lengths to demonstrate very small time differences, I believe the foundation of their comparison is flawed from the start. I do not want to generate any polemic, sorry!

              • Dylan16807 2 days ago

                But do you or don't you think chrony knows how to do the memory barriers and everything else properly?

                Making the sync work across existing heterogeneous hardware is the goal of the exercise. That can't be a disqualifier.

                • ainiriand a day ago

                  Chrony is an incredible piece of software, but I am afraid that in this case it is being used incorrectly, unless more details are provided. Benchmarking across an AMD Threadripper, an ARM Raspberry Pi, and LeoNTP servers cannot be done lightly. I do not see how going down to the nanosecond scale can be done this way without more details, sorry. This is only my opinion and I know it is not very useful here.

                • ainiriand a day ago

                  Even the author acknowledges that when they say:

                  It’s easy to fire up Chrony against a local GPS-backed time source and see it claim to be within X nanoseconds of GPS, but it’s tricky to figure out if Chrony is right or not.

                  • Dylan16807 a day ago

                    The thing is, any offset or jitter that can't be picked up over the network is irrelevant to what the author is trying to accomplish. And if it can be picked up over the network, I don't see why Chrony is the wrong thing to measure with.

                    "system time as distorted by getting it to/from the network" is exactly what is supposed to be measured here.

throwawaysoxjje 3 days ago

It’s wild that they talk about the jitter in the PPS signals but gloss over the jitter of the oscilloscope.

  • marshray 3 days ago

    The scope should be capturing samples from the three channels synchronously.

    It appears to be set to trigger on the bottom trace (which appears still) and then retrospectively display the other two.

  • magicalhippo 2 days ago

    Given he's using the desktop PPS signal as a trigger reference and comparing the relative times, how would intrinsic scope jitter significantly affect that?
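    The cancellation can be illustrated numerically: jitter that shifts the whole acquisition (trigger jitter) moves every channel's edge by the same amount, so it drops out of channel-to-channel differences, while independent per-channel noise does not. A toy simulation (all jitter magnitudes are made-up illustrative values):

```python
import random

random.seed(1)

N = 10_000
TRIGGER_JITTER = 0.4e-9   # common-mode shift per acquisition (illustrative)
CHANNEL_NOISE = 0.05e-9   # independent per-channel noise (illustrative)
TRUE_OFFSET = 100e-9      # actual edge separation between channels A and B

diffs = []
for _ in range(N):
    t = random.gauss(0.0, TRIGGER_JITTER)                   # shifts both edges
    a = t + random.gauss(0.0, CHANNEL_NOISE)                # channel A edge
    b = t + TRUE_OFFSET + random.gauss(0.0, CHANNEL_NOISE)  # channel B edge
    diffs.append(b - a)

mean = sum(diffs) / N
std = (sum((d - mean) ** 2 for d in diffs) / N) ** 0.5
print(f"mean offset: {mean * 1e9:.2f} ns, spread: {std * 1e12:.0f} ps")
```

    The spread of the differences comes out near sqrt(2) x 50 ps, not the 400 ps of trigger jitter, which is the point about relative measurements.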

  • wowczarek 2 days ago

    I would worry less about _trigger jitter_, especially for relative measurements: unless it's one of those toy scopes, trigger jitter will be negligible for this purpose. So yes, worry about the scope, but less about trigger jitter and more about measurement jitter.

    For anything that involves a scope, I think it's good practice to specify how and what you are measuring - namely what termination, what trigger level and show what the pulse actually looks like on the scope.

    Where you are definitely right is that all those devices can produce completely different-looking pulses, and only by looking at the pulse with a scope that has sufficient bandwidth can you pick the right trigger level, one that lands at a point of the waveform with consistent characteristics: stay away from the top of the pulse, trigger somewhere mid-ramp that looks clean, and keep your paws away from those AUTO buttons!

    ...but this is all nitpicking as far as the post goes; this is where lots of electronics people get triggered (badum, tss!), whereas we have a network in the middle that is, essentially, chaos.

  • guenthert 2 days ago

    Not sure about the Siglent oscilloscope used here, but my old LeCroy WaveAce 2032 (which Siglent obviously had its hands in) has a trigger jitter of 0.4 ns. I'd think the one used here will be of the same order of magnitude, i.e. negligible.

    Uh, the Siglent SDS 1204X-E used here has a "new, innovative digital trigger with low latency" ...

    But yes, as others have commented already, if only the relative jitter between the signals is of interest, the trigger jitter itself is inconsequential.

    • magicalhippo 2 days ago

      > Uh, the Siglent SDS 1204X-E used here has a "new, innovative digital trigger with low latency" ...

      Datasheet claims trigger jitter of <100 ps.

      • guenthert a day ago

        Well, I looked for the data sheet and couldn't find it, just the ominous quote. Now, newly motivated, I did eventually find it (the trick is to look for the data sheet of the series, not the model; the SDS1204X-E is the 200 MHz, 4-channel model of the SDS1000X-E series): https://www.batronix.com/files/Siglent/Oszilloskope/SDS1000X...

        • magicalhippo a day ago

          Ah, I didn't recall they had two of them. I got the little brother, so I downloaded the datasheet back when I bought it. I always like to have those handy for any equipment I own.