ruhith 2 days ago

The DuckDB-as-a-storage-engine approach is clever because it lets you keep your existing MySQL connections, tooling, and replication topology while routing analytical queries to a columnar engine underneath. That's a much easier sell operationally than standing up a separate analytics database and building a sync pipeline. The real question is how they handle consistency between the InnoDB and DuckDB copies of the same data; that's where every hybrid OLTP/OLAP system either shines or quietly loses rows.

  • Captain32zxz 2 days ago

    Nice question! We did spend a lot of time considering the issue of data consistency.

    In MySQL replication, GTIDs are crucial for ensuring that no transaction is missed or applied twice. We handle this in two scenarios, depending on whether the binlog is enabled:

        - log_bin is OFF: We ensure that transactions in DuckDB are committed before the GTID is written to disk (in the mysql.gtid_executed table). Furthermore, after a crash recovery, we perform idempotent writes to DuckDB for a period of time (the principle is similar to upsert or delete+insert; see the sketch below). Therefore, at any given moment after a crash recovery, we can guarantee that the data in DuckDB is consistent with the primary database.
        - log_bin is ON: Unlike the previous scenario, we no longer rely on the `mysql.gtid_executed` table; we use the binlog directly for GTID persistence. However, a new problem arises: binlog persistence occurs before the storage engine commits. Therefore, we created a table in DuckDB to record the last valid binlog position. If the DuckDB transaction fails to commit, the binlog is truncated back to that position. This ensures that the data in DuckDB is consistent with the contents of the binlog.
    
    Therefore, if the `gtid_executed` on the replica server matches that of the primary database, then the data in DuckDB will also be consistent with the primary database.
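
    To make that concrete, here is a minimal SQL sketch of both mechanisms (the table names, columns, and values are invented for illustration; the actual AliSQL internals may differ):

        -- Post-crash idempotent replay: each row event is applied as
        -- delete+insert keyed by the primary key, so replaying an event
        -- that already landed before the crash is harmless.
        DELETE FROM orders WHERE id = 42;  -- no-op if the row never landed
        INSERT INTO orders (id, amount) VALUES (42, 99.50);

        -- log_bin = ON: a hypothetical DuckDB-side bookkeeping table that
        -- records the last binlog position known to be fully committed;
        -- on recovery, the binlog is truncated back to this point.
        CREATE TABLE IF NOT EXISTS duckdb_binlog_progress (
          binlog_file VARCHAR,
          binlog_pos  BIGINT
        );
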
linuxhansl 2 days ago

Curious how it stacks up to pg_duckdb. (pg_duckdb seems pretty clean, due to Postgres' powerful extension mechanisms.)

  • baotiao 2 days ago

    Here is the professional English translation of your analysis, optimized for a technical audience or a blog post:

     Why I believe MySQL is better suited than PostgreSQL for DuckDB integration:

     Currently, there are three mainstream solutions in the PostgreSQL ecosystem: pg_duckdb, pg_mooncake, and pg_lake. However, they face several critical hurdles. First, PostgreSQL's logical replication is not mature enough (it falls far behind the robustness of its physical replication), making it difficult to reliably connect a PG primary node to a DuckDB read-only replica via logical streams.

     Furthermore, PostgreSQL lacks a truly mature pluggable storage engine architecture. While it provides the Table Access Method interface, it does not offer standardized support for primary-replica replication or crash recovery at the interface level. This makes it challenging to guarantee data consistency in many production scenarios.

    MySQL, however, solves these issues elegantly:

    Native Pluggable Architecture: MySQL was born with a pluggable storage engine design. Historically, MySQL pivoted from MyISAM to InnoDB as the default engine specifically to leverage InnoDB's row-level MVCC. While previous columnar attempts like InfoBright existed, they didn't reach mass adoption. Adding DuckDB as a native columnar engine in MySQL is a natural progression. It eliminates the need for "workaround" architectures seen in PostgreSQL, where data must first be written to a row-store before being converted into a columnar format.

    The Power of the Binlog Ecosystem: MySQL’s "dual-log" mechanism (Binlog and Redo Log) is a double-edged sword; while it impacts raw write performance, the Binlog provides unparalleled support for the broader data ecosystem. By providing a clean stream of data changes, it facilitates seamless replication to downstream systems. This is precisely why OLAP solutions like ClickHouse, StarRocks, and SelectDB have flourished within the MySQL ecosystem.

    Seamless HTAP Integration: When using DuckDB as a MySQL storage engine, the Binlog ecosystem remains fully compatible and intact. This allows the system to function as a data warehouse node that can still "egress" its own Binlog. In an HTAP (Hybrid Transactional/Analytical Processing) scenario, a primary MySQL node using InnoDB can stream Binlog directly to a downstream MySQL node using the DuckDB engine, achieving a perfectly compatible and fluid data pipeline.
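
     As a rough illustration of that pipeline (a minimal sketch: the host, replication user, and the 'DUCKDB' engine name are assumptions, not AliSQL's documented interface), the analytics node is just a standard GTID-based MySQL replica whose tables live in the columnar engine:

         -- On the downstream analytics node: ordinary MySQL 8 GTID replication
         -- (credentials and TLS options omitted for brevity).
         CHANGE REPLICATION SOURCE TO
           SOURCE_HOST = 'primary.example.com',  -- hypothetical primary host
           SOURCE_USER = 'repl',                 -- hypothetical replication user
           SOURCE_AUTO_POSITION = 1;             -- GTID-based positioning
         START REPLICA;

         -- Route a table to the columnar engine on this node only
         -- (the engine name is an assumption).
         ALTER TABLE orders ENGINE = DUCKDB;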

    • ozgune a day ago

      I feel this analysis is unfair to PostgreSQL. PG is highly extensible, allowing you to extend the write-ahead log, the transaction subsystem, foreign data wrappers (FDWs), indexes, types, replication, and more.

      I understand that MySQL follows a specific pluggable storage architecture. I also understand that the direct equivalent in PG appears to be table access methods (TAM). However, you don't need TAM to build this; I'd argue FDWs are much more suitable.
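
      For illustration, the FDW route needs no storage-engine hooks at all. With a hypothetical DuckDB FDW (the extension, server, and option names below are placeholders, not pg_lake's actual interface), it's just standard foreign-table DDL:

          -- Placeholder names throughout; shown only to illustrate the FDW route.
          CREATE EXTENSION duckdb_fdw;
          CREATE SERVER duckdb_srv
            FOREIGN DATA WRAPPER duckdb_fdw
            OPTIONS (database '/var/lib/analytics.duckdb');
          CREATE FOREIGN TABLE orders_analytics (
            id     bigint,
            amount numeric
          ) SERVER duckdb_srv
            OPTIONS (table 'orders');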

      Also, I think this design assumes that you'd swap PG's storage engine and replicate data to DuckDB through logical replication. The explanation then notes deficiencies in PG's logical replication.

      I don't think this is the only possible design. pg_lake provides a solid open source implementation on how else you could build this solution, if you're familiar with PG: https://github.com/Snowflake-Labs/pg_lake

      All up, I feel this explanation is written from a MySQL-first perspective. "We built this valuable solution for MySQL. We're very familiar with MySQL's internals and we don't think those internals hold for PostgreSQL."

      I agree with the solution's value and how it integrates with MySQL. I just think someone knowledgeable about PostgreSQL would have built things in a different way.

      • zenmac a day ago

        Thanks for providing this from the PG perspective. I also wonder whether a storage engine such as OrioleDB would be better suited, with FDWs handling consistency between the DuckDB copy and the primary copy of the same data?

        • adsharma a day ago

          The only concern I have about OrioleDB is how long it's taking to get to GA.

          Anyone using it in prod even with the beta status?

      • baotiao a day ago

        Actually, that’s not the case. I also support PostgreSQL products in my professional work. However, specifically regarding this issue—as I mentioned in my article—it is simply easier to integrate DuckDB by leveraging MySQL's binlog and its pluggable storage engine architecture.

    • benterix a day ago

      It looks like you pasted the output from an LLM verbatim; the first line is a bit confusing. It's a pity, because the answer itself is meaningful.

      • joshiain a day ago

        I think we can give them a pass on this one. They appear to be one of the developers, and I suspect English may not be their first language, so they asked an LLM to help translate. If they don't read English well, I can see how they accidentally included that first line.

      • darkwater a day ago

        I guess they don't speak English at all, but they could clearly improve their prompting skills :)

    • mavamaarten a day ago

      So you pasted someone's comment in an LLM and posted the output here. Cool. Not really.

      • dust42 a day ago

        He's Chinese, and if you had looked at his comment history you'd know this is not someone who uses LLMs for karma farming; his blog shows a long history of posting about database topics going back to before there was GPT.

        Should I ever participate in a Chinese speaking forum, I'd certainly use an LLM for translation as well.

      • phireal a day ago

        Looks to me like they're using an LLM for _translation_, not for generating a response. The model output even says "Here's the _translation_" (emphasis mine).

infogulch a day ago

How I see SQL databases evolving over the next 10 years:

    1. integrate an off-the-shelf OLAP engine
       forward OLAP queries to it
       deal with continued issues keeping the two datasets in sync
    2. rebase the OLTP and OLAP engines onto a unified storage layer
       storage layer supports page-aligned row-oriented files, column-oriented files, and remote files
       still have data and semantic inconsistencies from running two engines
    3. merge the engines (sketched below)
       policy to automatically archive old records to a compressed column-oriented file format
       option to move archived record files to remote object storage, fetch on demand
       queries seamlessly integrate data from freshly updated records and archived records
       the only noticeable difference: queries over very old records take a few seconds longer to return
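
If stage 3 ever lands, the user-facing surface might be little more than a declarative tiering policy. A purely hypothetical sketch (no shipping engine accepts this syntax today):

    -- Hypothetical stage-3 DDL; nothing implements this today.
    ALTER TABLE orders SET ARCHIVE POLICY (
      AFTER  INTERVAL 90 DAY,            -- rows older than this are rewritten
      FORMAT PARQUET,                    -- into compressed column-oriented files
      TIER   's3://cold-bucket/orders/'  -- on object storage, fetched on demand
    );
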
polskibus 2 days ago

Does this continuously feed DuckDB data from transactional workloads, akin to what SAP HANA does? If so, that would be huge - people spend lots of time trying to stitch transactional data into warehouses using Kafka/Debezium.

BTW, would be great to hear apavlo’s opinion on this.

jimmyl02 2 days ago

HTAP is here! It seems like these hybrid databases are slowly gaining adoption, which is really cool to see.

The most interesting part of this is the improvements to transaction handling that it seems they've made in https://github.com/alibaba/AliSQL/blob/master/wiki/duckdb/du... (it's also a good high-level breakdown of MySQL internals). Ensuring that the sync between the primary tables and the analytical ones is fast and, most importantly, transactional is awesome to see.

  • necubi 2 days ago

    I don't think this is meaningfully HTAP; it's gluing together two completely different databases under a single interface. As far as I can tell, it doesn't provide transactional or consistency guarantees different than what you'd get with something like Materialize.

    This isn't new either; people have been building OLAP storage engines into MySQL/Postgres for years, e.g., pg_ducklake and timescale.

    • redwood 2 days ago

      Genuinely curious: in what situation would you actually want transactional consistency in the same session where you're serving analytical or vector-retrieval-style use cases?

      I might make the argument that paying the tax of delivering what you're arguing for has so many significant downsides that, in the end, you'd have something you wouldn't really want anyway.

dzonga 2 days ago

having an embedded columnar database for analytics in your traditional db is a massive win for productivity + operational simplicity.

at the moment I use PG + Tiger Data - couldn't find a MySQL equivalent.

so this is one.

Keyframe 2 days ago

On a drive-by glance it looks like a more tightly integrated version of a PSQL FDW for DuckDB plus vector storage - meets Vespa. I find it interesting that they went with extending MySQL instead of the FDW route on PSQL.

  • jitl 2 days ago

    probably they had millions of lines of code already using mysql

enamya 2 days ago

the commit history looks a bit weird: 2 commits in 2022, 1 each in 2024 and 2025, and 5 in 2026 (one of which is "First commit, Support DuckDB Engine")

  • knallfrosch 2 days ago

    Just guessing, but it probably wasn't planned as open source.

    The real version control history might be full of useless internal Jira ticket references and confidential product information, in Mandarin, maybe not even in git... there are a thousand reasons to surface only a minimal fake git history, hand-crafted from major releases.

redwood a day ago

Wonder how DuckDB compares here to what TiDB did using ClickHouse instead

  • baotiao a day ago

    I’m quite certain that if DuckDB had been open-sourced and reached stability around 2020, TiDB would have definitely chosen DuckDB instead of ClickHouse.

    • redwood 19 hours ago

      Interesting. I'd think they serve different purposes

manishsharan a day ago

FoundationDB's Record Layer doesn't get much attention here, but I have found that it satisfies all my use cases.

And I get the benefit of resiliency and DR for free.

If you are developing for MySQL and you are using Java/Kotlin/Clojure/Scala, consider this as well.

cies a day ago

I get the feeling that Oracle is abandoning MySQL.

Let's all hope Ali will pick it up :)

I'm fully invested in Postgres, though.

milesward 2 days ago

[flagged]

  • akie 2 days ago

    In almost no situation is laughing at what someone says appropriate, and that includes here.

IhateAI 2 days ago

[flagged]

  • tclancy a day ago

    Quickly becoming my least-favorite account. If you’re going to have a schtick, have a schtick. Write your comments in an old-timey voice or iambic pentameter or whatever, include a signature, ASCII art, lean into being annoying.

aussieguy1234 2 days ago

I hope the poor devs who built this weren't subjected to the brutal 996 culture (9am-9pm, 6 days per week)