Show HN: Arc – high-throughput time-series warehouse with DuckDB analytics

github.com

22 points by ignaciovdk 15 hours ago

Hi HN, I’m Ignacio, founder at Basekick Labs.

Over the past few months I’ve been building Arc, a time-series data platform designed to combine very fast ingestion with strong analytical query performance.

What Arc does:

- ingest via a binary MessagePack API (fast path)
- stay compatible with Line Protocol for existing tools (like InfluxDB; I'm an ex-Influxer)
- store data as Parquet with hourly partitions
- query via the DuckDB engine using SQL
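
To give a feel for the two paths, here's a rough sketch in Python. The endpoint path and payload shape below are illustrative assumptions, not the exact Arc API:

    # Rough sketch only: the endpoint path and payload shape are assumptions,
    # not the exact Arc API.
    import time
    import msgpack   # pip install msgpack
    import requests  # pip install requests

    # Fast path: one MessagePack-encoded batch per POST.
    batch = [
        {"measurement": "cpu", "host": "edge-01",
         "usage": 87.5, "ts": time.time_ns()}
        for _ in range(1_000)
    ]
    requests.post(
        "http://localhost:8000/write/msgpack",        # hypothetical endpoint
        data=msgpack.packb(batch),
        headers={"Content-Type": "application/msgpack"},
    )

    # Query path: data lands as hourly-partitioned Parquet, so querying is plain
    # SQL through DuckDB, e.g.
    #   SELECT host, avg(usage)
    #   FROM read_parquet('cpu/2025/11/*/*/*.parquet')
    #   GROUP BY host;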

Why I built it:

Many systems force you to trade off retention, throughput, or operational complexity against each other. I wanted something where ingestion performance doesn’t kill your analytics.

Performance & benchmarks so far:

Write throughput: ~1.88M records/sec (MessagePack, untuned) on my M3 Pro Max (14 cores, 36 GB RAM).

ClickBench on AWS c6a.4xlarge: 35.18 s cold, ~0.81 s hot (43/43 queries succeeded).

In those runs, caching was disabled to match the benchmark rules; enabling the cache in production gives ~20% faster repeated queries.

I’ve open-sourced the Arc repo so you can dive into the implementation, benchmarks, and code. Would love your thoughts, critiques, and use-case ideas.

Thanks!

leguy 7 hours ago

In conjunction with Postgres for related relational data, I’m using Timescale for IoT-based time-series data.

Is this something I’d use instead of Timescale, or am I understanding correctly that the intention here is to be a data warehouse, where we could potentially offload older data to Arc for longer-term storage or trend analysis?

  • ignaciovdk 6 hours ago

    Hey, thanks for asking.

    I’d say both roles are possible, though the original intent of Arc was indeed to act as an offload / long-term store for systems like TimescaleDB, InfluxDB, Kafka, etc. The idea: you send data into Arc to take storage and query load off your primary database, and then use it for ML, deep analysis, etc.

    But as we built it, we discovered that Arc is really good not just at storing data but at actively answering queries, so it’s kind of a hybrid: somewhat “warehouse-like,” but still retaining database-level query performance. I feel that calling it a database is too much, but we’re heading in that direction.

    IoT is absolutely one of the core use cases. You’re often ingesting tens or hundreds of thousands of events per second from edge devices, and you need a system that doesn’t choke. Our binary MessagePack ingestion shrinks the payload and reduces parsing overhead, which allows higher write throughput, and that’s crucial in IoT scenarios.
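
    As a rough illustration (the record layout here is made up, not Arc's wire format), the same point encoded as MessagePack is noticeably smaller than its JSON equivalent and decodes without any text parsing:

      # Made-up record layout, purely to compare encodings; not Arc's wire format.
      import json
      import msgpack  # pip install msgpack

      point = {"measurement": "cpu", "tags": {"host": "edge-01"},
               "fields": {"usage": 87.5}, "ts": 1730000000000000000}

      as_json = json.dumps(point).encode()
      as_msgpack = msgpack.packb(point)

      # MessagePack is the smaller of the two and skips string parsing entirely.
      print(len(as_json), len(as_msgpack))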

    Let me know if you want to explore this a little more. Not to sell you anything (at least not yet), I’d just love to understand your use case. If you’re open to it: ignacio[at]basekick[dot]net

bormaj 7 hours ago

Exciting project and definitely something I'd like to explore using. I particularly like the look of the API ergonomics. A few questions:

- is the schema inferred from the data?
- can/does the schema evolve?
- are custom partitions supported?
- is there a roadmap for future features?

  • ignaciovdk 6 hours ago

    Thanks! Let’s go by parts, as Jason would say

    Schema inference: yes, Arc infers the schema automatically from incoming data (both for MessagePack and Line Protocol). Each measurement becomes a table, and fields/tags map to columns.

    Schema evolution: supported. New fields can appear at any time; they’re added to the Parquet schema automatically, with no migration or downtime.
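
    As a concrete illustration of what that looks like at the Parquet/DuckDB layer, DuckDB can already merge files whose schemas evolved via union_by_name. This is generic DuckDB behaviour, not necessarily Arc's internal mechanism:

      # Two Parquet files with different schemas, queried together.
      # Generic DuckDB illustration, not necessarily how Arc does it internally.
      import duckdb

      con = duckdb.connect()
      # Older file: (ts, host, usage). Newer file adds a 'temp' column.
      con.sql("COPY (SELECT 1 AS ts, 'edge-01' AS host, 87.5 AS usage) "
              "TO 'cpu_old.parquet' (FORMAT PARQUET)")
      con.sql("COPY (SELECT 2 AS ts, 'edge-01' AS host, 80.0 AS usage, 41.0 AS temp) "
              "TO 'cpu_new.parquet' (FORMAT PARQUET)")

      # union_by_name merges the schemas; missing columns come back as NULL.
      print(con.sql("""
          SELECT * FROM read_parquet(['cpu_old.parquet', 'cpu_new.parquet'],
                                     union_by_name=true)
      """).fetchall())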

    Custom partitions: currently partitioning is time-based (hour-level by default), but custom partitioning by tag (host, device, region, etc.) is planned. The idea is to let you group by any tag in the storage path for large-scale IoT data.

    Roadmap: absolutely. Grafana data source, Prometheus remote write, retention policies, gRPC streaming, and distributed query execution are all in the works.

    We’re going to start blogging about it, so stay tuned.

    Would love any feedback on what you’d prioritize or what would make adoption easier for your use case.

drchaim 8 hours ago

Sounds interesting, just some questions:

- are tables partitioned? By year/month?
- how do you handle too many small Parquet files?
- are updates/deletes allowed/planned?

  • ignaciovdk 8 hours ago

    Great questions, thanks!

    Partitioning: yes, Arc partitions by measurement > year > month > day > hour. This structure makes time-range queries very fast and simplifies retention policies (you can drop by hour/day instead of re-clustering).
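
    To show what the hour-level layout buys you: retention can be a plain directory drop rather than a data rewrite. A minimal sketch, assuming an illustrative path layout rather than Arc's exact on-disk format:

      # Minimal retention sketch: drop whole hour partitions older than 30 days.
      # Path layout (measurement/year/month/day/hour/*.parquet) is illustrative.
      import shutil
      from datetime import datetime, timedelta, timezone
      from pathlib import Path

      root = Path("data/cpu")
      cutoff = datetime.now(timezone.utc) - timedelta(days=30)

      for hour_dir in root.glob("*/*/*/*"):
          y, m, d, h = (int(p) for p in hour_dir.relative_to(root).parts)
          if datetime(y, m, d, h, tzinfo=timezone.utc) < cutoff:
              shutil.rmtree(hour_dir)  # retention = deleting whole hour partitions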

    Small Parquet files: we batch writes by measurement before flushing, typically every 10 K records or 60 seconds. That keeps file counts manageable while maintaining near-real-time visibility. Compaction jobs (optional) can later merge smaller Parquet files for long-term optimization.
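
    Conceptually the flush policy is just a count-or-age threshold per measurement. A minimal sketch (not the actual implementation; a real version would also flush on a timer even when no new writes arrive):

      # Count-or-age flush sketch: flush a measurement's buffer at 10k records
      # or after 60 seconds, whichever comes first.
      import time
      from collections import defaultdict

      MAX_ROWS, MAX_AGE_S = 10_000, 60.0
      buffers = defaultdict(list)
      first_write = {}

      def append(measurement, record, flush):
          buf = buffers[measurement]
          if not buf:
              first_write[measurement] = time.monotonic()
          buf.append(record)
          age = time.monotonic() - first_write[measurement]
          if len(buf) >= MAX_ROWS or age >= MAX_AGE_S:
              flush(measurement, buf)   # e.g. write one Parquet file for this batch
              buffers[measurement] = []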

    Updates/deletes: today Arc is append-only (like most time-series systems). Updates/deletes are planned via “rewrite on retention”, meaning you’ll be able to apply corrections or retention windows by rewriting affected partitions.
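
    A rough sketch of what “rewrite on retention” means in practice, with made-up paths and column names rather than Arc's API:

      # Apply a "delete" by rewriting the affected partition without the matching
      # rows. Paths and columns are illustrative, not Arc's API.
      import duckdb

      con = duckdb.connect()
      # Stand-in for one hour partition containing two hosts.
      con.sql("COPY (SELECT * FROM (VALUES ('edge-01', 87.5), ('edge-13', 42.0)) "
              "AS t(host, usage)) TO 'hour_partition.parquet' (FORMAT PARQUET)")

      # "Delete" edge-13 by rewriting the partition without those rows.
      con.sql("""
          COPY (SELECT * FROM read_parquet('hour_partition.parquet')
                WHERE host <> 'edge-13')
          TO 'hour_partition.rewritten.parquet' (FORMAT PARQUET)
      """)
      # The original file would then be swapped for the rewritten one.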

    The current focus is on predictable write throughput and analytical query performance, but schema evolution and partial rewrites are definitely on the roadmap.

leakycap 14 hours ago

Did you consider confusion with the Arc browser and still go with the name, or were you calling this Arc first and decided to just stick with it?

  • ignaciovdk 14 hours ago

    Hey, good question!

    I didn’t really worry about confusion since this isn’t a browser, it’s a completely different animal.

    The name actually came from “Ark”, as in something that stores and carries, but I decided to go with Arc to avoid sounding too biblical.

    The deeper reason is that Arc isn’t just about ingestion; it’s designed to store data long-term for other databases like InfluxDB, Timescale, or Kafka using Parquet and S3-style backends that scale economically while still letting you query everything with SQL.

  • bl4kers 11 hours ago

    The browser is dead anyway

  • nozzlegear 11 hours ago

    Didn't that browser get mothballed by its devs?

simlevesque 13 hours ago

I'll try this right now. I'm looking to self-host duckdb because MotherDuck is way too expensive.

  • ignaciovdk 13 hours ago

    Awesome, would love to hear what you think once you try it out!

    If it’s not too much trouble, feel free to share feedback at ignacio [at] basekick [dot] net.

whalesalad 12 hours ago

> Arc Core is designed with MinIO as the primary storage backend

Noticing that all the benchmarking is being done with MinIO, which I presume is also running alongside/locally, so there's no network latency and it will be roughly as fast as whatever underlying disk it's operating from.

Are there any benchmarks for using actual S3 as the storage layer?

How does Arc decide what to keep hot and local? TTL based? Frequency of access based?

We're going to be evaluating Clickhouse with this sort of hot (local), cold (S3) configuration soon (https://clickhouse.com/docs/guides/separation-storage-comput...) but would like to evaluate other platforms if they are relevant.

  • ignaciovdk 12 hours ago

    Hey there, great questions.

    The benchmarks weren’t run on the same machine as MinIO, but on the same network, connected over a 1 Gbps switch, so there’s a bit of real network latency, though still close to local-disk performance.

    We’ve also tried a true remote setup before (compute around ~160 ms away from AWS S3). I plan to rerun that scenario soon and publish the updated results for transparency.

    Regarding “hot vs. cold” data, Arc doesn’t maintain separate tiers in the traditional sense. All data lives in the S3-compatible storage (MinIO or AWS S3), and we rely on caching for repeated query patterns instead of a separate local tier.

    In practice, Arc performs better than ClickHouse when using S3 as the primary storage layer. ClickHouse can scan faster in pure analytical workloads, but Arc tends to outperform it on time-range–based queries (typical in observability and IoT).

    I’ll post the new benchmark numbers in the next few days; they should give a clearer picture of the trade-offs.