Show HN: SnapQL – Desktop app to query Postgres with AI

github.com

92 points by nicktikhonov a day ago

SnapQL is an open-source desktop app (built with Electron) that lets you query your Postgres database using natural language. It’s schema-aware, so you don’t need to copy-paste your schema or write complex SQL by hand.

Everything runs locally — your OpenAI API key, your data, and your queries — so it's secure and private. Just connect your DB, describe what you want, and SnapQL writes and runs the SQL for you.
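The flow the description implies — connect, introspect the schema, embed it in the prompt, ask for SQL — can be sketched roughly like this. The helper names are hypothetical and SQLite stands in for Postgres (which would use `information_schema` instead of `sqlite_master`); SnapQL's actual implementation may differ:

```python
# Rough sketch of schema-aware prompt assembly; not SnapQL's actual code.
# SQLite stands in for Postgres so the example is self-contained.
import sqlite3

def introspect_schema(conn):
    """Collect CREATE TABLE statements so the LLM sees the real column names."""
    rows = conn.execute(
        "SELECT sql FROM sqlite_master WHERE type = 'table' AND sql IS NOT NULL"
    ).fetchall()
    return "\n".join(r[0] for r in rows)

def build_messages(conn, user_request):
    """Build a chat payload: schema in the system prompt, request as the user turn."""
    schema = introspect_schema(conn)
    return [
        {"role": "system",
         "content": "You translate questions into a single read-only SQL query.\n"
                    "Database schema:\n" + schema},
        {"role": "user", "content": user_request},
    ]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL, created_at TEXT)")
messages = build_messages(conn, "total sales last month")
```

The resulting `messages` list is what would be sent to the OpenAI chat completions endpoint; the model's reply is then executed against the same connection.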

joshstrange an hour ago

I might test this out, but I worry that it suffers from the same problems I ran into the last time I played with LLMs writing queries: specifically, not understanding your schema. It might understand relations, but most production tables have oddly named columns, columns that changed function over time, deprecated columns, internal-lingo columns, and the list goes on.

Granted, I was using 3.5 at the time, but even with heavy prompting and trying to explain what certain tables/columns are used for, feeding it the schema, and feeding it sample rows, more often than not it produced garbage. Maybe 4o/o3/Claude4/etc can do better now, but I’m still skeptical.

  • nicktikhonov 35 minutes ago

    might be possible to solve this with prompt configuration. e.g. you'd be able to explain to the llm all the weird naming conventions and unintuitive mappings

    • joshstrange 26 minutes ago

      I did that the last time (again, only with 3.5, things have hopefully improved in this area).

      And I could potentially see LLMs being useful to generate the “bones” of a query for me but I’d never expose it to end-users (which was what I was playing with). So instead of letting my users do something like “What were my sales for last month?” I could use LLMs to help build queries that were hardcoded for various reports.

      The problem is that I know SQL, I'm pretty good at it, and I have a perfect understanding of my company's schema. I might ask an LLM a generic SQL question, but trying to feed it my schema just leads to (or rather "led to", in my trials before) prompt hell. I spent hours tweaking the prompts, feeding it more context, begging it to ignore the "cash" column that has been deprecated for 4+ years, etc. After all of that it would still make simple mistakes that I had specifically warned against.

gabrielruttner 21 hours ago

This is nice -- we're heavy users of postgresql and haven't found the right tool here yet.

I could see this being incredible if it had a set of performance related queries or ran explain analyze and offered some interpreted results.
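The performance idea is easy to prototype: run the plan, then hand the plan text to the model for interpretation. In Postgres that would be `EXPLAIN (ANALYZE)`; here is a rough sketch using SQLite's `EXPLAIN QUERY PLAN` as a stand-in:

```python
# Sketch of "run the plan, then ask the model to interpret it".
# SQLite's EXPLAIN QUERY PLAN stands in for Postgres's EXPLAIN ANALYZE.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("CREATE INDEX idx_users_email ON users (email)")

query = "SELECT * FROM users WHERE email = ?"
plan_rows = conn.execute("EXPLAIN QUERY PLAN " + query, ("a@b.c",)).fetchall()
# Each row's last column is the human-readable plan detail,
# e.g. whether the query searches an index or scans the table.
plan_text = "\n".join(row[-1] for row in plan_rows)
```

`plan_text` could then go back to the LLM with a prompt like "explain this plan and suggest missing indexes".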

Can this be run fully locally with a local llm?

jasonthorsness a day ago

Looks useful! And the system prompt didn't require too much finessing. I wonder how it would work with models later than gpt-4o; in my own dabbling, gpt-4o wasn't quite there yet, and the latest models are getting really good.

For analytical purposes, this text-to-SQL is the future; it's already huge with Snowflake (https://www.snowflake.com/en/engineering-blog/cortex-analyst...).

  • nicktikhonov 21 hours ago

    Appreciate the input! I'd love to be able to support more models. That's one of the issues in the repo right now. And I'd be more than happy to welcome contributions to add this and other features

anshumankmr 21 hours ago

Would love to contribute. I have made a fork, will try and raise a PR if contributions are welcome.

Question: how are you testing this? Doing it on dummy data is a bit too easy. These models, even 4o, falter when it comes to something really specific to a domain (like the supply chain data I work with, and other column names specific to my work that only make sense to me and my team, but wouldn't make any sense to an LLM unless it somehow knows what those columns are).

  • nicktikhonov 21 hours ago

    I'm using my own production databases at the moment. But it might be quite nice to be able to generate complex databases with dummy data in order to test the prompts at the higher levels of complexity!

    And thank you for offering to contribute. I'll be very active on GitHub!

sgarland 13 hours ago

Genuinely do not understand the point of these tools. There is already a practically natural language for querying an RDBMS; it's called SQL. I guarantee you, anyone who knows any other language could learn enough SQL to do 99% of what they wanted in a couple of hours. Give it a day of intensive study, and you'd know the rest. It's just not that complicated.

  • brulard 12 hours ago

    SQL is simple for simple needs: basic joins and some basic aggregates. Even that you won't learn in 2 hours. And that is just scratching the surface of what can be done in SQL and what you need to query. With LLMs and tools like this you simply say what you need in English; you don't need to understand normalization, m:n relation tables, CTEs, functions, JSON access operators, etc.

    • sgarland 11 hours ago

      For reference, I’m a DBRE. IMO, yes, most people can learn basic joins and aggregates in a couple of hours, but that is subjective.

      > you don’t need to understand the normalizations

      You definitely should. Normalizing isn’t that difficult of a concept, Wikipedia has terrific descriptions of each level.

      As to the rest, maybe read docs? This is my primary frustration with LLMs in general: people seem to believe that they’re just as good of developers as someone who has read the source documentation, because a robot told them the answer. If you don’t understand what you’re doing, you cannot possibly understand the implications and trade-offs.

      • aurareturn an hour ago

        Thank goodness 99% don’t want to understand everything. Otherwise, you wouldn’t be paid very well at your job, right?

  • v5v3 4 hours ago

    In a business, a management decision maker has to rely on a DB analyst whenever a query of theirs can't be answered by the front-end tools they've been given. That introduces latency to the process.

    A 100% accurate ai powered solution would have many customers.

    But can this generation of llms produce 100% accuracy?

  • physix 10 hours ago

    Without having looked at it, I would assume the value comes from not having to know the data model in great detail, such that you can phrase your query using natural language, like

    "Give me all the back office account postings for payment transfers of CCP cleared IRD trades which settled yesterday with a payment amount over 1M having a value date in two days"

    That's what I'd like to be able to say and get an accurate response.

  • nicktikhonov 13 hours ago

    and yet this was on the front page of hacker news for an entire day :D

    it's all about friction. why spend minutes writing a query when you can spend 5 seconds speaking the result you want and get 90-100% of the way there?

    • sgarland 13 hours ago

      Mostly because you don’t know if it’s correct unless you know SQL. It’s entirely too easy to get results that look correct but aren’t, especially when using windowing functions and the like.

      But honestly, most queries I’ve ever seen are just simple joins, which shouldn’t take you 5 minutes to write.
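The "looks correct but isn't" failure mode is easy to reproduce with join fan-out, where summing a parent column after joining its children silently multiplies it (illustrative sketch in SQLite):

```python
# Join fan-out: a classic query that looks right but returns inflated totals.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL);
CREATE TABLE order_items (order_id INTEGER, sku TEXT);
INSERT INTO orders VALUES (1, 100.0);
INSERT INTO order_items VALUES (1, 'a'), (1, 'b'), (1, 'c');
""")

# Looks reasonable, but the join repeats the order row once per item:
wrong = conn.execute(
    "SELECT SUM(o.total) FROM orders o JOIN order_items i ON i.order_id = o.id"
).fetchone()[0]   # 300.0, not 100.0

# Correct: aggregate without (or before) the fan-out join.
right = conn.execute("SELECT SUM(total) FROM orders").fetchone()[0]  # 100.0
```

Both queries run without error and return a plausible number, which is exactly why a reviewer who can't read SQL won't catch the difference.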

      • AdieuToLogic 12 hours ago

        > Mostly because you don’t know if it’s correct unless you know SQL. It’s entirely too easy to get results that look correct but aren’t ...

        This is the fundamental problem when attempting to use "GenAI" to make program code, SQL or otherwise. All one would have to do is substitute SQL with language/library of choice above and it would be just as applicable.

        • sgarland 10 hours ago

          Fully agree, I just harp on SQL because a. It’s my niche b. It always seems to be a “you can know this, but it doesn’t really matter” thing even for people who regularly interact with RDBMS, and it drives me bonkers.

      • brulard 11 hours ago

        > most queries I’ve ever seen are just simple joins

        Good for you. Some of us deal with more complex queries, even if it may not seem so from the outside. For example, getting hierarchical data based on parent_id while having non-trivial conditions for the parents and the children, or product search queries that need trigram functions with some ranking, depending on product availability across stores and user preferences.

        I agree knowing SQL is still useful, but more for double checking the queries from LLMs than for trying to build queries yourself.

        • sgarland 10 hours ago

          > getting hierarchical data based on parent_id

          So, an adjacency list (probably, though there are many alternatives, which are usually better). That’s not complex, that’s a self-join.
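For what it's worth, a single self-join only reaches one level of the hierarchy; arbitrary depth needs a recursive CTE. A minimal sketch of the adjacency-list walk, in SQLite for self-containment:

```python
# Walking an adjacency list (parent_id) to arbitrary depth with a recursive CTE.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE categories (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT);
INSERT INTO categories VALUES
  (1, NULL, 'root'),
  (2, 1, 'child'),
  (3, 2, 'grandchild');
""")

descendants = conn.execute("""
WITH RECURSIVE tree AS (
  SELECT id, name FROM categories WHERE id = 1
  UNION ALL
  SELECT c.id, c.name FROM categories c JOIN tree t ON c.parent_id = t.id
)
SELECT name FROM tree
""").fetchall()
# [('root',), ('child',), ('grandchild',)]
```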

          > trigram functions

          That’s an indexing decision, not a query. It’s also usually a waste: if you’re doing something like looking up a user by email or name, and you don’t want case sensitivity to wreck your plan, then use a case-insensitive collation for that column.

          > I agree knowing SQL is still useful, but more for double checking the queries from LLMs

          “I agree knowing Python / TypeScript / Golang is still useful, but more for double checking the queries from LLMs.” This sounds utterly absurd, because it is. Why SQL is seen as a nice-to-have instead of its reality - the beating heart of every company - is beyond me.

          • brulard 2 hours ago

            Your Python / TypeScript etc. argument is a strawman; that's why it sounds absurd. Your arguments would hold better if the average person were good and very quick at learning and memorizing complex new things. I don't know if you work with people like that, but that's definitely not the norm. Even developers know little SQL unless it's their specific focus.

            In the original comment you said:

            > I guarantee you, anyone who knows any other language could learn enough SQL to do 99% of what they wanted in a couple of hours. Give it a day of intensive study, and you’d know the rest. It’s just not that complicated.

            Well, your "guarantee" does not hold up. Where I live, every college-level developer went through multiple semesters of database courses, and yet I don't see these people proficient in SQL. In a couple of hours? 99% of what they need? Absurd.

jpb0104 21 hours ago

I like this a lot. I am looking forward to having something similar built into Metabase.

throwmeaway9876 20 hours ago

Great tool!

Pardon my technical ignorance, but what exactly is OpenAI's API being used for in this?

  • nicktikhonov 20 hours ago

    OpenAI LLM is used to generate SQL based on a combination of a user prompt and the database schema.

sirjaz 21 hours ago

Looks like a good idea. Any reason you didn't use React native?

  • nicktikhonov 21 hours ago

    Not really - I had some previous experience with electron and wanted to finish the core feature set in a few hours, so just went with what I already know.

kebsup a day ago

I was looking for something like this that supports graphs.

iJohnDoe 15 hours ago

Which MCP is the recommended or "official" one for SQLite and PostgreSQL for use with Cursor?

thedudeabides5 17 hours ago

data engineering about to be eaten by llms

zicon35 a day ago

congrats on the launch! This looks very interesting

GarrickDrgn a day ago

Am I misunderstanding something? How is this "Everything runs locally" if it's talking to OpenAI's APIs?

  • piskov a day ago

    I guess he means there is no proxy between you and openai. API key won’t leak, etc.

  • nicktikhonov 21 hours ago

    What I meant was that it isn't a web app and I don't store your connection strings or query results. I'll make this more clear

    • kokanee 20 hours ago

      It is a web app, though. You just aren't running the server, OpenAI is. And you're packaging the front end in electron instead of chrome to make it feel as if it all runs locally, even though it doesn't.

      Side note: I don't see a license anywhere, so technically it isn't open source.

    • omega3 21 hours ago

      You might not but openai does.

      • nicktikhonov 21 hours ago

        That makes no sense. OpenAI doesn't know the secret database connection string or any query results. Perhaps you should have read the code before making baseless claims.

        • nessbot 21 hours ago

          But it knows what you're querying, which depending on what you're doing may also give away a good bit about whats in the DB.

      • doctorpangloss 21 hours ago

        API gateways could accept public keys instead of generating bearer tokens. Then the private key could reside in an HSM, and apps like this could give HSMs requests to sign. IMO even though this could be done in an afternoon, everyone - Apple and Google, the CDN / WAF provider, the service provider - is too addicted to the telemetry.

esafak a day ago

If you can do this, can't you create a read-only user and use it with a database MCP like https://github.com/executeautomation/mcp-database-server ? Am I missing something?
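For reference, the Postgres side of this is roughly `CREATE ROLE reporter LOGIN` plus `GRANT SELECT ON ALL TABLES IN SCHEMA public TO reporter`. The same guardrail can also be enforced at the connection level; sketched here with SQLite's read-only open mode since SQLite has no roles:

```python
# Read-only enforcement at the connection level: any write the generated
# SQL attempts is rejected by the database, not by prompt engineering.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE t (x INTEGER)")
rw.commit()
rw.close()

# Reopen the same file read-only via a URI.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
try:
    ro.execute("INSERT INTO t VALUES (1)")
    blocked = False
except sqlite3.OperationalError:
    blocked = True  # writes are rejected regardless of what the LLM generates
```

This matters for LLM-generated SQL because the model can't be trusted to never emit a DELETE, even when told not to.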

  • nicktikhonov a day ago

    You can set up an MCP and use it in your existing AI app, but afaiu this is the first open-source standalone app that gives you a familiar interface like other SQL workspace tools. I built it to be a familiar but much more powerful experience for both technical and nontechnical people.

bobbyraduloff 18 hours ago

[flagged]

  • nicktikhonov 18 hours ago

    Interesting lead. What else would they be looking for in a tool like this? My bad re the video, I'll make sure not to toggle dark mode in the next one.

jaimin888patel a day ago

awesome work nick, literally been asking for a vibe coding SQL interface for months

  • nicktikhonov 21 hours ago

    thanks Jaimin. happy you finally found what you were looking for :D