Show HN: HelpHub – GPT chatbot for any site

108 points by dazzeloid 14 days ago

Hi HN,

I’m the founder of a SaaS platform called CommandBar (YC S20). We’ve been mucking around with AI-related side quests for a while, but recently got excited enough about one to test it with some customers. Results were surprisingly good so we decided to launch it.

HelpHub is AI chat + semantic search for any website or web app.

You can add source content in 3 ways:

- Crawling any public site via a URL (e.g. your marketing site or blog)
- Syncing with a CMS (like Zendesk or Intercom)
- Adding content manually

The chatbot is then “trained” on that content and will answer questions based on that content only, without drawing on the model’s broader background knowledge.

The output is an embeddable widget that contains two things: the chatbot interface for users to ask questions, and a search interface for users to search directly through the content the bot is trained on (as well as view source content).
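The post doesn’t describe HelpHub’s internals, but the “trained on your content, answers from that content only” pattern is usually retrieval-augmented generation: embed the source docs, retrieve the closest ones to a question, and constrain the model’s prompt to them. A minimal sketch, assuming nothing about HelpHub’s actual code (the toy bag-of-letters `embed` just makes it runnable; a real system would call an embedding model):

```python
# Minimal retrieval-augmented chat sketch (illustrative, not HelpHub's code).

def embed(text: str) -> list[float]:
    # Placeholder embedding: letter-frequency vector, so the sketch runs
    # without an external embedding API.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank source docs by similarity to the question; keep the top k."""
    q = embed(question)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    """Constrain the LLM to the retrieved content only."""
    context = "\n---\n".join(retrieve(question, docs))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\n"
        f"Question: {question}"
    )
```

The final prompt would then go to whatever chat-completion API the widget uses; the retrieved docs also double as the citations shown in the search interface.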

You can play around with a demo on some popular sites here:

Some features we added that make it better IMO than just chat:

- Suggested questions (based on the page the user is on and their chat history)
- Suggested follow-up questions in a chat response
- Ask a question about a specific doc
- Recommend content based on who the user is and where they are

Would love to hear feedback (not lost on me that there are other chatgpt-for-your-site products and we are probably missing a ton of functionality they have) and can also share details about how we built this. It’s not rocket science but does feel like magic :)


nutanc 14 days ago

One thing I always hated about chatbot sites when they were the craze, and about the AI help bot sites now, is the fact that these sites do not provide a chatbot for their own site. I mean, why isn't there a CommandBar for

I actually see that it has an Intercom chat widget.

  • tkainrad 13 days ago

    That's a very valid concern.

    However, we do love to use our products! Once logged in, you will see that we replaced the Intercom chat widget with HelpHub there. We still offer Intercom chat as a fallback if you need to talk to a human.

  • colecut 14 days ago

    Intercom chat is better if you have the resources to support it.

    AI is then theoretically the next best thing.

mritchie712 14 days ago

I'm all for "charge more", but these two restrictions on a $249 a month plan seem nuts:

- 20 Commands

- “Powered By” Branding

After paying over $100 a month, it's unusual to still see vendor branding, and a limit of 20 commands seems to defeat the purpose of the product.

  • tommoor 14 days ago

    $20m in funding, so let's say a $100m post-money valuation – probably need at least 10k sites running this at $250/mo for that valuation to stack up.

tommoor 14 days ago

I think it's a good idea and someone will see success here, unfortunately it's far too slow to be useful right now (HN effect?)

  • dazzeloid 14 days ago

    HN has slowed it down a bit, but speed is definitely not where I want it to be. A lot of that comes from the OpenAI side atm. Just turned on caching for the suggested questions, so at least those are extremely fast.

    • nkozyra 14 days ago

      How much future planning do you do around the whole "Don't build your business on top of another business?" axiom?

      I realize this is just one part of your product, but what if OpenAI goes away (not directly, but effectively)? Is there a contingency plan to move to another LLM? Build something else?

pech0rin 14 days ago

Tested out the landing page chatbot and got one “We couldnt find an answer to your question” and another request timeout. Not confident at all using this with any product.

  • melvinmelih 14 days ago

    Well, what do you expect from a product that was put together as an AI-related side quest :)

    • dazzeloid 14 days ago

      it's a main quest now!!

  • dazzeloid 14 days ago

    Oops looking into it. Do you remember which bot you used and what question you asked?

    • AugurCognito 14 days ago

      Same issue here: toggled AI chats only and selected one of the default questions. Maybe the API met the HN hug of death(?)

      • dazzeloid 14 days ago

        Wahhh. Strava example?

        • AugurCognito 14 days ago

          After some digging I discovered something. This error happens when the API returns a 400 Bad Request error. This led me to identify two scenarios in which it can happen:

          1. When a query is sent while another query is being processed, it consistently triggers a 400 Bad Request error. Subsequent queries also yield the same error code.

          2. Although less common, random queries can sometimes result in a Bad Request error. I do not know how helpful this information is, but I can provide the chat_id associated with the instance: 677cfad3-5084-40d0-81d0-08592b5927f5.

          • vayyala 14 days ago

            Great catch - should be fixed now!

arberx 14 days ago

What's the difference between HelpHub and

  • dazzeloid 14 days ago

    I'm not super familiar with chatbase (which looks awesome in its own right). Some things that I haven't seen before:

    - Semantic search added
    - Ability to open source docs in the widget
    - Recommendation sets
    - Personalized suggested questions

    Also we're pretty focused on embedded use cases and looks like chatbase is more generically usable (e.g. on discord, via API).

    But honestly, I expect these products (including our own) to grow so much over the next few months that this answer could totally change.

  • ativzzz 14 days ago

    One is VC funded

ishjindal 14 days ago

How is it different from

  • tkainrad 14 days ago

    Without knowing that product well, I think the main difference is that HelpHub is not just a ChatBot. It's also a full in-app help center with semantic search etc. The ChatBot integrates with the rest of the features and among other things links you to the sources it used to generate the answers.

    • futhey 13 days ago

      How do users who can't find an answer get an answer from support?

      • tkainrad 13 days ago

        HelpHub has a way to add a large CTA for that as a fallback. E.g. our own HelpHub implementation has a _Message Us_ button in the bottom to trigger a chat with a human support agent.

jonahx 14 days ago

I've wanted a product like this since I first encountered chatGPT.

How do you handle curation? Meaning... if the model picks up some out of date info or misinterprets it, and a human admin notices that and wants to mark something as out of date or wrong, can they? I see this as similar to the way I can correct ChatGPT over the course of a chat session, and it will remember the corrections.

  • dazzeloid 14 days ago

    If we're doing our job right then HH should only be answering based on its source content (and not background knowledge). So bad answers coming from incorrect source content would need to be corrected in the source content. Citations should help with this, e.g. as a human admin: review answer -> notice it's bad -> click citation -> locate incorrect part -> change it.

    Also thinking about ways to ensure the bot answers common questions correctly while still being able to personalize responses. Working on something called "answer shaping" where an admin can write out a response and tag it with the question it responds to. Then the bot would first check whether the user's question matches a cached question, and if so would prioritize using info from the cached answer in its response. Seems like this can give the bot freedom to personalize the answer while making sure it includes the right stuff.

    • jonahx 14 days ago

      > e.g. as a human admin: review answer -> notice it's bad -> click citation -> locate incorrect part -> change it.

      I understand why you want the flow to work this way (single source of truth, fix things at the root, simplifies everything), but, respectfully, it is really bad from a UX perspective. Here are the main reasons:

      1. Not all admins will have the ability to edit all source pages, whether from a permissions perspective (e.g. a Zendesk ticket or Slack message created by someone else) or a technical-ability perspective (e.g. you need to edit HTML and create a PR).

      2. People are busy and lazy. If I can see the problem in the answer, notice it's wrong, and correct it right now on the page where I see it, I will. Otherwise, I often won't. Think busy CS agents, developers in the midst of problem solving, etc.

      Yes, supporting this workflow makes life harder on you, because it's technically more complex, but it's the way people will want to use this product.

      • dazzeloid 14 days ago

        I guess my concern is being a system of adjusted record on top of a system of record. Wouldn't it be a cluster if changes are being made to docs in CommandBar but not [Zendesk]?

        That said maybe there's room to store bot-specific stuff in CB. For example, tagging passages with "exclude this from training data" if they're causing bad answers for some reason.

        • jonahx 14 days ago

          Indeed, it will be a total cluster.

          From a pure engineering perspective, it is obviously the wrong solution, and your initial suggestion is the right one.


          I strongly predict that the forces of market demand and human behavior will push your product inexorably in this direction.

tkainrad 14 days ago

What I like most about this is that it's not just a chatbot but rather a full in-app help center with a chatbot built-in.

thih9 14 days ago

Congrats on the launch!

How do you approach handling sensitive data that might come from a CMS in an AI context?

Both:

1. Sensitive data being surfaced accidentally during regular conversation, and
2. Malicious actors using prompt injection or similar techniques?

  • dazzeloid 14 days ago

    For CMSs we built custom integrations (not just a generic crawler) that strip out obviously sensitive info like internal notes to support people.

    Nothing revolutionary to report on the prompt injection front. Most people using HH are using it for public documentation, so there really isn't any info in the source content that couldn't be surfaced in an answer.
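The CMS sanitization described above — stripping internal notes before content is indexed — can be thought of as a field allowlist applied to each synced article. A sketch under that assumption (the field names are invented; real CMS payloads differ per integration):

```python
# Hypothetical sketch of sanitizing CMS articles before indexing: keep only
# an allowlist of public fields and drop internal-only ones.
PUBLIC_FIELDS = {"title", "body", "url", "tags"}

def sanitize_article(article: dict) -> dict:
    """Drop anything not explicitly allowlisted (e.g. internal agent notes)."""
    return {k: v for k, v in article.items() if k in PUBLIC_FIELDS}

raw = {
    "title": "How to reset your password",
    "body": "Click 'Forgot password' on the login page.",
    "internal_notes": "Customer keeps hitting this; escalate to eng.",
    "url": "/help/reset-password",
}
clean = sanitize_article(raw)
assert "internal_notes" not in clean
```

An allowlist is the safer default here: a blocklist of known-sensitive fields fails open when a CMS adds a new internal field, while an allowlist fails closed.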

thepaulthomson 14 days ago

Congrats on the launch! Super speedy setup and loved the Strava example!

What do you have on the roadmap for upcoming features? Is there anything you're particularly excited about adding?

JshWright 14 days ago

Can the chat be handed off to a human seamlessly in the event the bot can't answer the question?

  • dazzeloid 14 days ago

    Not yet but this has come up a lot so thinking about it. We probably don't want to build our own human ticketing system so the clearest path would be to have an entrypoint to kick the user over to something like intercom and maybe provide the context of the AI conversation as history in that interface. Not ideal to have two chat interfaces tho :/

    Ideally we could be the UI layer for intercom, zendesk, etc. We already do that for docs search / exploration.

frabjoused 14 days ago

Why does this homepage spike my CPU?

kukkukb 14 days ago

When the end user asks a question, is it sent to OpenAI? Or is this an LLM you built yourself?

  • dazzeloid 14 days ago

    Right now it's hitting OpenAI, but we want to try out other LLMs in the future.

    • kukkukb 13 days ago

      Thanks. That might be a privacy issue, especially if the requester is from the EU

emptysongglass 14 days ago

Any plans for a Discord bot?

  • dazzeloid 14 days ago

    Not currently but would be pretty easy with our API. Just a different UI.