Show HN: HelpHub – GPT chatbot for any site
helphub.commandbar.com

Hi HN,
I’m the founder of a SaaS platform called CommandBar (YC S20). We’ve been mucking around with AI-related side quests for a while, but recently got excited enough about one to test it with some customers. Results were surprisingly good so we decided to launch it.
HelpHub is AI chat + semantic search for any website or web app.
You can add source content in 3 ways:

- Crawling any public site via a URL (e.g. your marketing site or blog)
- Syncing with a CMS (like Zendesk or Intercom)
- Adding content manually
The chatbot is then “trained” on that content and will answer questions based on that content only, without drawing on the model’s background knowledge.
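That “answer from the source content only” behavior is the usual retrieve-then-answer pattern. A minimal sketch of the idea (a toy word-overlap score stands in for real embedding similarity, and the names are illustrative, not HelpHub’s actual code):

```python
# Minimal retrieve-then-answer sketch (illustrative only).
# A real system would rank chunks by embedding similarity; a toy
# word-overlap score stands in here.

import re

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query words found in the doc."""
    q = tokens(query)
    return len(q & tokens(doc)) / max(len(q), 1)

def build_prompt(query: str, docs: list[str], top_k: int = 2) -> str:
    """Pick the top_k most relevant docs and constrain the model to
    answer from them only, so background knowledge stays out."""
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
    context = "\n---\n".join(ranked[:top_k])
    return (
        "Answer using ONLY the context below. If the answer is not in "
        "the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "HelpHub can crawl any public site via a URL.",
    "Pricing starts with a free tier of 2,000 messages per month.",
]
prompt = build_prompt("How do I add my site's content?", docs)
```

The constraint in the system text is what keeps answers grounded in the crawled content rather than in whatever the model happens to know.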
The output is an embeddable widget that contains two things: a chatbot interface for users to ask questions, and a search interface for users to search directly through the content the bot is trained on (as well as view the source content).
You can play around with a demo on some popular sites here: https://helphub.commandbar.com
Some features we added that make it better IMO than just chat:

- Suggested questions (based on the page the user is on and their chat history)
- Suggested follow-up questions in a chat response
- Ask a question about a specific doc
- Recommend content based on who the user is and where they are
Would love to hear feedback (not lost on me that there are other chatgpt-for-your-site products and we are probably missing a ton of functionality from there) and can also share details about how we built this. It’s not rocket science but does feel magic :)
-James
One thing I always hated about chatbot sites when they were the craze, and the AI help bot sites now, is that these sites do not provide a chatbot for their own site. I mean, why isn't there a CommandBar for commandbar.com?
I actually see that commandbar.com has an intercom chat widget.
That's a very valid concern.
However, we do love to use our products! Once logged in, you will see that we replaced the Intercom chat widget with HelpHub there. We still offer Intercom chat as a fallback if you need to talk to a human.
Intercom chat is better if you have the resources to support it.
AI is then theoretically the next best thing.
I'm all for "charge more", but these two restrictions on a $249 a month plan seem nuts:
- 20 Commands
- “Powered By” Branding
After paying over $100 a month, it's unusual to still be stuck with vendor branding, and a limit of 20 commands seems to defeat the purpose of the product.
$20m in funding, so let's say a $100m post-money valuation – you probably need at least 10k sites running this at $250/mo for that valuation to stack up.
That pricing is for our whole product, which includes more than just HelpHub.
For HH only we have a free tier with 2k messages / mo and a $50 tier for 10k messages a month.
https://www.commandbar.com/pricing-helphub
I think it's a good idea and someone will see success here, unfortunately it's far too slow to be useful right now (HN effect?)
HN has slowed it down a bit but speed is definitely not where I want it to be. A lot of that comes from the openai side atm. Just turned on caching for the suggested questions so at least those are extremely fast.
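Caching suggested questions like that can be as simple as memoizing per page URL. A rough sketch with the slow LLM call stubbed out (hypothetical, not the production code):

```python
# Sketch: cache suggested questions per page URL so repeat visitors
# skip the slow LLM round-trip. The generator is a stub; in production
# it would call the model.

from functools import lru_cache

def generate_questions_uncached(page_url: str) -> tuple[str, ...]:
    # Stand-in for an expensive LLM call.
    return (
        f"What does {page_url} cover?",
        f"How do I get started with {page_url}?",
    )

@lru_cache(maxsize=1024)
def suggested_questions(page_url: str) -> tuple[str, ...]:
    return generate_questions_uncached(page_url)

first = suggested_questions("/pricing")
second = suggested_questions("/pricing")  # served from cache, no LLM call
```

Keying only on the page URL trades away the "based on chat history" personalization for those cached hits, which is presumably why it applies to suggested questions and not to answers themselves.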
How much future planning do you do around the whole "Don't build your business on top of another business?" axiom?
I realize this is just one part of your product, but what if OpenAI goes away (not directly, but effectively)? Is there a contingency plan to move to another LLM? Build something else?
Tested out the landing page chatbot and got one “We couldnt find an answer to your question” and another request timeout. Not confident at all using this with any product.
Well, what do you expect from a product that is put together as an AI-related side quest :)
it's a main quest now!!
Oops looking into it. Do you remember which bot you used and what question you asked?
Same issue here – toggled AI chats only and selected one of the default questions; maybe the API met the HN hug of death(?)
Wahhh. Strava example?
After some digging I discovered something. This error happens when the API returns a 400 Bad Request. This led me to identify two scenarios in which it can happen:
1. When a query is sent while another query is still being processed, it consistently triggers a 400 Bad Request. Subsequent queries also yield the same error code.
2. Although less common, random queries can sometimes result in a Bad Request error. I do not know how helpful this information is, but I can provide the chat_id associated with the instance: 677cfad3-5084-40d0-81d0-08592b5927f5.
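For what it's worth, the first scenario can be guarded against client-side by serializing requests, so a second question is never sent while one is still in flight. A rough sketch (the send function is a stub, not HelpHub's API):

```python
# Sketch: serialize chat queries client-side so a second question is
# never sent while one is in flight (the concurrent-query 400 case).
# The send function is a stub standing in for the real API call.

import threading

class ChatClient:
    def __init__(self, send):
        self._send = send          # callable that performs the API call
        self._lock = threading.Lock()

    def ask(self, question: str) -> str:
        # Only one request at a time; later callers wait instead of
        # racing into a 400 Bad Request.
        with self._lock:
            return self._send(question)

log = []
client = ChatClient(lambda q: (log.append(q), f"answer to {q}")[1])

threads = [
    threading.Thread(target=client.ask, args=(f"q{i}",)) for i in range(3)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

This only masks the symptom, of course; the server rejecting (rather than queuing) concurrent queries is the underlying issue.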
Great catch - should be fixed now!
What's the difference between helphub and https://www.chatbase.co/?
I'm not super familiar with chatbase (which looks awesome in its own right). Some things that I haven't seen before:

- Semantic search added
- Ability to open source docs in the widget
- Recommendation sets
- Personalized suggested questions
Also we're pretty focused on embedded use cases and looks like chatbase is more generically usable (e.g. on discord, via API).
But honestly, I expect these products (including our own) to grow so much over the next few months that this answer could totally change.
One is VC funded
One is YC funded
How is it different from https://admin.hellotars.com/ai?
Without knowing that product well, I think the main difference is that HelpHub is not just a ChatBot. It's also a full in-app help center with semantic search etc. The ChatBot integrates with the rest of the features and among other things links you to the sources it used to generate the answers.
How do users who can't find an answer get an answer from support?
HelpHub has a way to add a large CTA for that as a fallback. E.g. our own HelpHub implementation has a _Message Us_ button at the bottom to trigger a chat with a human support agent.
I've wanted a product like this since I first encountered chatGPT.
How do you handle curation? Meaning... if the model picks up some out of date info or misinterprets it, and a human admin notices that and wants to mark something as out of date or wrong, can they? I see this as similar to the way I can correct ChatGPT over the course of a chat session, and it will remember the corrections.
If we're doing our job right then HH should only be answering based on its source content (and not background knowledge). So bad answers coming from incorrect source content would need to be corrected in the source content. Citations should help with this, e.g. as a human admin: review answer -> notice it's bad -> click citation -> locate the incorrect part -> change it.
Also thinking about ways to ensure the bot answers common questions correctly while still being able to personalize responses. Working on something called "answer shaping" where an admin can write out a response and tag it with the question it responds to. Then the bot would first check to see if the user's question matches a cached question, and if so would prioritize using info from the cached answer in its response. Seems like this can give the bot freedom to personalize the answer but make sure it includes the right stuff.
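A sketch of that answer-shaping idea (hypothetical, not the shipped implementation; a toy Jaccard similarity stands in for embedding distance):

```python
# Sketch of "answer shaping": match the user's question against a bank
# of admin-written canonical answers, and if one is close enough,
# inject it into the prompt so the model prioritizes it.

import re

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def similarity(a: str, b: str) -> float:
    """Toy Jaccard similarity standing in for embedding cosine distance."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / max(len(ta | tb), 1)

# Admin-written canonical Q -> A pairs (illustrative content).
SHAPED = {
    "How do I cancel my subscription?": "Go to Settings > Billing and click Cancel.",
}

def shape_prompt(question: str, threshold: float = 0.5) -> str:
    best = max(SHAPED, key=lambda q: similarity(question, q))
    if similarity(question, best) >= threshold:
        # The model personalizes the wording but must include the
        # admin-approved substance.
        return (
            "Prefer this admin-approved answer, personalized for the user: "
            f"{SHAPED[best]}\n\nQuestion: {question}"
        )
    return f"Question: {question}"

prompt = shape_prompt("how can I cancel my subscription")
```

The threshold is doing the real work here: too low and unrelated questions get shaped, too high and paraphrases miss the cache.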
> e.g. as a human admin: review answer -> notice it's bad -> click citation -> locate the incorrect part -> change it.
I understand why you want the flow to work this way (single source of truth, fix things at the root, simplifies everything), but, respectfully, it is really bad from a UX perspective. Here are the main reasons:
1. Not all admins will have the ability to edit all source pages, both from a permissions perspective (e.g. a Zendesk ticket or Slack message created by someone else) and a technical-ability perspective (e.g. you need to edit HTML and create a PR).
2. People are busy and lazy. If I can see the problem in the answer, notice it's wrong, and correct it right now on the page where I see it, I will. Otherwise, I often won't. Think busy CS agents, developers in the midst of problem solving, etc.
Yes, supporting this workflow makes life harder on you, because it's technically more complex, but it's the way people will want to use this product.
I guess my concern is being a system of adjusted record on top of a system of record. Wouldn't it be a cluster if changes are being made to docs in CommandBar but not [Zendesk]?
That said maybe there's room to store bot-specific stuff in CB. For example, tagging passages with "exclude this from training data" if they're causing bad answers for some reason.
Indeed, it will be a total cluster.
From a pure engineering perspective, it is obviously the wrong solution, and your initial suggestion is the right one.
Yet....
I strongly predict that the forces of market demand and human behavior will push your product inexorably in this direction.
What I like most about this is that it's not just a chatbot but rather a full in-app help center with a chatbot built-in.
Congrats on the launch!
How do you approach handling sensitive data that might come from a CMS in an AI context?
Both: 1. Sensitive data being surfaced accidentally during regular conversation and 2. Malicious actors using prompt injection or similar techniques?
For CMSs we built custom integrations (not just a generic crawler) that strip out obviously sensitive info like internal notes for support staff.
Nothing revolutionary to report on the prompt injection stuff. Most people using HH are using it for public documentation so there really isn't any info in the source content that couldn't be surfaced in an answer.
I made a much crappier, little personal project with similar goals: https://github.com/mkwatson/chat_any_site
Congrats on the launch! How does this differ from something like Fin from Intercom? https://www.intercom.com/fin
Congrats on the launch! Super speedy setup and loved the Strava example!
What do you have on the roadmap for upcoming features? Is there anything you're particularly excited about adding?
Wrote about a few things here: https://www.commandbar.com/blog/why-we-built-helphub
There is so much low-hanging fruit:

- Edit the system prompt
- Shape common answers
- Give the bot access to actions so it can act like ChatGPT with plugins
Can the chat be handed off to a human seamlessly in the event the bot can't answer the question?
Not yet but this has come up a lot so thinking about it. We probably don't want to build our own human ticketing system so the clearest path would be to have an entrypoint to kick the user over to something like intercom and maybe provide the context of the AI conversation as history in that interface. Not ideal to have two chat interfaces tho :/
Ideally we could be the UI layer for intercom, zendesk, etc. We already do that for docs search / exploration.
Why does this homepage spike my CPU?
Lotties :sweat
When the end user asks a question, is it sent to OpenAI? Or is this an LLM you built yourself?
Right now it's hitting OpenAI, but we want to try out other LLMs in the future.
Thanks. That might be a privacy issue, especially if the requester is from the EU
Any plans for a Discord bot?
Not currently but would be pretty easy with our API. Just a different UI.