Show HN: Abide – Prevent employees from leaking API keys/NDA data to AI tools

tryabide.co

5 points by millgrove a year ago

Hi HN,

We’re the founders of Abide — a tool to prevent employees from leaking sensitive data to AI models.

Over the past month we’ve spoken to lots of friends at tech companies, law firms, creative agencies, and more. There was a consistent theme: companies aren’t sure how AI tools affect their data privacy and security processes.

There are concerns that when employees use LLMs to write code or send emails, there's a security/compliance risk (e.g., accidentally sharing an API key in a prompt) or a violation of privacy agreements (e.g., using AI to draft a memo about non-public client work).

The companies we spoke with have done one of two things: (a) outright banned use of LLMs until they figure out a plan or (b) sent a memo to employees telling them to be cautious.

We want everyone to have the 10x productivity gain of AI, but compliance matters if a business wants to do right by its customers and regulators and keep its name out of the news.

For now, we’ve built a straightforward product that does two things: (a) an app for compliance teams to upload words that cannot be sent to AI models and (b) a tool that monitors AI usage on employee laptops to enforce those policies.
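
To make that concrete, here's a minimal sketch of the kind of check the monitoring piece runs on an outbound prompt. The term list and function name below are purely illustrative, not our actual implementation:

  # Illustrative sketch only -- not Abide's real code or term list.
  blocked_terms = ["project falcon", "acme-internal", "client_list.csv"]

  def check_prompt(prompt: str) -> list[str]:
      """Return any compliance-team terms found in an outbound AI prompt."""
      text = prompt.lower()
      return [term for term in blocked_terms if term in text]

  # check_prompt("Summarize the Project Falcon contract") -> ["project falcon"]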

We’re in beta with a couple of customers, but we’d love for you to check out the demo on the website! If you’re interested, join the sign-up list and we’ll set up an account for you today to poke around. We welcome any questions, feedback, ideas, etc.!

Thanks! Jitesh & Vaibhav

peter_l_downs a year ago

How does this actually work?

If you're classifying "banned" / "sensitive" text strings, intercepting network requests on all employees' computers, and then showing the "banned" / "sensitive" text strings to admins in a web interface... aren't the employees leaking a ton of information to you, a third party?

  • millgrove a year ago

    Hey Peter, that's a great point! I've listed my thoughts below:

    A couple of things: 1. From a compliance / risk point of view, the customers we've spoken to treat AI-enabled tools differently from standard enterprise products. The main reason is that the standard terms for free-tier usage specify that model developers might use your inputs to train future iterations unless a user opts out. For that reason, a single third party that doesn't do anything with your data is better than the various AI tools that employees might use (and if you don't have an enterprise license with one provider like ChatGPT, that won't stop your employees from using free alternatives).

    2. When it comes to sensitive data, there are different tiers. Companies like to keep their customer lists private, for example, but many tools in the sales stack rely on reading that data out of Salesforce, as long as those vendors pass a security/compliance audit. Private keys are another tier of sensitivity, and we absolutely don't expect our users to share those. They can, however, flag other text strings like "key=", "API_KEY", or "username".
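
    For illustration only (simplified, not our actual matching code), flagging patterns like those might look roughly like this:

      import re

      # Hypothetical flagged patterns a compliance team might configure.
      flagged_patterns = [
          re.compile(r"key\s*=", re.IGNORECASE),
          re.compile(r"API_KEY", re.IGNORECASE),
          re.compile(r"username", re.IGNORECASE),
      ]

      def flag_hits(prompt: str) -> list[str]:
          """Return the patterns that match a given prompt."""
          return [p.pattern for p in flagged_patterns if p.search(prompt)]

      # flag_hits("API_KEY=sk-123, username=jdoe") matches all three patterns.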