Show HN: Open Prompts – dataset of 10M Stable Diffusion generations

279 points by vipermu 14 days ago

Open Prompts is the dataset used to build Krea. The data comes from the Stability AI Discord and includes around 10M images from 2M prompts. You can use it to build semantic search engines for prompts, train LLMs, fine-tune image-to-text models like BLIP, or extract insights from the data, such as the most common combinations of modifiers.

Samin100 14 days ago

Great work! If anyone’s planning to use AI generated artwork in their projects, I’ve added an image search API to Lexica, similar to Unsplash. All the images are licensed CC0 and millions more are being added every few weeks.

Docs here:

  • Lerc 14 days ago

    Been using Lexica quite a bit for prompt analysis, thanks for the work.

    General browsing is heavily dominated by portraits though. I was wondering if it would be worth having a face detected flag on images so you could filter portraits.

    • dvrp 14 days ago

      That’s a really cool idea, especially since that in particular doesn’t seem too hard. Do you have any ideas on how you’d implement it?

  • tgtweak 14 days ago

    Great work on Lexica, it's been indispensable for finding good prompts and combinations.

    How much does it cost to host it? Feels like hosting 500GB of images and serving them can't be cheap.

    • Samin100 14 days ago

      Last month Lexica served a little over 1 billion images and the Cloudflare bill (I'm using R2 + workers) was a little over $5k. I've since gotten it down to a more reasonable amount after spending some time to re-encode the images to reduce our bandwidth usage significantly. If Lexica were running on AWS/S3 I imagine our first month's bill would be closer to $100k rather than $5k. This is only image serving, so not including costs to run the beefy CPU servers to run CLIP search, frontend, DB, backend, etc.

      • indigodaddy 14 days ago

        Why not go with a server or two or some VMs on Hetzner/Kimsufi/OVH/Netcup/BuyVM etc where they have very generous included transfer or even unmetered (BuyVM) ?

        I get it that everyone wants to use the trendy newest tech (workers etc or whatever the latest is), but your bill could easily be 20% (or less) of the $5k kind of numbers you are mentioning.

        I guess if those kinds of numbers are just water under the bridge for you, then you may as well go with the easier cloud setup/infra though.

        • RandomBK 14 days ago

          Having used many different providers (though not all of the ones on your list), be very careful with suspiciously generous or unmetered anything. More often than not, you'll hit a soft limit where your performance will degrade and/or you'll get kicked for ToS violations.

          Most apps never hit those limits, but once you get to multiple thousands per month in cloud bills, there are benefits to pay-as-you-go billing, where the provider is incentivized to let you use as much as you want to pay for.

          • capableweb 14 days ago

            > More often than not, you'll hit a soft limit where your performance will degrade and/or you'll get kicked for ToS violations.

            Hetzner does have truly unmetered root servers, been using it myself for years. If you're doing Tor, Torrents or commercial CDN traffic, they might kick you out, but for other "normal" things, they seem to have no issues with handling it as they say they will.

      • dvrp 14 days ago

        Besides search and image listing, what other plans do you have with Lexica? Also, do you plan to open-source anything of it?

      • avereveard 14 days ago

        Weird, 1 billion 50 kB requests end up at about $5k on the CloudFront pricing calculator. Do you have a bandwidth estimate? Idk how you got that $100k quote.

      • johndough 14 days ago

        What is the bandwidth like? I guess something on the order of a few hundred TB? Perhaps you could host this from an "unmetered" server for $50 per month. Not sure how high your peak loads are, though.
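        For a rough sanity check of that guess (the ~100 kB average image size here is my assumption, not a figure from the thread):

```python
# Back-of-envelope bandwidth for ~1 billion images served per month.
images_per_month = 1_000_000_000
avg_image_kb = 100            # assumed average size after re-encoding

total_kb = images_per_month * avg_image_kb
total_tb = total_kb / 1e9     # 1 TB = 1e9 kB (decimal units)
print(total_tb)               # 100.0 TB/month
```

        With these assumptions the total lands at 100 TB/month, in the ballpark of the guess above.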

  • vipermu 14 days ago

    Thanks! Great work with Lexica.

    We also released a free API if anyone wants to check it out.

    It will soon have endpoints with custom image generation features.

    • edreismd 14 days ago

      Amazing! Great to see more prompt APIs!

  • edreismd 14 days ago

    Amazing work, love Lexica! Thank you!

password4321 14 days ago

Show HN: I made 7k images with DALL-E 2 to create a reference/inspiration table

  • davidbarker 14 days ago

    Thanks for posting this — I made it!

    I've been steadily adding new prompts/images, and in fact today was the first time a set of user-submitted prompts was added.

    I'm so pleased to know it's useful to others.

  • vipermu 14 days ago

    really cool! that must have been a lot of work. Here's another great site for references:

    • davidbarker 14 days ago

      Thanks for the kind words. I collected them and built the site over ~2 months. The financial cost to run the prompts on DALL•E 2 was the hardest part!

  • dvrp 14 days ago

    I already knew your site. I don’t recall where I saw it. I shared it with Victor right away because of the sheer amount of content! Insane!

    In fact, I thought for a while about how to potentially integrate it with Krea somehow, but I came up empty. If you have any ideas, please reach out via Twitter!

Oras 14 days ago

This is fantastic. A few days ago I was checking PromptBase [0] and thought it was a really good idea. Yours just took it to the next level, being free with a massive amount of data.

Great work.


  • vipermu 14 days ago

    Thanks! Prompts can be hard to create. We hope that with access to these kinds of datasets, we'll be able to create tools and conduct studies that help us create better images and better understand the possibilities of AI models like Stable Diffusion.

  • dvrp 14 days ago


    I’d love to integrate our crawler with GitHub Actions and make it a self-updating dataset…

    There’s so much stuff to do!

    • ionwake 14 days ago

      Amazing work great stuff !!!!!

  • tasuki 14 days ago

    I have a negative emotional reaction to PromptBase. Stable diffusion is free and someone tries to make a business out of adding little value on top of it? It's not wrong or anything, I just don't like it...

whalesalad 14 days ago

This is wild to me. Now we have meta-AI surrounding other forms of AI: analyzing the user-submitted input as well as the image output, using AI to infer intent, identify nouns, etc. And yet all of this is predicated on the initial datasets these text->img robots were trained on, which may or may not be a true representation of our actual culture. So we are lava/magma layering all of these approximations on top of each other and gluing them with scrambled eggs. I think this is all really cool, for the record; it's just something I have been thinking about. For art, I love it; for a self-driving vehicle, lmao.

  • vipermu 14 days ago

    We're living in some crazy times! Truly an AI summer.

    If you enjoy thinking about what the future of this field might look like, I highly recommend watching the interview between Yannic Kilcher and Sebastian Risi.

    I was mind-blown after hearing it. It had been a long time since I'd heard such an interesting conversation. It's crazy how well Risi's ideas correlate with the way complex systems emerge in nature (optimizing locally), and the idea of self-organizing systems is just amazing.

    • dr_dshiv 14 days ago

      > AI Summer


nextaccountic 14 days ago

This is fantastic, thanks for publishing it. I'm glad many players in the Stable Diffusion ecosystem are striving for openness (not only with the model itself; there are also open-source frontends and related tooling)

  • vipermu 14 days ago

    Stable Diffusion was released just a month ago, and look at the number of applications and improvements that have already been developed; it feels like a year!

    Open source is the way to get the most out of this tech. We plan to keep building all the features that are to come in this way.

    • pwillia7 14 days ago

      It is truly crazy, the pace of updates. It's the first single topic I feel like I can't keep up with no matter how much time I spend on it.

      It's like how they say Da Vinci's was the last generation that could know 'everything', but now too much new information arrives in real time!

      • dvrp 14 days ago

        There’s definitely no way to keep up with machine learning today. Let alone all the world’s knowledge.

        AI will be the new Da Vinci.

smusamashah 14 days ago

I went back to my AI creations on NightCafe from 5 months ago. Almost all of them look pretty ugly/stupid now.

It was the same as when I first saw gameplay of NFS Most Wanted: it looked so realistic then, but now it absolutely does not.

This effect is amazing; I don't know if it has a name though.

cercatrova 14 days ago

How does it compare with Lexica?

  • dvrp 14 days ago

    Krea dev here.

    Lexica is a search engine, but it doesn’t allow you to create collections or like generations.

    Regarding the API, both have public APIs although I’m not sure if you can paginate through several search results using the public Lexica API. In the Krea Prompts API, you can do cursor-based pagination.

    Finally, the Lexica API allows you to do CLIP-based search, whereas with Krea we are using PostgreSQL full-text search (for now). However, the code to do CLIP search with the dataset (including reverse image search) is in the repository.

    (edit: also, neither Lexica nor other search engines or similar products are offering the dataset, afaik.)

    • edreismd 14 days ago

      Nice! That is cool!

  • vipermu 14 days ago

    The source (Stability AI Discord) is the same, but I don't know how Sharif gathered his data.

    • edreismd 14 days ago

      How to access the SD dataset?

      • dvrp 14 days ago

        In the repository there’s a link or a `wget` statement that you can execute to download the 10M dataset.

maaaaattttt 14 days ago

Is there any effort being made to rate prompts with regard to the image the model outputs and/or which image the user chose as satisfactory?

I could (probably naively) imagine that this would be the next step in making these models even more pleasing to humans. Or at least in creating a GPT-based "companion" model that would suggest, from an initial subpar prompt, a prompt yielding better results.

  • dvrp 14 days ago

    Well, we have a like + collection system at Krea.

    We may use the data from there to train custom models. Kind of the same as what MidJourney has, where they ask people to rate images in exchange for GPU hours as prizes.

    We haven’t thought deeply about it yet.

rhacker 14 days ago

someone should do a thing that lets strangers critique AI by allowing them to select:

A person has a glitchy face in this photo. A person has a glitchy body in this photo. etc..

and then train the AI to have a fixup pass.

  • mgraczyk 14 days ago

    One of the main things we're learning from the current trajectory of large models is that this kind of supervision isn't necessary. It's better to focus on bigger models, more data, and better datasets. Models will improve faster than we can come up with clever ways to add this supervision.

  • vipermu 14 days ago

    I wonder how CLIP search would work for finding errors in the dataset.

    For now, a workaround is to create your own "glitch" collection in Krea and store images with artifacts there.

    If you end up doing it we will add a "download all" button right away :)

    And all the prompts from each collection could also be added to Open Prompts for sure.

  • dvrp 14 days ago

    Interesting. Kind of like Scale AI for generative AI.

mod 14 days ago

Wish the back button could find my spot on the page.

Fun to explore the prompts getting results similar to what I want. Great project.

  • dvrp 14 days ago

    I know! I’m sorry that it doesn’t work that way.

    The site is using Svelte + SvelteKit, and I couldn’t find great Masonry components (like Masonic for React) that let me save and restore the scroll position easily. I can do it in hacky ways, but there are more important things to do.

    I’m also still trying to figure out why the back/forward cache isn’t working right away with my current implementation. It would make the site snappier and also address the issue you’re raising.

    Perhaps open-sourcing the code and figuring it out all together is the way…

masterspy7 14 days ago

I'm curious, I've seen a few sites like this which grab from the Stability Discord. Is there a way to quickly scrape this amount of data from a Discord server?

  • dvrp 14 days ago

    There are many ways. We actually did a basic script because we didn’t want to saturate Discord’s servers. You know, the classic politeness rules for scrapers…

    But scraping data quickly and doing a (D)DoS are almost synonymous.
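    As a minimal sketch of those politeness rules (the `fetch` callable here is hypothetical, standing in for whatever actually downloads a page):

```python
import time

def polite_fetch_all(urls, fetch, delay_s=1.0):
    """Fetch URLs one at a time with a fixed pause between requests.

    The sleep keeps the request rate far below anything that could
    look like a (D)DoS to the server being scraped.
    """
    results = []
    for url in urls:
        results.append(fetch(url))
        time.sleep(delay_s)
    return results

# Usage with a stand-in fetch function:
pages = polite_fetch_all(["a", "b"], lambda u: u.upper(), delay_s=0)
```

    A fixed delay like this caps the rate at one request per `delay_s` seconds, the simplest possible form of rate limiting.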

nextaccountic 14 days ago

Hey, was the specific Stable Diffusion version used to generate each image recorded anywhere in the dataset?

It doesn't say which version of the model was used to generate each image

It appears that later versions are better at generating faces or something, like Stable Diffusion 1.5 vs 1.4. (I'm not sure, but there's great variability nonetheless, and I wanted to know if the version of the model accounted for it.)

  • vipermu 14 days ago

    Good question. All the generations in the dataset were generated with version 1.3.

    • nextaccountic 14 days ago

      Could you somehow indicate this in the web UI? Maybe when there are more versions

      (Why aren't 1.4 images part of the dataset? Someone said they are public too)

      • dvrp 14 days ago

        We’re in the process of adding them

  • dvrp 14 days ago

    As @vipermu says, 1.3. 1.4 images are public but, afaik, 1.5 images are not.

hwers 14 days ago

What’s interesting about datasets like this is that you can likely use them to distill an even more compressed SD generator.

jononor 14 days ago

Nice. Another meta thing I would like to do is generate a bunch of prompts around a topic, mashed up with related or unrelated topics, so that I can get a bunch of images and review/curate them all in one go. Does anyone know of tooling in that direction?

  • vipermu 14 days ago

    You can use the code in the repository to do so. First compute the CLIP embeddings of each prompt, index them using something like k-nearest neighbors (so you can search for similar ones fast), and then, given an input prompt, you'll be able to find other indexed prompts that share its semantics.
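    A minimal numpy sketch of that pipeline, with random vectors standing in for real CLIP embeddings (assumptions on my part: in practice you would embed prompts with an actual CLIP text encoder, and use an ANN library such as faiss instead of brute force for 10M rows):

```python
import numpy as np

# Stand-in for CLIP text embeddings: 1000 prompts, 512-d vectors.
rng = np.random.default_rng(0)
prompts = [f"prompt {i}" for i in range(1000)]
prompt_embeddings = rng.normal(size=(1000, 512)).astype(np.float32)

# L2-normalize so a dot product equals cosine similarity.
prompt_embeddings /= np.linalg.norm(prompt_embeddings, axis=1, keepdims=True)

def nearest_prompts(query_vec, k=5):
    """Brute-force k-NN by cosine similarity over the indexed prompts."""
    q = query_vec / np.linalg.norm(query_vec)
    sims = prompt_embeddings @ q
    top = np.argsort(-sims)[:k]
    return [(prompts[i], float(sims[i])) for i in top]

# Usage: pretend this is the embedding of the input prompt.
results = nearest_prompts(prompt_embeddings[7], k=3)
```

    Because the vectors are L2-normalized, the dot product is cosine similarity, so an indexed prompt's nearest neighbor is always itself with similarity ~1.0.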

    • nextaccountic 14 days ago

      Oh my god. This means that we can use stable diffusion to do pure text processing, like, given two textual descriptions, assess how similar they are

      I now expect that the next model like GPT-3 will be multi-modal like Stable Diffusion, to better account for those semantic connections

      • alecdibble 14 days ago

        This is a very interesting point. It's like using Fourier Transforms/Laplace Transforms to do operations or comparisons that are way easier in those spaces.

XorNot 14 days ago

I'm very much looking forward to how collections like this influence the second generation AI models, which will include data like this and tend to rank it highly on alt-text/clip embedding alignment.

ipaddr 14 days ago

When it comes to faces or people all photos default to horror.

dr_dshiv 14 days ago

Radical. I’m imagining randomly sampling images and identifying the text attributes associated with human ratings of image beauty.

  • vipermu 14 days ago

    In Krea you can create collections of images with their prompts by pressing the "+" button on an image.

    You also have access to all the different components that make up each prompt, and you can search for similar ones by clicking them.

jaimex2 14 days ago

There's also

  • dvrp 14 days ago

    And OpenArt, PromptBase, PromptHero, Libraire… and who knows how many more will pop up in the next months… or shall I say hours ;-)?

Philomath 14 days ago

That's amazing, thanks for sharing. For how long have you been gathering this data?

  • vipermu 14 days ago

    We do not have a continuous system; the data is a mix of our own crawled generations and the dataset published by Dave Caruso. With our crawler, we were able to get about 100k generations per day.