If anyone's interested in synthetic data generation, we've built a fully interactive visual tool for SDG. It supports generating hierarchical topic trees like other tools, but we do two things others don't:
First: a fully interactive UI. This might sound unnecessary, but synthetic data generation is a creative, iterative process. It helps to review each step as you go, tweaking prompts. Are the topics right? Are the inputs realistic? Are the outputs reasonable? Once your prompts are dialed in, you can scale up the volume, but it takes that iterative loop to get there.
Second: we have many templates for common synthetic data gen use cases. For fine-tuning, you want to focus on the breadth of realistic inputs. For "bug" evals, you want to trigger specific error cases based on a description of the issue. For measuring evaluators/LLM judges, you need a topic tree mixing passing and failing data. There are also templates for safety-focused cases: bias, maliciousness, toxicity, jailbreaking, etc. These are good to bootstrap the creative process above, but you can edit each to meet your needs.
It's a free app on GitHub. Docs and videos: https://docs.kiln.tech/docs/synthetic-data-generation
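To make the topic-tree idea concrete, here's a minimal sketch of the kind of expansion loop involved (illustrative only, not Kiln's actual code; `llm` stands in for any completion function):

    import json

    def expand_topic(llm, topic, n=5):
        # Ask the model for n subtopics; expects a JSON array of strings back.
        reply = llm(f"List {n} distinct subtopics of '{topic}' as a JSON array of strings.")
        return json.loads(reply)

    def build_tree(llm, root, depth=2, breadth=5):
        # Recursively expand a root topic; leaves become prompts for sample generation.
        if depth == 0:
            return {root: []}
        return {root: [build_tree(llm, sub, depth - 1, breadth)
                       for sub in expand_topic(llm, root, breadth)]}

Each level of the tree is a natural review point: inspect the subtopics, prune or edit, then expand the next level.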
Ah right, kiln - Deepfabric was originally named promptwright, and I can see kiln has copied over some of our code and used it for its synth-gen (which is a nice compliment!)
We are actually planning on moving to graphs now, as we are seeing better results with them than with trees - check it out if you also want to use them in kiln, but you might want to wait until we validate a little more and lift it out of experimental.
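For a rough idea of why a graph can beat a tree here (a hypothetical example, not DeepFabric's internal representation): a DAG lets one subtopic hang off multiple parents instead of being duplicated under each.

    # Adjacency-list DAG: "networking" and "storage" are shared children
    # of two roots; a tree would need a separate copy under each parent.
    topic_dag = {
        "kubernetes": ["networking", "storage", "observability"],
        "aws":        ["networking", "storage", "iam"],
        "networking": ["dns failures", "load balancing"],
        "storage":    ["volume corruption"],
    }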
I think the key difference between the two, since kiln adopted the same approach, is the ability to generate reasoning / chain of thought and export to Alpaca, ChatML, etc. - along with direct export to unsloth.ai's formatting. I doubt we will have a UI, as it's for running on backend systems as part of an ML pipeline, along with being a library / SDK.
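For reference, those target formats are simple shapes. Here's one generated sample rendered both ways (the `sample` field names are just for illustration, not DeepFabric's exact schema):

    sample = {
        "instruction": "What is 17 * 6?",
        "reasoning": "17 * 6 = 17 * 5 + 17 = 85 + 17 = 102.",
        "output": "102",
    }

    # Alpaca: flat instruction/input/output records.
    alpaca = {
        "instruction": sample["instruction"],
        "input": "",
        "output": sample["reasoning"] + "\n\nAnswer: " + sample["output"],
    }

    # ChatML-style messages, as consumed by most chat fine-tuning stacks.
    chatml = {"messages": [
        {"role": "user", "content": sample["instruction"]},
        {"role": "assistant", "content": sample["reasoning"] + "\n\nAnswer: " + sample["output"]},
    ]}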
I personally wrote Kiln's SDG code myself -- no code was copied from here or anywhere else. Not sure where that claim is coming from, but it's not accurate.
I might have taken some of the prompts and modified them. I didn't recognize the new name, but I do recognize the old one.
Edit:
- Just confirmed: no code copied. Prompts were originally from the Pluto library, then modified by the library above, then modified again by me for Kiln.
- And just to clarify, Kiln has had support for chain of thought, reasoning, and all major export formats (ChatML/Unsloth/OpenAI/Hugging Face), plus API integrations with Together, Fireworks, OpenAI, and Google Vertex.
People should try both. I just want to be clear on the origins of the code/prompts, and the feature set.
No worries, it's not a big deal - I saw promptwright's name referenced in kiln's source. Best of luck, looks like a cool project.
Line 1 makes it pretty clear:
https://github.com/Kiln-AI/Kiln/blob/d38a64b598bf21939263bed...
# The contents of this file are adapted from the promptwrite library (https://github.com/StacklokLabs/promptwright),
# which was adapted from the pluto library (https://github.com/redotvideo/pluto).
Curious how the OP "just confirmed. No code copied."
I read the code. I also remember writing the code and that comment.
As disclosed: some prompt strings were taken and modified, but none of the code was. The original strings use a templating library that we don't support, so their code/strings wouldn't have worked in our codebase, nor would the wrapping code. Those interfaces/LOC are all unique. It's possible for some "content" to have been taken (partial prompt strings), but zero code, and for the statement "copied over some of our code and used it" to be incorrect.
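To illustrate the templating point with a made-up example (not the actual prompt strings): a Jinja-style placeholder is inert under Python's str.format, so strings written for one engine can't be dropped into a codebase that renders with another without rewriting them.

    jinja_style = "Generate {{ num_samples }} examples about {{ topic }}."
    print(jinja_style.format(num_samples=3, topic="devops"))
    # -> "Generate { num_samples } examples about { topic }."
    # Nothing substituted: str.format treats the doubled braces as escapes, not fields.

    format_style = "Generate {num_samples} examples about {topic}."
    print(format_style.format(num_samples=3, topic="devops"))
    # -> "Generate 3 examples about devops."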
Not trying to make a big deal of this, just clarifying these are separate libraries, with no shared code. Looks like the author saw the comment and assumed we used code (vs prompts); not a big deal, but not the case. Their work is super cool, and did inspire parts of my project.
Also worth noting, the library Pluto originated this prompt (as far as I know), and it's been tweaked/evolved many times over.
Hey there, this thread is getting derailed. Could you please create a separate post for your project and let this one be for discussion of deepfabric? Thanks!
Agreed, and sorry about that. Maybe edit the incorrect comment about "I can see kiln has copied over some of our code" for clarity. I get it was probably an honest mistake, but it's hard not to reply when people are claiming I copied something I didn't. Great project - people, go check out deepfabric!
How easy is it to pass an existing DB schema to this library in order to generate a testable synthetic dataset?
I would love to learn more and give it a try - I figure you can dump out to txt or csv.
You can raise an issue and I will certainly give it a go - or also reach me via the Discord link on the main repo. Let's see what we can do.
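As a rough sketch of what that flow could look like (hypothetical - this is not DeepFabric's API; `llm` stands in for the generator): introspect the schema, ask a model for plausible rows, and dump them to CSV.

    import csv, json, sqlite3

    def table_columns(db_path, table):
        # PRAGMA table_info returns (cid, name, type, notnull, dflt_value, pk) per column.
        with sqlite3.connect(db_path) as conn:
            return [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]

    def synth_rows(llm, columns, n=20):
        # Expects the model to return a JSON array of objects keyed by column name.
        reply = llm(f"Generate {n} realistic rows for columns {columns} as a JSON array of objects.")
        return json.loads(reply)

    def dump_csv(rows, columns, path):
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=columns)
            writer.writeheader()
            writer.writerows(rows)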
How is the diversity, duplication?
Very good, and even better with the new DAG approach - we have been using great-expectations to bench, and we're seeing very good diversity and low amounts of duplication. You can check out one of the recent CoT examples here: https://huggingface.co/datasets/lukehinds/deepfabric-devops-...
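For anyone wanting to run a similar check, here's a minimal version assuming the classic pandas-backed great_expectations API (pre-1.0); the file path and column name are placeholders:

    import great_expectations as ge
    import pandas as pd

    df = pd.read_json("dataset.jsonl", lines=True)  # one generated sample per line
    result = ge.from_pandas(df).expect_column_values_to_be_unique("question")
    print(result.success, result.result["unexpected_percent"])

    # Cruder proxy without GE: exact-duplicate rate straight from pandas.
    print(df["question"].duplicated().mean())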
Based on the description, I think it's using something similar to GLAN: https://arxiv.org/abs/2402.13064
Are there good synthetic datasets generated with DeepFabric publicly available?
Sure, just starting to get some up on HF. A good example might be GSM8K, as this shows the structured output where every result is strictly formatted. I am using this right now to train models and managing to get a small Qwen model up into the 60% range, which, wildly, is higher than Llama 2 and xAI's Grok 1.
GSM8K: https://huggingface.co/datasets/lukehinds/deepfabric-GSM8K-c...
Also some others:
infra failures reasoning / CoT: https://huggingface.co/datasets/lukehinds/deepfabric-devops-...
Medical (multi-turn): https://huggingface.co/datasets/lukehinds/deepfabric-7k-medi...
Programming challenges: https://huggingface.co/datasets/lukehinds/programming-challe...
If there is anything in particular you need, drop me a message or feel free to open an issue and I can create something for you.
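Loading any of these for inspection or training is one call with the Hugging Face datasets library (the repo id below is a placeholder, since the full links are truncated above):

    from datasets import load_dataset

    repo_id = "lukehinds/deepfabric-GSM8K-..."  # placeholder: use the full dataset id
    ds = load_dataset(repo_id, split="train")
    print(ds[0])  # inspect one generated sample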
Thanks, what LLMs were used to create these?
I think it was gpt4-mini, but local models do surprisingly well too.
"Synthetic CDOs"