Show HN: Anyshift.io – Terraform "Superplan"

app.anyshift.io

35 points by fasten 7 months ago

Hello Hacker News! We're Roxane, Julien, Pierre, Mawen and Stephane from Anyshift.io. We are building a GitHub app (and platform) that detects complex Terraform dependencies (hardcoded values, intertwined modules, shadow IT…), flags potential breakages, and provides a Terraform ‘Superplan’ for your changes. To do that, we create and maintain a digital twin of your infrastructure using Neo4j.

- 2-min demo: https://app.guideflow.com/player/dkd2en3t9r
- Try it now: https://app.anyshift.io/ (5-min setup)

We have experienced firsthand how complex and opaque dealing with IaC/Terraform can be. Terraform ‘plans’ are hard to navigate, and intertwined dependencies are error-prone: one simple change to a security group, a firewall rule, a subnet CIDR range... can set off a cascade of breaking changes.

I’ve dealt with those issues in production since Terraform’s early days. In 2016 I wrote a book about infrastructure as code, and I created driftctl based on those experiences (an open-source tool for managing drift, which was acquired by Snyk).

Our team is building Anyshift because we believe this problem of complex dependencies is still unsolved and is going to explode with AI-generated code (more legacy, a weaker sense of ownership). Unlike existing tools (Terraform Cloud/Stacks, Terragrunt, etc.), Anyshift uses a graph-based approach that references the real environment to uncover hidden, interlinked changes.

For instance, changing a subnet can force an ENI to switch IP addresses, triggering an EC2 reconfiguration and breaking the DNS records that reference it. Our GitHub app identifies these hidden issues, while our platform uncovers unmanaged “shadow IT” and lets you search any cloud resource to find exactly where it’s defined in your Terraform code.
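
To make that chain concrete, here is a minimal, hypothetical HCL sketch (every name, CIDR and IP below is made up, and it assumes a VPC and a private hosted zone defined elsewhere). Nothing in a plain plan ties the last resource back to the first one, yet changing the CIDR can invalidate the hardcoded IPs downstream:

    # Subnet whose CIDR silently constrains everything below
    resource "aws_subnet" "app" {
      vpc_id     = aws_vpc.main.id
      cidr_block = "10.0.1.0/24" # changing this can force the ENI onto new IPs
    }

    # ENI with a hardcoded private IP taken from that CIDR
    resource "aws_network_interface" "app" {
      subnet_id   = aws_subnet.app.id
      private_ips = ["10.0.1.50"]
    }

    # Instance attached to the ENI: an IP change means reconfiguration
    resource "aws_instance" "app" {
      ami           = "ami-0123456789abcdef0" # placeholder
      instance_type = "t3.micro"

      network_interface {
        network_interface_id = aws_network_interface.app.id
        device_index         = 0
      }
    }

    # DNS record that repeats the IP instead of referencing it
    resource "aws_route53_record" "app" {
      zone_id = aws_route53_zone.internal.zone_id
      name    = "app.internal.example.com"
      type    = "A"
      ttl     = 300
      records = ["10.0.1.50"] # breaks as soon as the ENI moves
    }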

One of our key challenges was achieving a frictionless setup, so we created an event-driven reconciliation system that unifies AWS resources, Terraform states, and code in a Neo4j graph database. This “time machine” of your infra updates automatically, and for each PR we query it (via Cypher) to see what might break.

Thanks to that, the onboarding is super fast (5 min):
1. Install the GitHub app
2. Grant the app read-only AWS access

We chose a graph database to avoid the scaling limitations we would have hit with a relational one. We already have a handful of enterprise customers running it in prod, and we can query hundreds of thousands of relationships with linear search times. We'd love for you to try our free plan and see it in action.

We're excited to share this with you, and thanks for reading! Let us know your thoughts or questions here or in our future Slack discussions. Roxane, Julien, Pierre, Mawen and Stephane!

kestane 6 months ago

Hi Stephane, you might run into scale or cost issues with Neo4j soon. Check out https://kuzudb.com/ instead for your graph database.

wg337 7 months ago

This is such a cool idea! I’ve run into the pain of navigating Terraform dependencies before, and your graph-based approach feels really promising.

I’m especially intrigued by the "time machine": how does it manage historical state changes in larger environments without requiring you to start a side business in cloud storage? ^^

Excited to give this a spin. Amazing work!

  • fasten 7 months ago

    This means searching through time and changes. Imagine prod is on fire and the API is returning 500s. Usually you would have to dig through logs, git, cloud consoles, Kubernetes configs, etc. With the time machine, Anyshift directly returns the list of the 5 changes that occurred during the week, including the autoscaler one, and who made each change.

RobertCrumbs 7 months ago

Congrats on Anyshift.io—it looks amazing! Quick question: how does the GitHub app handle complex pull requests with multiple Terraform files? Does it flag dependencies across modules in real-time?

Looking forward to trying it out!

  • fasten 7 months ago

    We can handle multiple changes in the same PR thanks to our graph, a digital twin of your infra. We query each change separately, so changes spread across several Terraform files are supported. But you're right on one point: if multiple PRs are open, we don't have a chronological way to treat them (i.e., taking the first PR and its impact into account before analyzing the second one, and so on).

estellebotbol 7 months ago

Amazing product addressing a truly real pain point—such a game-changer. The team is also stellar. Been hoping to see something like this for a while. Excited to see the impact, this will definitely be big!

  • fasten 7 months ago

    Thanks for your kind words!!

emmtold 7 months ago

Cool post, thank you for sharing; this could indeed be a useful use case.

You mention AI-generated code causing dependency issues. Are there plans to integrate AI-driven recommendations?

  • fasten 7 months ago

    Thanks for the feedback! We already use AI in the PR to explain what's happening and which best practices to adopt. As for the code remediation part: most LLMs fail to generate IaC code that's adapted to your infra because they miss its general context (config, dependencies...). We are building the deterministic part (the context) first, and once we have that context, our plan is to add the fix/recommendation to the change.

    • anAiguy 7 months ago

      How will you be checking the quality of the AI recommendations in your PRs? Do you think that using a different model (ChatGPT, Claude, Gemini, Qwen) to challenge the recommendation made by another AI could help?

      • fasten 7 months ago

        About having different models challenge each other: I haven't seen anything useful yet, but I understand where you're going. It might be a future direction.

        • anAiguy 7 months ago

          I have the following paper in mind: Self-Taught Evaluators (https://arxiv.org/pdf/2408.02666) by Meta. It's interesting because they get big improvements from an LLM checking and improving the solution. WDYT? I don't know whether you could generate a PR using AI with, say, Claude and then check its quality with ChatGPT or Gemini. I'd be interested to know whether that would bring more quality and trust, or the opposite.

    • emmtold 7 months ago

      OK, focusing on context makes sense, but I'd challenge the idea that LLMs inherently fail without it. Some teams have used fine-tuned models or hybrid workflows with partial context to generate useful IaC snippets.

      • fasten 7 months ago

        Agreed 100%. LLMs do a solid job of generating IaC, but only in a context where the person using them knows what they're doing. In our case, remediation requires an extra level of trust, since your infra is already super sensitive.

        • emmtold 7 months ago

          We have used some tools to generate Terraform code from our unmanaged cloud resources, for instance, and it worked well.

          • fasten 7 months ago

            The tools we are aware of will create a 1-to-1 mapping to some code, but very often with hardcoded values, because they lack the full context of your infrastructure. This can lead to potential incidents in the future (broken dependencies / visibility). This is at least the way we are approaching it, and why we want to build this "deterministic" part first and then use it as context for the LLMs.
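
            As a purely hypothetical illustration (resource names and IDs are made up), an import tool without context tends to emit the first version below, while graph context makes it possible to emit the second; the two blocks are alternatives for the same resource, not meant to coexist:

                # Typical 1-to-1 import output: the dependency is flattened to an ID
                resource "aws_instance" "web" {
                  ami           = "ami-0123456789abcdef0"
                  instance_type = "t3.micro"
                  subnet_id     = "subnet-0abc1234def567890" # hardcoded value
                }

                # With full context, the same value becomes a tracked reference
                resource "aws_instance" "web" {
                  ami           = "ami-0123456789abcdef0"
                  instance_type = "t3.micro"
                  subnet_id     = aws_subnet.app.id # explicit dependency
                }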

gastonv 7 months ago

Sounds amazing! A very smart approach to solving the complex Terraform dependency problem. The T1 team behind the project makes it very exciting!

  • fasten 7 months ago

    thanks for your feedback!

ericmahe 7 months ago

Outstanding solution for gaining a holistic view of your cloud infrastructure and accelerating change and remediation.

lauraac 7 months ago

Super exciting and well thought out! The team looks amazing, can’t wait to follow your progress!

geraldC13 7 months ago

Terraform plan on steroids? Love it. Do you have plans to interlink with other observability platforms?

  • fasten 7 months ago

    We are thinking of adding live monitoring data to it, such as Datadog or Prometheus. What do you use?

    • geraldC13 7 months ago

      Datadog

      • fasten 7 months ago

        That's the first one we are thinking about, so that's great. Thanks!

jtol 7 months ago

Do you provide insights on cost optimization as part of the dependency analysis?

willydouhard 7 months ago

This looks great! Any plan to support other languages like Bicep?

MichaLevy 7 months ago

Congrats! Sounds amazing and could be very useful!

gregvers 7 months ago

Super interesting! This will save me time.

gfaivre 7 months ago

How seamless is the onboarding process for heavily customized workflows (Terraform + scripts)?

  • fasten 7 months ago

    Most IaC setups will generate a Terraform state, whether it is stored somewhere (S3 bucket, HCP...) or produced on the fly. As long as we are able to access those states, we will be able to create a reconciliation at some point. Which framework do you use?
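
    For instance, a common remote-state setup looks like the sketch below (the bucket, key and region are hypothetical); read-only access to that bucket is the kind of access that lets us pick the state up:

        terraform {
          backend "s3" {
            bucket = "my-company-terraform-states" # hypothetical bucket
            key    = "prod/network/terraform.tfstate"
            region = "eu-west-1"
          }
        }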

benjipick 7 months ago

Sounds cool, but how do you prevent false positives? What's the accuracy rate so far?

  • fasten 7 months ago

    In our pull request bot, we provide more information with a clear summary of what's going to be impacted. One of our next features is letting you configure which type of information is most critical to you: by resource type, owner (git blame), and tags. Is there one you would prefer in particular?

    • benjipick 7 months ago

      I guess by resource type, but env would also be interesting. I don't care about most of the impact on my dev env, tbh.

      • fasten 7 months ago

        Super interesting, thanks! Having those config options combined makes sense.

NatachaBrm 7 months ago

Such a cool product, congrats!

joeyagreco 7 months ago

almost all of the positive responses on here are from brand new accounts...

zoemohl 7 months ago

Very cool product!!

tact_boy 7 months ago

Pretty epic product!

ELIOTOS 7 months ago

Very cool release!