Show HN: Tiny Code Improver

github.com

19 points by mr_kotan a year ago

Hey, fellow hackers! I'm excited to share my latest project, TinyCodeImprover, which has become an indispensable tool in my coding workflow.

What is TinyCodeImprover?

TinyCodeImprover leverages the power of GPT-4 to analyze and enhance your project files. By simply loading your code into the GPT-4 context, you can ask questions about your code, identify bugs, and even request GPT-4 to write code snippets across multiple files simultaneously.
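The core idea of "loading your code into the GPT-4 context" can be sketched roughly like this. This is a minimal illustration, not TinyCodeImprover's actual code; the function name `build_context` and the separator format are assumptions for the example.

```python
from pathlib import Path

def build_context(paths):
    """Concatenate project files into one prompt string, with a
    header marking where each file begins (illustrative sketch,
    not TinyCodeImprover's real implementation)."""
    parts = []
    for p in paths:
        text = Path(p).read_text()
        parts.append(f"--- {p} ---\n{text}")
    return "\n\n".join(parts)
```

The resulting string is sent as part of the chat prompt, so the model can answer questions about any of the included files.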

The Story Behind its Creation

As a programmer, I frequently turn to GPT-4 for assistance with topics outside my expertise. However, I found the process of copying and pasting code snippets into the chat cumbersome and time-consuming. That's when I had an idea: a tool that seamlessly integrates GPT-4 into my coding environment.

A month ago, during a flight from Bangkok to Dubai, I developed the first prototype of TinyCodeImprover. It allowed me to feed project files directly to GPT-4 and request code improvements based on my specifications! It even wrote a Readme for itself – quite mind-blowing!

Refining the Process

To maximize the effectiveness of TinyCodeImprover, I discovered the importance of employing a critical approach. I created special commands, ".critic" and ".resolver," to initiate self-reflection, enabling GPT-4 to identify its own mistakes in approximately 30% of cases.
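The critic/resolver pattern described above can be sketched as two extra chat turns: one asking the model to critique its own answer, and one asking it to produce a corrected answer using that critique. The prompt wording and the `self_reflect` helper below are hypothetical, assuming a `chat` callable that takes a message list and returns the assistant's reply.

```python
# Hypothetical prompts illustrating a critic/resolver self-reflection loop.
CRITIC_PROMPT = ("Review your previous answer. List any mistakes, "
                 "bugs, or weak assumptions you can find.")
RESOLVER_PROMPT = ("Using the critique above, produce a corrected "
                   "final answer.")

def self_reflect(chat, history):
    """Run a critique turn, then a resolver turn, on a chat history."""
    critic_turn = {"role": "user", "content": CRITIC_PROMPT}
    critique = chat(history + [critic_turn])
    history = history + [critic_turn,
                         {"role": "assistant", "content": critique}]
    return chat(history + [{"role": "user", "content": RESOLVER_PROMPT}])
```

The second pass gives the model a chance to catch its own mistakes, which the post reports happens in roughly 30% of cases.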

Since its inception, I've integrated TinyCodeImprover into four different projects, transforming error-fixing into an enjoyable experience, even when dealing with CSS challenges. It has proven useful not only for code but also for any type of text.

karsuren a year ago

You should probably add an example of using the special '.critic' and '.resolver' commands that you are projecting as the key 'spice' of your project. There is no 'how to use' or 'how it works' for these commands in your readme.md. One would have to walk through the entire code to hopefully get an idea. I went through the main script - I have an idea of the overall flow, but I still don't know how these special commands work. Other people might run into the same issue, so additional documentation would help.

  • mr_kotan a year ago

    Thanks! You are right. I will add more instructions about the commands.

kesor a year ago

My version of loading multiple files into the context lets the ChatGPT UI load the files on its own, when it decides that it needs/wants to read a file's content. And from some experimentation it seems that each file it loads is treated as a separate interaction. Thus the token limit is much less of a problem, making it possible to load larger pieces of code - either the whole thing, or piece by piece. https://github.com/kesor/chatgpt-code-plugin
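The on-demand pattern kesor describes can be sketched as two tools exposed to the model: one that lists the project's files (cheap to keep in context) and one that fetches a single file only when the model asks for it. This is an illustrative sketch of the pattern, not the plugin's actual code; the function names are assumptions.

```python
import os

def list_files(root):
    """Return relative paths of all files under root - the small
    'index' the model sees up front instead of full file contents."""
    out = []
    for dirpath, _, names in os.walk(root):
        for n in names:
            out.append(os.path.relpath(os.path.join(dirpath, n), root))
    return sorted(out)

def read_file(root, rel_path):
    """Tool the model calls to fetch one file's content on demand."""
    with open(os.path.join(root, rel_path)) as f:
        return f.read()
```

Because only the requested file enters the context at any one time, the per-request token budget constrains individual files rather than the whole project.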

  • kordlessagain a year ago

    Document boundaries are much more interesting with code vs. a text document.

karsuren a year ago

What is the performance of GPT-4 vs GPT-3.5 on the 'reasoning', 'reflection', 'criticism' and 'resolver' tasks in your project? I see that you have commented out gpt-3.5 and replaced it with GPT-4 in the config yaml. Was GPT-3.5's performance too bad? I don't think many people have GPT-4 API access. If this requires at least GPT-4 to be effective, it might take a while before anyone else in the community can take it up.

  • mr_kotan a year ago

GPT-3.5 is not bad at generating, but it is not very good at self-reflection and self-criticism. Or maybe my prompts just aren't good enough.

karsuren a year ago

Claude has 100k context for around $2 per million tokens.

With GPT-4's 4-8k token limit, nothing but very small projects in their early phase can benefit from this. Also, GPT-4 would be far too cost-prohibitive.

  • mr_kotan a year ago

Good idea about Claude, I've never tried it myself. But I've heard that GPT-4 is superior at the moment. Is that true? I also thought about GPT-4 prices, but from my experience: if I use it intensively for 3-4 hours it costs about $5. That's much cheaper than anyone's hourly rate.

  • lgas a year ago

    How does it compare to GPT-4 at coding tasks? I haven't tried it but everything I've heard suggests that it is noticeably worse.

devdiary a year ago

How do you deal with the token limitation? What is the maximum size of codebase it can work on?

  • mr_kotan a year ago

You can work on a subset of project files that fits into 8k tokens. It's a limitation, yes. But it's usually more than enough to add 4-5 files for context.
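Picking a subset of files that fits an 8k-token budget can be done with a simple greedy pass. This is a hypothetical sketch, not the tool's actual selection logic; it uses the common rough heuristic of ~4 characters per token rather than a real tokenizer.

```python
def select_files(files, budget_tokens=8000):
    """Greedily pick (name, text) pairs until a rough token budget
    is exhausted. The ~4 chars/token estimate is an approximation;
    a real tokenizer (e.g. tiktoken) would be more accurate."""
    chosen, used = [], 0
    for name, text in files:
        est = len(text) // 4 + 1  # crude token estimate
        if used + est > budget_tokens:
            continue  # skip files that would blow the budget
        chosen.append(name)
        used += est
    return chosen
```

In practice you would hand-pick the 4-5 files most relevant to the question, but a budget check like this guards against overflowing the context window.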