Hacker News
Show HN: Patchwork – Open-source framework to automate development gruntwork (github.com/patched-codes)
83 points by rohansood15 16 hours ago | hide | past | favorite | 20 comments
Hi HN! We’re Asankhaya and Rohan and we are building Patchwork.

Patchwork tackles development gruntwork—like reviews, docs, linting, and security fixes—through customizable, code-first 'patchflows' using LLMs and modular code management steps, all in Python. Here's a quick overview video: https://youtu.be/MLyn6B3bFMU

From our time building DevSecOps tools, we experienced first-hand the frustrations our users faced as they built complex delivery pipelines. Almost a third of developer time is spent on code management tasks[1], yet backlogs remain.

Patchwork lets you combine well-defined prompts with effective workflow orchestration to automate as much as 80% of these gruntwork tasks using LLMs[2]. For instance, the AutoFix patchflow can resolve 82% of issues flagged by Semgrep using GPT-4 (or 68% with Llama-3.1-8B) without fine-tuning or specialized context[3]. Success rates are higher for text-based patchflows like PR Review and Generate Docstring, but lower for more complex tasks like Dependency Upgrades.

We are not a coding assistant or a black-box GitHub bot. Our automation workflows run outside your IDE via the CLI or CI scripts without your active involvement.

We are also not an ‘AI agent’ framework. In our experience, LLM agents struggle with planning and rarely identify the right execution path. Instead, Patchwork requires explicitly defined workflows, which deliver higher success rates and full control.
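To make the contrast concrete: an explicitly defined workflow is just an ordered list of steps sharing a context, with no LLM-driven planning. Here is a minimal sketch of that idea; all names are illustrative stand-ins, not Patchwork's actual API:

```python
# Sketch of an explicit, pre-defined workflow: each step is a plain
# function that reads and extends a shared context dict. The execution
# path is fixed up front, so no agent decides what to do next.
# All names here are hypothetical, not Patchwork's real API.

def scan(ctx):
    # e.g. run a static analyzer and record its findings (stubbed)
    ctx["findings"] = ["hardcoded-secret in config.py"]
    return ctx

def generate_fix(ctx):
    # e.g. prompt an LLM once per finding (stubbed)
    ctx["patches"] = [f"patch for: {f}" for f in ctx["findings"]]
    return ctx

def open_pr(ctx):
    ctx["pr_title"] = f"AutoFix: {len(ctx['patches'])} issue(s)"
    return ctx

def run_patchflow(steps, ctx=None):
    ctx = ctx or {}
    for step in steps:  # fixed order, no agentic planning
        ctx = step(ctx)
    return ctx

result = run_patchflow([scan, generate_fix, open_pr])
print(result["pr_title"])  # AutoFix: 1 issue(s)
```

Because the step order is fixed, failures are attributable to a specific step rather than to an opaque planning decision.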

Patchwork is open-source so you can build your own patchflows, integrate your preferred LLM endpoints, and fully self-host, ensuring privacy and compliance for large teams.

As devs, we prefer to build our own ‘AI-enabled automation’ given how easy it is to consume LLM APIs. If you do too, try Patchwork via a simple 'pip install patchwork-cli' or find us on GitHub[4].

Sources:

[1] https://blog.tidelift.com/developers-spend-30-of-their-time-...

[2] https://www.patched.codes/blog/patched-rtc-evaluating-llms-f...

[3] https://www.patched.codes/blog/how-good-are-llms

[4] https://github.com/patched-codes/patchwork

[Sample PRs] https://github.com/patched-demo/sample-injection/pulls






Y'all know there's a popular OSS project called Patchwork, right? https://patchwork.readthedocs.io/en/latest/

There are a few open-source projects by that name, and we were aware of https://github.com/ssbc/patchwork, which is archived. Didn't know of this one, though.

It's a common noun that works really well for patch-based offerings, I guess, and we chose it because we built a 'framework to patch code'. But we'll think more about this - thanks for bringing it up.


Patchwork is used by the Linux kernel: https://patchwork.kernel.org/

When I saw your submission title I thought it was that Patchwork.




The open-source ecosystem is now large enough, compared to previous decades, that name collisions are very likely, given that projects are almost always named in English.

Ok the video explains this way better - and it looks awesome.

Do you accept PRs yourself :-)


A feature comparison to https://github.com/paul-gauthier/aider would be great.

Is this just a non-interactive version of this kind of agent?


Aider is great, but the use case is different:

1. You use Aider to complete a novel task you're actively working on. Patchwork completes repetitive tasks passively, without bothering you - for example, updating a function vs. fixing linting errors.

2. Aider is agentic, so it figures out how to do a task itself. This trades accuracy in favor of flexibility. With patchwork, you control exactly how the task is done by defining a patchflow. This limits the set of tasks to those that you have pre-defined but gives much higher accuracy for those tasks.

While the demo shows CLI use, the ideal use case for Patchwork is as part of your CI pipeline, or even a serverless deployment triggered via event webhooks. Hope this helps? :)
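The webhook-triggered deployment mentioned here could look roughly like the following sketch. The event shape and the patchflow runner are hypothetical stand-ins (stdlib only), not Patchwork's actual integration:

```python
# Sketch of a serverless handler that kicks off a patchflow when a
# repository webhook fires. The event fields and run_patchflow() are
# hypothetical placeholders, not Patchwork's real interface.
import json

def run_patchflow(name, repo):
    # stand-in for invoking e.g. a PR Review patchflow on a repo
    return {"patchflow": name, "repo": repo, "status": "completed"}

def handle_webhook(raw_event):
    event = json.loads(raw_event)
    # react only to newly opened pull requests; ignore everything else
    if event.get("action") == "opened" and "pull_request" in event:
        return run_patchflow("PRReview", event["repository"])
    return {"status": "ignored"}

result = handle_webhook(json.dumps({
    "action": "opened",
    "pull_request": {"number": 42},
    "repository": "patched-demo/sample-injection",
}))
print(result["status"])  # completed
```

The same handler shape fits most FaaS platforms, since it only consumes a JSON payload and returns a dict.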


PR reviews are the one thing you sure don't want an LLM doing.

Please elaborate.

While obviously an LLM might miss functional problems, it feels extremely well suited for catching “stupid mistakes”.

I don’t think anyone is advocating for LLMs merging and approving PRs on their own, they can certainly provide value to the human reviewer.


They can lull the human reviewer into a false sense of security.

"Computer already looked at it so I only need to glance at it"


I don’t know what your process is, but if someone else has reviewed a PR before I take my turn, I don’t ignore the code they’ve looked at. In fact, I take the time to review both the original code and their comments or suggestions. That’s the point of review, after all: to verify the thinking behind the code as well as the code itself, and that applies equally to thoughts or code added by a reviewer.

I agree and disagree. You definitely need someone competent to take a look before merging in code, but you can do a first pass with an LLM to provide immediate feedback on any obvious issues as defined in your internal engineering standards.

Especially helpful if you're a team where there's wide variance in competency/experience levels.


Until that immediate feedback is outright wrong and now you’ve sent them on a wild goose chase.

That happens with human review too and often serves as an opportunity to clarify your reasoning to both the reviewer and yourself. If the code is easily misunderstood then you should take a second look at it and do something to make it easier to understand. Sometimes that process even turns up a problem that isn’t a bug now but could become one later when the code is modified by someone in the future.

This is where prompting and context is key - you need to keep the scope of the review limited and well-defined. And ideally, you want to validate the review with another LLM before passing it to the dev.

Still won't be perfect, but you'll definitely get to a point where it's a net positive overall - especially with frontier models.
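The two-model pattern described above (one LLM drafts review comments, a second validates them before they reach the developer) can be sketched like this. Both "models" are stubs standing in for scoped API calls:

```python
# Sketch of validating one LLM's review comments with a second model
# before surfacing them to the developer. Both model calls are stubs;
# in practice each would be an LLM API call with a tightly scoped prompt.

def reviewer_llm(diff):
    # stub: pretend the first model flags two issues in the diff
    return ["possible SQL injection in query()", "style: long line"]

def validator_llm(diff, comment):
    # stub: a second pass that keeps a comment only if it points at
    # code actually present in the diff
    return "query()" in comment and "query()" in diff

def reviewed_comments(diff):
    comments = reviewer_llm(diff)
    # pass along only the comments the validator agrees with
    return [c for c in comments if validator_llm(diff, c)]

print(reviewed_comments("def query(): ..."))
# ['possible SQL injection in query()']
```

The validator pass trades a little latency for fewer spurious comments, which is exactly the goose-chase failure mode discussed above.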


I stand corrected: LLMs are great for blocking PRs by raising issues. A lack of issues shouldn't be taken as a sign of a good PR, though.

We're trialing ellipsis.dev for exactly this, and it's pretty good most of the time.

Oh this is really cool and great name! Will definitely try this out!


