Lately I’ve noticed coding agents getting significantly better, especially at handling well-scoped, predictable tasks.
It made me wonder:
For a lot of Jira tickets, especially small bug fixes or straightforward changes, most senior developers would end up writing roughly the same implementation anyway.
So I started experimenting with this idea:
When a new Jira ticket opens:
- It runs a coding agent (Claude/Cursor).
- The agent evaluates the ticket’s complexity. If it’s confident enough, based on a configurable threshold, it generates the implementation.
- It opens a GitHub PR automatically.
From there, you review it like any normal PR.
If you request changes in GitHub, the agent responds and updates the branch automatically.
So instead of “coding with an agent in your IDE”, it’s more like coding with an async teammate that handles predictable tasks.
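To make that concrete, here’s a minimal sketch of what the orchestration step could look like, under my own assumptions about how the pieces connect. The function names, webhook payload shape, and gating logic below are illustrative, not Anabranch’s actual implementation.

```python
# Illustrative sketch only; names and structure are hypothetical, not Anabranch's API.
from dataclasses import dataclass

@dataclass
class Estimate:
    complexity: str    # e.g. "trivial", "small", "medium", "large"
    confidence: float  # 0.0-1.0: how sure the agent is that it can finish the task

def handle_new_ticket(ticket: dict, config: dict,
                      estimate_complexity, run_coding_agent, open_pull_request):
    """Triggered by a Jira webhook when a ticket is created.
    The three callables stand in for the agent and GitHub integrations."""
    estimate: Estimate = estimate_complexity(ticket)

    # Gate: only act on tickets that clear the confidence threshold
    # and stay within the allowed complexity budget.
    if estimate.confidence < config["min_confidence"]:
        return None  # leave it for a human
    if estimate.complexity not in config["allowed_complexity"]:
        return None

    branch = f"agent/{ticket['key'].lower()}"
    run_coding_agent(ticket, branch)  # Claude/Cursor produces the change on the branch
    # Open the PR and hand off to human review; later review comments are fed
    # back to the agent, which pushes follow-up commits to the same branch.
    return open_pull_request(branch, title=ticket["summary"])
```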
You can configure:
- The confidence threshold required before it acts.
- The size/complexity of tasks it’s allowed to attempt.
- Whether it should only handle “safe” tickets or also try harder ones.
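For example, those knobs might look something like this (field names are hypothetical, not Anabranch’s actual schema):

```python
# Hypothetical configuration; keys and values are made up for illustration.
AGENT_CONFIG = {
    "min_confidence": 0.85,                      # act only when the agent is at least this confident
    "allowed_complexity": ["trivial", "small"],  # size/complexity of tasks it may attempt
    "safe_tickets_only": True,                   # restrict to low-risk tickets (bugs, chores) vs. trying harder ones
}
```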
It already works end-to-end (Jira → implementation → PR → review loop).
Still experimental and definitely not production-polished yet.
I’d really appreciate feedback from engineers who are curious about autonomous workflows:
- Does this feel useful?
- What would make you trust something like this?
- Is there a homegrown solution for the same thing already in place at your workplace?
GitHub link here: https://github.com/ErezShahaf/Anabranch
Would love to keep improving it based on real developer feedback.
Why this instead of giving my current setup access to the Atlassian MCP?
What do I lose by using another interface than the one I use for all my other agentic coding?
nit: the project README should not be written in the first person
Good question. The Atlassian MCP provides access to Jira; Anabranch focuses on orchestration: deciding when to act, estimating complexity, and automatically opening PRs for low-complexity tickets.
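To give a feel for the complexity-estimation step, here’s one way it could be done: ask the model for a structured estimate and gate on it. This is a sketch under my own assumptions; the prompt, the JSON shape, and the `ask_model` callable are hypothetical, not how Anabranch actually implements it.

```python
import json

# Hypothetical estimation prompt; not Anabranch's actual prompt or schema.
ESTIMATION_PROMPT = """You are triaging a Jira ticket for automated implementation.
Summary: {summary}
Description: {description}

Reply with JSON only: {{"complexity": "trivial|small|medium|large", "confidence": <0.0-1.0>, "reason": "<one sentence>"}}"""

def estimate_complexity(ticket: dict, ask_model) -> dict:
    """`ask_model` is any callable that sends a prompt to an LLM and returns its text reply."""
    prompt = ESTIMATION_PROMPT.format(summary=ticket["summary"], description=ticket["description"])
    reply = ask_model(prompt)
    return json.loads(reply)  # in practice: validate the fields and retry on malformed output
```

Returning both complexity and confidence keeps the two thresholds independently configurable, matching the knobs described in the post.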
The goal isn’t to replace existing agent setups, but to explore whether the “boring majority” of tickets can be automated without manually going into the IDE, prompting, waiting for a result, opening a PR, and waiting for a review from a second team member. The loop is asynchronous by nature; you just need to review the result. I’d argue that at some point agents will be good enough that you’d trust them to auto-merge the result as well.
It’s still experimental, and part of the project is validating how reliable this approach can be in practice.
Re: README tone, agreed. It was auto-generated; I’ll update it to be more neutral.