Just switched into a CPTO role in an industry I have limited domain knowledge of (gastro), and in my first week I built some automated world modelling for my small company to get a handle on things:
- crawl all relevant news sites and subreddits hourly, RSS-fetch style, and for every new item let a small LLM think about what it means in essence and derive insight nuggets or thoughts like "what does this mean for customer group X": super fast and cheap one-liners essentially, but at times surprisingly good snippets.
- add in designated Slack channels and allow direct tagging of a bot, so that internal discussion input also goes into a database with a similar LLM treatment extracting the interesting bits and pieces
- have an hourly cronjob look at the new stuff in those databases and create proper hypotheses/assumptions about customer behavior and needs, lean-startup style, or link new stuff to existing stuff and refine it.
- have another hourly cronjob go through the new/changed hypotheses plus some knowledge about our product/platform and formulate feature candidates worth building that are distinct and linked by evidence
- so far it's all just code + LLM with some statistics for relevance, essentially a Bayesian board of ranked experiments to try next, running fully on autopilot, and 80/20 quality is fully acceptable.
- I added a straightforward Kano voting UI for internal folks on the most promising candidates to get explicit human judgements factored in. Again some weighting formulas change the scoring significantly depending on vote results, and everyone can give their input/judgement in an asynchronous way, human touch decentralized. Future plan: little in-product feedback widgets to get the customer voices in, too.
- I am the only one who can pick "the next thing to build" from the prepared and ranked list, without much emotion but with my own judgement, so I can also pick not just the highest-ranking item but, for good reason, a different one slightly lower. One click and a GitHub issue is created, framed with a user story and the hypothesis plus all the context accumulated bottom-up, nicely formatted as markdown.
- my codex/claude code on the platform repo knows how to work with GH issues and what the process is, so I pick up the next formulated ticket and we brainstorm together HOW to build the hypothesis in a semi-supervised way (vibecoding, but with precise context and me driving it with my own judgement/ideas for how to make it happen in the best/leanest way that is faithful to the hypothesis)
- shipping closes the issue, goes back to the feature candidate, moves it into a "measure" state, and the system knows which metrics to observe, i.e. did the feature move the right needle and by how much. After a long enough measurement window it's clear whether it was good/bad/neutral, and I can clean up, leave it, or report success. That updates the underlying initial assumptions with strong real-world evidence, up or down, which naturally reorders the feature candidate list linked to the assumptions list, instantly changing where to go next.
... essentially I built my own hamster wheel, reducing "what to build next" to something that feels scientific enough for people to trust it, with manual intervention through voting, and a stupid-simple place in Slack to drop any observation/idea without ANY special formatting or writing requirements and have it bubble up into concrete things to build, autolinked with supporting/contradicting evidence from world news etc. The actual build/execution I then do like you all, with guided vibecoding, reducing the cost of a build to minutes/hours instead of days/weeks, with zero meetings needed, realtime and asynchronous.
... is this what you were looking for? I streamlined product to become like 2 clicks from my end with zero product people needed beyond myself, and streamlined tech to vibecoding + grounded context, going into a kind of guided shipping loop with learning loops closing automatically along the way. And when the AI takes 30 minutes to flesh out a new thing, that's all time freed up to talk ideas or feedback with colleagues while the machinery is humming. Never been happier to be a devops/fullstack guy.
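For anyone curious, the hourly feed ingest from the first bullet can be sketched in a few lines. This is a minimal sketch, not the real system: `summarize_for_segment` is a hypothetical stand-in for the small-LLM call, the feed XML is inlined for the demo, and in production `seen_guids` would be a database table.

```python
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<rss><channel>
<item><guid>a1</guid><title>Delivery apps raise fees again</title></item>
<item><guid>a2</guid><title>New allergen labeling rules proposed</title></item>
</channel></rss>"""

seen_guids = set()  # in production: a table keyed by guid, so re-polls dedupe

def summarize_for_segment(title: str, segment: str) -> str:
    # placeholder for the cheap small-LLM call that produces the one-liner
    return f"[{segment}] what this means: {title}"

def poll_feed(feed_xml: str, segment: str) -> list:
    """Parse a feed, keep only unseen items, attach an insight nugget to each."""
    nuggets = []
    for item in ET.fromstring(feed_xml).iter("item"):
        guid = item.findtext("guid")
        if guid in seen_guids:
            continue  # already processed in an earlier hourly run
        seen_guids.add(guid)
        nuggets.append({"guid": guid,
                        "insight": summarize_for_segment(item.findtext("title"), segment)})
    return nuggets

nuggets = poll_feed(SAMPLE_FEED, "restaurant owners")
```

A second poll of the same feed yields nothing new, which is what keeps the hourly cron cheap.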
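The "Bayesian board" scoring can be as simple as a smoothed Beta posterior mean over supporting vs. contradicting evidence counts. A sketch under that assumption (field names `s`/`c` are illustrative, not the real schema):

```python
def beta_mean(supporting: int, contradicting: int) -> float:
    """Posterior mean of Beta(1 + s, 1 + c): a smoothed belief score in (0, 1).

    The +1 pseudo-counts keep a candidate with one lucky data point from
    jumping straight to 1.0.
    """
    return (1 + supporting) / (2 + supporting + contradicting)

# toy board: each experiment carries its evidence tallies
board = [
    {"name": "QR menus", "s": 5, "c": 1},       # score 6/8 = 0.75
    {"name": "loyalty cards", "s": 2, "c": 2},  # score 3/6 = 0.50
]
ranked = sorted(board, key=lambda e: beta_mean(e["s"], e["c"]), reverse=True)
```

New evidence just bumps a counter and the next sort reorders the board, no manual re-prioritization needed.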
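The Kano-vote weighting from the voting-UI bullet could fold human judgement into the machine score like this. The weight values and the geometric-mean combination are hypothetical, one of many reasonable formulas, not the actual tuning:

```python
# hypothetical per-category weights; the real numbers are whatever you tune
KANO_WEIGHTS = {"must-be": 2.0, "performance": 1.5, "attractive": 1.2,
                "indifferent": 0.5, "reverse": 0.1}

def apply_votes(base_score: float, votes: list) -> float:
    """Scale the machine score by the geometric mean of the vote weights.

    Geometric mean keeps one extra voter from dominating, while a consensus
    of "must-be" votes still moves the score significantly.
    """
    if not votes:
        return base_score  # no human input yet: keep the machine score
    product = 1.0
    for v in votes:
        product *= KANO_WEIGHTS[v]
    return base_score * product ** (1 / len(votes))
```

So two "must-be" votes double a 0.75 machine score to 1.5, while "indifferent" votes drag a candidate down the list.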
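The one-click issue creation boils down to formatting the accumulated context as markdown and POSTing it to GitHub's create-issue REST endpoint (`POST /repos/{owner}/{repo}/issues`). A sketch of just the payload-building step; the candidate field names are illustrative and the HTTP call is omitted:

```python
def format_issue(candidate: dict) -> dict:
    """Build a GitHub create-issue payload from a ranked feature candidate."""
    body = "\n".join([
        f"## User story\n{candidate['story']}",
        f"\n## Hypothesis\n{candidate['hypothesis']}",
        "\n## Evidence\n" + "\n".join(f"- {e}" for e in candidate["evidence"]),
    ])
    # title/body/labels are the standard fields of the create-issue endpoint
    return {"title": candidate["name"], "body": body, "labels": ["experiment"]}

payload = format_issue({
    "name": "QR menus",
    "story": "As a guest I want to browse the menu on my phone",
    "hypothesis": "Guests order faster with QR menus",
    "evidence": ["slack: guests keep asking for QR menus"],
})
```

The markdown sections are what give the coding agent its "precise context" when it later picks up the ticket.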
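And the closing of the loop, measured outcomes flowing back into the assumptions and reordering the list, is just the same evidence counters updated with a heavier weight for real-world results. A sketch with hypothetical weights and field names:

```python
def record_outcome(assumption: dict, outcome: str) -> None:
    """Fold a measured shipping result back into the evidence tallies."""
    if outcome == "good":
        assumption["s"] += 3   # real-world evidence counts more than a news nugget
    elif outcome == "bad":
        assumption["c"] += 3
    # "neutral": no update, the assumption keeps its prior standing

def rank(assumptions: list) -> list:
    # same smoothed Beta-mean score used for the experiment board
    return sorted(assumptions,
                  key=lambda a: (1 + a["s"]) / (2 + a["s"] + a["c"]),
                  reverse=True)

assumptions = [{"name": "speed", "s": 2, "c": 0},   # currently ranked first
               {"name": "price", "s": 1, "c": 0}]
record_outcome(assumptions[1], "good")  # a shipped price feature moved the needle
ranked = rank(assumptions)
```

One good measurement lifts "price" above "speed", which is exactly the instant reordering of "where to go next" described above.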