Tends to be a problem. I've tried to mitigate these problems by using either external harnesses (aka GitHub actions that are "fixed" based on known-good) or by using n-number of witness agents (e.g. Kimi/Qwen/whatever <=> Claude/OpenAI/Google). Generally sucks more time and energy (and now token/$).
that being said, I still have a "fix the code, not the test" line somewhere in here...
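The witness-agent idea above can be sketched roughly like this. Everything here is hypothetical (function names, the stub reviewers, the approval rule); a real version would call out to the actual model APIs and parse their verdicts:

```python
# Rough sketch of the "witness agent" pattern: a primary agent produces a
# patch, and independent models from different vendors must all approve it
# before it lands. The witness functions below are stand-ins for real
# provider calls (e.g. Kimi/Qwen on one side, Claude/OpenAI/Google on the
# other) -- purely illustrative, not any vendor's API.

def witness_review(patch: str, witnesses) -> bool:
    """Accept the patch only if every witness independently approves it."""
    return all(w(patch) for w in witnesses)

# Stub witnesses. A strict one rejects test tampering ("fix the code,
# not the test"); a lenient one accepts any non-empty patch.
def strict_witness(patch: str) -> bool:
    return "delete test" not in patch

def lenient_witness(patch: str) -> bool:
    return len(patch) > 0

good = "fix off-by-one in loop bound"
bad = "delete test that keeps failing"

print(witness_review(good, [strict_witness, lenient_witness]))  # True
print(witness_review(bad, [strict_witness, lenient_witness]))   # False
```

The unanimity rule is one choice among several; a majority vote across n witnesses trades some safety for fewer false rejections, at the cost of more tokens per review.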
believing that a one-shot prompt is enough specification is pretty delusional.
I keep seeing people talk about the power of these SOTA models, yet keep reading the types of prompts that make no sense to anyone who understands the ludicrous number of decisions that would need to be made.
8 billion tokens? What did that cost?