I don't trust these AI-only companies to be overnight experts in properly handling medical, financial and insurance data. They have no business providing these tools, unless they want to take all the risk too.
Claude's actually pretty great at this! I used to use Claude A LOT to answer interesting questions (which I'll be writing up!). More generally, Claude is palpably different from most other agents. I'd recommend these models – especially Opus – without qualification.
But there's a process risk here based on their current practices. I'm hoping those practices change so that I can recommend Claude to everyone I know, but as of now, there's existential risk exposure here that's greater than Google's.
Anthropic's automated systems can and will ban you for pretty arbitrary things; and you won't get human support or Claude – even if you are an enterprise paying through the nose. And there's zero redress unless you go viral on social media. Or know someone who knows someone. See: https://x.com/Whizz_ai/status/2051180043355967802 https://x.com/theo/status/2045618854932734260
And I say that as someone who likes how Anthropic has been training Claude and Opus. I just don't think they're prepared to be the trillion dollar company they've become. They are – in a very real way – suffering from success. Which is extremely inconvenient to be on the receiving end of when you're on a deadline.
Before AI, shipping code to production used to be a two-person task: one writes the code, another reviews it. Now, with AI writing the code, the developer who was supposed to write the code only has to review it. And this is because they are responsible for the code they ship.
Code review has become unbearable because, before AI, developers were reviewing code as they wrote it in the first place. Granted, that was never perfect, which is why a second person reviewing code was (is?) a best practice. But effectively there was always some level of code review happening as developers wrote code.
I fear it is way more boring to review financial and medical documents completely written by AI than it is to write (and simultaneously review) them yourself. And it is way more dangerous to ship mistakes than in most software.
I am/was writing up an interesting hypothesis with Claude's help. But I redid the most important parts of the data pipeline manually. As in, I went in and cmd-c + cmd-v'ed the data by hand to create a reference, and I'm randomly spot-checking 33% of the larger records.
The analysis itself I'm doing by hand.
> the developer that was supposed to write the code, only has to review it.
But more often than not that developer ends up reviewing far more lines of code due to the typical verbosity of an LLM.
100%... that's why I say code review became unbearable!
> and you won't get human support or Claude – even if you are an enterprise paying through the nose. And there's zero redress unless you go viral on social media.
Sadly this sounds like par for the course when it comes to tech. Too many messages and requests for help depend on knowing someone in the right Slack groups.
Which is very confusing to me. If you have groundbreaking AI, you can offer groundbreaking support at scale.
You wouldn't build a chatbot for that – imagine how easy it would be to make that thing go off the rails and allow anyone to reactivate their account. Really, you can't trust it to do any business function...
At least, that's really the message this sends, in my opinion.
> If you have groundbreaking AI, you can offer groundbreaking support at scale
You're a funny one aren't you...
Meet "Fin" Anthropic's "where support questions go to die" so-called-support bot, created by Intercom but powered by Anthropic.
Maybe it's an internal in-joke in the Anthropic offices ... "Fin" in french means "End".
I don't know anyone who has had a positive experience with "Fin" .... or ever spoken to a human at Anthropic support for that matter, even if you ask "Fin" to escalate.
Nope.
Customer support and safety are cost centers. They don't scale like software does, and no one's KPIs are going to improve dramatically if you provide support beyond a point.
AI and LLMs are the cool tech, and the most important thing is to push the frontier. Money spent elsewhere is money not spent on R&D.
It would be hilarious if it wasn’t the GDPs of nations being spent on this.
They aren't even close to a $1T company; they're valued at <$400B, and that's at like a 20x-30x multiple. They can probably raise money at a higher valuation, but it's literally just value based on hype, not revenue.
https://www.businessinsider.com/anthropic-trillion-dollar-va...
Check the secondaries market ;-)
The only reason they are doing it is that there is regulation for people but not for machines.
My experience has been quite the opposite. Some bank processes remain oral traditions about clicking Excel filters by hand, because any code would have to be extensively documented and tested.
This is objectively not true. You can’t get around HIPAA by saying “lol wasn’t me it was an Agent”
I would recommend not using these if you are not willing to absorb the risk.
Luckily there is still a significant market for the services.
We just tried it. It's interesting what it does: writing lots of Python scripts.
However, the result (Excel spreadsheet) looks different each time you run it, which is annoying when you run it at the end of each month.
BTW: this is not surprising when you look at how little detail the skills contain.
> We’re releasing ten ready-to-run agent templates for the most time-consuming work in financial services
The templates being: pitch builder, meeting preparer, earnings reviewer, model builder, market researcher, valuation reviewer, general ledger reconciler, month-end closer, statement auditor, KYC (Know Your Customer) screener.
Seems pretty scattershot. Reminds me of GPT Store.
The details are key here. There is plenty of automatable financial work, sure, but when it comes to reporting finances/costs (formally or informally) and having a real human being be accountable for them, you REALLY need to trust that nothing is hallucinated.
Any idea how they ensure this doesn't happen? As in, how can a user verify that the model did not touch any of the numbers and that it only built pipelines for them?
What I've been telling my CFO, who wants to get AI involved in things, is that for a lot of accounting and finance work "trust but verify" doesn't work, because verifying is often the same process as doing the work.
To be honest, I am having a hard time remembering the last time an LLM hallucinated in our pipelines. It makes mistakes, sure, but it doesn't make things up. For a daily recon process this is a solved problem IMO.
I see it hallucinate quite often in development, but mostly by getting small details wrong that are automatically corrected by lint processes. Large-scale hallucination seems better guarded against, but I also suspect that's because latitude is constrained by context and by harnesses like lint, type systems, and the fine-tuned tool flows in coding models that control for divergence. But I would classify mistakes like getting variable names, package names, or signatures wrong as hallucinations.
Curious! Could you elaborate a little on your pipeline? We are currently looking to solve this for our internal processes, in which we have to deal with lots of financial information from outside – masses of numbers like annual reports, bank statements, balance sheets, etc.
Not who you're replying to, but I can give some thoughts.
For anything math-related, it's much more reliable to give agents tools. So if you want to verify that your real estate offer is in the 90–95th percentile of offerings in the past three months, don't give Claude that data and ask it to calculate. Offload it to a tool that can query Postgres.
Similar with things needing data from an external source of truth. For example, what payers (insurance companies) reimburse for a specific CPT code (medical procedure) can change at any time and may be different between today and when the service was provided two months ago. Have a tool that farms out the calculation, which itself uses a database or whatever to pull the rate data.
The LLM can orchestrate and figure out what needs to be done, like a human would, but anything else is either scary (math) or expensive (having it constantly pull documentation into context).
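A minimal sketch of the pattern (the table, tool name, and model string are invented for illustration; the point is that Postgres does the arithmetic and the model only decides to call the tool):

```python
import anthropic
import psycopg2

def offer_percentile(price: float) -> float:
    # Deterministic math: percentile of `price` among offers from the last 3 months.
    with psycopg2.connect("dbname=realestate") as conn, conn.cursor() as cur:
        cur.execute(
            """SELECT 100.0 * avg((price <= %s)::int)
               FROM offers
               WHERE listed_at >= now() - interval '3 months'""",
            (price,),
        )
        return float(cur.fetchone()[0])

client = anthropic.Anthropic()
msg = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model name
    max_tokens=1024,
    tools=[{
        "name": "offer_percentile",
        "description": "Percentile rank of an offer price among offers from the last 3 months.",
        "input_schema": {
            "type": "object",
            "properties": {"price": {"type": "number"}},
            "required": ["price"],
        },
    }],
    messages=[{"role": "user", "content": "Is a $412,000 offer in the 90-95th percentile?"}],
)

# The model requests the tool; we run the SQL and would hand the exact number
# back as a tool_result block for it to phrase the answer.
for block in msg.content:
    if block.type == "tool_use" and block.name == "offer_percentile":
        print(offer_percentile(block.input["price"]))
```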
> Any idea how they ensure this doesn't happen?
Build a deterministic query set and automate it for monthly or daily reporting reconciliation.
Leave AI out of it.
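Something like this, say – a hedged sketch with invented table and account names; the whole point is that the same queries run every day and any break is exact:

```python
import psycopg2

# Each check pairs a ledger-side query with a source-of-truth query.
CHECKS = {
    "cash": (
        "SELECT sum(amount) FROM gl_entries WHERE account = '1000' AND posted_on = %s",
        "SELECT sum(amount) FROM bank_feed WHERE booked_on = %s",
    ),
}

def run_recon(day: str) -> None:
    with psycopg2.connect("dbname=finance") as conn, conn.cursor() as cur:
        for name, (gl_q, src_q) in CHECKS.items():
            cur.execute(gl_q, (day,))
            gl = cur.fetchone()[0] or 0
            cur.execute(src_q, (day,))
            src = cur.fetchone()[0] or 0
            # Exact comparison – no model in the loop, so a break is a break.
            status = "OK" if gl == src else f"BREAK gl={gl} source={src}"
            print(f"{day} {name}: {status}")

run_recon("2025-01-31")
```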
I'll be honest, I thought the first few items on your list of time-consuming work were sarcasm.
A recent episode of Matt Levine’s podcast (Money Stuff) covered this: apparently investment bankers spend a huge amount of time preparing pitch decks for companies that don’t want them. Apparently Claude is quite good at making a pitch deck that no one but your boss wants or cares about.
I feel like there’s a metaphor in there... maybe I’ll ask Claude about it.
Reads differently to me. These are examples to go run with and build your own. They cover cases from the investment side and then the obvious ones from an accounting perspective. It would be highly surprising if any of these were used in production without modification. I am sure it will happen, but the intent, to me, is to take this and run with your own process.
I find all of these .md files released by the labs to be AI-generated slop. The only exception is maybe the /simplify command.
No surprise there. Of course the skill files are not human written.
The AI is an expert in both following and generating prompts.
For those in the finance space, are you actually seeing any real AI tools being used? Like for actual operational tasks?
I've really only seen it used for research/exploration thus far. Either for economic research slide decks or for exploring trading hypotheses.
On the spend management side of things, I've found pretty remarkable success in letting LLMs check "does this receipt match this reimbursement request and based on all the information about the user, the request, and our policy, is it appropriately allocated to appropriate GL, Location, Department, and Project codes?" If the verification step fails, it kicks it back and the user can either override it (which gets it flagged for AP review), or fix it. It does substantially better than the naive Bayes classifier I was using before.
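(A minimal sketch of what I mean – not my actual implementation; the field names, model string, and policy plumbing are invented for illustration:)

```python
import json
import anthropic

client = anthropic.Anthropic()

def verify_expense(receipt_text: str, request: dict, policy: str) -> dict:
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": (
                "Does this receipt match this reimbursement request, and based on "
                "the user, the request, and our policy, is it allocated to "
                "appropriate GL, Location, Department, and Project codes?\n\n"
                f"Receipt:\n{receipt_text}\n\nRequest:\n{json.dumps(request)}\n\n"
                f"Policy:\n{policy}\n\n"
                'Answer with JSON only: {"ok": true/false, "reason": "..."}'
            ),
        }],
    )
    return json.loads(msg.content[0].text)  # assumes the model returns bare JSON

def handle_submission(receipt_text, request, policy) -> str:
    verdict = verify_expense(receipt_text, request, policy)
    if verdict["ok"]:
        return "approved"
    # Kicked back: the user fixes it, or overrides – which flags it for AP review.
    return f"returned to submitter: {verdict['reason']}"
```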
I’m not saying your implementation is bad or anything but my visceral reaction to this was “I’m glad I’m not on the other side of that”
Why? It sounds exactly like the design I would hope for. It automates what I was going to have to do anyway, without the waiting. And it allows you to bypass it entirely and just revert to the manual process (along with the waiting).
That all sounds reasonable until you realize that the same logic is how we ended up with customer support systems that try to walk you through a phone tree, where, if you are lucky, you can press 0 to speak to a human without first answering a bunch of questions and being referred to the online help articles.
Do you enjoy using any of those systems? Do you want the world to be that way?
Maybe we are interpreting the GP differently. In this scenario, the phone tree is asking the same questions that the human agent is going to ask, but does it immediately when I call rather than making me "wait for an operator" to be asked those questions. And as long as I can "press 0 to eject" (just like I can in the accounting scenario), then it's completely kosher to me.
Regarding customer support on the phone: I usually have luck with just waiting and not responding to the phone bot; very often you are routed to a human in the end :-D
In many businesses, the employee is responsible for inputting most of that. If an LLM can get to 95% accuracy and flag exceptions, the employees (and the AP team) would actually have less work and bureaucracy.
Though we've had a few incidents where employees have submitted AI-generated receipts for reimbursement, which is another issue...
What is your point? This is pretty normal expense management in any company setting. I don't know what is so bad about being on the other side of that. Hope I am not being too inflammatory by asking, but genuinely, you pointed it out like it's some archaic process flow when it's part of almost every expense system.
I guess my current company’s processes may be easier to deal with than others. That or my position affords me some extra catering to.
The system is currently a simple app for submitting expenses; any issue gets a simple human chat request, and a call if requested.
They try to avoid kicking anything back, and if they do, they make sure it's reviewed first – to confirm that it's needed and that the reason is understood.
Our company is also very large so I’m not sure how they manage but they do. People rave about the process instead of hating it.
Thanks for the thoughtful reply. To add some color… most expense systems are set up so that the user has to input a couple of fields like the category or GL code. Some of the fields might be auto-populated. Some companies might not care about the classification, but usually the intent is to capture things like travel or software etc. What was described earlier is really not painful for users most of the time, but an LLM helps automate so much of it these days.
Pretty good, as a dev with finance stakeholders. We have skills in place operating over our automated month-end closing, and they were able to provide manual checks and flag issues, for example.
Nowhere near self-sufficient tools though; just great for answering questions over the data that would usually take a few hours of custom scripting/Excel. To be frank, I wouldn't trust our stakeholders using AI directly either.
Yes. On the accounting side, agents can handle a lot of the low-value work like recons and other ledger activity pretty well. On the investment side, like you pointed out, I think it's going to be a lot of research: industry, company, macro, etc. The value is in letting it run on top of the data you have and put together ideas at a quicker pace than a human can. There is still a human in the loop, but it can do a nice job of lining up thoughts you might have otherwise missed.
What does the integration look like on accounting? Is this a tool provided by the accounting software provider?
I'm in that space so naturally interested in what people are up to :)
Seen it used in some of the fraud models (I work in insurance). So that's both from the perspective of people trying to claim fraudulently and of suppliers overcharging. I can't say how much of a lift we actually get vs. existing ML models.
We’re integrating AI tooling into the Bloomberg Terminal for everyone to use.
https://www.bloomberg.com/professional/insights/press-announ...
Nope. If anything, firms are pulling back (I know someone close who works at BlackRock).
I don’t just know someone who works in finance, I am someone who works in finance and I say you’re wrong.
pulling back as in setting more realistic token budgets, or something more drastic? I'm curious
Stopped using them altogether in the context of productivity - in essence they’re useless.
I can believe that. Gambler’s Ruin gets costly when you’ve actually got money on the line.
Will the big labs leave anything for external competition?
This probably killed a thousand startups in this space.
In the early internet you wouldn't see Google creating their own news site or Facebook building their own animal farm. What happened to the platformication of everything?
Building a startup on an LLM is like building a house on a foundation of quicksand. As the LLM gets better it naturally erodes your moat. It's a completely different dynamic compared to the internet. It's why I'm watching this from the sidelines.
I have a close friend who is trying to build a company entirely on top of Claude. He doesn't know how to program. He can't do basic arithmetic. Yet, the company he's building is a "Data Science AI for the Government" because, according to him, all of the data scientists at NOAA don't know what they're doing.
I have given up on trying to get through to him how bad of an idea this is. He's unemployed and has been working on this for over a year.
Building a business on top of any SaaS platform is building on quicksand. I know that from experience.
> Will the big labs leave anything for external competition?
No, why would they if they have the choice?
> what happened to platformication of everything?
Business happened. The web works differently from how it used to. The users are different. LLM inference and AI tools are a different core product from search and ads. That, and we have the benefit of hindsight now. Maybe a Google newsroom would've actually been a good idea in 2006 in hindsight, who knows.
Also realistically you could say the same thing about Google Maps and Street View. That probably also killed some startups. Google isn't running a charity for startups.
This was their play all along with their unethical data collection practices: let others use the APIs to discover the applications, then use the data against them to offer integrated solutions in every vertical of interest. Cursor, once Anthropic’s biggest customer, was one of the early ones they screwed.
They are also fighting for their lives because these insane valuations simply aren’t justified by being dumb pipes. Fortunately, open weights models are widely available and have crossed a threshold of usefulness that cements their place as good substitutes.
Amazon Basics for Knowledge Work™
I guess the argument is that a tool built by a company with actual insight into and focus on financial services, with Anthropic as inference provider, would lead to more adoption and more use of Anthropic models. Something Anthropic could achieve either by just leaving things alone and having the best models, or alternatively by starting some kind of incubator or something. AWS might be a good model.
The issue with that is obviously that most of the generated value would be captured by that company in the middle, while Anthropic would stay in the cost-conscious inference market.
Why would Anthropic prefer this approach at all when that middleman can switch and cost-arbitrage between countless other model providers?
We're not talking about what is best for the consumer (e.g. more competition to force iteration and improvement), but what Anthropic thinks is best for Anthropic.
Make up for the lower margins with larger volume, because you get much better market penetration. But you are right that this only works if you know the middlemen won't go to other model providers. That's where some kind of incubation program that provides capital or credits or whatever in return for long-term commitments might work.
But I doubt staying a pure model provider is a winning move. It's a market nobody will win long-term. Almost all of the value to be captured isn't in inference APIs but in how to use them to generate business value. Claude Code was already the right approach; they "just" need to show they can repeat this for other kinds of tasks.
> Almost all of the value to be captured isn't in inference APIs but in how to use them to generate business value.
If the business value can be generated with a few thousand words in a SKILL.md on top of a commoditized model, it doesn't sound like that's a market anyone can win long-term either, and the business value is ultimately going to accrue elsewhere (the customer, the inference hardware provider, etc.).
I'm confused because I remember using Google News in 2006?
There has been a product called Google News since 2002, but it only aggregated information from news outlets.
> Will the big labs leave anything for external competition?
Is this a serious question?
Without the big labs with deep pockets investing to change the consumer mindset do you think a small company with no funding has any chance of even existing?
I remember when paying $1.99 for a mobile game on iOS was considered too expensive, and now it seems most consumers are primed to spend more on in-app purchases every week. That mindset shift did not happen overnight.
It was not that long ago that $200 for a ChatGPT subscription was considered extravagant, but now even wrappers can charge this price without hesitation – and some of them do.
What Anthropic is doing is priming a market of which they will potentially be one of the main beneficiaries, as long as they can continue existing. But I don't think anyone will go to Anthropic directly to source their financial services agent. They will go to financial service companies that use Anthropic to build the capabilities.
> in the early internet you wouldn't see google creating their own news site
Google News was definitely a thing (and actually still exists).
It's been a thing since 2002, but it's a news aggregator, not directly competing with the New York Times.
Just looked it up – it is still a thing. Learn something new every day!
I'm not sure if this was tongue-in-cheek or not, but Yahoo created its own news site in 1996: https://en.wikipedia.org/wiki/Yahoo_News and FB had Zynga's Farmville as well.
Why control part of the world when you can control it all?
Less cynically, you might say that "use AI to do <obvious thing>" is not really a viable startup pitch anymore. That's not necessarily bad.
This is premature caution/fear.
But Google did move into a lot of spaces: maps, mail, docs, etc.
History suggests otherwise. Railroads, telecoms, search – all consolidated. The natural equilibrium for transformative infrastructure is winner-take-all. AGI/ASI won't be different, except it will be in nearly every vertical, and governments will legislate too little, too late.
Nothing natural about it. Such monopolies were propped up by the state using public funds, with the profits captured by the capital class. Many benefited from the arrangement, and so it became normalized. But it's a choice people made to structure things that way.
The car industry, oil and gas… all could have played out differently if different players had gained wider adoption or if governments used a different economic model.
Local models are going to win, and therefore so will the hardware providers: Apple and Nvidia.
There isn't going to be any moat for the hosted providers besides hardware scale. They can run your request on shared 1TB memory hardware, or whatever.
But local hardware is going to catch up, the hosted providers are going to become commoditized, and the costs are just going to be compute, whether it's your hardware or theirs.
And your laptop is going to be powerful enough to be good enough for most cases.
Local hardware catching up doesn't matter if the thing worth having never leaves the building. Enterprise services are hard; the moat is in distribution and know-how.
I am not sure people are using Claude design, the security review stuff, and the other tools they have built so far.
Building is the easy part. There is a lot of service-level stuff that I am sure Anthropic will not be able to provide, which is why they are trying to partner with other orgs in that realm.
I am very skeptical about their stuff now.
If you are a builder, I believe you should avoid Anthropic: it can default to monopolistic behavior. I am not saying they are doing it, but they could – they see what you are building, and if you have traction, they can position a product in that realm. Just saying.
> Will the big labs leave anything for external competition?
Unfortunately no.
The TAM for Anthropic and OpenAI is anything that runs software or has a screen.
Any high-margin software or technology business that Anthropic and OpenAI are not already in will be a target.
After both their IPOs, Wall Street will mandate them to push for more growth by competing in other technology business areas, or they will get punished in the markets.
It is ROI or bust.
You’re advocating for less competition? AI startup valuations are out of control. People are raising $20m seed rounds.
If you can’t prove PMF and differentiation with $10m, I’m sorry but you’re not a serious enterprise.
And if what you’re building is “pitch deck AI”, I mean, come on.
> tfw you've been huffing your own copium so much that you forgot you're selling shovels
lol, these agents are missing the point re: what people actually do in these jobs.
This is an attempt to inflate token generation to fool people into increasing Anthropic's valuation.
Given the quality of Claude code lately, I wouldn’t trust them in financial services.
Great to see more insurance hype! We've been working on AI to solve the consumer search problem in the industry for the past 3 (almost 4) years and it's great to see the big labs getting their hands dirty and building tools for practitioners in the space.
More industry exposure to well-managed agentic experiences will create oodles of opportunities to reduce premiums for consumers and offset some inflation-driven increases in the cost of coverage.
Wow, really going for those white collar jobs. This is going to be an interesting few years.
I stopped reading at paragraph one:
"ready-to-run agent templates for the most time-consuming work in financial services: building pitchbooks, screening KYC files, and closing the books at month-end"
Ok, maybe you can squeeze a vaguely passable pitchbook out of Claude.
But screening KYC files or closing books at month-end?
"I'll have some of what they're smoking" as the cool kids say.
No regulator or tax office on this planet is going to accept the "but Claude said it was ok" excuse.
The only people who are going to profit out of this are Anthropic, Lawyers and Governments (through increased fines).
Of course - finance is the best domain to depkiy a stochastic parrot which hallucinates and forgets stuff frequently and doesn’t follow your instructions - even with SOTA models. One where you need absolute accuracy and auditabikity.
Why didn’t I think of that.
Because the l on your keyboard is broken?
Patagonia is gonna lose some clientele.
haha, insider! :-D
Just yesterday I told a colleague that he should buy some of their vests for his company :-D
AI and finance --- what could possibly go wrong?
Better Call Saul when (not if) it does.
Well at that point you can use AI as legal help, right?
Everything is going to be slop and you're going to like it.
Is the plan to have an LLM do everything? And do it worse?
"Oh yeah my Claude didn't agree with the pitch from their Claude"
The goal of current tech is to make humanity a gerbil running on a Claude wheel
At that point what even is the point of doing anything at all? Like, it’s less than useless.
That is what people like Thiel actually believe, that humanity is just a cradle to bring about a machine god.
I don't necessarily disagree with that, but doing it through LinkedIn slop companies? Come on man, you know better than that.
Does anyone else think "agents" are the wrong abstraction? Agents look like UI wrappers over LLMs – they are inherently not composable. Tailor-made agents for UIs don't seem to scale. I predict they won't take off.
What I predict instead is that we will have a common UI layer plugin and a "protocol" that can speak to UI elements – this might be more composable.
Next couple weeks - financial and insurance services announce layoffs!
There was a paper recently claiming that banks & insurers are going to lay off around 200k people globally in the next few years (which would, according to them, be a reduction of 3-4% of finance staff).
How long until Anthropic or OpenAI builds an interview platform around AI tools, where candidates build a feature end to end using AI?
As someone who has been interviewing lately, I think this is the next step after leetcode and whiteboard style interviews.
Why would this be useful in a zero-sum environment like markets? Why would you want to use the same tool that everyone else has access to? Top performers will always be the people who hand-craft their solutions, just like the top performers in the watch space are the people who make handmade watches in Switzerland, not the guys making 100k watches a month in China.
Making the most convoluted and idiotic insurance process on earth and then delegating that process to an AI that requires huge buzzing data centers... Is there an option to respawn in the non-clown-world universe? It was funny at first, but it gets tiring eventually.
What does a better insurance process look like? Outside of health insurance, which is complicated for a variety of reasons, most insurance is pretty easy to procure. I got an umbrella policy recently and it took about 30 minutes of talking with an agent and answering pretty reasonable questions.
1. A better insurance process is clearly out of the scope of an HN comment, and I have trouble believing you don't know that too.
2. I'm almost certainly talking about health insurance, made obvious by you even mentioning that. There's an HN guideline about discussing in good faith.
3. I find it humorous you hand-wave away our inhuman healthcare system as “for a variety of reasons”.
4. I see your career is in hedge funds, defense, and big tech. Best of luck ;)