> Haskell gives you tools to encode these incantations in types so they cannot be forgotten. This is, for my money, the single most valuable thing the language offers a production engineering organization.
Haskell is admittedly the most powerful widely used (or even somewhat widely used) language for doing this, but the general pattern works really well in Rust and TypeScript too, and it's one of my very favorite tools for writing better code.
I also really like doing things like User -> LoggedInUser -> AccessControlledLoggedInUser to prevent the kind of really obvious AuthZ bugs people make in web applications time and time again.
I've found this pattern to be massively underutilized in industry.
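A minimal TypeScript sketch of that chain (the checks and names are obviously stand-ins): each step returns a more refined type, so a handler that demands authorization can't be handed a merely logged-in user.

```typescript
// Hypothetical sketch: each refinement is a distinct branded type,
// so handlers can demand exactly the level of checking they need.
type User = { id: string };
type LoggedInUser = User & { readonly __loggedIn: true };
type AccessControlledUser = LoggedInUser & { readonly __authorized: true };

// Only these functions may mint the refined types.
function logIn(u: User, password: string): LoggedInUser | null {
  // Stand-in check; a real system would verify credentials properly.
  return password === "hunter2" ? { ...u, __loggedIn: true as const } : null;
}

function authorize(u: LoggedInUser, resource: string): AccessControlledUser | null {
  // Stand-in policy: only "public/" resources are permitted.
  return resource.startsWith("public/") ? { ...u, __authorized: true as const } : null;
}

// This handler cannot be called with a plain User: that's a compile-time error.
function deleteDocument(u: AccessControlledUser): string {
  return `deleted by ${u.id}`;
}
```

Passing a raw `User` to `deleteDocument` fails at compile time, which is exactly the class of AuthZ bug this rules out.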
This is more a question of affordances than of type systems per se. You can do this quite happily in C# or something; it's just that the visual clutter exceeds the actual type definition.
This isn't specific to Rust or Typescript. You can do this in basically any language.
Imagine you have to distinguish between unescaped and escaped strings for security purposes. Even with a dynamically typed language, you can keep escaped strings as an Escaped class, with escape(str)->Escaped and dangerouslyAssumeEscaped(str)->Escaped functions (or static methods). There's a performance cost to this, so that's a tradeoff you have to weigh, but it is possible.
Another way of doing this is Application Hungarian[1], though that relies on the programmer more than it does on the compiler.
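The Escaped-class idea, sketched in TypeScript (the same shape works in plain JavaScript or Python, since the check lives at runtime; names hypothetical):

```typescript
// Runtime newtype: an Escaped can only be constructed via escape() or
// dangerouslyAssumeEscaped(), so every Escaped value traces back to one of them.
class Escaped {
  // Private constructor forces all construction through the two factories.
  private constructor(readonly value: string) {}

  static escape(raw: string): Escaped {
    const escaped = raw
      .replace(/&/g, "&amp;")
      .replace(/</g, "&lt;")
      .replace(/>/g, "&gt;");
    return new Escaped(escaped);
  }

  // Escape hatch for strings escaped elsewhere; the scary name makes it grep-able.
  static dangerouslyAssumeEscaped(s: string): Escaped {
    return new Escaped(s);
  }
}

// Sinks accept only Escaped, never bare strings.
function render(html: Escaped): string {
  return `<p>${html.value}</p>`;
}
```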
That part is (de facto) required for dynamically typed languages, but not for statically typed ones where the newtype constructor/deconstructor can be elided at compile time. Rust and C++ especially both do the latter by having true value types available for wrappers that evaporate into zero extra machine code.
But then just this moment I wondered: do any major runtimes using models with no static type info manage to do full newtype elision in the JIT and only box on the deopt path? What about for models with some static type info but no value types, like Java? (Java's model would imply trickiness around mutability, but it might be possible to detect the easy cases still.) I don't remember any, but it could've shown up when I wasn't looking.
Well, Java can do escape analysis, so a wrapper with a single field may end up as just a local variable holding the embedded field.
As for other JVM languages like Kotlin and Scala, they have basically what a "newtype" is, but it can only be completely erased in the bytecode when the wrapper has a single field.
Escape analysis that sinks a local allocation is great in itself, but for newtypes for things like “trusted HTML vs plain text”, I feel like the primary benefits are deeply interprocedural. The type constraint is encoding a promise that can be carried from one end of the code base to the other, and where you can know for sure when you're writing a module whether you're on one side of a barrier or the other. I would tend to expect this to result in patterns that aren't well-handled by escape analysis.
What I'm imagining for my curiosity about the dynamic case would look more like “JS/Lua/whatever engine detects that in frob(x) calls, x is always shaped like { foo: ‹string› } and its object identity is unused, so it replaces the calling convention for frob internally, then propagates that to any further callers”, and it might do the same thing when storing one of those in fields of other objects of known shapes, etc. until eventually it hits a boundary where the constraint isn't known to hold and has to be ready to materialize the wrapper object there.
Kotlin and Scala sound like they're doing the Rust/C++ thing at the bytecode level, if it's being “erased”, so just the static case again but with different concrete levels for machine vs language.
> This isn't specific to Rust or Typescript. You can do this in basically any language.
This just isn't true.
In any dynamic language you would not get these guarantees at compile time. You'd get random failures at runtime. That's not safety of any kind.
Also, part of the goal of languages like Haskell is that they help you think about your code before it runs. All of that is lost.
> Imagine you have to distinguish between unescaped and escaped strings for security purposes
That would be a nightmare in many languages. You'd have to rewrite large parts of the code to be compatible with one or both. And in many languages you'd have to duplicate your code entirely.
In other languages, the result would be so ugly you would never want to touch that code. Imagine doing this with, say, templates in C++.
>There's a performance cost to this
There is no performance cost in Haskell! This is entirely undone by the compiler.
Also, because the compiler understands what's going on at a much higher level, you can do things like deriving code. You can say that your classified strings behave like your regular strings in most contexts, like say, they're the same for the purpose of printing but not for the purpose of equality, in one line.
What you cannot do is get compile-time safety guarantees, and in languages like Rust the type system isn't strong enough for some advanced compile-time guarantees (via types). So no, you cannot do this in basically any language (unless you turn it into Haskell).
And categorically: the issue isn't what "I'd" do (my own code usually matches my habits), it's what other project members will be doing (including future degenerate versions of myself, assumed to be some combination of busy, tired, stressed and drunk).
The Confucian philosophy that people act like water coming down a mountain, seeking the path of least resistance, comes into play.
Haskell, OCaml, F#, and their ilk can yield beautiful, natural domain languages where using the types wrong is cost-prohibitive. In languages without those guarantees, every developer needs discipline to avoid shortcuts, review burden increases, and time-pressure discussions get rehashed.
I'm not convinced it really works well in typescript. the lack of nominal types requires you to remember some pretty hacky incantations if you want something like a newtype wrapping a primitive type
my experience is that ocaml is more powerful than rust for enforcing this sort of type safety, because you have gadts that give you more expressive power, and polymorphic variants and object types (record row types) that give you more convenience. and the module system and functors of course.
you also avoid some abstraction limitations/difficulties that come from the rust borrow checker for places where garbage collection is just fine
It really feels like we're solving the wrong problem sometimes. If a bad type can crash your application, sure, type safety is one answer, but I have to admit I like the Erlang approach: if something unexpected happens, crash the process (not the OS process, the Erlang process), which has a very small blast radius on a well-architected system (maybe it doesn't even fail the individual request that caused it). I wish more languages had this let-it-crash philosophy; it really allows for writing code exclusively for the happy path, safe in the knowledge that a -1 where a "string" should be isn't going to take down production.
Somehow, it feels like a better solution than these complicated type systems. Does any other language do this outside BEAM?
In a way I agree with you, and I'm not sure any popular language embraces this philosophy or makes it easy to follow. My sense is that Erlang is still the leader.
But I did want to add something the article also touches on: types can be not only about ensuring safety or correctness at runtime, but also about representing knowledge by encoding the theory of how the code is supposed to work as far as is practical, in a way that is durable as contributors come and go from a codebase.
Admittedly this can come at the cost of making it slower to experiment on or evolve the code, so you have to think about how strongly you want to enforce something to avoid the rigidity being more painful than valuable. But it's generally a win for helping someone new to a codebase understand it before they change it.
Edit: another thought I had is that type mistakes do not always cause crashes. Silent corruption can be much more insidious, e.g. from confusing types which mean something different but are the same at the primitive level (e.g. a string, number or uuid).
EDIT: previously the example in the parent comment was:
type NewType<T> = T & { __brand__ : Symbol }
---
This seems wrong; the type spelled `Symbol` refers to the boxed interface for symbols[0]. I suspect you meant to write `unique symbol` there, but it can't be used in that position.
I'm not sure if `NewType` in your comment is supposed to stand in for a specific newtype (in which case it probably doesn't need to be generic[1]) or if it's supposed to be a general-purpose type constructor for any newtype (in which case it should take a second type parameter to let me distinguish e.g. `EmailAddress` from `Password`[2]). The use of `unique symbol`s is also only really necessary if you want to keep the brand private to force users to go through a validation function or whatnot, otherwise you can just use string literal types.
I agree these incantations aren't big problems (it all falls out naturally from knowledge of TypeScript's type system, and can be abstracted away as per my comment in [2]), but the fact that you goofed in the very comment where you were trying to make that point is causing me to second-guess myself.
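For reference, the private-brand shape (an unexported `unique symbol` plus a validator) looks something like this sketch (names hypothetical):

```typescript
// The brand is type-only: `declare` means no runtime value exists, and keeping
// the symbol unexported makes the brand private to this module.
declare const emailBrand: unique symbol;
type EmailAddress = string & { readonly [emailBrand]: true };

// The only way to obtain an EmailAddress is through this validator.
function parseEmail(s: string): EmailAddress | null {
  // Deliberately crude validity check, for illustration only.
  return /^[^@\s]+@[^@\s]+$/.test(s) ? (s as EmailAddress) : null;
}

// Callers must have gone through parseEmail to call this.
function sendTo(addr: EmailAddress): string {
  return `sending to ${addr}`;
}
```

Since the symbol isn't exported, code outside the module can't forge the brand with a cast against some ad-hoc `{ __brand__: ... }` literal; it has to go through `parseEmail`.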
Right. Besides getting this incantation right, as gp did only after editing their comment, you also have to cast to create values of NewType. But generally you want to avoid casting in typescript if you care about type safety, so now everybody has to remember the rule that in this particular circumstance it's the right thing to do.
There are helper libraries to ease this (zod supports branded types, I think?), but I guess my general point is that while typescript might give you the ingredients you need to implement type safety in cases like this if you try really hard and remember all your rules everywhere, it doesn't come naturally so it's hard to maintain at scale.
I was on the Tube and wanted to get my reply in before entering a tunnel. I already corrected it whilst I was underground.
I think the point still stands - is this really a big problem? I guess I couldn't recite the syntax from memory, because I usually use a utility type for this
Agreed. Clojure gets this with Malli and Spec. That said, types are such a productivity boost over time that I think they should only be discarded for very good reasons.
I think you can also go a long way with C++ and templates to represent any kind of restricted type in the type system. Variants are somewhat clumsy without pattern matching, but most of the tools you can make use of are already there, I would say.
In my backend system I represent users with different variant states to avoid a lot of unrepresentable states.
As for underutilization, I think only functional languages, Rust and C++ support variants, and that might be one reason: people just make blobs of state and choose which fields to use, instead of encoding states and making some combinations unrepresentable. JavaScript, Java, C# and Python do not have variant types to the best of my knowledge. In OCaml and Haskell, with pattern matching, they are very natural. In Rust with enums, same. In C++ they are so-so but still usable compared to the languages that do not have them.
In my load tests I even went, since I launch thousands of clients, with a boost.MSM to drive the test behavior. One state machine per user.
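For what it's worth, TypeScript is a partial exception via discriminated unions; a sketch of the user-state idea (states hypothetical):

```typescript
// Each state carries only the fields valid in that state, so something like
// "anonymous user with a session token" is simply unrepresentable.
type UserState =
  | { kind: "anonymous" }
  | { kind: "loggedIn"; sessionToken: string }
  | { kind: "suspended"; reason: string };

function describe(u: UserState): string {
  switch (u.kind) {
    case "anonymous":
      return "anonymous";
    case "loggedIn":
      return `session ${u.sessionToken}`;
    case "suspended":
      return `suspended: ${u.reason}`;
    default: {
      // Exhaustiveness check: adding a new state breaks compilation here.
      const _exhaustive: never = u;
      return _exhaustive;
    }
  }
}
```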
And of course Rust and TypeScript were heavily influenced by Haskell... they just don't mention it and call things differently, to avoid the "monads are scary, I need to write a tutorial" effect. Though it's less about monads and more about things like type classes.
Rust's influence was OCaml, not Haskell. Its first compiler was written in OCaml. Its syntax directly looks like OCaml and C++ had a baby. It's got ML smells all over it. Haskell is not the sum of Hindley Milner-esque languages.
Personally, never enjoyed Haskell's syntax (or lack of it) and tendency to overthinking. But I did enjoy SML/NJ and OCaml to some extent.
Haskell type classes are not classes (like Java or PHP classes); they are comparable to Rust traits -- which are different from PHP traits which are comparable to Java/C# interfaces (with default impls; if you just want contracts you have... PHP interfaces).
A fundamental difference is that you can instantiate/implement a type class (or Rust trait) for any* type, compared to interfaces where each class declares the interfaces it implements. You can therefore create generic (forall) instances, higher kinded type classes, etc.
> Actually in modern Java you can simulate type classes approach with a mix of interfaces and default methods implementations.
Can you? The beauty of traits/type classes is that you can attach them to any type - in a world where 90% of the functionality of any piece of software is supplied by dependencies - external types which you cannot change - this is a vital feature.
You can't in Haskell either! For example, any function could secretly call `unsafePerformIO` to cause a side effect (and that's not the only example).
I believe `const` functions in Rust are actually guaranteed to be pure, though I haven't followed that feature closely and there may be nuances.
In most languages purity is a norm rather than enforced by static analysis. I definitely agree that it's much safer to assume that an arbitrary Haskell function is pure than it is to assume that of an arbitrary TypeScript function.
You can compile Haskell code with the -XSafe flag, and this is communicated in the package, so something like backpack can be told to load only safe modules. Still, there's probably plenty of code that's safe but not pure, but that's as good as we're likely to get.
I loved working in Haskell for a few years. I wasn't actively looking for it; the opportunity just sort of landed in my lap. It was exciting and mentally stimulating. But the unfortunate fact is, I am easily twice as productive in Rust as I am in Haskell, even after 3 years of nothing but Haskell. There are more pitfalls in Haskell that you have to just know how to avoid. It can be very difficult to digest, as the language can be borderline write-only at times, depending on the author of the code. The tooling is often married to Nix, which is its own complex beast. And it feels like language extensions are all over the place. Cabal files are not my favorite. And the compiler errors take some time to get used to.
Pretty surprising -- I had much the opposite experience.
On our last product, we decided to start switching from Typescript to Rust on the backend because we got tired of crashes. I consider that to be one of the greatest technical mistakes I've made ever, as our productivity slowed massively. I'll just share two time-draining issues that only occur in Rust: (1) Writing higher-order functions (e.g.: a function to open a database connection, do something, and then close it -- yes, I know you can use RAII for this particular example), which is trivial in Haskell and TypeScript and JavaScript and C++ and PHP, turned out to be so impossible in Rust [even after asking Rust-expert friends for help], that I learned to just give up and never try, though it sometimes worked to write a macro instead. (2) It's happened many times that I would attempt a refactoring, spend all day fixing type errors, finally get to the top-level file, get a type error that's actually caused somewhere else by basic parts of the design, and conclude the entire refactoring I had attempted is impossible and need to revert everything.
On top of that, Rust is the only modern language I can name where using a value by its interface instead of its concrete type lies somewhere between advanced and impossible, depending on what exactly you're doing.
I came away concluding that application code (as opposed to systems or library code) should, to a first approximation, never be written in Rust.
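For concreteness, the bracket pattern from point (1), sketched in TypeScript with a stubbed connection (all names hypothetical):

```typescript
// Hypothetical stand-in for a real database connection.
type Conn = { open: boolean; query(sql: string): string };

function openConn(): Conn {
  return { open: true, query: (sql) => `result of ${sql}` };
}

// The higher-order function: open, run the callback, always close,
// even when the callback throws.
function withConnection<T>(body: (conn: Conn) => T): T {
  const conn = openConn();
  try {
    return body(conn);
  } finally {
    conn.open = false; // stands in for conn.close()
  }
}
```

Usage is just `withConnection(c => c.query("select 1"))`; the generic `T` lets the callback return anything.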
I appreciate Rust for making affine types mainstream, and for getting at least the C++ community to start caring about security, even if half-heartedly.
However, I share your conclusion: outside scenarios where automated resource management as the main approach is technically impossible, or where trying to change a pervasive culture is a waste of time, I don't see much need for Rust.
In fact those that write comments about wanting a Rust but without borrow checker, the answer already exists.
I think Rust would be fine for application code if it kept the borrow checker, but had greater allowance for dynamically-sized variables, or even garbage collection. The reason calling things through an interface is so tough in Rust is because doing so requires having a pointer to a value of unknown size, which involves either heap allocation or alloca(), neither of which are very happy in Rust. Many of the other things I complained about are also downstream of this decision. Affine types are useful both in high-level state management as well as in low-level memory management. But it's Rust's focus on static memory layout that really cements it as a low-level systems language, not its inclusion of borrowing.
Way back as an undergrad in 2011, I contributed to Plaid, a JVM language whose main feature is based on affine and linear types. I'm one of the very few people in the world who knew what borrowing is before Rust had it. So I know first-hand that borrow-checking is perfectly compatible with garbage collection.
Exactly, and that is why after Rust's break into mainstream, several garbage collected languages are trying to mix advanced type systems with their approach to garbage collection (GC, RC, a mix of both, whatever).
This is also not strange for those in the Rust community with type systems experience, hence the Roadmap 2026 proposals for a more ergonomic experience.
Thus we have Linear Haskell, Swift 6 ownership, D ownership, Koka, Hylo, Chapel, OxCaml, Scala Capabilities, Ada/SPARK proofs, Idris, F*, Dafny,....
Interesting. But when I search `"roadmap 2026" rust`, I only get results for the video game.
Am familiar with Linear Haskell (and actually went on a walk through Tokyo with one of the authors just a few months ago). IIRC: still no resources allocated to add the things that would actually make it useful. Had not been aware of most of those, except the dependently-typed ones. Cool to know about the others. Yay linear types becoming known.
You seem interesting, and I'm curious about your background now. All I can find so far is that you're from Europe and used to lead C++ in some major corporation.
Maybe it depends on the application, but web servers are effortless with something like axum. Libraries can do a lot of heavy lifting to expose straightforward coding patterns. I never had any problems like you described with database connections and such. In Rust with DB pools, things just work and get closed on drop, etc. I would never even consider making a higher-order function for that.
Only other language that I think gets close to rust ergonomics is Kotlin, but it suffers from having too many possibilities for abstractions.
(1) Higher order functions are pretty much the same as all the other languages you mentioned, using closure syntax? What was the problem you ran into?
(2) In such situations the compiler (type system or borrow checker) is telling you that what you wanted to do has hidden bugs, and therefore refuses to compile. Usually that's a good thing.
(1) Oh sure, the syntax is easy. Getting it to borrow-check is somewhere between insane and impossible. As I said, I've had friends who are actual Rust experts give up trying.
(2) No, it stems from a compiler limitation (imposed in large part by the need for static memory layout), not because there's anything intrinsically buggy about doing this.
(3) Look up "dyn-compatibility", for the largest, but not the only, problem with doing this.
It seems to be a common reflex of Rust advocates that, whenever an issue with using the language is asked about, the response is "That's just a garbage-collected code pattern" followed by "and therefore you shouldn't want it." It's happened multiple times in this thread. [Edit: and both the times I was thinking of were from you, so need to weaken that conclusion]
Aside from having vibes of "I've chosen to get hit weekly in the face with a baseball bat, but have learned to like it, and so should you" it's also seldom true.
All three of these examples are also quite easy to do with C and C++. It's not about garbage collection.
That's pretty interesting. I was thinking about starting a new pet project and was considering doing it in Rust to learn, as I'd never tried anything with it, and after some small PoCs I had the feeling it was too verbose for my taste, but wasn't sure if it was just me and/or my lack of experience with Rust.
Still, wonder if it's still worth it to give a shot considering other positive elements of the language.
It’s worth learning, in my opinion, but I’ve been writing it professionally for the better part of a decade, so my opinion may be a bit skewed.
It’s my favorite language to write, and it gets much easier over time. As a first approximation, if you’re doing something and it feels insanely difficult like the GP is talking about, try to think of a different way to do it rather than fighting it. There’s usually a way to do almost anything, but it’s more pleasant to lean into the grooves the language pushes you towards.
Rust is definitely very verbose. I think it's a fine choice -- probably even the best choice -- if you're doing systems code or if performance is your most important feature. If not, I would pass.
That is a very unusual Rust experience. I find "application code" very pleasant to write in Rust. Of course there are things that aren't as ergonomic in Rust as in other languages (e.g. callbacks) but that's true of pretty much any language.
I have heard this reaction from others before. One of the Rust expert friends I consulted with told me "I'm not convinced you're not trying to write Haskell-style code in Rust;" I told him the patterns I was struggling with were both trivial and common in Java.
The things I found quite difficult or impossible in Rust were, to me, pretty basic patterns for modularity and removing duplication; it's really shocking that these complaints are not more common.
I currently have but two hypotheses for why.
First, the second problem I mentioned only comes from using tokio, which causes your top-level program to secretly be using a defunctionalized continuation data type, derived from where exactly in other files you put your await's, that might not be Send. If you're not using tokio, you won't experience that issue.
Second...I was kinda told to just give up on deduplication and have lots of copy+pasted code. This raises the very uncomfortable hypothesis that Rust afficionados are some combination of people who came to Rust early and never learned traditional software design and don't know what they're missing, and people who were raised on traditional good software engineering but then got hit with Rust's metaphorical baseball bat of lack-of-modularity over and over until they got used to being hit with a baseball bat as a normal pain of life.
I don't like either of these explanations (esp. with tokio seeming quite dominant), so I'm awaiting an explanation that makes more sense. https://xkcd.com/3210/
The most confusing thing that can happen with something like tokio is the failure-at-distance you can get from writing a non-Send future somewhere in the depths of a call stack and then having to figure out why your top-level spawn isn’t working. There’s a non-default lint I highly recommend turning on when working with tokio: clippy::future_not_send. Forces all your futures to be Send, unless you opt out, which really helps keep the reasoning local when you run into errors.
FWIW I write primarily rust, and I do not agree with the advice given in your second point, so I’d take it with a grain of salt were I you.
> difficult or impossible in Rust were to me pretty basic patterns for modularity
Many things are plainly not permitted, either because the borrow-checker isn't clever enough, or the pattern is unsafe (without garbage collection and so on).
Many functional/Haskell patterns simply can not be translated directly to Rust.
That "and so on" is doing a lot of work. You may accept rejecting garbage collection as a reasonable trade-off, but the bulk of the cost is coming from a much more aggressive trade-off Rust is making, which is at odds with the goals of most application code.
A deeply-baked assumption of Rust is that your memory layout is static. Dynamic memory layout is perfectly compatible with manual memory management, but Rust does not readily support it because of its demands for static memory layout.
A very easy place to see this is the difference in decorator types between Rust and other languages like Java. Java's legacy File/reader API has you write things like `new PrintWriter(new BufferedWriter(new FileWriter("foo.txt")))`, where each layer adds some functionality to the base layer. The resulting value has principal type `PrintWriter` and can be used through the `Writer` interface.
The equivalent code in Rust would give you a value of type `PrintWriter<BufferedWriter<FileWriter>>` which can only be passed to functions that expect exactly that type and not, say, a `PrintWriter<BufferedWriter<StringStream>>`. You would solve this by using a template function that takes a `T where T: Writer` parameter and gets compiled separately for every use-site, thus contributing to Rust's infamous slow build times.
It would be perfectly sane, and desirable for application code, to be able to pass around a PrintWriter value as an owned pointer to a PrintWriter struct which contains an owned pointer to a BufferedWriter struct which contains an owned pointer to a FileWriter struct. You could even have each pointer actually be to a Writer value of unknown size, and thus recover modularity.
In Rust, there is sometimes a painful and very fragile way to do this: have each writer type contain a `Box<dyn Writer>`, effectively the same as the Java solution above. This works, except that, if one day you want to add a method to the Writer trait that breaks dyn-compatibility, then you will no longer be able to do this, and will need to rewrite all code that uses this type.
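For contrast, the decorator shape under discussion is trivial in a structurally typed language; a TypeScript sketch (hypothetical Writer interface): the composed value is usable anywhere a Writer is expected, no matter how many layers it wraps.

```typescript
interface Writer {
  write(s: string): void;
}

// Base writer: accumulates output in memory (a stand-in for a file).
class StringWriter implements Writer {
  out = "";
  write(s: string): void { this.out += s; }
}

// Decorators: each wraps any Writer and adds one behavior.
class UpperCaseWriter implements Writer {
  constructor(private inner: Writer) {}
  write(s: string): void { this.inner.write(s.toUpperCase()); }
}

class LineWriter implements Writer {
  constructor(private inner: Writer) {}
  write(s: string): void { this.inner.write(s + "\n"); }
}

// Any function can take the composed stack as a plain Writer.
function greet(w: Writer): void { w.write("hello"); }
```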
You can usually manage dyn compatibility issues in my experience by writing a base trait that is not dyn compatible and then an Ext trait that is, which is auto implemented for all implementers of the base trait. You see this pattern all over the place, including with several of the buffer traits you mentioned.
Mostly, this works out well enough: dyn compatibility pretty much just insists your methods can in fact work with just a reference to an unknown variant of the type.
Some people ask me why I do not use Rust instead of C++, given that it is safer and more modern.
But I see the forums (and I also tried some toy stuff at times) plagued with rigidity problems that in C++ have obvious solutions.
For example, I am not going to fight the borrow checker all the way up the stack to get a 0.0005% perf improvement, if any, when I can use smart pointers.
I am not going to use Result everywhere when I can throw an exception and be done with it, instead of refactoring all the intermediate return types up the stack (though I use expected and optional and like them; it is a choice depending on what I am doing).
I am not going to elaborate safe interfaces for the arrays of data I need to send to a GPU: there is no value in it, I can get it wrong anyway, and it is ceremony. I assume this kind of code is unsafe by nature.
I find C++ just more flexible. Yes, it has warts, but I use all warnings as errors, clang tidy and have a lot of flexibility. I use values to avoid any trace of dangling and when it is going to get bad, I can, most of the time, switch to smart pointers.
I really do not get why someone would use Rust except for very niche cases that demand absolutely no memory unsafety (and this is not free either, as some reports show: you need to be really careful about reviewing unsafe if your domain is unsafe by nature or uses bindings, in order to keep Rust's invariants; or you write only safe code, in which case, if memory safety is critical, it does give you something).
But I do not see Rust good for writing general application code. At least not compared to well-written C++ nowadays.
“I’m not going to use Rust because I don’t like it” seems like what you’re saying, which is totally fine. Plenty of people, myself included, manage to write and enjoy writing general application code in Rust. You’re allowed to not get it, just like I’m allowed to dislike writing C++.
Good suggestion. I think I started doing that kind of thing towards the end of my days with Rust. It's been close to a year now, and I don't remember how well it worked out.
*`dyn Writer`. `impl Writer` can only be used in function parameters.
This was one of the example approaches I gave. This works...at first. The problem is that, if you want to add a new function to the Writer trait which makes Writer no longer dyn-compatible, such as, say, any async function, then you can no longer write `Box<dyn Writer>` and need to rewrite all code that uses it.
(although you can dig under the hood and specify a pinned-down Future type, covering one kind of awfulness with another)
> Rust afficionados are some combination of people who came to Rust early and never learned traditional software design and don't know what they're missing
This is definitely not the case and is unnecessarily insulting.
The truth is that some things are harder in Rust but a) often those things are best avoided anyway (e.g. callbacks), and b) it's worth the trade-off because of the other good things it allows.
Surely as a Haskell user of all things you must understand that sometimes making things harder is worth the trade-off. Yay, everything is pure! Great for many reasons. Now how do I add logging to this deeply nested function?
I know that it's insulting! And it doesn't make sense, because I generally think Rust programmers are smart people. But right now, it's the only explanation I've got, so it is alas necessarily insulting. So please, please, please give me a better explanation that actually makes sense.
> The truth is that some things are harder in Rust but a) often those things are best avoided anyway (e.g. callbacks), and b) it's worth the trade-off because of the other good things it allows.
This sounds like the seeds of a better explanation, but it needs a lot more to actually suffice. E.g.: why are callbacks best avoided anyway, when they're virtually required for a large number of important programming patterns? (In more technical language: they're effectively the only way to eliminate duplication in non-leaf-expressions. In even more technical language: they're the way to do second-order anti-unification.)
> Surely as a Haskell user of all things you must understand that sometimes making things harder is worth the trade-off. Yeay everything is pure! Great for many reasons. Now how do I add logging to this deeply nested function?
And this is a great illustration of the difference. First, you will seldom find Haskell programmers trying to argue that, actually, things like deeply-nested logging that everyone wants are actually "best avoided anyway." Second, you'll actually get a solution if you ask about them -- in this case, to either use MTL-style, to use a fixed alias for your monad stack, or that unsafePerformIO isn't actually that bad.
BTW, similar to my unpleasant conclusion for Rust above, I have another unpleasant conclusion for Haskell: Haskell is incredible for medium-sized programs, but it has its own missing modularity features that make it non-ideal for large programs (e.g.: >50k lines). But this is a much smaller problem than it sounds because Haskell is so compact that, while many projects can be huge, very few individual codebases will need to approach that size.
Look up "callback hell". Basically they encourage spaghetti.
> you'll actually get a solution if you ask about them
You got solutions to your problems didn't you? Macros are a perfectly reasonable thing to use in Rust, even if they are best avoided where possible. Exactly like unsafePerformIO.
If you were expecting Rust to work perfectly in every situation... well it doesn't. GUI programming in particular is still awkward, and async Rust has more footguns than anyone is happy with.
Despite that it's still probably the best language we have for a surprisingly large range of domains.
> Look up "callback hell". Basically they encourage spaghetti.
Ah. I think you're confusing the general idea of a callback with one particular style of use. "callback hell" refers to the deep indentation that occurs when trying to program in monadic style in languages without syntactic support for monads. It was mostly solved by adding async/await syntax, aka syntactic support for the continuation monad. "Callback hell" is not spaghetti in any deep sense, merely syntactically cumbersome.
But a "callback" is a more general term, sometimes a synonym for "function parameter," sometimes for more narrow kinds of function parameter (e.g.: void function, invokable once). Many people will refer to the function argument of the `map` function as a callback, but no-one would refer to that as "callback hell."
And when I did, I largely got it by figuring stuff out myself, while being told by multiple Rust experts that I either shouldn't care about the verbosity and lack of modularity, or that if I have a problem like "using the interface instead of the implementation" it must be because I'm a Haskeller.
Well, my ultimate solution was to start working on a new product, and to not use any Rust, except for some performance-heavy libraries. With the first product, the market had changed too much by the time we were ready for prime-time, and I'd put somewhere between 25% and 70% of the reason for that delay on our choice to start building new parts of the backend in Rust.
> Macros are a perfectly reasonable thing to use in Rust, even if they are best avoided where possible. Exactly like unsafePerformIO.
Good comparison!
> Despite that it's still probably the best language we have for a surprisingly large range of domains.
I agree with this. I just don't agree that that list of domains has a very large intersection with the set of applications.
I worked on a somewhat similar system in a fringe language (Scheme, and later, Racket) that got huge, but that remained manageable and high-velocity over a long period by a small team.
We didn't create many bugs, and usually functionality could be added very rapidly (e.g., we were the first to achieve a certain certification for hosting sensitive data on AWS).
Though occasionally functionality had to be added more slowly, because we had to write from scratch what would be an off-the-shelf component in a more popular platform. But once we did it, it worked, and we were back to our old velocity, and not slowed by the bloat and complexity of dozens of off-the-shelf frameworks. We could also adapt rapidly because we controlled a manageable platform, which is how we were able to move fast to AWS when there was a need.
The system also had some technical bits of architectural secret sauce from the start (for complex data, and Web interaction), which enabled a lot of rapid development of functionality, and also set the tone for later empowering smartness.
One difference with our system, from the Haskell fintech, was that our team size was very small (only 2-3 software engineers at a time, and someone who managed all the ops). So we didn't have the challenges of hundreds of people trying to coordinate and have a coherent system while getting their things done. Instead, there was usually one person doing more technical and architectural changes to the code, and a prolific other person doing huge amounts business logic functionality for complex processes.
With careful use of current/near-term LLM-ish AI tools, software development might find some related efficiencies of very small and incredibly effective teams. But the model that comes to mind is having a small number of very sharp thinkers keeping things on an empowering and manageable path -- not churning massive bloat to knock off story points and letting sustainability be someone else's problem.
It's a double-edged sword. Two million lines is a major feat. It also represents a significant maintenance burden.
The advantages to Haskell are theoretically obvious. The downsides are harder to intuit.
The temptation is to model _everything_ as types. The codebase itself becomes a _business specification_, not an application. Every policy change is a major refactor (some of which are shockingly high-touch thanks to Haskell safety).
The lesson is you cannot have your cake and eat it too. Eventually you become trapped by your types.
Haskell is really impressive and powerful, perhaps especially at this scale. However it brings its own unique problems. The temptation to model business logic as types leads to rigid structures. And the safety these structures bring can blind you to other classes of risk.
I think, perhaps contrary to popular belief, that Mercury choosing Haskell, and their early leadership having such storied experience in it, probably played a non-insignificant role in their success.
As a customer of Mercury, it's truly one of the critical companies in my toolkit, and I just can't help but feel that their choosing Haskell made their progress, development and overall journey that much better. I realize that you can make this argument with most languages, and it's not to say that an FP lang like Haskell is a recipe for success, but this intentional decision, particularly pre "vibe coding" and the LLM era, seems particularly prescient, of course combined with their engineering culture that was detailed in the post.
I'd also wager that hiring generalists with no prior experience in the language actually helped them, because they got to instill their culture and style from the ground up with their new hires. Pre vibe-coding, most of those people wouldn't have wanted to just jump in and hack away with zero instruction.
I am currently reading Real-World Ocaml and I am really learning more about functional programming, though I was already familiar with a few things.
Looks to me like you can build amazingly robust pieces of software with functional programming.
However, I am divided.
I have a backend that works in NiceGUI for a product. It does the job. The code is reasonable and MVVM. The most important task it does is connecting to a websocket per customer and consume data to present some analytics.
I will not have a great deal of customers, maybe in the tens or maximum hundreds visiting the website.
I also want REPL and/or hot reload, but I am aware that as I grow features (users admin panels, more analytics, etc) maybe functional programming can do a good job transforming data pipelines.
But Haskell or Ocaml are static. I guess if I want something later that grows and scales and is still dynamic Clojure or Elixir should be a good choice. But at the same time I am afraid that if at some point I need to refactor, things will go wrong.
Currently I use Python with Mypy. All is written in the backend: the frontend is generated by NiceGUI from the backend.
I once saw a real world Haskell code (from a huge investment bank) - the abundance of single letter variable names and short cryptic function names was striking.
Half the time the function names are operators, so you end up with an explosion of typographical symbols that makes Perl look like COBOL. Not my favorite aspect of the Haskell ecosystem.
My bestie works at this company and looking from the outside they have a good engineering culture. I do think Haskell is the right tool for the job, and they are playing to it's strengths, but part of me wonders if a lot of their success is attributable to the place just being well run in general.
That would not run counter to the popular (whether true or not) idea that by using functional programming languages you filter for a higher quality labor pool / applicant pool.
The version I've always heard is just well designed but less popular languages, but the ones I can think of were all functional (Haskell/F#/OCaml/Clojure/Elm/Erlang)
It’s hard to imagine what two million lines of Haskell could possibly be doing. I mean, that’s a lot of code, and I have the impression that Haskell is “tight”, meaning a little code can do a lot. Maybe they have a lot of libraries to do things like JSON serializing/deserializing, REST API frameworks, logging, etc.?
> The problem is that we cannot trust code we cannot instrument. If a third-party binding makes HTTP calls through concrete functions, we have no way to add tracing, no way to inject timeouts tuned to our SLOs, no way to simulate partner outages in testing, and no way to explain the 400ms gap in a trace except by squinting at it and developing theories. So we write our own. More work upfront, but the clients we write are observable by construction, because we built them that way from the start.
Nit: the quality of a language that you call "tight" is usually called "expressive." You can use few characters to express a relatively very abstract idea.
Some people call this "high-level," too.
I will say, though, that 2 million lines of code is much less code than it sounds like at first glance, especially for a company in a highly-regulated space like finance, plus a few years of progress.
Absolutely not an objective metric, but I have found that Haskell just has a different "aspect ratio". Line count may be somewhat lower, but the word count is essentially the same as in more imperative OOP languages.
I obviously don't know what the codebase looks like, but
a) Haskell's reputation for terseness partially comes from its overrepresentation in academic / category-theoretic circles, where it's typically fine to say things like `St M -> C T`. But for real software it's a lot more useful to say things like `TransactionState Debit -> Verified Transaction` etc etc.
b) The other part of Haskell's terse reputation is cultural, something extending back to LISP: people being way too clever about saving lines with inscrutable tricks or macros. I imagine that stuff is discouraged at a finance company like Mercury in favor of clarity and readability: e.g. perhaps the linter makes you split monadic stuff into pedantic multiline do expressions even if you can do it in a one-liner with >> and >>=.
I don't believe I'm the target market (I'm plenty happy with my small CU), and seeing their billboards makes me want to never use them BUT: seeing this post and their culture and that they use Haskell is kinda changing my mind.
I think you have to get a Haskell job early in your career and stick to Haskell jobs. Breaking in is really hard: if you come without experience, there will be plenty of others with Haskell experience to compete against. And because the jobs are rare, if it doesn't work out (the company becomes bad to work for, or there's a layoff) you can come unstuck (or I guess you would switch to Rust, Scala or F#).
As somebody who has helped hire many Haskell devs, I can say that lots of Haskell experience isn't always a positive. We have to filter carefully to make sure that we end up with developers who want to build real things, not developers who just want to get paid for noodling around with Haskell. As far as I'm concerned, I'd much rather hire somebody with lots of experience building things who ended up coming to Haskell later because they viscerally understand the benefits and risks. Somebody with lots and lots of Haskell experience who never delivered much is a big risk.
haha i've abused this recruiting mindset for a decade
it's so easy to scout when a company has this haskell philosophy. either by the interviewers themselves or by the bloggers they hired to guide their team.
the trick? i just..lie. "oh yeah i'm super pragmatic. i'm not hardline about haskell. i don't think you should be fancy." see how easy it is? i am suddenly hired and got a fat raise. and if the company moves off haskell? i quit immediately, get another haskell job, and talk to my former coworkers on the way out to embolden them to do the same.
it helps that i have the "real world" stuff on my resume.
i rode the 2010s job hopping ride as a haskeller doing this. each time a 20-30% raise. and i get to still write haskell. and i am always a top percentile haskeller at the company so i can code however tf i want lolol. suddenly - singletons, Generics, HKD!
so here's to earning another million bucks "noodling around with Haskell" :cheers:
My fear with something like Haskell particularly and with hiring people who really love Haskell is that you risk ending up with a certain kind of personality who fetishizes the tool over the problem.
I've been this person, and I've worked with this kind of person, and been the victim of this kind of person. They love language X, or framework Y, and are convinced that so many problems in front of them are shaped in a way that would be solved through the application of it.
They now have a hammer and they go searching for nails to hit with it.
I've been in shops that used Haskell, and it was... fine? It's I guess nice for people who enjoy writing in it -- I prefer other FP languages personally. I like nerdy things like that and used to hang out on Lambda the Ultimate or whatever. But I don't think there's any real secret powers in Haskell or most other tools. I've been burned too many times by that kind of approach.
> [To lib authors] Nobody is obviously in charge in the way a fast-moving production team would mean "in charge," and that creates understandable hesitation around making breaking changes, even when experience has taught us better ways to design these systems.
> This is not a complaint about volunteer maintainers. It is simply one of the ambient risks of building serious systems on a smaller ecosystem.
And so instead of paying the lib authors who already have domain expertise and know their codebase, they chose to rewrite it from scratch/fork without contributing back. So classic.
Author here: I think you are projecting quite a bit. We do in fact hire a lot of people who maintain things, and even pay quite a lot for OSS development on things like the compiler and libraries we care about. But we still have business objectives to achieve, and sometimes it makes more sense to write things that better suit our needs.
I use Haskell a lot, but I notice that it's very hard to cross-compile it.
If only cross-compilation became easy so that I can develop on my chip Macs and deploy on x64/AMD Linux servers.
>statically linking Haskell binaries is quite a challenge
>build requirements really slow down the process. I have to use dockers to help cache dependencies and avoid recompiling things that have not changed, but it is still slow and puts out large binaries.
Also, the Docker-based deployment takes a lot of time as it needs to recompile each module. While you can cache some part of it, it's still slow.
Meanwhile with Go it's painless. And i am not the only one having this issue:
Hell yeah, I made a hyperbolic PDE solver in a bizarre constrained space in Haskell! Unboxed vectors and performant algebraic systems out of functional programming is a blast.
I'm not particularly into Haskell or functional languages, but I came here to say it warms my heart to hear that people in finance actually embrace it (also J/APL). It seems like a good choice for banking.
The Mercury site also looks way better than most other banks I have ever used (load speed is also very good.) On the danger of seeming like a shill (I'm not), I'm tempted to try them out.
Mercury has been awesome, I've been using them for my business account for years and recently started using them for personal as well. I didn't know they used Haskell until well after I started using them, but it definitely tracks. The quality of their exposed software surface is at least a couple stddev above median.
Just wanted to say that rarely has a text made me want to work for a company as much as this one. This is the way engineering should be done. I have never worked in such an organisation.
I know this is not the point of the article, but I find the anecdote in the beginning about null pointer errors somewhat ironic. Haskell's solution to null pointers are option types (`Maybe x` in Haskell), but these are known to be suboptimal.
In languages with option types, if you want to weaken the type requirement for a function parameter, or strengthen the guarantee for a return type, you have to change the code at every call site. E.g, if you have a function which you can improve by changing
- a parameter Foo to Option<Foo> or
- a return value Option<Bar> to Bar
you would have to change the code at all call sites. Which could be anything between annoying and practically impossible.
In languages that solve null pointer errors instead with untagged union types (like TypeScript or Scala 3), this problem doesn't occur. So you can change
- a parameter Foo to Foo | Null or
- a return value Bar | Null to Bar
and all call sites of the function can remain unchanged, since the type system knows that weakening the type requirement for a parameter, or strengthening the promise for a return type, is a safe change that can't cause a type error.
So yes, option types do avoid null pointer exceptions, but they solve the issue in a very suboptimal way.
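A small TypeScript sketch of the claim, with hypothetical `findLabel`/`render` functions. Imagine a v1 where `findLabel` returned `string | null` and `render` took `string`; the signatures below are the "improved" v2 (return strengthened, parameter weakened), yet a call site written against v1 still type-checks unchanged:

```typescript
// v2 signatures; the v1 versions are described in the comments.
function findLabel(id: string): string {          // v1 returned: string | null
  return id.toUpperCase();
}

function render(label: string | null): string {   // v1 accepted: string
  return label === null ? "(missing)" : `<b>${label}</b>`;
}

// A call site written against v1: the (now unnecessary) null guard and the
// string argument both still type-check, because each v2 change only made
// the contract more generous toward existing callers.
function oldCallSite(id: string): string {
  const label = findLabel(id);
  if (!label) return "not found";  // old guard from when findLabel could return null
  return render(label);
}
```

With a tagged `Option<T>` instead, both signature changes would force edits at every call site, which is the asymmetry the comment above is pointing at.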
Mostly, though, if you do anything with the returned value at the call site, you need to change that code anyway? If it's not just passing it on, and even then you might need to adapt signatures. E.g. if you change from String | Null to String, you remove the null handling. If you add Null, you need to add null handling?
If you were calling a function which might return null (String | Null), you will already have null handling at the call site, but if you now change that function such that it never returns null (String), you still have the (now unnecessary) null handling, but this doesn't hurt and you don't have to change anything at the call site.
Likewise, if you were passing a String to a function that doesn't accept null (String), the call site already made sure that the parameter isn't null, and if you change the function so that it does now accept null (String | Null), again nothing needs to be changed at the call site.
I agree that this can be nice when done right (Clojure), but null is a high price to pay for this convenience.
I must admit I’ve never had this problem in application development. In fact, I do want to change my callers, because strengthening the contract is an opportunity to simplify the call sites: they no longer have to handle the optionality. The change might carry some semantic meaning too. Why are you getting x instead of Maybe x all of a sudden? Are there some other things you should reconsider in the callers? I can see how it could be useful in library development, but there are also patterns to account for this that are idiomatic to Haskell.
> I agree that this can be nice when done right (Clojure),
I don't think Clojure has untagged union types like TypeScript or Scala.
> but null is a high price to pay for this convenience.
Why would it be? Untagged unions prevent null pointer errors just as much as option types do, only they don't have the discussed disadvantages of option types.
No, they don't reference any "high price to pay", only that they personally didn't need the advantages of untagged union types so far, and that Haskell (allegedly) has patterns that would play a similar role for libraries.
>you would have to change the code at all call sites.
Actually I think you can just change the concrete argument `Foo` to a type constraint in Haskell as well, using a type class. So the function would be something like `foo :: ToMaybeFoo a => a -> .. ->`. And you would implement a `ToMaybeFoo` instance for `Foo` and `Maybe Foo`.
Agree that this is more involved than typescript, but you get to keep `null` away from your code...
This is a neat idea, but it does require that you know up front the largest union that could ever be supported in that argument, so that you have the ability to narrow it down later. Worse, in the limit it requires a combinatorial explosion of type classes, with one for each possible union! The `ToXYZorW` classes form a powerset over the available types.
Admittedly I don't really understand your construction. But this solution, if it works, doesn't look practical enough that it could be routinely used in practice like Foo|Null could be. By the way, some languages even shorten "Foo|Null" to "Foo?" as syntax sugar.
> but you get to keep `null` away from your code...
I don't think this would be desirable once we have eliminated null pointer exceptions with untagged unions.
>Admittedly I don't really understand your construction.
It is quite simple. Instead of accepting a concrete type `Foo`, the function is changed to accept types that can be converted to `Option<Foo>`. Since both `Foo` and `Option<Foo>` can be converted to `Option<Foo>`, the existing call sites that passes `Foo` would not require changing.
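For what it's worth, a looser TypeScript analogue of this construction is a union parameter that is normalized at the top of the function (the `Option`, `some`, and `describe` names here are all hypothetical, and this assumes `Foo` itself never has a `kind` field):

```typescript
// A tagged option type, as a stand-in for Haskell's Maybe.
type Option<T> = { kind: "some"; value: T } | { kind: "none" };

function some<T>(value: T): Option<T> {
  return { kind: "some", value };
}

type Foo = { name: string };

// The parameter accepts either a bare Foo or an Option<Foo>, so call sites
// written against the original `Foo`-only signature keep compiling unchanged.
function describe(input: Foo | Option<Foo>): string {
  // Normalize once, at the boundary; the rest of the body sees Option<Foo>.
  const opt: Option<Foo> = "kind" in input ? input : some(input);
  return opt.kind === "some" ? opt.value.name : "(none)";
}
```

The `in`-based narrowing plays the role that the `ToMaybeFoo` instances play in the Haskell version, though without the open extensibility of a type class.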
I really believe in FP and Haskell but I want to examine this objectively. Empirically speaking is what Mercury done successful truly because of Haskell? Do they have metrics that demonstrates clear superiority along some vectored trait like complexity, bug count, etc?
>A couple million lines of Haskell, maintained by people who learned the language on the job, at a company that moves huge amounts of money? The conventional wisdom says this should be a disaster, but surprisingly, it isn't. The system we've built has worked well for years, through hypergrowth, through the SVB crisis that sent $2 billion in new deposits our way in five days, through regulatory examinations, through all the ordinary and extraordinary things that happen to a financial system at scale.
This one is quite telling. Do people have counter examples?
Without having run the whole company twice in parallel, once using Haskell and again in some other language, and without having measured both runs exactly the same way, I don't think metrics like you're interested in could possibly have sufficient context to mean anything reliable.
Obviously Mercury is successful, and obviously Haskell is how they did it. So it's essential to their success. Would it be instrumental to anyone else's anywhere else doing anything else? Can't possibly know, I don't think.
The irony of these fancy FP languages that were designed to develop compilers or to get PL academics off is that they're actually also really good at the most mundane code imaginable.
Being able to minimize boilerplate and have strong refactoring and bug resistant types is a huge edge.
The only problem is their ecosystems are limited so you might spend more time than you like implementing an API or binding a system library.
It's probably more likely it comes down to: if your programmers are capable of learning and working in Haskell, they're likely a cut above in terms of keenosity and nerdiness and motivation around programming, and you're likely to early-cull the people who just got into CS because mom and dad told them it paid well. That's likely to help produce better overall code health.
not speaking in any official capacity, but: we have great internal training material courtesy of some very thoughtful folks, and ultimately one hopes that most of the code is going to be pretty straightforward wherever possible.
There are two countervailing effects when you choose a more theoretically advanced programming language. On the one hand, your hiring pool shrinks. On the other hand, the quality of the remaining hiring pool goes way up, which acts as an excellent recruiting filter (for both employer and employee). Jane Street made a similar play with OCaml.
I just wanted to leave a glowing comment about Mercury itself, since this is one of the few times I’ll be able to.
I’ve been using Mercury for 5 years. In that time, I’ve been able to wire transfer money without having to worry it might disappear (functionally impossible at certain other banks), created hundreds of virtual debit cards each with their own limit and pulling from different accounts, created dozens of accounts (a “place to put money”) named by function (each of my household utilities gets its own account, with an automatic rule to pull in money whenever it gets paid out), and… well, I think that covers everything.
This has given me unprecedented insight into my financial life. I know exactly how much I spend on groceries, on each utility, and on entertainment. I can project ahead and get a burn rate for my household. And my ex wife uses it too, on the same login, which is as easy as “make an account named with her first name” and a corresponding virtual debit card.
I’m convinced the only reason people don’t use Mercury is that they don’t know what they’re missing.
You have to pay for personal banking (a couple hundred a year iirc), but the business banking is free. If you want to try them out, you can start an LLC for a few dollars (at least in Missouri) and get overnight access to Mercury. All that’s required is your EIN.
They’ve been one of the single best products I’ve ever used. The sole wrinkle was when they canceled all their existing virtual cards due to reasons, which threw my recurring billing into chaos. But every great company is allowed at least one mega annoyance, and that one was a blip.
If you’re wondering whether to try them out, the answer is yes, and I’m excited for you to discover how cool it is. https://www.mercury.com
Shockingly yes. Or at least it happened to me. Nat wanted to wire me $10k to fund Tensorfork, and all I had at the time was a crummy US Bank account. I spent over an hour on the phone with them trying to get a wire transfer number, and in the end the number they gave me was wrong. The money never showed up.
That’s how I found Mercury. I was looking for a bank that wasn’t amateur hour. Once I had an account the money showed up overnight.
I still don’t know if Nat even got his original $10k back. I hope so. But yes, the failure mode is very much “if your wire transfer details are wrong, the money is just gone”. Apparently.
> I’m convinced the only reason people don’t use Mercury is that they don’t know what they’re missing.
Very well could be true because I had no idea who or what they are.
Do they have strong low level automation support for the customer programmatically even for personal accounts? I use ledger for plaintext accounting for both personal and business and sync of data is slightly annoying, perhaps Mercury’s products solve that trivially?
I run a small business's books using Mercury and beancount. The API supports enough operations in the free tier to do so with ease, though mostly I’m just fetching transactions. I do pay the ~35USD / mo for the extended API to get invoicing, though that’s not something a personal user would need.
Feel free to use it as it stores data on your browser's local storage only. For syncing between devices, you would be able to use Google firebase's free tier and export your accounts (after compressing and encrypting) there and import from another device. Let me know if you want to try it..
I badly want to try out Mercury, in fact I have an account already, but they don't yet support Zelle. I really hope their national bank charter application gets approved. (They applied for one last December.) Once they support Zelle, I'll drop U.S. Bank, as their app sucks and they don't have an API.
> Haskell gives you tools to encode these incantations in types so they cannot be forgotten. This is, for my money, the single most valuable thing the language offers a production engineering organization.
Haskell is admittedly, probably the most powerful widely (or even somewhat widely) used language for doing this, but this general pattern works really well in Rust and TypeScript too and is one of my very favorite tools for writing better code.
I also really like doing things like User -> LoggedInUser -> AccessControlledLoggedInUser to prevent the kind of really obvious AuthZ bugs people make in web applications time and time again.
I've found this pattern to be massively underutilized in industry.
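A minimal TypeScript sketch of that progression (all names and checks here are hypothetical; a real system would validate the token against a session store and consult an actual ACL, and would likely use branded types since TypeScript's structural typing would otherwise let a hand-built object literal slip through):

```typescript
interface User { id: string }
interface LoggedInUser extends User { sessionToken: string }
interface AccessControlledUser extends LoggedInUser { allowedResource: string }

// Each step returns a *different* type, so later handlers can demand proof.
function logIn(user: User, token: string | null): LoggedInUser | null {
  // hypothetical check; stands in for real session validation
  return token ? { ...user, sessionToken: token } : null;
}

function authorize(user: LoggedInUser, resource: string): AccessControlledUser | null {
  // hypothetical ownership rule; stands in for a real ACL lookup
  return user.id === "owner-of-" + resource
    ? { ...user, allowedResource: resource }
    : null;
}

// The handler's signature *is* the AuthZ requirement: calling it with a bare
// User or LoggedInUser is a compile error, not a forgotten runtime check.
function deleteResource(user: AccessControlledUser): string {
  return `deleted ${user.allowedResource} on behalf of ${user.id}`;
}
```

The point is that the "did we check authorization?" question moves from code review into the type checker.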
This is more a question of affordances than of type systems per se. E.g., you can do this quite happily in C# or something; it's just that the amount of visual clutter is more than the actual type definition.
This isn't specific to Rust or Typescript. You can do this in basically any language.
Imagine you have to distinguish between unescaped and escaped strings for security purposes. Even with a dynamically typed language, you can keep escaped strings as an Escaped class, with escape(str)->Escaped and dangerouslyAssumeEscaped(str)->Escaped functions (or static methods). There's a performance cost to this, so that's a tradeoff you have to weigh, but it is possible.
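A sketch of that pattern, written here in TypeScript for concreteness (the same shape works in plain JavaScript or Python; the escaping rules below are deliberately minimal, not production-grade sanitization):

```typescript
class Escaped {
  // Private constructor: the only ways to obtain an Escaped are the two
  // static methods below, so "is this string escaped?" becomes a type question.
  private constructor(readonly value: string) {}

  static escape(raw: string): Escaped {
    return new Escaped(
      raw.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;")
    );
  }

  // The loudly-named escape hatch: the caller vouches, and it's grep-able.
  static dangerouslyAssumeEscaped(raw: string): Escaped {
    return new Escaped(raw);
  }
}

// Sinks accept only Escaped, never string, so raw input can't reach them.
function renderComment(body: Escaped): string {
  return `<p>${body.value}</p>`;
}
```

In a truly dynamic language the wrapper object is a real runtime allocation, which is the performance cost the comment mentions.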
Another way of doing this is Application Hungarian[1], though that relies on the programmer more than it does on the compiler.
[1] https://www.joelonsoftware.com/2005/05/11/making-wrong-code-...
> There's a performance cost to this
That part is (de facto) required for dynamically typed languages, but not for statically typed ones where the newtype constructor/deconstructor can be elided at compile time. Rust and C++ especially both do the latter by having true value types available for wrappers that evaporate into zero extra machine code.
But then just this moment I wondered: do any major runtimes using models with no static type info manage to do full newtype elision in the JIT and only box on the deopt path? What about for models with some static type info but no value types, like Java? (Java's model would imply trickiness around mutability, but it might be possible to detect the easy cases still.) I don't remember any, but it could've shown up when I wasn't looking.
Well, java can do escape analysis, so a wrapper with a single field may end up as a local variable of the embedded field.
As for other JVM languages like Kotlin and Scala, they basically have what "newtype" is, but it can only be completely erased in the bytecode when there's a single field.
Escape analysis that sinks a local allocation is great in itself, but for newtypes for things like “trusted HTML vs plain text”, I feel like the primary benefits are deeply interprocedural. The type constraint is encoding a promise that can be carried from one end of the code base to the other, and where you can know for sure when you're writing a module whether you're on one side of a barrier or the other. I would tend to expect this to result in patterns that aren't well-handled by escape analysis.
What I'm imagining for my curiosity about the dynamic case would look more like “JS/Lua/whatever engine detects that in frob(x) calls, x is always shaped like { foo: ‹string› } and its object identity is unused, so it replaces the calling convention for frob internally, then propagates that to any further callers”, and it might do the same thing when storing one of those in fields of other objects of known shapes, etc. until eventually it hits a boundary where the constraint isn't known to hold and has to be ready to materialize the wrapper object there.
Kotlin and Scala sound like they're doing the Rust/C++ thing at the bytecode level, if it's being “erased”, so just the static case again but with different concrete levels for machine vs language.
> This isn't specific to Rust or Typescript. You can do this in basically any language.
This just isn't true.
In any dynamic language you would not get these guarantees at compile time. You'd get random failures at runtime. That's not safety of any kind.
Also, part of the goal of languages like Haskell is that they help you think about your code before it runs. All of that is lost.
> Imagine you have to distinguish between unescaped and escaped strings for security purposes
That would be a nightmare in many languages. You'd have to rewrite large parts of the code to be compatible with one or both. And in many languages you'd have to duplicate your code entirely.
In other languages, the result would be so ugly you would never want to touch that code. Imagine doing this with, say, templates in C++.
> There's a performance cost to this
There is no performance cost in Haskell! A `newtype` wrapper is entirely erased by the compiler.
Also, because the compiler understands what's going on at a much higher level, you can do things like deriving code. You can say that your classified strings behave like your regular strings in most contexts, like say, they're the same for the purpose of printing but not for the purpose of equality, in one line.
What you cannot do is get compile-time safety guarantees, and in languages like Rust the type system isn't strong enough for some advanced compile-time guarantees (via types). So no, you cannot do this in basically any language (unless you turn it into Haskell).
What the parents describe can be done with almost any language.
> You can do this in basically any language.
You can do it in Assembly. That doesn't mean it's cost effective.
And categorically: the issue isn't what "I'd" do (my habits usually match my intentions); it's what other project members will be doing (including future degenerate versions of myself, assumed to be some combination of busy, tired, stressed, and drunk).
The Confucian philosophy that people act like water coming down a mountain, seeking the path of least resistance, comes into play.
Haskell, OCaml, F#, and their ilk can yield beautiful natural domain languages where using the types wrong is cost prohibitive. In languages without those guarantees, every developer needs discipline to avoid shortcuts, review burden increases, and time-pressure discussions get rehashed.
Costs are a skill issue ;-)
I'm not convinced it really works well in typescript. the lack of nominal types requires you to remember some pretty hacky incantations if you want something like a newtype wrapping a primitive type
my experience is that ocaml is more powerful than rust for enforcing this sort of type safety, because you have gadts that give you more expressive power, and polymorphic variants and object types (record row types) that give you more convenience. and the module system and functors of course.
you also avoid some abstraction limitations/difficulties that come from the rust borrow checker for places where garbage collection is just fine
It really feels like we’re solving the wrong problem sometimes. If a bad type can crash your application, sure, type safety is one answer but I have to admit I like the erlang approach; if something unexpected happens crash the process (not os process, erlang process) which has a very small blast radius on a well architected system (maybe doesn’t even fail the individual request that caused it). I wish more languages had this let it crash philosophy, it really allows for writing code exclusively for the happy path, safe in the knowledge that a -1 where a “string” should be isn’t going to take down production.
Somehow, it feels like a better solution than these complicated type systems. Does any other language do this outside BEAM?
In a way I agree with you, and I'm not sure which popular languages embrace this philosophy or make it easy to follow. My sense is that Erlang is still the leader.
But I did want to add something the article also touches on: types can be not only about ensuring safety or correctness at runtime, but also about representing knowledge by encoding the theory of how the code is supposed to work as far as is practical, in a way that is durable as contributors come and go from a codebase.
Admittedly this can come at the cost of making it slower to experiment on or evolve the code, so you have to think about how strongly you want to enforce something to avoid the rigidity being more painful than valuable. But it's generally a win for helping someone new to a codebase understand it before they change it.
Edit: another thought I had is that type mistakes do not always cause crashes. Silent corruption can be much more insidious, e.g. from confusing types which mean something different but are the same at the primitive level (e.g. a string, number, or UUID)
> some pretty hacky incantations
I don't really see a big problem here?

EDIT: previously the example in the parent comment was:
This seems wrong; the type spelled `Symbol` refers to the boxed interface for symbols[0]. I suspect you meant to write `unique symbol` there, but it can't be used in that position.
I'm not sure if `NewType` in your comment is supposed to stand in for a specific newtype (in which case it probably doesn't need to be generic[1]) or if it's supposed to be a general-purpose type constructor for any newtype (in which case it should take a second type parameter to let me distinguish e.g. `EmailAddress` from `Password`[2]). The use of `unique symbol`s is also only really necessary if you want to keep the brand private to force users to go through a validation function or whatnot, otherwise you can just use string literal types.
I agree these incantations aren't big problems (it all falls out naturally from knowledge of TypeScript's type system, and can be abstracted away as per my comment in [2]), but the fact that you goofed in the very comment where you were trying to make that point is causing me to second-guess myself.
[0]: https://github.com/microsoft/TypeScript/blob/v6.0.3/src/lib/...
[1]: https://tsplay.dev/N7rvBw
[2]: https://tsplay.dev/Ndep0m
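For anyone following along, here's a minimal sketch of the private-brand version discussed above (the module layout and names like `EmailAddress`/`parseEmail` are hypothetical, not taken from the linked playgrounds):

```typescript
// Hypothetical EmailAddress newtype. The unique symbol never leaves this
// module, so the only way to obtain an EmailAddress is via parseEmail.
declare const emailBrand: unique symbol;
type EmailAddress = string & { readonly [emailBrand]: true };

function parseEmail(raw: string): EmailAddress | null {
  // Deliberately naive validation -- just for illustration.
  return raw.includes("@") ? (raw as EmailAddress) : null;
}

function send(to: EmailAddress, body: string): string {
  return `sending ${body.length} bytes to ${to}`;
}

const addr = parseEmail("ada@example.com");
if (addr !== null) {
  send(addr, "hello");        // accepted
}
// send("not-an-email", "x"); // compile error: string is not EmailAddress
```

Because the brand lives only in the type system, the runtime value is still a plain string; the wrapper costs nothing.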
Right. Besides getting this incantation right, as gp did only after editing their comment, you also have to cast to create values of NewType. But generally you want to avoid casting in typescript if you care about type safety, so now everybody has to remember the rule that in this particular circumstance it's the right thing to do.
There are helper libraries to ease this (zod supports branded types, I think?), but I guess my general point is that while typescript might give you the ingredients you need to implement type safety in cases like this if you try really hard and remember all your rules everywhere, it doesn't come naturally so it's hard to maintain at scale.
Yeah we just use Zod’s branded type and that pretty much handles it. No casts, use a refinement then slap a brand on it.
I was on the Tube and wanted to get my reply in before entering a tunnel. I already corrected it whilst I was underground.
I think the point still stands - is this really a big problem? I guess I couldn't recite the syntax from memory, because I usually use a utility type for this
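The utility-type route looks something like this (a hand-rolled sketch; `Brand`, `UserId`, and `asUserId` are hypothetical names, not any particular library's API):

```typescript
// Hypothetical reusable brand: a phantom label that exists only at the
// type level, so the runtime value is still a plain string.
type Brand<T, B extends string> = T & { readonly __brand: B };

type UserId = Brand<string, "UserId">;
type OrderId = Brand<string, "OrderId">;

// Confine the cast to one constructor per brand, so the rest of the
// codebase never needs `as`.
const asUserId = (s: string): UserId => s as UserId;

function lookupUser(id: UserId): string {
  return `user:${id}`;
}

lookupUser(asUserId("u-123"));      // accepted
// lookupUser("u-123");             // rejected: plain string is not UserId
// lookupUser("o-9" as OrderId);    // rejected: OrderId is not UserId
```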
“Parse don’t validate“ seems like the same idea
https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...
You do not need Haskell for that eg it works in Python (via pydantic, attrs data classes)
Agreed. Clojure gets this with Malli and Spec. That said, types are such a productivity boost over time that I think they should only be discarded for very good reasons.
I think you can also go a long way with C++ and templates to represent any kind of restricted type in the type system. Variants are somewhat clumsy without pattern matching, but most of the tools you can make use of are already there, I would say.
In my backend system I represent users with different variant states to avoid a lot of unrepresentable states.
As for underutilization, I think only functional languages, Rust, and C++ support variants, and that might be one reason: people just make blobs of state and choose which fields to use, instead of encoding states and making some combinations unrepresentable. JavaScript, Java, C#, and Python do not have variant types to the best of my knowledge. In OCaml and Haskell, with pattern matching, they are very natural. In Rust with enums, same. In C++ they are so-so, but still usable compared to the languages that do not have them.
In my load tests, since I launch thousands of clients, I even went with a Boost.MSM state machine to drive the test behavior. One state machine per user.
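For what it's worth, TypeScript (unlike the plain JavaScript mentioned above) does get a usable variant type through discriminated unions with exhaustive `switch` checking. A sketch of the user-states idea, with hypothetical state names:

```typescript
// Each user state carries only the fields that are valid in that state,
// so "logged out but has a session name" is simply unrepresentable.
type User =
  | { kind: "anonymous" }
  | { kind: "loggedIn"; name: string }
  | { kind: "suspended"; name: string; until: Date };

function greeting(u: User): string {
  switch (u.kind) {
    case "anonymous":
      return "hello, stranger";
    case "loggedIn":
      return `hello, ${u.name}`;
    case "suspended":
      return `${u.name} is suspended until ${u.until.toISOString()}`;
    // No default: since every branch must return a string, the compiler
    // flags this switch if a new `kind` is added and not handled.
  }
}
```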
> works really well in Rust and TypeScript too
And of course Rust and TypeScript were heavily influenced by Haskell... they just don't mention it and call things differently, to avoid the "monads are scary, I need to write a tutorial" effect. Though it's less about monads and more about things like type classes.
Imitation is the sincerest form of flattery.
Rust has typeclasses so that can't be it.
Rust's influence was OCaml, not Haskell. Its first compiler was written in OCaml. Its syntax directly looks like OCaml and C++ had a baby. It's got ML smells all over it. Haskell is not the sum of Hindley Milner-esque languages.
Personally, never enjoyed Haskell's syntax (or lack of it) and tendency to overthinking. But I did enjoy SML/NJ and OCaml to some extent.
Are type classes scary? PHP has had them since 2012.
They are different things.
What are different things?
Eli5:
Haskell type classes are not classes (like Java or PHP classes); they are comparable to Rust traits -- which are different from PHP traits which are comparable to Java/C# interfaces (with default impls; if you just want contracts you have... PHP interfaces).
A fundamental difference is that you can instantiate/implement a type class (or Rust trait) for any* type, compared to interfaces where each class declares the interfaces it implements. You can therefore create generic (forall) instances, higher kinded type classes, etc.
That conflates type classes with extension types, in type theory.
Actually, in modern Java you can simulate the type class approach with a mix of interfaces and default method implementations.
In C# the experience is more straightforward with extension types, introduced in C# 13.
Then we have yet another way to approach type classes in Scala, with traits and implicits.
And so on, as I haven't yet run out of examples.
> Actually, in modern Java you can simulate the type class approach with a mix of interfaces and default method implementations.
Can you? The beauty of traits/type classes is that you can attach them to any type - in a world where 90% of the functionality of any piece of software is supplied by dependencies - external types which you cannot change - this is a vital feature.
You can't enforce purity on the type level in TypeScript and IIRC neither in Rust.
You can't in Haskell either! For example, any function could secretly call `unsafePerformIO` to cause a side effect (and that's not the only example).
I believe `const` functions in Rust are actually guaranteed to be pure, though I haven't followed that feature closely and there may be nuances.
In most languages purity is a norm rather than enforced by static analysis. I definitely agree that it's much safer to assume that an arbitrary Haskell function is pure than it is to assume that of an arbitrary TypeScript function.
You can compile Haskell code with the -XSafe flag, and this is communicated in the package, so something like backpack can be told to load only safe modules. Still, there's probably plenty of code that's safe but not pure, but that's as good as we're likely to get.
I loved working in Haskell for a few years. I wasn't actively looking for it, but the opportunity just sort of landed in my lap. It was exciting and mentally stimulating. But the unfortunate fact is, I am easily twice as productive in Rust as I am in Haskell, even after 3 years of nothing but Haskell. There are more pitfalls in Haskell that you have to just know how to avoid. It can be very difficult to digest, as the language can be borderline write-only at times, depending on the author of the code. The tooling is often married to Nix, which is its own complex beast. And it feels like language extensions are all over the place. Cabal files are not my favorite. And the compiler errors take some time to get used to.
Pretty surprising -- I had much the opposite experience.
On our last product, we decided to start switching from Typescript to Rust on the backend because we got tired of crashes. I consider that to be one of the greatest technical mistakes I've made ever, as our productivity slowed massively. I'll just share two time-draining issues that only occur in Rust: (1) Writing higher-order functions (e.g.: a function to open a database connection, do something, and then close it -- yes, I know you can use RAII for this particular example), which is trivial in Haskell and TypeScript and JavaScript and C++ and PHP, turned out to be so impossible in Rust [even after asking Rust-expert friends for help], that I learned to just give up and never try, though it sometimes worked to write a macro instead. (2) It's happened many times that I would attempt a refactoring, spend all day fixing type errors, finally get to the top-level file, get a type error that's actually caused somewhere else by basic parts of the design, and conclude the entire refactoring I had attempted is impossible and need to revert everything.
On top of that, Rust is the only modern language I can name where using a value by its interface instead of its concrete type lies somewhere between advanced and impossible, depending on what exactly you're doing.
I came away concluding that application code (as opposed to systems or library code) should, to a first approximation, never be written in Rust.
I appreciate Rust for making affine types mainstream, and for getting at least the C++ community to start caring about security, even if half-heartedly.
However I share your conclusion, outside scenarios where having automated resource management as the main approach is either technically impossible, or a waste of time trying to change pervasive culture, I don't see much need for Rust.
In fact those that write comments about wanting a Rust but without borrow checker, the answer already exists.
I think Rust would be fine for application code if it kept the borrow checker, but had greater allowance for dynamically-sized variables, or even garbage collection. The reason calling things through an interface is so tough in Rust is because doing so requires having a pointer to a value of unknown size, which involves either heap allocation or alloca(), neither of which are very happy in Rust. Many of the other things I complained about are also downstream of this decision. Affine types are useful both in high-level state management as well as in low-level memory management. But it's Rust's focus on static memory layout that really cements it as a low-level systems language, not its inclusion of borrowing.
Way back as an undergrad in 2011, I contributed to Plaid, a JVM language whose main feature is based on affine and linear types. I'm one of the very few people in the world who knew what borrowing is before Rust had it. So I know first-hand that borrow-checking is perfectly compatible with garbage collection.
Exactly, and that is why after Rust's break into mainstream, several garbage collected languages are trying to mix advanced type systems with their approach to garbage collection (GC, RC, a mix of both, whatever).
This is also not strange for those in the Rust community with type systems experience, hence the Roadmap 2026 proposals for a more ergonomic experience.
Thus we have Linear Haskell, Swift 6 ownership, D ownership, Koka, Hylo, Chapel, OxCaml, Scala Capabilities, Ada/SPARK proofs, Idris, F*, Dafny,....
Interesting. But when I search `"roadmap 2026" rust`, I only get results for the video game.
Am familiar with Linear Haskell (and actually went on a walk through Tokyo with one of the authors just a few months ago). IIRC: still no resources allocated to add the things that would actually make it useful. Had not been aware of most of those, except the dependently-typed ones. Cool to know about the others. Yay linear types becoming known.
You seem interesting, and I'm curious about your background now. All I can find so far is that you're from Europe and used to lead C++ in some major corporation.
Maybe it depends on the application, but web servers are effortless with something like axum. Libraries can do a lot of heavy lifting to expose straightforward coding patterns. Never had any problems like you described with database connections and such. In Rust with db pools things just work and get closed on drop etc. I would never even consider making a higher order function for that.
Only other language that I think gets close to rust ergonomics is Kotlin, but it suffers from having too many possibilities for abstractions.
Depends very much on the market.
On my line of work we don't do Web servers from scratch, we use lego pieces like with enterprise integrations.
Think Sitecore, Dynamics, Sharepoint, Optimizely, Contentful, SAP, Magnolia, Stripe, PayPal, Adobe, SQL Server, Oracle, DB2,.....
Axum offers very little over existing .NET, Java, nodejs SDKs provided by those vendors.
(1) Higher order functions are pretty much the same as all the other languages you mentioned, using closure syntax? What was the problem you ran into?
(2) In such situations the compiler (type system or borrow checker) is telling you that what you wanted to do has hidden bugs, and therefore refuses to compile. Usually that's a good thing.
(3) &dyn Trait
(1) Oh sure, the syntax is easy. Getting it to borrow-check is somewhere between insane and impossible. As I said, I've had friends who are actual Rust experts give up trying.
(2) No, it stems from a compiler limitation (imposed in large part by the need for static memory layout), not because there's anything intrinsically buggy about doing this.
(3) Look up "dyn-compatibility", for the largest, but not the only, problem with doing this.
If your goal is to translate Haskell (or other garbage collected code) pattern-for-pattern into Rust, you will almost certainly burn out.
It seems to be a common reflex of Rust advocates that, whenever an issue with using the language is asked about, the response is "That's just a garbage-collected code pattern" followed by "and therefore you shouldn't want it." It's happened multiple times in this thread. [Edit: and both the times I was thinking of were from you, so need to weaken that conclusion]
Aside from having vibes of "I've chosen to get hit weekly in the face with a baseball bat, but have learned to like it, and so should you" it's also seldom true.
All three of these examples are also quite easy to do with C and C++. It's not about garbage collection.
That's pretty interesting. I was thinking about starting a new pet project and was considering doing it in Rust to learn, as I'd never tried anything with it, but after some small PoCs I had the feeling it was too verbose for my taste, and I wasn't sure whether it was just me and/or my lack of experience with Rust. Still, I wonder if it's worth giving it a shot considering the language's other positive elements.
It’s worth learning, in my opinion, but I’ve been writing it professionally for the better part of a decade, so my opinion may be a bit skewed.
It’s my favorite language to write, and it gets much easier over time. As a first approximation, if you’re doing something and it feels insanely difficult like the GP is talking about, try to think of a different way to do it rather than fighting it. There’s usually a way to do almost anything, but it’s more pleasant to lean into the grooves the language pushes you towards.
Rust is definitely very verbose. I think it's a fine choice -- probably even the best choice -- if you're doing systems code or if performance is your most important feature. If not, I would pass.
> performance
or less resource-hungry software
That is a very unusual Rust experience. I find "application code" very pleasant to write in Rust. Of course there are things that aren't as ergonomic in Rust as in other languages (e.g. callbacks) but that's true of pretty much any language.
I have heard this reaction from others before. One of the Rust expert friends I consulted with told me "I'm not convinced you're not trying to write Haskell-style code in Rust;" I told him the patterns I was struggling with were both trivial and common in Java.
The things I found quite difficult or impossible in Rust were, to me, such basic patterns for modularity and removing duplication that it's really shocking these complaints are not more common.
I currently have but two hypotheses for why.
First, the second problem I mentioned only comes from using tokio, which causes your top-level program to secretly be using a defunctionalized continuation data type, derived from where exactly in other files you put your await's, that might not be Send. If you're not using tokio, you won't experience that issue.
Second...I was kinda told to just give up on deduplication and have lots of copy+pasted code. This raises the very uncomfortable hypothesis that Rust aficionados are some combination of people who came to Rust early and never learned traditional software design and don't know what they're missing, and people who were raised on traditional good software engineering but then got hit with Rust's metaphorical baseball bat of lack-of-modularity over and over until they got used to being hit with a baseball bat as a normal pain of life.
I don't like either of these explanations (esp. with tokio seeming quite dominant), so I'm awaiting an explanation that makes more sense. https://xkcd.com/3210/
The most confusing thing that can happen with something like tokio is the failure-at-distance you can get from writing a non-Send future somewhere in the depths of a call stack and then having to figure out why your top-level spawn isn’t working. There’s a non-default lint I highly recommend turning on when working with tokio: clippy::future_not_send. Forces all your futures to be Send, unless you opt out, which really helps keep the reasoning local when you run into errors.
FWIW I write primarily rust, and I do not agree with the advice given in your second point, so I’d take it with a grain of salt were I you.
Thanks! Very helpful. Pain is still there, but surfaced early.
> difficult or impossible in Rust were to me pretty basic patterns for modularity
Many things are plainly not permitted, either because the borrow-checker isn't clever enough, or the pattern is unsafe (without garbage collection and so on).
Many functional/Haskell patterns simply can not be translated directly to Rust.
That "and so on" is doing a lot of work. You may accept rejecting garbage collection as a reasonable trade-off, but the bulk of the cost is coming from a much more aggressive tradeoff Rust is making with is at odds with the goals of most application code.
A deeply-baked assumption of Rust is that your memory layout is static. Dynamic memory layout is perfectly compatible with manual memory management, but Rust does not readily support it because of its demands for static memory layout.
A very easy place to see this is the difference in decorator types between Rust and other languages like Java. Java's legacy File/reader API has you write things like `new PrintWriter(new BufferedWriter(new FileWriter("foo.txt")))`, where each layer adds some functionality to the base layer. The resulting value has principal type `PrintWriter` and can be used through the `Writer` interface.
The equivalent code in Rust would give you a value of type `PrintWriter<BufferedWriter<FileWriter>>`, which can only be passed to functions that expect exactly that type and not, say, a `PrintWriter<BufferedWriter<StringStream>>`. You would solve this with a generic function that takes a `T: Writer` parameter and gets compiled separately for every use site, thus contributing to Rust's infamous slow build times.
It would be perfectly sane, and desirable for application code, to be able to pass around a PrintWriter value as an owned pointer to a PrintWriter struct which contains an owned pointer to a BufferedWriter struct which contains an owned pointer to a FileWriter struct. You could even have each pointer actually be to a Writer value of unknown size, and thus recover modularity.
In Rust, there is sometimes a painful and very fragile way to do this: have each writer type contain a `Box<dyn Writer>`, effectively the same as the Java solution above. This works, except that if one day you want to add a method to the `Writer` trait that breaks dyn-compatibility, you will no longer be able to do this, and will need to rewrite all code that uses this type.
You can usually manage dyn compatibility issues in my experience by writing a base trait that is not dyn compatible and then an Ext trait that is, which is auto implemented for all implementers of the base trait. You see this pattern all over the place, including with several of the buffer traits you mentioned.
Mostly, this works out well enough: dyn compatibility pretty much just insists your methods can in fact work with just a reference to an unknown variant of the type.
Some people ask me why I do not use Rust as opposed to C++ if it is already safer and more modern.
But I see the forums (and I also tried some toy stuff at times) plagued with rigidity problems that have obvious solutions in C++.
For example, I am not going to fight the borrow checker all the way up the stack to get a 0.0005% perf improvement, if any, when I can use smart pointers.
I am not going to use Result everywhere when I can throw an exception and be done with it, instead of refactoring all the way up the stack for the intermediate return types (though I use expected and optional and like them, but it is a choice depending on what I am doing).
I am not going to elaborate safe interfaces for the arrays of data I need to send to a GPU: there is no value in it and I can get it wrong anyway; it is ceremony. I assume this kind of code is unsafe by nature.
I find C++ just more flexible. Yes, it has warts, but I use all warnings as errors, clang tidy and have a lot of flexibility. I use values to avoid any trace of dangling and when it is going to get bad, I can, most of the time, switch to smart pointers.
I really do not get why someone would use Rust except for very niche cases like absolutely no memory unsafety (but this is not free either, as some reports show: you need to really be careful about reviewing unsafe code if your domain is unsafe by nature or uses bindings, to keep Rust invariants, or you write only safe code, in which case, if memory safety is critical, it does give you something).
But I do not see Rust good for writing general application code. At least not compared to well-written C++ nowadays.
“I’m not going to use Rust because I don’t like it” seems like what you’re saying, which is totally fine. Plenty of people, myself included, manage to write and enjoy writing general application code in Rust. You’re allowed to not get it, just like I’m allowed to dislike writing C++.
Good suggestion. I think I started doing that kind of thing toward the end of my days with Rust. It's been close to a year now, and I don't remember how well it worked out.
I have to sit with the specific example, but a PrintWriter struct which owns a Box<impl Writer> and has no generics should be quite doable I'd think?
*`dyn Writer`. `impl Writer` can only be used in argument and return position, not as a `Box`'s type parameter.
This was one of the example approaches I gave. This works...at first. The problem is that, if you want to add a new function to the Writer trait which makes Writer no longer dyn-compatible, such as, say, any async function, then you can no longer write `Box<dyn Writer>` and need to rewrite all code that uses it.
(although you can dig under the hood and specify a pinned-down Future type, covering one kind of awfulness with another)
Arc + clone or even just clone gets you 95% of the way to GC, no?
> Rust aficionados are some combination of people who came to Rust early and never learned traditional software design and don't know what they're missing
This is definitely not the case and is unnecessarily insulting.
The truth is that some things are harder in Rust but a) often those things are best avoided anyway (e.g. callbacks), and b) it's worth the trade-off because of the other good things it allows.
Surely as a Haskell user of all things you must understand that sometimes making things harder is worth the trade-off. Yay, everything is pure! Great for many reasons. Now how do I add logging to this deeply nested function?
> is unnecessarily insulting.
I know that it's insulting! And it doesn't make sense, because I generally think Rust programmers are smart people. But right now, it's the only explanation I've got, so it is alas necessarily insulting. So please, please, please give me a better explanation that actually makes sense.
> The truth is that some things are harder in Rust but a) often those things are best avoided anyway (e.g. callbacks), and b) it's worth the trade-off because of the other good things it allows.
This sounds like the seeds of a better explanation, but it needs a lot more to actually suffice. E.g.: why are callbacks best avoided anyway, when they're virtually required for a large number of important programming patterns? (In more technical language: they're effectively the only way to eliminate duplication in non-leaf-expressions. In even more technical language: they're the way to do second-order anti-unification.)
> Surely as a Haskell user of all things you must understand that sometimes making things harder is worth the trade-off. Yeay everything is pure! Great for many reasons. Now how do I add logging to this deeply nested function?
And this is a great illustration of the difference. First, you will seldom find Haskell programmers trying to argue that, actually, things like deeply-nested logging that everyone wants are actually "best avoided anyway." Second, you'll actually get a solution if you ask about them -- in this case, to either use MTL-style, to use a fixed alias for your monad stack, or that unsafePerformIO isn't actually that bad.
BTW, similar to my unpleasant conclusion for Rust above, I have another unpleasant conclusion for Haskell: Haskell is incredible for medium-sized programs, but it has its own missing modularity features that make it non-ideal for large programs (e.g.: >50k lines). But this is a much smaller problem than it sounds because Haskell is so compact that, while many projects can be huge, very few individual codebases will need to approach that size.
> why are callbacks best avoided anyway
Look up "callback hell". Basically they encourage spaghetti.
> you'll actually get a solution if you ask about them
You got solutions to your problems didn't you? Macros are a perfectly reasonable thing to use in Rust, even if they are best avoided where possible. Exactly like unsafePerformIO.
If you were expecting Rust to work perfectly in every situation... well it doesn't. GUI programming in particular is still awkward, and async Rust has more footguns than anyone is happy with.
Despite that it's still probably the best language we have for a surprisingly large range of domains.
> Look up "callback hell". Basically they encourage spaghetti.
Ah. I think you're confusing the general idea of a callback with one particular style of use. "callback hell" refers to the deep indentation that occurs when trying to program in monadic style in languages without syntactic support for monads. It was mostly solved by adding async/await syntax, aka syntactic support for the continuation monad. "Callback hell" is not spaghetti in any deep sense, merely syntactically cumbersome.
But a "callback" is a more general term, sometimes a synonym for "function parameter," sometimes for more narrow kinds of function parameter (e.g.: void function, invokable once). Many people will refer to the function argument of the `map` function as a callback, but no-one would refer to that as "callback hell."
Callbacks are quite universal, and most uses do not lead at all to callback hell. I've engaged with this topic a little bit at https://us16.campaign-archive.com/?u=8b565c97b838125f69e75fb... , above the header "Serious Business."
> You got solutions to your problems didn't you?
Mostly no :(
And when I did, I largely got it by figuring stuff out myself, while being told by multiple Rust experts that I either shouldn't care about the verbosity and lack of modularity, or that if I have a problem like "using the interface instead of the implementation" it must be because I'm a Haskeller.
Well, my ultimate solution was to start working on a new product, and to not use any Rust, except for some performance-heavy libraries. With the first product, the market had changed too much by the time we were ready for prime-time, and I'd put somewhere between 25% and 70% of the reason for that delay on our choice to start building new parts of the backend in Rust.
> Macros are a perfectly reasonable thing to use in Rust, even if they are best avoided where possible. Exactly like unsafePerformIO.
Good comparison!
> Despite that it's still probably the best language we have for a surprisingly large range of domains.
I agree with this. I just don't agree that that list of domains has a very large intersection with the set of applications.
Is the productivity 2x all across the board, or are there some parts that are less productive with Rust? Also, what do you mean by write-only?
> what do you mean by write-only?
I think they meant that in Haskell it is very easy to write code that is unreadable to anyone but its author.
I worked on a somewhat similar system in a fringe language (Scheme, and later, Racket) that got huge, but that remained manageable and high-velocity over a long period by a small team.
We didn't create many bugs, and usually functionality could be added very rapidly (e.g., we were the first to achieve a certain certification for hosting sensitive data on AWS).
Though occasionally functionality had to be added more slowly, because we had to write from scratch what would be an off-the-shelf component in a more popular platform. But once we did it, it worked, and we were back to our old velocity, and not slowed by the bloat and complexity of dozens of off-the-shelf frameworks. We could also adapt rapidly because we controlled a manageable platform, which is how we were able to move fast to AWS when there was a need.
The system also had some technical bits of architectural secret sauce from the start (for complex data, and Web interaction), which enabled a lot of rapid development of functionality, and also set the tone for later empowering smartness.
One difference with our system, from the Haskell fintech, was that our team size was very small (only 2-3 software engineers at a time, and someone who managed all the ops). So we didn't have the challenges of hundreds of people trying to coordinate and have a coherent system while getting their things done. Instead, there was usually one person doing more technical and architectural changes to the code, and a prolific other person doing huge amounts business logic functionality for complex processes.
With careful use of current/near-term LLM-ish AI tools, software development might find some related efficiencies of very small and incredibly effective teams. But the model that comes to mind is having a small number of very sharp thinkers keeping things on an empowering and manageable path -- not churning massive bloat to knock off story points and letting sustainability be someone else's problem.
It's a double-edged sword. Two million lines is a major feat. It also represents a significant maintenance burden.
The advantages to Haskell are theoretically obvious. The downsides are harder to intuit.
The temptation is to model _everything_ as types. The codebase itself becomes a _business specification_, not an application. Every policy change is a major refactor (some of which are shockingly high-touch thanks to Haskell's safety).
The lesson is you cannot have your cake and eat it too. Eventually you become trapped by your types.
Haskell is really impressive and powerful, perhaps especially at this scale. However it brings its own unique problems. The temptation to model business logic as types leads to rigid structures. And the safety these structures bring can blind you to other classes of risk.
Typescript too: https://www.richard-towers.com/2023/03/11/typescripting-the-...
Tbvh the biggest downside of a Turing complete type system is that you can theoretically implement an application that compiles to dust.
I think perhaps contrary to popular belief, Mercury choosing Haskell and their early leadership having such a storied experience in it probably played some non-insignificant role in their success.
As a customer of Mercury, it's truly one of the critical companies in my toolkit, and I just can't help but feel that their choosing of Haskell made their progress, development and overall journey that much better. I realize that you can make this argument with most languages, and it's not to say that a FP lang like Haskell is a recipe for success, but this intentional decision particularly pre "vibe coding" and the LLM era seems particularly prescient, of course combined with their engineering culture that was detailed in the post.
I'd also wager that hiring generalists with no prior experience in the language actually helped them, because they got to instill their culture and style from the ground up with their new hires. Pre vibe-coding, most of those people wouldn't have wanted to just jump in and hack away with zero instruction.
I have noticed that everything in their app Just Works. It's very satisfying coming from other services!
I feel the same way. I only started using Mercury about 6 months ago and I’m continually impressed that it just makes sense.
I am currently reading Real-World Ocaml and I am really learning more about functional programming, though I was already familiar with a few things.
Looks to me like you can build amazingly robust pieces of software with functional programming.
However, I am divided.
I have a backend that works in NiceGUI for a product. It does the job. The code is reasonable and MVVM. The most important task it does is connecting to a websocket per customer and consume data to present some analytics.
I will not have a great deal of customers, maybe in the tens or maximum hundreds visiting the website.
I also want a REPL and/or hot reload, but I am aware that as I grow features (user admin panels, more analytics, etc.) maybe functional programming can do a good job transforming data pipelines.
But Haskell or Ocaml are static. I guess if I want something later that grows and scales and is still dynamic Clojure or Elixir should be a good choice. But at the same time I am afraid that if at some point I need to refactor, things will go wrong.
Currently I use Python with Mypy. All is written in the backend: the frontend is generated by NiceGUI from the backend.
I once saw real-world Haskell code (from a huge investment bank) - the abundance of single-letter variable names and short cryptic function names was striking.
Half the time the function names are operators, so you end up with an explosion of typographical symbols that makes perl look like cobol. Not my favorite aspect of the Haskell ecosystem.
My bestie works at this company and looking from the outside they have a good engineering culture. I do think Haskell is the right tool for the job, and they are playing to its strengths, but part of me wonders if a lot of their success is attributable to the place just being well run in general.
That would not run counter to the popular (whether true or not) idea that by using functional programming languages you filter for a higher quality labor pool / applicant pool.
The version I've always heard is just well designed but less popular languages, but the ones I can think of were all functional (Haskell/F#/OCaml/Clojure/Elm/Erlang)
It’s hard to imagine what two million lines of Haskell could possibly be doing. I mean that’s a lot of code and I have the impression that Haskell is “tight”, meaning a little code can do a lot. Maybe they have a lot of libraries to do things like JSON serializing/deserializing, REST API frameworks, logging etc.?
From TFA:
> The problem is that we cannot trust code we cannot instrument. If a third-party binding makes HTTP calls through concrete functions, we have no way to add tracing, no way to inject timeouts tuned to our SLOs, no way to simulate partner outages in testing, and no way to explain the 400ms gap in a trace except by squinting at it and developing theories. So we write our own. More work upfront, but the clients we write are observable by construction, because we built them that way from the start.
Nit: the quality of a language that you call "tight" is usually called "expressive." You can use few characters to express a relatively very abstract idea.
Some people call this "high-level," too.
I will say, though, that 2 million lines of code is much less code than it sounds like at first glance, especially for a company in a highly-regulated space like finance, plus a few years of progress.
> Haskell is “tight”
Absolutely not an objective metric, but I have found that Haskell just has a different "aspect ratio". Line count may be somewhat lower, but the word count is largely the same as in more imperative OOP languages.
I obviously don't know what the codebase looks like, but
a) Haskell's reputation for terseness partially comes from its overrepresentation in academic / category-theoretic circles, where it's typically fine to say things like `St M -> C T`. But for real software it's a lot more useful to say things like `TransactionState Debit -> Verified Transaction` etc etc.
b) The other part of Haskell's terse reputation is cultural, something extending back to LISP: people being way too clever about saving lines with inscrutable tricks or macros. I imagine that stuff is discouraged at a finance company like Mercury in favor of clarity and readability: e.g. perhaps the linter makes you split monadic stuff into pedantic multiline do expressions even if you can do it in a one-liner with >> and >>=.
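The one-liner-versus-pedantic-do contrast above looks like this in practice (using the Maybe monad so the sketch stays self-contained; both definitions compute the same thing):

```haskell
-- Chained in one line with >>= ...
terse :: Maybe Int
terse = Just 2 >>= \x -> Just (x + 1) >>= \y -> Just (y * 10)

-- ... versus the multiline do-block a style guide might insist on.
pedantic :: Maybe Int
pedantic = do
  x <- Just 2
  y <- Just (x + 1)
  Just (y * 10)
```

Both evaluate to the same result; a linter that mandates the second form trades density for readability, which is plausibly the right trade at a finance company.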
A similar Haskell success story (from Bellroy) is the subject of an upcoming Melbourne Compose meeting: https://luma.com/uhdgct1v
I don't believe I'm the target market (I'm plenty happy with my small CU), and seeing their billboards makes me want to never use them BUT: seeing this post and their culture and that they use Haskell is kinda changing my mind.
I think you have to get a Haskell job early in your career and stick to Haskell jobs. Breaking in is really hard: coming in without experience, you'll be competing with plenty of others who do have Haskell experience. And because the jobs are rare, if it doesn't work out (the company becomes bad to work for, or there's a layoff) you can come unstuck (or I guess you would switch to Rust, Scala or F#).
As somebody who has helped hire many Haskell devs, I can say that lots of Haskell experience isn't always a positive. We have to filter carefully to make sure that we end up with developers who want to build real things, not developers who just want to get paid for noodling around with Haskell. As far as I'm concerned, I'd much rather hire somebody with lots of experience building things who ended up coming to Haskell later because they viscerally understand the benefits and risks. Somebody with lots and lots of Haskell experience who never delivered much is a big risk.
haha i've abused this recruiting mindset for a decade
it's so easy to scout when a company has this haskell philosophy. either by the interviewers themselves or by the bloggers they hired to guide their team.
the trick? i just..lie. "oh yeah i'm super pragmatic. i'm not hardline about haskell. i don't think you should be fancy." see how easy it is? i am suddenly hired and got a fat raise. and if the company moves off haskell? i quit immediately, get another haskell job, and talk to my former coworkers on the way out to embolden them to do the same.
it helps that i have the "real world" stuff on my resume.
i rode the 2010s job hopping ride as a haskeller doing this. each time a 20-30% raise. and i get to still write haskell. and i am always a top percentile haskeller at the company so i can code however tf i want lolol. suddenly - singletons, Generics, HKD!
so here's to earning another million bucks "noodling around with Haskell" :cheers:
So you've....worked hard. Understood the language and social landscape. Delivered what your employers wanted. And earned lots of money.
Congrats I guess? Not sure where the abuse/guilt comes from.
Interesting, I guess it then depends on the company (or recruiter) then.
This happened to me.
I've made all my money over a decade in Haskell. Millions. Paid for all my stuff.
It all started with a recruiter on LinkedIn
My fear with something like Haskell particularly and with hiring people who really love Haskell is that you risk ending up with a certain kind of personality who fetishizes the tool over the problem.
I've been this person, and I've worked with this kind of person, and been the victim of this kind of person. They love language X, or framework Y, and are convinced that so many problems in front of them are shaped in a way that would be solved through the application of it.
They now have a hammer and they go searching for nails to hit with it.
I've been in shops that used Haskell, and it was... fine? It's I guess nice for people who enjoy writing in it -- I prefer other FP languages personally. I like nerdy things like that and used to hang out on Lambda the Ultimate or whatever. But I don't think there's any real secret powers in Haskell or most other tools. I've been burned too many times by that kind of approach.
> [To lib authors] Nobody is obviously in charge in the way a fast-moving production team would mean "in charge," and that creates understandable hesitation around making breaking changes, even when experience has taught us better ways to design these systems.
> This is not a complaint about volunteer maintainers. It is simply one of the ambient risks of building serious systems on a smaller ecosystem.
And so instead of paying the lib authors who already have domain expertise and know their codebase, they chose to rewrite it from scratch/fork without contributing back. So classic.
Now you can develop the lib in the direction that you need and you have people on payroll that do it, this seems like good risk management.
Author here: I think you are projecting quite a bit. We do in fact hire a lot of people who maintain things, and even pay quite a lot for OSS development on things like the compiler and libraries we care about. But we still have business objectives to achieve, and sometimes it makes more sense to write things that better suit our needs.
I use Haskell a lot, but I notice that it's very hard to cross-compile it.
If only cross-compilation became easy so that I could develop on my M-chip Macs and deploy on x64/AMD Linux servers.
> statically linking Haskell binaries is quite a challenge
> build requirements really slow down the process. I have to use Docker to help cache dependencies and avoid recompiling things that have not changed, but it is still slow and puts out large binaries.
Also, the Docker-based deployment takes a lot of time as it needs to recompile each module. While you can cache some part of it, it's still slow.
Meanwhile with Go it's painless. And I am not the only one having this issue:
https://news.ycombinator.com/item?id=47957624#47972671
Such a shame: Haskell is a beautiful and performant language, but builds are still slow.
Hell yeah, I made a hyperbolic PDE solver in a bizarre constrained space in Haskell! Unboxed vectors and performant algebraic systems out of functional programming is a blast.
I'm not particularly into Haskell or functional languages, but I came here to say it warms my heart to hear that people in finance actually embrace it (also J/APL). It seems like a good choice for banking.
The Mercury site also looks way better than most other banks I have ever used (load speed is also very good.) On the danger of seeming like a shill (I'm not), I'm tempted to try them out.
Do it. You won’t be disappointed. I’ve been using them for about 5 years now.
> written using the mosaic theory of information and a range of journalistic tools
what does that mean?
Mercury has been awesome, I've been using them for my business account for years and recently started using them for personal as well. I didn't know they used Haskell until well after I started using them, but it definitely tracks. The quality of their exposed software surface is at least a couple stddev above median.
Just wanted to say that rarely has a text made me want to work for a company as much as this one. This is the way engineering should be done. I have never worked in such an organisation.
Haskell is like an instrument I can never quite master, yet I find it utterly fascinating and keep trying to learn it without success.
I know this is not the point of the article, but I find the anecdote in the beginning about null pointer errors somewhat ironic. Haskell's solution to null pointers are option types (`Maybe x` in Haskell), but these are known to be suboptimal.
In languages with option types, if you want to weaken the type requirement for a function parameter, or strengthen the guarantee for a return type, you have to change the code at every call site. E.g, if you have a function which you can improve by changing
- a parameter Foo to Option<Foo> or
- a return value Option<Bar> to Bar
you would have to change the code at all call sites. Which could be anything between annoying and practically impossible.
In languages that solve null pointer errors instead with untagged union types (like TypeScript or Scala 3), this problem doesn't occur. So you can change
- a parameter Foo to Foo | Null or
- a return value Bar | Null to Bar
and all call sites of the function can remain unchanged, since the type system knows that weakening the type requirement for a parameter, or strengthening the promise for a return type, is a safe change that can't cause a type error.
So yes, option types do avoid null pointer exceptions, but they solve the issue in a very suboptimal way.
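A small Haskell sketch of the coupling being described (all names hypothetical):

```haskell
-- Version 1: the lookup can fail, so it returns a Maybe.
lookupName :: Int -> Maybe String
lookupName 1 = Just "alice"
lookupName _ = Nothing

-- Every caller must pattern-match on the Maybe:
greet :: Int -> String
greet uid = case lookupName uid of
  Just n  -> "hello " ++ n
  Nothing -> "hello stranger"

-- If lookupName is later strengthened to `Int -> String`
-- (it can no longer fail), greet's case expression stops
-- type-checking: `Maybe String` and `String` are distinct types,
-- so every call site must be edited. With an untagged union
-- (TypeScript's `string | null`), narrowing the return type to
-- `string` leaves callers compiling unchanged, because `string`
-- is a subtype of `string | null`.
```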
Mostly though if you do anything with the returned value at the call site you need to change that code anyways? If it is not just passing it on, and even then you might need to adapt its signatures. E.g. if you change from String | Null to String you remove the null handling. If you add Null you need to add Null handling?
No that's not right.
If you were calling a function which might return null (String | Null), you will already have null handling at the call site, but if you now change that function such that it never returns null (String), you still have the (now unnecessary) null handling, but this doesn't hurt and you don't have to change anything at the call site.
Likewise, if you were passing a String to a function that doesn't accept null (String), the call site already made sure that the parameter isn't null, and if you change the function so that it does now accept null (String | Null), again nothing needs to be changed at the call site.
I agree that this can be nice when done right (Clojure), but null is a high price to pay for this convenience.
I must admit I’ve never had this problem in application development. In fact, I do want to change my callers because strengthening the contract is an opportunity to simplify the callsites - they no longer have to handle the optionality. The change might carry some semantic meaning too, why are you getting x instead of Maybe x all of the sudden? Are there some other things you should reconsider in the callers? I can see how it could be useful in library development, but there are also patterns to account for this that are idiomatic to Haskell.
> I agree that this can be nice when done right (Clojure),
I don't think Clojure has untagged union types like TypeScript or Scala.
> but null is a high price to pay for this convenience.
Why would it be? Untagged unions prevent null pointer errors just as much as option types do, only they don't have the discussed disadvantages of option types.
> Why would it be?
That's literally what they explain in the rest of the comment.
No, they don't reference any "high price to pay", only that they personally didn't need the advantages of untagged union types so far, and that Haskell (allegedly) has patterns that would play a similar role for libraries.
>you would have to change the code at all call sites.
Actually I think you can just change the concrete argument `Foo` to a type-class constraint in Haskell as well. So the function would be something like `foo :: ToMaybeFoo a => a -> .. ->`. And you would implement a `ToMaybeFoo` instance for `Foo` and `Maybe Foo`.
Agree that this is more involved than typescript, but you get to keep `null` away from your code...
This is a neat idea, but it does require that you know up front the largest union that could ever be supported in that argument, so that you have the ability to narrow it down later. Worse, in the limit it requires a combinatorial explosion of type classes, with one for each possible union! The `ToXYZorW` classes form a powerset over the available types.
See fundeps.
Admittedly I don't really understand your construction. But this solution, if it works, doesn't look practical enough that it could be routinely used in practice like Foo|Null could be. By the way, some languages even shorten "Foo|Null" to "Foo?" as syntax sugar.
> but you get to keep `null` away from your code...
I don't think this would be desirable once we have eliminated null pointer exceptions with untagged unions.
>Admittedly I don't really understand your construction.
It is quite simple. Instead of accepting a concrete type `Foo`, the function is changed to accept types that can be converted to `Option<Foo>`. Since both `Foo` and `Option<Foo>` can be converted to `Option<Foo>`, the existing call sites that passes `Foo` would not require changing.
https://play.haskell.org/saved/g4idq2zv
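For anyone who doesn't want to click through, the construction looks roughly like this (names hypothetical): a type class whose instances cover both the bare type and its `Maybe`, so existing call sites that pass a bare `Foo` keep compiling.

```haskell
{-# LANGUAGE FlexibleInstances #-}

newtype Foo = Foo String

-- Anything convertible to `Maybe Foo`.
class ToMaybeFoo a where
  toMaybeFoo :: a -> Maybe Foo

instance ToMaybeFoo Foo where
  toMaybeFoo = Just

instance ToMaybeFoo (Maybe Foo) where
  toMaybeFoo = id

-- The parameter was widened from `Foo` to any `ToMaybeFoo a`;
-- old call sites passing a bare `Foo` need no changes.
describe :: ToMaybeFoo a => a -> String
describe x = case toMaybeFoo x of
  Just (Foo s) -> "got " ++ s
  Nothing      -> "got nothing"
```

Note the `FlexibleInstances` extension: the `Maybe Foo` instance head isn't Haskell 98, which hints at why this stays a workaround rather than something as routine as `Foo | null`.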
I really believe in FP and Haskell but I want to examine this objectively. Empirically speaking is what Mercury done successful truly because of Haskell? Do they have metrics that demonstrates clear superiority along some vectored trait like complexity, bug count, etc?
> A couple million lines of Haskell, maintained by people who learned the language on the job, at a company that moves huge amounts of money? The conventional wisdom says this should be a disaster, but surprisingly, it isn't. The system we've built has worked well for years, through hypergrowth, through the SVB crisis that sent $2 billion in new deposits our way in five days, through regulatory examinations, through all the ordinary and extraordinary things that happen to a financial system at scale.
This one is quite telling. Do people have counter examples?
Without having run the whole company twice in parallel, once using Haskell and again in some other language, and without having measured both runs exactly the same way, I don't think metrics like you're interested in could possibly have sufficient context to mean anything reliable.
Obviously Mercury is successful, and obviously Haskell is how they did it. So it's essential to their success. Would it be instrumental to anyone else's anywhere else doing anything else? Can't possibly know, I don't think.
I’m asking for solutions and answers. Yeah. I’m aware of how hard it is to get metrics.
You can still compare lines of code and bug rate over the same period of time.
The irony of these fancy FP languages that were designed to develop compilers or to get PL academics off is that they're actually also really good at the most mundane code imaginable.
Being able to minimize boilerplate and have strong refactoring and bug resistant types is a huge edge.
The only problem is their ecosystems are limited so you might spend more time than you like implementing an API or binding a system library.
It's probably more likely it comes down to: if your programmers are capable of learning and working in Haskell, they're likely a cut above in terms of keenosity and nerdiness and motivation around programming, and you're likely to early-cull the people who just got into CS because mom and dad told them it paid well. That's likely to help produce better overall code health.
risky move, what is the talent pool for Haskell devs these days?
not speaking in any official capacity, but: we have great internal training material courtesy of some very thoughtful folks, and ultimately one hopes that most of the code is going to be pretty straightforward wherever possible.
There are two countervailing effects when you choose a more theoretically advanced programming language. On the one hand, your hiring pool shrinks. On the other hand, the quality of the remaining hiring pool goes way up, which acts as an excellent recruiting filter (for both employer and employee). Jane Street made a similar play with OCaml.
I just wanted to leave a glowing comment about Mercury itself, since this is one of the few times I’ll be able to.
I’ve been using Mercury for 5 years. In that time, I’ve been able to wire transfer money without having to worry it might disappear (functionally impossible at certain other banks), created hundreds of virtual debit cards each with their own limit and pulling from different accounts, created dozens of accounts (a “place to put money”) named by function (each of my household utilities gets its own account, with an automatic rule to pull in money whenever it gets paid out), and… well, I think that covers everything.
This has given me unprecedented insight into my financial life. I know exactly how much I spend on groceries, on each utility, and on entertainment. I can project ahead and get a burn rate for my household. And my ex-wife uses it too, on the same login, which is as easy as “make an account named with her first name” and a corresponding virtual debit card.
I’m convinced the only reason people don’t use Mercury is that they don’t know what they’re missing.
You have to pay for personal banking (a couple hundred a year iirc), but the business banking is free. If you want to try them out, you can start an LLC for a few dollars (at least in Missouri) and get overnight access to Mercury. All that’s required is your EIN.
They’ve been one of the single best products I’ve ever used. The sole wrinkle was when they canceled all their existing virtual cards due to reasons, which threw my recurring billing into chaos. But every great company is allowed at least one mega annoyance, and that one was a blip.
If you’re wondering whether to try them out, the answer is yes, and I’m excited for you to discover how cool it is. https://www.mercury.com
Can confirm. I've been using Mercury for three years now for my LLC. Everything is just so smooth.
> I’ve been able to wire transfer money without having to worry it might disappear
I thought that was the whole point of banks - does money randomly disappear when people do wire transfers?
Shockingly yes. Or at least it happened to me. Nat wanted to wire me $10k to fund Tensorfork, and all I had at the time was a crummy US Bank account. I spent over an hour on the phone with them trying to get a wire transfer number, and in the end the number they gave me was wrong. The money never showed up.
That’s how I found Mercury. I was looking for a bank that wasn’t amateur hour. Once I had an account the money showed up overnight.
I still don’t know if Nat even got his original $10k back. I hope so. But yes, the failure mode is very much “if your wire transfer details are wrong, the money is just gone”. Apparently.
> I’m convinced the only reason people don’t use Mercury is that they don’t know what they’re missing.
Very well could be true because I had no idea who or what they are.
Do they have strong low level automation support for the customer programmatically even for personal accounts? I use ledger for plaintext accounting for both personal and business and sync of data is slightly annoying, perhaps Mercury’s products solve that trivially?
They sure do!
API: https://docs.mercury.com/docs/welcome
MCP: https://docs.mercury.com/docs/what-is-mercury-mcp
I run a small business's books using Mercury and beancount. The API supports enough operations in the free tier to do so with ease, though mostly I’m just fetching transactions. I do pay the ~35USD / mo for the extended API to get invoicing, though that’s not something a personal user would need.
> I use ledger for plaintext accounting for both personal and business and sync of data is slightly annoying..
I made this to solve it https://sras.me/accounts/
Feel free to use it, as it stores data in your browser's local storage only. For syncing between devices, you can export your accounts (after compressing and encrypting) to Google Firebase's free tier and import them on another device. Let me know if you want to try it.
I badly want to try out Mercury, in fact I have an account already, but they don't yet support Zelle. I really hope their national bank charter application gets approved. (They applied for one last December.) Once they support Zelle, I'll drop U.S. Bank, as their app sucks and they don't have an API.
Ironically I dropped US Bank for Mercury too. You’ll be glad you did.
You could try them out while still having your US Bank account.
A couple million lines of code?
Try a better programming language next time, dagnabbit!!!
(There will be downvotes I suppose. More lines of code the better?)
Will AI agents turn Haskell into a commercially viable language?
huh i love haskell soooooo muchhhh
Cool article, wish it wasn’t AI written/edited.