> https://github.com/iokasimov/ya/blob/main/Ya/Operators/Handc...
This is the most arcane codebase I've seen. It's on par with the co-dfns compiler. The frontend syntax also looks like a cross between Haskell and APL.
Я is a fascinating project. Looks like an esolang but is actually just Haskell.
> ho, he, yo, yo_, yo__, yo___, ... yo__________, ha
Is that some kind of meme I don't understand?
I mean, it kinda looks like an enumeration of all the ways a bunch of operators can be combined.
Apparently, yes: https://muratkasimov.art/Ya/Operators
I think it is related to category theory, namely "Yoneda" and Hom functors, but that's a wild guess
Not sure I get the point; of course monads reduce to a functor with two natural transformations (η/μ) that satisfy the monad coherence conditions. That's ... literally what a monad is.
- η[i]: i → t i: this is return/pure.
- μ[i]: t (t i) → t i: this is join.
Now conform to the coherence conditions (aka the monad laws), and you have ... a monad. So why not call it that? It's convenient to have a name for it, and nothing stops you from passing around return/join as freestanding natural transformations if you really want to.
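Spelled out in ordinary Haskell, as a minimal sketch (the names JoinMonad, unit and bind are mine, not from any library):

```haskell
-- A functor plus the two natural transformations above, with bind
-- recovered from them.
class Functor t => JoinMonad t where
  unit :: a -> t a        -- η: return/pure
  join :: t (t a) -> t a  -- μ

-- The coherence conditions (the monad laws) in this presentation:
--   join . join == join . fmap join
--   join . unit == id == join . fmap unit

-- Kleisli bind falls out of fmap and join:
bind :: JoinMonad t => t a -> (a -> t b) -> t b
bind m k = join (fmap k m)

-- e.g. Maybe:
instance JoinMonad Maybe where
  unit = Just
  join (Just x) = x
  join Nothing  = Nothing
```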
Well, he's suggesting natural transformations as an improvement over monads (different monads won't compose without transformers), but he's light on examples.
Show me the behaviour of a State/Either stack!
The system being demonstrated only has three effects: World, State, and Stop, and all the valid combinations of those are given at the very end of the document.
As far as I can tell, these natural transformations are equivalent to the type class instances required for monad transformers. You need some way to transform a computation in a component monad to a computation in the composed monad.
There are three combinations of World, State, and Stop expressed, with natural transformations for each specific effect within each combination, i.e., a "lift" from each part to the composed whole. Those particular effects don't sensibly compose in any way other than the ones given: there's no (reasonable) interpretation of State of Stops, only of Stops of State, and no reasonable interpretation of World that isn't "on the inside", much like in mtl one always puts IO at the bottom of the stack.
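In transformers/mtl terms, those natural transformations are just `lift` composed the right number of times. A rough sketch, with a made-up stack shaped like Stop/State/World (the names and the `App` type are mine, not the article's):

```haskell
import Control.Monad.Trans.Class  (lift)
import Control.Monad.Trans.Except (ExceptT, throwE)
import Control.Monad.Trans.State  (StateT)

-- "World" (IO) sits at the bottom, as one usually does with mtl.
type App = ExceptT String (StateT Int IO)

-- Each component embeds into the composed whole via a natural
-- transformation, spelled `lift` (stacked as many times as needed):
liftWorld :: IO a -> App a
liftWorld = lift . lift

liftState :: StateT Int IO a -> App a
liftState = lift

stop :: String -> App a
stop = throwE
```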
If we had an Error effect, though, maybe we'd want to interpret errors within stateful computations and retain the state, maybe we'd want to interpret errors outside the stateful computation and abort the whole thing...
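Concretely (and this also shows the State/Either behaviour asked for above): swapping the order of the two layers changes whether the state survives a throw. A minimal sketch with mtl; the program and names here are my own toy example:

```haskell
import Control.Monad.State   (StateT, runStateT, modify)
import Control.Monad.Except  (ExceptT, runExceptT, throwError)
import Data.Functor.Identity (Identity, runIdentity)

-- Errors "outside" the state (StateT over ExceptT): a throw aborts
-- the whole run and the accumulated state is discarded.
abortAll :: Either String ((), Int)
abortAll = runIdentity . runExceptT $ runStateT prog 0
  where
    prog :: StateT Int (ExceptT String Identity) ()
    prog = modify (+ 1) >> throwError "boom"
-- abortAll == Left "boom"

-- Errors "inside" the state (ExceptT over StateT): the throw still
-- aborts the computation, but the final state is retained.
keepState :: (Either String (), Int)
keepState = runIdentity $ runStateT (runExceptT prog) 0
  where
    prog :: ExceptT String (StateT Int Identity) ()
    prog = modify (+ 1) >> throwError "boom"
-- keepState == (Left "boom", 1)
```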
But sure, in an effects-based system you'd preferentially use effect composition over monads, and sure, monad transformers are an n^2 problem, but sadly natural transformations between functors obeying certain laws are also an n^2 problem.
I think. I do admit to getting lost on what the different squiggles mean, quite a lot.
This is beautiful whether you interpret it genuinely or as satire.
While I use functors, applicatives and monads all the time in Haskell, I have no idea what half of these symbols mean. Are these specific to category theory?
They were made up by the author, but actually kind of make sense
It's, like, an ideographic alphabet
https://muratkasimov.art/Ya/Operators explains a bit
The problem is that it explains things in a language for people who know category theory, rather than for people who merely use functors, applicatives and monads in Haskell.
Indeed, we could say that those programming interfaces don't require much category theory to understand. For example, in Java a functor would just be called Mappable (and it seems such a thing is actually defined in some libraries).
Monads are the poor man's early return.
And the poor man's list, and the poor man's parser, and the poor man's state.
We need a poor man's interface to generalise them, as well as suitable poor man's laws to abide by, and pretty soon we'll have some sweet poor man's code re-use!
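For the record, the "early return" reading is just Maybe's do-notation; a toy sketch (the maps and field names are hypothetical, purely for illustration):

```haskell
import qualified Data.Map as Map

-- The first lookup that comes back Nothing short-circuits the rest
-- of the block, i.e. a "poor man's early return".
profile :: Map.Map String Int -> Map.Map String String -> String -> Maybe String
profile ages cities name = do
  age  <- Map.lookup name ages    -- early return if missing
  city <- Map.lookup name cities  -- likewise
  pure (name ++ " (" ++ show age ++ ", " ++ city ++ ")")
```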
It's hard to understand his syntax, but I think he's arguing for something similar to the way that Rust uses From and Into.
What?