In defense of unary functions #233
I've been really busy over the past few days, and will continue to be, so I can't respond to this in depth, but since I'm the primary individual making this argument, I did just want to drop a quick line. I will just add that I spelled out a chunk of this argument in this blog post as well, so just wanted to drop that reference for completeness. Otherwise, I appreciate the perspective and will give it a think & response when I get some time.
To the best of my knowledge, this isn't generally true. It definitely could be for some libraries (it appears that your libraries were written this way for this reason), but a number of libraries such as RxJS were written in this style because it was the best tool to achieve their goals within the existing JS syntax. In particular, those goals were:
Under existing JS syntax, "fluent programming" requires the operations to be written as methods, but that requires them all to be stored on the class prototype, which means they aren't importable separately. "Tree-shaking" requires free functions so they're importable separately, but then using them is really annoying with stacks of nested functions rather than the nice linear code of a "fluent" API. Using unary-returning functions and a `pipe()` helper gets both: the operations stay separately importable free functions, while call sites read linearly, like a fluent API.
It is true that there is some symmetry, but it's important to look at the situations through the lens of usage numbers, which breaks that symmetry. Right now, there are two ways of defining and calling functions in "common" JS, that you'll see across the vast majority of written code:

// method style
Obj.prototype.foo = function(a, b) {...}
Obj.prototype.bar = function(c, d) {...}
obj.foo(1, 2).bar(3, 4);
// Can technically be called as a free function
// with Obj.prototype.foo.call(obj, 1, 2), but rare.
// Can be called in F#-pipe with `obj |> x=>x.foo(1, 2) |> x=>x.bar(3, 4)`
// free-function style
function foo(obj, a, b) {...}
function bar(obj, c, d) {...}
bar(foo(obj, 1, 2), 3, 4);
// Can't realistically be called in method-ish style.
// Can be called in F#-pipe with `obj |> x=>foo(x, 1, 2) |> x=>bar(x, 3, 4)`

Unary-returning is a new, third pattern:

function foo(a, b) { return obj=>{...} }
function bar(c, d) { return obj=>{...} }
obj |> foo(1,2) |> bar(3,4)
obj.pipe(foo(1,2), bar(3,4))
// Already looks similar to method-ish style.
// Can technically be called in free-function style
// with `bar(3,4)(foo(1,2)(obj))`, but rare Today, unary-returning is a rare pattern. It's used, as I said above, for some libraries that want tree-shaking and a fluent interface, getting the benefits of both methods and free functions. It's also used for some libraries that genuinely enjoy the benefits of HOF programming and have a library of functional combinators to play with. But the vast majority of JS written in the world does not write their functions in that style or call functions written in that style. Choosing to base a pipeline operator on enabling and encouraging unary-returning functions is definitely possible, and would be the most direct translation for existing codebases using that style today. But we could get the same non-HOF-specific benefits by instead basing it on either of the other patterns: the "bind operator" proposal encouraged writing free functions that used The existing two common defining/invoking styles already cause problems for authors sometimes. Learning isn't too bad, because "method-style" is so ubiquitous across languages that it's something one is virtually guaranteed to encounter and learn anyway. But usability suffers, because tooling written with the assumption of free functions doesn't work great with methods - (A bind-operator would allow you to write functions in a somewhat familiar style, looking like a method, but would still involve invoking them in a third, new way - they're convenient with the bind operator but a little weird to call "normally", as In languages with auto-currying, unary-returning and free-functions aren't distinct patterns; you write the function the same way, and it's merely a matter of whether the user calls the function with all of its arguments or leaves some off. So in Haskell, F#, etc, you don't have these third-way problems. But JS doesn't have that, and likely never will. 
(Heck, langs like Haskell or Common Lisp don't even have the second way - their "methods" are just free functions with type signatures that cause the right version to be invoked on the right type of object. They use function composition or some variety of pipeline to get "fluent"-style syntax.) So, wrapping this up:
This is indeed symmetrical, but the implication that it's symmetrical in effect is false, and that difference is what we're concerned about. The vast majority of code is "non-functional", and/or written in data-first non-curried style, including most particularly the entire Web Platform's API surface. Favoring that pattern over one that's used much less often isn't a neutral choice.

We could support HOFP more. But for it to be good for authors, we need a lot more support, at both the syntax and library level; until then it'll always be a less convenient syntax for JS authors. That additional support is unlikely to ever come in any significant way (much of it has to be built into the language in a pretty fundamental way, and the Web Platform's API can't just be swapped out for a HOF-focused version). One individual change in favor of HOFP won't significantly impact HOFP's usability, but it will make the change less useful to the web that doesn't use HOFP in a major way.
Thank you for the thoughtful and extensive response @tabatkins 😃.
This makes sense, and I suppose counters my argument regarding the symmetry. I guess what bothered me is that my perspective as a member of the minority which does prefer HOFP wasn't being reflected in most of the arguments being made.
I am aware of this problem, but I never saw the pipeline operator as a proposal which aims to solve it. Instead, the F# pipeline operator solves a very specific problem within the HOFP space, and somehow through what feels like scope creep became a solution to problems that exist in other spaces at the cost of no longer solving the original problem.† But as you also said:
So maybe this "original problem", as I thought was being addressed by the initial pipeline proposal, isn't really as widespread as I imagined. In this case I guess it comes down to usage numbers again. On the other hand, I have seen a prevalent RxJS contributor (author?) mention that Hack isn't useful to them and assumed that they too were looking forward to this "original problem" being addressed.
I think this is the strongest argument directly against F# that I've seen. The Hack operator doesn't encourage any new style of function definition, while still allowing functions to be called in a somewhat linearized way (I say "somewhat" because when reading code I find matching each topic reference to each pipe operator to be a very non-linear process). I guess the Hack operator allows the JavaScript community to continue to converge on a single function definition style, while the F# operator would cause further divergence in this space.

On the other hand, convergence further marginalizes the minorities (of which I myself am a part) that continue to favour HOFP. In the beginning, I was hopeful that the F# operator would contribute to the popularization of HOFP in JS, but instead it has turned into a feature which appears to aim for the opposite.†

† I apologize for the tone of these statements. What comes through is an expression of a sort of disappointment with the way that something that looked like it was really great news for devs like me, morphed into something which seems to actually be fairly bad news for devs like me. I think @js-choi captured that quite well in #215 though so no need to process that here 😉.

PS: When your response appeared, I was typing a whole piece about composability, what it is, and how it's another reason for choosing HOFP without using HOFP as a means to solve linearization. Based on your response, I don't think I need this piece any more to make my point, as my argumentation for favouring HOFP has not been questioned. If anyone's interested though, I can still refine it and then post it. 🤷

EDIT: I did end up posting it in the first section of this comment: #233 (comment)
I'm not sure of exactly which comment you'd be talking about (I just got back from a week's vacation and have been quickly digesting the threads, but there's Just. So. Much. Text.), but at least in personal conversation with Ben Lesh he suggested that, had Hack-style pipelines been present at the time that RxJS was being written, they'd definitely have used them in the way I've been suggesting (free functions with "idiomatic JS" function signatures, taking the context object as the first arg). At this point switching over to them would mean a lot of churn, which is understandable, of course; from what I can tell the RxJS team is currently not planning to make any changes in response to the pipe operator.
Yup, correct in all regards.
I wonder how much of this is due to reading examples of existing tacit libraries being naively adapted to use the pipeline, aka
(Edit: @js-choi just accidentally deleted a big comment from themselves about their own experience as a Lisper with functional combinators and lambda, which was immediately above this comment and I was responding to. :( )

Yup, @js-choi's experience in Lisp mirrors my own. The big combinator I used a lot in Lisp was the pair of

Yeah, some built-ins on the
If I may quickly (edit: ok, maybe not that quickly...) chime in to tie this last part back to the argument I've been making, the differences in usage of these tools are a consequence of the affordances of the language. Because Clojure has lightweight tools for making the changes to these functions with anonymous functions, function combinators are less popular. Functional JavaScript's history reaches all the way back to pre-ES6, where anonymous functions were syntactically quite heavy, so the affordances of the language made those mathematical tools quite useful. A combinator like

foo
|> funcThatCreatesAnotherFunc(^)
|> ^(bar) // thrush
|> baz(qux, ^) // flip
|> baz(^, qux) // or maybe this is flip? who knows, doesn't matter, put the args where you want!

Functional combinators are functions in math, but it's not a requirement that they be expressed in functions in the language. If we can accomplish the goals of these mathematical concepts with syntax, we should, especially when the resulting code is more idiomatic for the language at large. The preferred syntax to do that is a question of the affordances of the language. If you have to write
I think it's defeatist to view this as marginalization. The functional JavaScript community has been incredibly creative with its approach to adapting these functional techniques & tools in a language that is only somewhat hospitable to the style. Ramda; Sanctuary; Fluture; these are all incredible libraries that let you do wonderful things in JavaScript.

The introduction of the Hack pipe is an opportunity to rethink functional programming in JavaScript and adapt our tools in new creative ways. I recognize that this means churn in the functional ecosystem, and I appreciate that a lot of pain will result as the ecosystem adapts to the new syntax and the style that results from it. I also think the result of this will be a better, more idiomatic, functional JavaScript overall and a resulting ecosystem that can share more of its tools & techniques with mainstream JavaScript. I'm excited to see where that goes.
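For concreteness, the combinators named in the Hack-pipe example above (thrush, flip) can also be written as ordinary userland functions. This is a hedged sketch; the names and curried argument order follow one common convention, not any particular library's API.

```javascript
// thrush: apply a value to a function (the "reverse application" combinator).
const thrush = (x) => (f) => f(x);

// flip: swap the first two arguments of a curried function.
const flip = (f) => (a) => (b) => f(b)(a);

// A curried example function to exercise them.
const subtract = (a) => (b) => a - b;

thrush(10)(subtract(3));  // subtract(3)(10) === 3 - 10 === -7
flip(subtract)(10)(3);    // also subtract(3)(10) === -7
```

The Hack-pipe versions (`^(bar)` and `baz(qux, ^)`) express the same ideas with placeholder syntax instead of named combinators, which is exactly the trade-off being discussed.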
GitHub deleted my comment from yesterday, and I’m not sure why. I’m going to try to reproduce it…

Like @mAAdhaTTah, I like your post a lot. I think it cuts to the meat of some important issues. I certainly agree that algebras based on unary functions are very elegant and have many elegant properties. And programming languages that happen to be based on auto-curried unary functions gain these elegant properties. I certainly did not mean to downplay this elegance when I co-wrote the explainer—any such downplaying would be a deficiency of the explainer.

One thing I’ve been thinking about is that my perspective comes from Lisps (especially Clojure). Like JavaScript, the Lisps usually are non-auto-currying, n-ary FP languages. After all, they’re based on lists. In these Lisps’ non-curried n-ary functional programming, many of the function combinators in your Gist are simply not used often. This doesn’t mean that tacit programming or other FP does not occur in these Lisps. Function composition is common. Constant functions are common. Monads are common. But (in Lisps) we simply don’t use many of the other combinators in your Gist. I can’t remember the last time I’ve used flip in a Lisp, for example…I just use anonymous functions, and it’s clear enough.

Furthermore, many FP combinators do not necessarily act on unary functions. For example, monadic binding (with type MX → MY) is not a unary function. Monadic binding requires two arguments (the input value, with type MX, and the binding function, with type X → MY). The fact that curried languages use monadic binding in a curried style like Haskell’s

(Speaking of your Gist, I also am hoping to propose

…Anyways, I appreciate this post. Hopefully my own Lispy perspective brings some color: many of these combinators over unary functions (as elegant as they are) are not required for functional programming, at least from my Lispy experience.
Huh. Indeed! I did get a chance to read your original comment ~10 hours ago, but have been too busy with work to respond. Thank you for reposting it. I really appreciate all the time and effort you all are putting into explaining your points of view. I'm tired now but I'll take it all in tomorrow and try to provide feedback.
Haha, yeah. The amount of passion around this proposal is immense and the text just keeps flowing in. Anyway, I was referring to comments like "IMO, the hack proposal isn't useful enough to justify the additional syntax. #228 (comment)" and "Unfortunately, the direction that the proposal is going isn't particularly useful to the community I serve. #218 (comment)". From these comments I assumed a preference for the F# proposal, but that may have been a misinterpretation.
My background in functional programming, interestingly, is in JavaScript itself. I always had good intuition for HOFs (even back when I did mainly PHP, first via callables, then with lambdas) and when I discovered functional programming through Ramda it all just clicked and made a lot of sense for me personally to program in the HOFP style: As in, somehow it just aligns well with my way of thinking. From there I started solving the problems I was encountering with this style of programming in JavaScript. The most painful thing about it was the error messages like:

(Bear with me because I'm going somewhere with this)

Sanctuary is a library that adds a full blown runtime type system (heavily inspired by the Haskell type system) and monadic branching types (Maybe and Either; also Haskell inspired) to an otherwise "Ramda style" HOFP lib. The library now acts a bit like a stepping stone for developers coming from Haskell to JavaScript, or going from JavaScript to Haskell or PureScript.

Over the years, what is idiomatic in Haskell - where there is syntax support for do-notation, pattern matching, list comprehensions, etc - has had counterparts implemented in Fluture/Sanctuary. But because adding these through language features has not been an option, we've basically used functions for everything. For example, Sanctuary implements case analysis (or "destructuring") functions for every data type as an answer to pattern matching (

Where I wanted to go with all this is that through lack of language support, Sanctuary leans heavily on functions to implement language features, and being able to use function combinators reliably within this landscape, and to rely on functions always having the same shape (functions with one input, one output, and no side effects), is a big factor for the success of a HOFP code base and became part of the idiom.
In summary, the Sanctuary "function for everything" approach has brought about an "idiomatic HOFP in JS" style where function combinators are very common ground. Interestingly, although we've strived to "make JavaScript feel like Haskell" with this lib, when I actually program in Haskell I find myself missing Sanctuary's destructuring functions, list generator functions, and other Haskell-language-features-implemented-as-functions. It turns out that when you have those, and you have function pipelining, you really don't need anything else; and then switching from thinking about functions to some arbitrary language feature to do the job can even be jarring.

This also shows how much rests, from my perspective, on the F# pipe operator feature. It's something which would be used at the core of this style of programming. Of course, I know that the group consisting of those who program in this style is probably way too small to cater a language feature to. But I appreciate having been able to share my perspective and provide a window into a JavaScript sub-community that the committee possibly didn't know much about.
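The "case analysis as a function" idea described above can be sketched as follows. This is a hypothetical Maybe with a `maybe` eliminator, illustrating the pattern only; it is not Sanctuary's actual API.

```javascript
// A toy Maybe type: either Nothing, or Just(value).
const Nothing = { tag: "Nothing" };
const Just = (value) => ({ tag: "Just", value });

// Case analysis as a curried unary function instead of pattern-matching
// syntax: maybe :: b -> (a -> b) -> Maybe a -> b
const maybe = (onNothing) => (onJust) => (m) =>
  m.tag === "Just" ? onJust(m.value) : onNothing;

maybe(0)((n) => n + 1)(Just(41)); // 42
maybe(0)((n) => n + 1)(Nothing);  // 0
```

Because the eliminator is itself a curried unary function, it slots into pipelines and combinators like any other value in this style.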
While it's true that the binding operator itself can be non-unary, I was talking about unary functions being used as the operands:
This is what I meant by "unary functions can themselves act as Monads": that there exists a valid monadic bind implementation for unary functions. Although, I just did some testing, and was unable to exclude the possibility that an equally valid bind implementation could be created for functions of other arities. In Haskell, non-unary functions are just unary functions that take tuple inputs, so the bind operation for unary functions automatically also works on non-unary functions. In JavaScript, non-unary functions would need a specific bind implementation for each arity. Anyway, to get back to the point:
These are valid arguments against the reasons I gave for favouring unary functions. Since you're addressing my reasons, I do think I should add my fourth reason after all. The fourth reason is summarized as "unary functions compose". Sadly, I have misplaced the wall of text I wrote regarding composition. The gist of it is as follows though: Since functions only have one output, if you want to compose them together, you need something that only takes one input. I also made a case (in the text I lost) for why I think composability is important. I'd love to expand upon it but it's getting late again, so perhaps another day.

I'm going through comments in the order they were (originally, @js-choi ;)) posted. I still haven't gotten to @mAAdhaTTah's comment, which I regret because they're the person I first addressed with this thread. I want to reply to their comment too and I will, but for now I'm off to bed. Thank you for reading. :)
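Returning to the monadic-bind discussion above, here is a sketch of a bind ("chain") instance for unary functions. The argument order follows one common curried convention; it is an illustration, not any library's canonical implementation.

```javascript
// chain for unary functions (the "function monad" / Reader-style bind):
// chain(f)(g) builds a new unary function that feeds the shared input x
// first to g, then to f's result.
const chain = (f) => (g) => (x) => f(g(x))(x);

// Example: derive a value from x, then combine it with x again.
const half = (x) => x / 2;
const addTo = (y) => (x) => x + y;

const halfPlusSelf = chain(addTo)(half);
halfPlusSelf(10); // half(10) = 5, then addTo(5)(10) = 15
```

This is the sense in which "unary functions can themselves act as Monads": there is a lawful bind whose carrier is the unary function itself.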
On composability as a reason for favouring unary functions
In my view, something "composes", in the general sense, when it can connect together to form a greater unit which in turn composes in the same way as its parts.

How I view and explain composition (feel free to skip this part if you're familiar): Composition is an operation typically denoted using the

A way I like to explain (function) composition sometimes is through the analogy of connecting different adapter cables together. For example, you can connect a USB-C to USB-2 adapter to a USB-2 to Ethernet adapter to create a new USB-C to Ethernet adapter:
If we would want to capture this operation with types, we'd need an Adapter type with two generic types describing the two sides of the cable. We composed
If we captured this in types, we did
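The adapter composition just described can be sketched in code. The adapter names and object shapes here are hypothetical, chosen only to mirror the analogy.

```javascript
// Composing two unary functions yields another unary function, which
// composes again in exactly the same way - just like chaining adapters.
const compose = (f, g) => (x) => f(g(x));

// Two "adapters": each converts one plug kind into the next.
const usbCToUsb2 = (usbC) => ({ kind: "usb2", from: usbC });
const usb2ToEthernet = (usb2) => ({ kind: "ethernet", from: usb2 });

// A new "USB-C to Ethernet adapter", itself composable again.
const usbCToEthernet = compose(usb2ToEthernet, usbCToUsb2);

usbCToEthernet({ kind: "usbc" }).kind; // "ethernet"
```

Note that the composed function has the input side of one adapter and the output side of the other, which is the whole point of the analogy.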
This adapter analogy maps nicely into unary functions. If we replaced

After composing this adapter with another, you reach a point where it cannot be composed again in the same way you were composing adapters earlier. You'd need a different composition operator that somehow maps two outputs to two inputs.

When designing an abstraction of any sort, allowing consumers of that abstraction to compose two units of abstraction to form a greater one is a very desirable property, in my opinion. It allows the user of the abstraction to compose and decompose their program in any way they see fit, giving them a very comfortable level of control over the organization of their code. It also allows the program to evolve in a very elegant way, where any completed program can itself become a unit within an even larger program through composition. Think of a React app, where your root level

This brings me to unary functions. Unary functions have this same desirable property that they can be composed together. If you have a function with a specific output type, and another with the same type as its input type, then voila, you can create a function with the input type of the one, and the output type of the other, which can in turn be composed in exactly the same way. Functions of greater arity don't share this property. Functions are inherently limited to having a single output, and so for them to be composed with other functions, the other functions must only take a single input.

Since unary functions compose, and unary functions can act as binary functions by simply returning another unary function, why would I ever give users of my abstraction a non-unary function? The only thing it achieves (besides the performance benefits, for now) is that it limits these users in their ability to define their program as a composition of the units that I gave them.

Responding to more comments

Right, with that out of my system, I'll get back to the comments I wanted to respond to.
EDIT: The following response came out of misunderstanding of the scope of the proposal. See #233 (comment) for an updated response.

Outdated response based on misunderstanding

Adding these combinators to the core JavaScript API, while serving as a friendly nod to the FP in JS community, and perhaps even as inspiration to try FP for a very small amount of people who might stumble onto them, wouldn't in my opinion necessarily present much benefit to the FP in JS community overall. Firstly, these combinators are very trivial to define in userland: there is no lack of language features limiting anyone from doing so. But more importantly, based on my observations of the TC39 process, consensus seems to often be reached via concessions. In this case, I'd be concerned that when these combinators are formally proposed, they'll be muddied by this process, and end up with "features" (or flaws, if you ask me) like not being unary, not just taking unary-function inputs, having special cases for async functions, etc. Formally adding combinators to the language with any of these flaws could once again work to the detriment of HOFP proponents. So while I appreciate the idea, I'm very wary of the potential losses coming out of it, especially when compared against the potential gains - it essentially looks like a risky move to me, just as proposing the F# pipeline operator turned out to have been a risky move.
I'm sure that this could have contributed to the adoption of function combinators in the early days. But I hope that I managed to convey, with much of the writing above, that communities exist (of which I am a member) where function combinators are still used and appreciated even after the introduction of arrow functions. For us, it's often easier to recognize patterns like Full
@Avaq I've got lots more thoughts, but if you're still around, I've got one direct question: if you prefer
@mAAdhaTTah because of a limitation that is inherent to using braces in function calling style (

* This could be an infix pipe operator, or an infix compose operator. Either would let us drop the braces and remove nesting.

EDIT: ...oh, unless you meant
For what it’s worth, I’ve already added a proposal for unary Function helpers at Stage 0 to the agenda for the October TC39 plenary meeting. The explainer is still unfinished, but feel free to take a look at it and file issues.

Although standardizing Function helpers might indeed serve as a “friendly nod” to the FP-in-JS community, that would not be its primary goal. The primary goal would simply be to standardize some useful, common helper functions that get downloaded from NPM a lot. Yes,

They’re included in the standard libraries of even n-ary non-auto-curried languages like Clojure for a reason, despite their being easily implementable in userland: because people do reach for them and would use them. Indeed, I am hopeful that standardized

Yes, an infix operator would also be nice. But TC39 considers syntax to be very expensive, and F# is much more likely to standardize

As for your last paragraph’s concerns about TC39 muddying
I've read the explainer and realize now that the scope of the proposal is not so much about adding function combinators to JavaScript core, and solving the problem of being unable to do combinatory style programming out of the box, but rather to add some of the common functional utilities, including

The concerns that I raised were mostly about the "combinatorial logic" scope, and not the "functional utilities" scope. Consider my concerns alleviated. I hope I didn't come across too negatively. I'm actually quite happy to see these things potentially making their way into the language. So thank you :). I do have some other ideas and suggestions that I'll leave under the new proposal.
@Avaq To start, I want to say I appreciate you sharing your thoughts here. There's definitely an aspect to this that I had not considered and have been pushing F# adherents to provide, but given the discussion's overall reliance on more trivial examples, it was unclear to me why the syntax would be harmful to functional programming. This has been incredibly enlightening, so thank you for that.

I'll also say upfront that one of the things I'm getting from this, given your use of "HOFP", is that we're kind of addressing a "subset of a subset"; namely, that HOFP itself is a subset of FP in JS more generally, which is itself one of several approaches to programming in JavaScript one can take, based on the problem being solved & stylistic preferences of the author. If this is wrong, or a misinterpretation, it'll severely undermine the rest of this post, so I want to acknowledge that up front as a potential weakness as well.

All of that said, you're probably right, there will be a split within the FP-in-JS community between those who take lighter/more syntactical approaches to FP (e.g. preference for

Part of what I expect to happen in a world with Hack pipes is that much of functional JavaScript, the part that "feels less strongly about the benefits of HOFP", can & will move its coding style to look more like mainstream JavaScript, and that, in turn, mainstream JavaScript can & will adopt some functional techniques in a more idiomatic fashion. As a result of this thread, I'd reframe my argument thus: it's not that there will be no divide between functional & mainstream JavaScript, but that the dividing line will shift to incorporate more functional techniques in mainstream JavaScript and thus contribute to the mainstreaming of functional programming in JavaScript more generally. That dividing line will shift into the community from which you're speaking, causing a divide between "mainstream functional" and "HOFP", the latter of which will be marginalized further.
This is what I mean by a difference in goals. I, personally, want to see functional programming and its techniques become more widespread. Insofar as HOFP has its own distinct idioms (

By contrast, my read is that enabling your current idioms more fluently is your goal for the F# operator. It is thus a feature of F# pipe that said dividing line doesn't shift because it solves a problem within, rather than between, the two communities. I might even go a step further: because F# pipe largely doesn't solve problems had by non-functional programmers, the introduction of F# pipe would actually exacerbate that divide, further siloing functional JavaScript from the mainstream.

Frankly, I think this is bad for JavaScript as a language, as a community, as an ecosystem. I think functional JavaScript has a lot to offer mainstream JavaScript, and the further down the HOFP/curry/combinator rabbit hole you go, the more difficult it becomes to then adapt those tools in non-functional code. I think the React example (embedding

So yeah, there will still be a divide, I was wrong about that, and again, I thank you for sharing that here. I will try to avoid minimizing the resulting pain, and hope that this provides some insight into why I think this is beneficial for mainstream JavaScript as well as some subset of functional JavaScript, even if your community is likely to be adversely affected.
@mAAdhaTTah Thanks for pointing me to this comment. Your reasoning seems well-intentioned, but as I argued in #225, I do believe the proposal will face serious pushback from the mainstream JavaScript crowd as well. As such, you may even be risking creating a divide on both ends of the spectrum. I suppose it is possible there is a large-enough mainstream group in between the two sides to justify your course of action, but as someone who has never seen a Hack proponent "in the wild"... I'm still skeptical.
Can we just get a pipe, of any kind, tyvm :)
I'm creating this issue to address a line of reasoning primarily put forth by @mAAdhaTTah and others to a lesser extent.
I've been following most of the issues in this repository (a lot of reading, I know) and although I agree with a lot of good arguments made recently in favour of Hack, one argument I see being made over and over again is one which implies that curried, data-last, unary functions exist solely to facilitate some kind of `pipe` operation. The reasoning then continues to favour the Hack proposal because "as opposed to the F# variant, it doesn't make people choose a calling style". For example here (#215 (comment)), @mAAdhaTTah writes:
And the remainder of his argumentation rests on the statement above.
An example of the continuation of this line of reasoning can be seen, among many other places, here in @tabatkins comment:
While this is indeed true for the F# operator, what I'd like to clarify in this thread is that it's also true for the Hack operator, and thus somewhat of a non-argument.
This is another comment that I view as an argument within the same line of reasoning. The implication here is that point-free programming accommodates the pipeline operator, rather than the pipeline operator accommodating point-free programming. Here are some more examples of this line of reasoning:
These comments seem to stem from a belief that there is no virtue to function currying other than tackling the code linearization problem. I'd like to dispel this belief and, in doing so, provide a counterargument to this line of reasoning.
I am one of the "library authors" referred to in various comments about my supposed motivation for providing curried functions. I am the author of Fluture and a core maintainer for Sanctuary - both libraries that leverage unary curried functions very heavily.
My motivation for using curried, data-last, unary functions is not purely for code linearization. In fact, code linearization could be left out of this consideration altogether. Let's examine some of the reasons for sticking with unary functions without going into code linearization.
Firstly, the unary function can be treated as data like no other kind of function can. Because every unary function has the same shape (one input, one output), unary functions can be stored, passed around, and combined uniformly. Being able to treat functions as data enables a form of metaprogramming not easily achieved in environments where functions vary a lot in their arity.
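To make "functions as data" concrete, here is a small illustrative sketch (the `applyAll` helper is hypothetical, not from Fluture or Sanctuary): because unary functions all share one shape, a list of them can be applied with a plain fold.

```javascript
// Unary functions all have the same shape: one input, one output.
// That uniformity lets us store them in ordinary data structures
// and process them generically.
const transforms = [
  s => s.trim(),
  s => s.toLowerCase(),
  s => s.split(' '),
];

// Applying a list of unary functions is just a left fold.
// (`applyAll` is a hypothetical helper for illustration.)
const applyAll = (fns, x) => fns.reduce((acc, f) => f(acc), x);

console.log(applyAll(transforms, '  Hello World  '));
// → [ 'hello', 'world' ]
```

Nothing here depends on knowing each function's arity, which is exactly what makes this kind of generic manipulation trivial.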
Secondly, data-last, curried functions encourage code reuse. A simple example is the definition of `increment` by (partial) application of a curried `add` function to `1`. But of course this principle extends to functions of far greater complexity.

Thirdly, and perhaps more generally, curried unary functions are an incredibly simple unit of abstraction. This last point is difficult to get across, but it goes a bit like this: when working with simple units of abstraction, the cognitive load from reasoning about the abstractions themselves is lowered, leaving more headspace to deal with the complexities of the code you're editing. Having experienced both styles of programming quite heavily, I can only really vouch from my own experience for what a major difference this makes. To summarize this last point: another reason for using unary functions is to optimize for simplicity of abstraction.
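Sketched in code, that `increment` example looks something like this (assuming a hand-curried `add`; real libraries typically provide such curried functions for you):

```javascript
// A curried, data-last add: takes its arguments one at a time.
const add = a => b => a + b;

// New, reusable functions fall out of partial application for free:
const increment = add(1);
const addTen = add(10);

console.log(increment(41)); // → 42
console.log(addTen(32));    // → 42
```

Note that no wrapper boilerplate is needed to derive `increment`; the currying itself is what makes the reuse this cheap.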
EDIT: I added a final reason (composability) a ways down this thread: In defense of unary functions #233 (comment)
The three reasons listed above don't go into code linearization, but are enough for me to favour this style and provide libraries which encourage this style. This means that the assumption that library authors would have chosen non-curried, data-first function APIs if it wasn't for the code linearization problem is, at least in my case, false.
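For readers less familiar with the style, here is roughly what such an ecosystem looks like in practice, and the linearization problem that comes with it (a sketch: `pipe`, `filter`, and `map` here are hypothetical stand-ins, not any particular library's API):

```javascript
// A minimal variadic pipe helper over unary functions
// (illustrative only; not any specific library's implementation).
const pipe = (...fns) => x => fns.reduce((acc, f) => f(acc), x);

// Curried, data-last helpers in the style these libraries encourage:
const filter = p => xs => xs.filter(p);
const map = f => xs => xs.map(f);

// Without a pipe, the calls nest inside-out:
const nested = map(x => x * 2)(filter(x => x > 1)([1, 2, 3]));

// With pipe, the same logic reads top-to-bottom:
const linear = pipe(
  filter(x => x > 1),
  map(x => x * 2),
)([1, 2, 3]);

console.log(nested, linear); // both → [ 4, 6 ]
```

The curried, data-last functions came first; `pipe` is merely a way to write applications of them in reading order.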
So even without trying to tackle the code linearization problem, people like me already choose to be in a world of curried, data-last, unary functions. The idea that unary functions are a means to an end, where the end is "code linearization", is incorrect. A functional programmer will use data-last curried functions either way, and the `pipe` functions and eventual `|>` proposal grew from a need to facilitate code linearization within an ecosystem that already favours curried, data-last functions.

Solving the code linearization problem within this context has proven problematic:

- `pipe` and `compose`, and FP-TS' `pipe` and `flow`, are impossible to type properly or fully in TypeScript.
- The `.pipe` method used in Fluture and RxJS provides a solution for code linearization which, while not suffering from the problems listed above, is limited only to the data type it's implemented on.

Now this, in my perspective, is what the need for a binary infix function application operator (`|>`) stems from: not to solve code linearization within the greater JavaScript ecosystem (which has other solutions, such as fluent method chaining), but specifically to address the code linearization problem within the growing functional programming in JavaScript "niche".

This means that arguments made from the position that "thanks to the Hack pipeline operator, functional programmers can now finally give up on the inconvenience of using unary functions" completely miss the point. Unary functions are not an inconvenience we put up with for linear code; they are at the core of the functional paradigm. With that in mind, I hope you can see that the idea that Hack doesn't make people choose a calling style, where F# does, is also wrong:
A small note on the topic of this thread: I am aware of good arguments against implementing a feature in JavaScript that accommodates a code style used only within a specific niche. We don't need to go into that here, as it has been discussed in many other threads. I specifically wanted to address a common line of reasoning that I've seen used across the forum, which I believe to be false and haven't seen properly addressed. I look forward either to changing the perspective of those using this line of reasoning, or to being corrected myself.