Restructure ramda typings code base #165
Conversation
Thanks for opening the PR at an early stage. This looks pretty interesting; maybe it could also facilitate exposing the interfaces for export as well.

On unused interfaces: I know I tried adding some based on the fantasy-land spec, as part of that had been used in Ramda; currently they didn't appear to have an existing home yet, and I haven't been sure if I could just import them from elsewhere. But yeah, I see your point on keeping things clean.

The consistency/structure point you mentioned, would you mind elaborating on that? The texts I guess were currently largely based on those from the docs (though for the most part they'd been added before I joined in; I largely just edited from there).

Testing: to give you a bit of background, the standard way people appear to test their TS typing repos has been through tsc: if it compiled without errors, it was deemed to be okay. I considered this approach woefully inadequate: in some cases where the user is misusing things I would want the compiler to alert the user to this, while if we were to inadvertently change the typings such that all types would suddenly be inferred as `any`. As a result of this, I looked at the way TypeScript did this, but as they did not expose their better approach to typings authors, ended up with
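The inadequacy of compile-only testing described above can be sketched with a small, self-contained example (hypothetical code, not the repo's actual typings):

```typescript
// Why "it compiles" is a weak test: once a typing regresses to `any`,
// even clearly wrong usage still compiles silently.
function headLoose(list: any): any {
  return list[0];
}

const h = headLoose([1, 2, 3]);
// This annotation is wrong at run time, yet tsc accepts it because
// `h` is `any`: exactly the regression a compile-only test misses.
const bad: string = h;
console.log(typeof bad); // 'number'
```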
Could you make this more concrete? My intention has been to consider actual return values as the ideal types, but I'm not sure if you might've envisioned this differently.

Codegen concern: I've ended up with a bit less time for open-source now, but the effort I was half-way into was switching the typings to generating through that scripts.js file. The intention there had been to ensure that currying could be taken care of by codegen, which would otherwise make any manual edits a massive pain. Current progress there is pretty much "missing a few functions but otherwise probably mostly able to generate the typings, minus comments currently". I'd wonder if this could potentially further complicate automatic generation there with additional concerns like imports. Perhaps my codegen ambitions there had been naive, since obviously your suggestions here look pretty good as well, but if we could somehow combine them that'd be pretty sweet.
There is also this stuff not used: old interfaces for currying. You may want to elaborate on this.
To be honest, the whole code base actually seems not very consistent.
I propose tape as just a very simple framework for composing tests.

I don't actually agree with you on this point. Let's take an example; consider typing this function:

```js
var values = { x: 1, y: 2, z: 3 };
var prependKeyAndDouble = (num, key, obj) => key + (num * 2);
R.mapObjIndexed(prependKeyAndDouble, values); //=> { x: 'x2', y: 'y4', z: 'z6' }
```

The correct (current) typing for the internal function is:

```ts
(fn: (value: T, key: string, obj?: M) => V
```

Say that while typing we made a mistake and typed the argument like this:

```ts
fn: (key: string, value: T, obj?: M) // key goes first instead of value
```

Then suppose in the tests we didn't take the official example but composed our own (or something even changed in Ramda's API): whatever the cause, we get an inconsistency. So we may have a situation where the typing and the TS test code are consistent with each other, but the typings themselves are inconsistent with the JS implementation, and we should check for that. When the implementation evolves separately from the typings, "it compiles" really doesn't mean it works. Taking just the test code from the examples and checking typings by contemplation is not enough to cover this case.

I would also add another point: the TS typings should fully (if possible) correspond to official Ramda's
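The swapped-parameter scenario above can be made concrete with a local stand-in for `R.mapObjIndexed` (a hypothetical sketch, not the real Ramda import). If the typing swapped `value` and `key`, a test written against the swapped signature would still compile; only running the official example exposes the mismatch:

```typescript
// Local stand-in, typed the "correct" way described above:
// value first, then key.
function mapObjIndexed<T, V>(
  fn: (value: T, key: string, obj?: Record<string, T>) => V,
  obj: Record<string, T>
): Record<string, V> {
  const out: Record<string, V> = {};
  for (const key of Object.keys(obj)) {
    out[key] = fn(obj[key], key, obj);
  }
  return out;
}

const values = { x: 1, y: 2, z: 3 };
const prependKeyAndDouble = (num: number, key: string) => key + num * 2;
console.log(mapObjIndexed(prependKeyAndDouble, values));
// { x: 'x2', y: 'y4', z: 'z6' }
```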
I think it is absolutely OK to have this approach; as I said, by this you can ensure consistency between the implementation and existing typings.
Could you create a code snippet, referring to the current repo, that shows the advantage of using
In general I like approaches that automate routine tasks, even if the task is producing code itself, but there should always be a balance maintained. If it is a task that should be performed periodically, it should definitely be automated as much as possible; if it is a one-time task, well, it depends on whether it can be done manually with an acceptable level of error-proneness, or whether it is really better to use automation to generate the code and avoid possible manual mistakes. In this particular case I need to look closer to make any conclusions on whether it is overkill (will report in comments later).
Right. The

On the consistency points you mentioned: I guess overall they definitely show that this repo is still pretty much WIP, with quite some ideas that haven't properly reached their potential yet, and, as you noted, failing tests. I'm not gonna lie about that; I wouldn't describe the stage we're at as positive myself. The most positive recent achievement has been the addition of

You're right about there being a bunch of code without explanations, or just commented out. In the latter case, one thing I consistently added commented out was attempts to capture function typings in a single

Some of the type definition attempts I also commented out for just not working in their current state, so I suppose they're more like WIP ideas.
This is a fair point. I suppose currently the question would be if there would be an effective way to conciliate

Ensuring compliance with Ramda would definitely be nice as well, but the more of these concerns we take on, the harder it gets to successfully juggle all of them. Worse yet, we're already having trouble as-is. Technically, we could create branches for each combination of Ramda version and TS version; in fact, for some TS versions we already have branches. But even just doing one combination is already a lot here.
This is one point where I'd beg to differ. The Hindley-Milner notations in Ramda are nice for human readers, and might have been taken straight from Haskell sources, but what TS typings do is infer what comes out of a function based on what you're putting in. One has to look at what TS needs to accomplish that, which likely requires different numbers of generics (usually more), meaning it would likely become harder to mirror the Ramda type notations anyway. There's another, more minor difference as well: the HM notation uses lowercase a/b/c, while in TS, type parameters are generally expected to be uppercase. That said, for the generics that do match, it's definitely a possibility to match their names, though it may become confusing if some generics appear to just use a/b/c while others do not.
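The HM-versus-TS contrast above can be shown on `map`. The TS signature below is a hypothetical sketch of just the list case, not the repo's actual definition:

```typescript
// Ramda documents map as:  map :: Functor f => (a -> b) -> f a -> f b
// A TS signature needs named generics, conventionally uppercase,
// so HM's `a`/`b` become `T`/`U`:
function map<T, U>(fn: (x: T) => U, list: T[]): U[] {
  return list.map(fn);
}

const doubled = map((n: number) => n * 2, [1, 2, 3]);
console.log(doubled); // [2, 4, 6]
```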
Well, the challenge has been how to detect these: ideally, when a big PR comes in, we'd need an effective way to judge whether something is improving inference or accidentally just making everything `any`.
It was a big undertaking, yeah, but it was addressing a primary concern: opening our eyes so we'd have any way at all to judge whether edits were good or bad, as

By comparison, I haven't considered staying synced with Ramda as big a concern: manually keeping up with Ramda release notes is fairly doable, while people would quickly report these issues as well. But you seem to be of a different view, so perhaps I'm not fully seeing everything you're seeing. Feel free to comment.

On currying variations, I hope you're willing to take codegen into consideration there, as maintainability, I'll agree, should be among our top priorities. To elaborate, code generation started before that, as some of the typings were just more effort to write manually than to generate. For the other functions it was about currying variations, but the larger concern was that this being a one-time thing seemed no more than an illusion. TS improves over time, and we'd keep editing the typings as well. The intention would be for the codegen to ensure that one edit would only need to be one edit, not ten.
@tycho01 what do you think about replacing

It will just look up something like:

```ts
test('props: one argument', (t) => {
  const res = props(['str'], obj)
  // common tape test will check value equality
  // and the type-check script will check types here
  t.is(res[0], 'str' as string)
})
```
Yeah, I'm definitely open to input on both testing and codegen; we'd probably need to find common ground on both of those. I love the idea of having the same method cover both run-time and compile-time checks. Good job :), if it were agnostic to the testing framework it might actually catch on as a way of testing typings!

I'm confused by the redundancy of

Would you be able to commit a sample output file from your type-check script to show how it compares to the existing one?
On the output: as you can see from the code, the script just throws when the first type mismatch is met, but it can be customized any way needed, for example:
Okay :), I prefer for it to err in the direction of precision by default, so fair enough there. In my mind, perfect type inference would mean inferring the exact return values, though I realize that would usually not be realistic.

On just having it error rather than log all results to put into version control: that sounds like it might not really work until we'd reached 0 imperfections. Unless we'd put our heads in the sand over a bunch of issues, I fear we haven't quite reached that yet... Until that point it would always inform us of only one issue, rather than the many we have. This would be a huge loss of information, and as a result make it hard to judge whether any significant overhaul would be for the better or for worse...
I've replaced tape with mocha, as it seems to be more appropriate for this case; output can be like this:

Output can of course be saved to a file, though I think test output in a file is kind of redundant, and everything actually should be done via CI, where you can see the latest status and errors. I will tune it.
That looks good, yeah. The reason I liked having output in version control was to be able to see diffs between commits (ideally the test would automatically be run before a commit could be pushed): whether the failed asserts would go up or down.

I mean, yeah, I've been among errors long enough that I'm taking their presence for granted... The primary problem is that it's not even so much an issue of just fixing all of them, but rather that quite a few aren't even our fault: they're issues with TypeScript.
From the diff use-case, perhaps you'll also get more of an idea of how I structured the output there: including code snippets though not line numbers, which I suppose differs from the Mocha output here. That may seem irrelevant, especially as the line numbers make for a more elegant output, but it was intentional. The reason is that line numbers don't go well with output diffs. If one were to add or remove a line near the start of the test file, all the line numbers would change. If that happens, you get diffs on every line in the output file, if only for the line numbers. If results had otherwise changed as well, the actual diffs would no longer be visible, drowned out by the line-number changes, making it hard to see which inferred types changed, and how. That said, the passing/failing counts would remain as a means of judging whether things changed for better or for worse...

But if this change would mean sacrificing information, then I'd wonder: might we not be better off leaving the type-check as-is, while still adding the run-time checks as you proposed? I'm all for elegance, but the information may well be worth not throwing out.
test/props.ts:

```ts
equal(res[1], 1 as number);
it('ts error: unknown props not allowed', () => {
  typeFail(`
    import props = require('ramda/src/props');
```
@tycho01 this allows testing error cases that should be caught by TS.
Pretty cool! I'll gladly take that over $ExpectError then; I'll agree having that render the test files unable to compile (presuming other errors would be cleared out) was definitely a downside.
The output diffs and codegen points are still concerns to me, but I'm pretty positive about all the new ideas here!
Yeah, if we can get the tests to pass, I'll concur diffs definitely become a moot point. The obstacle until that point appears to be that not all problems are really our fault (see generic degeneration on
The pattern: someone edits a typing of a curried function
Well, the one-time part for manual currying is done (same for the generation, minus an edge case). It's still 'once every edit', but yeah. If anyone's willing to put up with that, though, then that's fine.
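The "once every edit" burden being discussed can be illustrated with hypothetical hand-maintained overloads for a curried binary function (illustrative only, not the repo's actual typings):

```typescript
// A single logical change to the parameter types (say, widening
// `number`) must be repeated in every overload line below; real
// Ramda functions have many more such lines, which is the
// maintenance cost codegen is meant to remove.
function add(a: number): (b: number) => number;
function add(a: number, b: number): number;
function add(a: number, b?: number): number | ((b2: number) => number) {
  return b === undefined ? (b2: number) => a + b2 : a + b;
}

console.log(add(1)(2)); // 3
console.log(add(1, 2)); // 3
```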
In a stable version there should be tests that can actually pass. If you are expecting that tests that are not passing today will pass tomorrow because of some TypeScript change, such cases should be moved out to some other place, separate from the stable ones, and they could be run periodically. Maybe in another branch, to keep things clean.
To confirm: should we move any unresolved issues (including outstanding ones here) to a separate branch, such that the tests in the main branch only confirm the status quo? I.e., can I take this to its corollary and say a version is deemed stable once it has kicked out all of its failing tests?
@tycho01 btw, you may look at my version of codegen in the latest commit 11dab2e
I consider such a codegen approach very useful when you need to make versions of params (as in the example above). It allows you to easily have docs for each version etc. And most of all, it allows you to be sure that if it tests fine for two params, it will work for 8 params as well. The only problem is formatting (probably could be fixed using the TS compiler API), but anyway, this is built output; it shouldn't even be in source control.
Had you considered the original

That said, there are differences of course:
To further elaborate on the last point: things actually get convoluted here, since you'll need to ensure generics are declared only when the info becomes available, while disregarding any combinations where the generics info is not yet available at the moment it's needed (this problem plagues curried versions of typings utilizing

In case your version hadn't taken care of path lengths here yet (e.g. if we were dealing with a simpler function than

To be fair, I'm not even sure in which cases I'd had to deal with variations × currying already, so props on that. That said, we may need to ensure maintainability is further improved. You may notice that I'd had a general tendency toward separating data and logic, such that for the most part the common logic could just be abstracted out. I hope we can somehow find a golden mean there.
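The "declare generics only when the info becomes available" idea can be sketched with a hypothetical curried `assoc`-like helper (not the repo's actual typing): each type parameter is introduced at the call where its argument first appears, rather than all up front.

```typescript
// K is known at the first call, T at the second, U at the third;
// no generic has to be guessed before its argument exists.
function assoc<K extends string>(prop: K) {
  return <T>(val: T) =>
    <U extends object>(obj: U): U & Record<K, T> =>
      ({ ...obj, [prop]: val } as U & Record<K, T>);
}

const withZ = assoc('z')(3)({ x: 1 });
console.log(withZ); // { x: 1, z: 3 }
```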
Generally, I prefer to use TypeScript over JS even for simple scripts, as it is much less error-prone. My version of the templates is more verbose because it is more robust.
I too prefer the functional approach, and there data becomes actual logic. You may look at the flatten typings I made; they work. Also, I made an experimental version that types
That's fine, yeah. Context: I used JS because I ran it in the browser console, but running it through Node (allowing TS) was definitely the next step. I definitely wouldn't insist on JS there.

Perhaps with a few different templates of your method we could further look into the patterns across typings, so as to further shrink the bits needed per individual typing. We may have use for conciliating our approaches there. (On my approach, I'm primarily referring to the larger list there rather than the older custom ones with their own functions at the top; those older ones would probably get more elegant as well if generated such as to plug into the more general one.)

On

P.S.: pretty interesting, good job on that
In my current interpretation, the primary reason you felt forced to switch to a more verbose codegen method was the switch to outputting to separate files, which added new challenges (primarily imports). What if instead we'd enable

I know you wanted source in separate files as well, to facilitate editing. I think that part has been possible either way though.
Just saw your commit; guess our approaches are gradually starting to look more similar. Had you checked my question above though? I wonder if flipping it would suffice for your purposes, as I'd imagine it'd mean our approaches could be fully reconciled. It feels like we're just solving the same problems all over again.
Actually, I don't really need separate files that much myself, as so far I've always imported the whole Ramda, but:
I respect the desire to make the source more maintainable, yeah. If we might consider the generated form to be a compiled bundle in the same vein as

At that point, might it not suffice to have the separate template files just be a split-up version of the current scripts file, e.g. like this?
```js
assocPath: [
  ['T', 'U'],
  {
    path: 'Path',
    val: 'T',
    obj: 'U',
  },
  'U'
],
```

It is not completely correct to represent params as an object, as props in an object are not actually ordered, so when parsing you are not ensured that the keys

So the curried version would be like this:

```js
assocPath: [
  ['T'],
  [{path: 'Path'}],
  [{val: 'T'}], [
    ['U'],
    [{obj: 'U'}],
    'U'
  ]
],
```

Well, I think it is really quite compact and not so bracket-overfilled. Probably could go with this vs. util helpers.
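To make the order-safety argument concrete, here is a hypothetical sketch (names and shapes are illustrative, not the actual scripts.js format) that uses arrays of `[name, type]` pairs, which preserve order by construction, and renders one flat signature line from them:

```typescript
// A [name, type] pair; arrays keep their order, unlike object props.
type Param = [name: string, type: string];

interface Template {
  generics: string[];
  params: Param[];
  returns: string;
}

// Illustrative template for assocPath's flat (uncurried) form.
const assocPathTemplate: Template = {
  generics: ['T', 'U'],
  params: [['path', 'Path'], ['val', 'T'], ['obj', 'U']],
  returns: 'U',
};

// Render a single signature line from the template data.
function render(name: string, t: Template): string {
  const gen = t.generics.length ? `<${t.generics.join(', ')}>` : '';
  const args = t.params.map(([n, ty]) => `${n}: ${ty}`).join(', ');
  return `${name}${gen}(${args}): ${t.returns};`;
}

console.log(render('assocPath', assocPathTemplate));
// assocPath<T, U>(path: Path, val: T, obj: U): U;
```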
I think there should be separate template versions and separate test files for every function. I even think that for the simplest functions a template could still be used, not
Thanks for your response. Yeah, I know you're right about the order not being guaranteed. I'm hoping we can put off the bracket pollution until some new Node version actually breaks it though; I presume the chances should be fairly low, so hopefully this concern remains theoretical.

On the generics: note that my version had already been checking to ensure that during currying, generics would be added as late as possible. That's what enabled specifying them in one go in that initial array; the point of this structure (as opposed to the generated typings) was that the input would be agnostic toward the effects of currying. I had put a bit of thought into it before getting to that. :)

On separated tests: no objections!
Sorry, could you explain? The template files used for what instead of those? Is this about the tests still? |
I didn't hear about such intentions, so we still need to ensure order. Well, to minimize bracket pollution, instead of `{path: 'Path', val: 'T', obj: 'U'}` we could do `['path:Path', 'val:T', 'obj:U']`; not sure if it's worth it though.
Yeah, I suspect generation of curried versions could be made automatic in most cases.
I mean that even simple definitions for such methods as |
Yeah, I agree consistently using generation is probably the least confusing.
I'm not aware of planned implementation changes either; afaik, for the foreseeable future it'll work, even if not technically guaranteed by spec. In that case, is this really urgent already?
You mean Node currently technically ensures the order given in the code for
No guarantee, just implementation. If I may consider the syntax pollution from a cost-model perspective, I'd be inclined to adapt when needed, and otherwise wait it out, hoping V8 won't do a major overhaul there soon.
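For reference, what current engines actually do (and what makes object key order fragile as a parameter-list encoding) can be checked directly: string keys keep insertion order, but integer-like keys are enumerated first in ascending numeric order.

```typescript
// Pure string keys: insertion order is preserved.
const stringKeys = { path: 1, val: 2, obj: 3 };
console.log(Object.keys(stringKeys)); // ['path', 'val', 'obj']

// Mixed keys: integer-like keys jump to the front, sorted numerically,
// regardless of where they were written.
const mixedKeys: Record<string, number> = { b: 1, '2': 2, '1': 3, a: 4 };
console.log(Object.keys(mixedKeys)); // ['1', '2', 'b', 'a']
```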
Any progress here?
@whitecolor @tycho01 I think you can close this?
This is an initial proposal and food for thought; comments are welcome/wanted. @tycho01

Goals

Tasks

- `src` folder
- `test` directory
- `mocha` for tests. Tests should address appropriate different use cases (in terms of typings) and check test-case execution results to ensure consistency between typings and Ramda's core functionality

Completed functions