
Restructure ramda typings code base #165

Closed
wants to merge 19 commits into from

Conversation

wclr
Contributor

@wclr wclr commented Apr 2, 2017

This is an initial proposal and food for thought; comments are welcome.

@tycho01

Goals

  • Make the typings code base and tests more consistent and well structured
  • Ensure all functions are typed and tested
  • Allow using/importing ramda functions individually:
    import merge from 'ramda/src/merge'

Tasks

  • Split typings into separate files per function, placed in the src folder
  • Split tests into separate files per function, placed in the test directory
  • Use mocha for tests. Tests should address the appropriate use cases (in terms of typings) and check the execution results of test cases, to ensure consistency between the typings and ramda's core functionality
  • Make it possible to check the correctness of typings in the tests, and also to test expected typing errors
  • Move interfaces to a separate file
  • Remove unused interfaces
  • Remove old tests, scripts, configs
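To illustrate the intended per-function layout, here is a minimal sketch of what a single-function module could look like; the signature and the runtime shape below are illustrative assumptions, not the repo's actual code:

```typescript
// Sketch: src/merge would hold only merge's typing, so consumers can do
// `import merge from 'ramda/src/merge'`. The signature is illustrative only.
function merge<A extends object, B extends object>(a: A, b: B): A & B {
  // runtime shape matching the typing, for demonstration purposes
  return { ...a, ...b };
}

merge({ x: 1 }, { y: 2 }); // → { x: 1, y: 2 }
```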

Completed functions

[table: Fn / Moved / Tests]

@KiaraGrouwstra
Member

Thanks for opening the PR in an early stage.

This looks pretty interesting; maybe it could also facilitate exposing the interfaces for export.

On unused interfaces, I know I tried adding some based on the fantasy-land spec, as part of that had been used in Ramda. Currently they didn't appear to have an existing home yet, and I haven't been sure whether I could just import them from elsewhere. But yeah, I see your point on keeping things clean.

On the consistency/structure point you mentioned, would you mind elaborating? The texts, I guess, were largely based on those from the docs (though for the most part they'd been added before I joined in; I largely just edited from there).

Testing:

On tape for tests, this appears to be a library for testing JavaScript, which I fear is missing the point here. The issue is like, this repository isn't about run-time, it only does types. The run-time tests appear taken care of in the Ramda repository itself. What we're testing here instead are compile-time types.

To give you a bit of background, the standard way people appear to try to test their TS typing repos has been through TSC -- if it compiled without errors, it was deemed to be okay.

I considered this approach to be woefully inadequate: in some cases where the user is mis-using things I would want for the compiler to alert the user to this, while if we were to inadvertently change the typings such that all types would suddenly be inferred as any, I'd want for our test suite to cry out, not pretend things have become better.

As a result of this, I looked at the way TypeScript did this, but as they did not expose their better approach to typings authors, ended up with typings-checker, or rather, my fork of it, as the original turned out not to be meant for typings libraries that still had failing tests. I'm under the impression this may have been among the more ambitious typings libraries out there so far, and as a result, we haven't had much to go by from the ecosystem.

Tests should address appropriate different use cases (in terms of typings) and check test cases execution results to ensure consistency between typings and ramda's core functionality.

Could you make this more concrete?

My intention has been to consider actual return values as the ideal types, but I'm not sure if you might've envisioned this differently.

Codegen concern:

I've ended up getting a bit less time for open-source now, but the effort I'd currently been half-way in was switching the typings to generating through that scripts.js file. The intention there had been to ensure that currying could be taken care of by codegen, which would otherwise make any manual edits a massive pain.

Current progress there was pretty much "missing a few functions but otherwise probably mostly able to generate the typings minus currently comments". I'd wonder if this could potentially further complicate automatic generation there with additional concerns like imports.

Perhaps my codegen ambitions there had been naive, since obviously your suggestions here look pretty good as well, but if we could somehow combine them that'd be pretty sweet.

@wclr
Contributor Author

wclr commented Apr 3, 2017

On unused interfaces,

There is also this stuff that is not used, such as the old interfaces for currying.

You may want to elaborate on this.

The consistency/structure point you mentioned, would you mind elaborating on that?

To be honest, the whole code base actually seems not very consistent.

  • having everything in one big file during development is not very pleasant for maintainability or SCM
    also, ramda allows importing functions one by one, as stated in the OP; placing each in a separate file will correspond to the original library structure and allow typed versions of individual functions to be imported
  • unused, undocumented code (interfaces)
  • unused commented-out code
  • some comments that only add clutter
  • tests that are actually not working

On tape for tests, this appears to be a library for testing JavaScript, which I fear is missing the point here. The issue is like, this repository isn't about run-time, it only does types. The run-time tests appear taken care of in the Ramda repository itself. What we're testing here instead are compile-time types.

I proposed tape just as a very simple framework for composing tests. I don't actually agree with you on this point. Let's take an example: consider typing the function mapObjIndexed

var values = { x: 1, y: 2, z: 3 };
var prependKeyAndDouble = (num, key, obj) => key + (num * 2);
R.mapObjIndexed(prependKeyAndDouble, values); //=> { x: 'x2', y: 'y4', z: 'z6' }

The correct (current) typing for the internal function is:

(fn: (value: T, key: string, obj?: M) => V,

Say that while writing the typing we made a mistake and typed the argument like this:

fn: (key: string, value: T, obj?: M)  // key goes first instead of value

Then suppose that in the tests we didn't take the official example but composed our own (or something even changed in ramda's API) -- whatever -- and we got an inconsistency. So we may have a situation where the typing and the TS test code are consistent with each other, but the typing itself is inconsistent with the JS implementation, so we should check for that. When the implementation evolves separately from the typings, the fact that it compiles really doesn't mean that it works.

Taking test code straight from the examples and checking the typings by inspection is not enough to cover this case.
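The argument can be made runnable. Below is a sketch with a hand-rolled stand-in for Ramda's mapObjIndexed (not the library itself): because the test executes the official example and asserts on its result, a typing that swapped (key, value) would surface as a failing assertion, not merely as code that still compiles.

```typescript
// Stand-in for R.mapObjIndexed with the correct parameter order:
// fn receives (value, key, obj), as in Ramda's runtime.
function mapObjIndexed<T, V>(
  fn: (value: T, key: string, obj: Record<string, T>) => V,
  obj: Record<string, T>
): Record<string, V> {
  const out: Record<string, V> = {};
  for (const key of Object.keys(obj)) out[key] = fn(obj[key], key, obj);
  return out;
}

const values = { x: 1, y: 2, z: 3 };
const prependKeyAndDouble = (num: number, key: string) => key + num * 2;
mapObjIndexed(prependKeyAndDouble, values); // → { x: 'x2', y: 'y4', z: 'z6' }
```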

I would also add another point: the TS typings should correspond as fully as possible to ramda's official ones:

((*, String, Object) → *) → Object → Object

To give you a bit of background, the standard way people appear to try to test their TS typing repos has been through TSC -- if it compiled without errors, it was deemed to be okay.

I think it is absolutely OK to have this approach; as I said, it lets you ensure consistency between the implementation and the existing typings.

I considered this approach to be woefully inadequate: in some cases where the user is mis-using things I would want for the compiler to alert the user to this, while if we were to inadvertently change the typings such that all types would suddenly be inferred as any, I'd want for our test suite to cry out, not pretend things have become better.

Could you create a code snippet, referring to the current repo, that shows the advantage of using typings-checker -- a case where ordinary tests that check different corner cases (with typings) would fail to catch a problem? For now it is not obvious to me. If we find a place where something is untyped (typed with any) that should actually be typed, it is quite easy to fix, and for now the typings-checker approach seems like overkill to me.

Codegen concern:
I've ended up getting a bit less time for open-source now, but the effort I'd currently been half-way in was switching the typings to generating through that scripts.js file.

In general I like approaches that automate routine tasks, even when the task is producing code itself, but a balance should always be maintained. If a task has to be performed periodically, it should definitely be automated as much as possible; if it is a one-time task, it depends on whether it can be done manually with an acceptable level of error-proneness, or whether it is really better to generate the code to avoid possible manual mistakes. In this particular case I need to look closer before drawing any conclusions about whether it is overkill (I will report in the comments later).

@KiaraGrouwstra
Member

Right. The CurriedFn interfaces were actually a recent addition, but then turned out not to fully address the issues TS had with curry, so ended up unused again. They're generated, anyway, so ditching them definitely seems fair. Just pushed a commit for that.

On the consistency points you mentioned, I guess overall they definitely show that this repo is still pretty much WIP, with quite some ideas that haven't properly reached their potential yet, and as you noted, failing tests. I'm not gonna lie about that -- I wouldn't describe the stage we're at as positive myself. The most positive recent achievement has been the addition of typings-checker, meaning that we're now finally able to test for incorrectly inferred types.

You're right about there being a bunch of code without explanations or just commented out.

In the latter case, one thing I consistently added commented out were attempts to capture function typings in a single CurriedFunction interface, so as to beat currying duplication. TypeScript didn't actually allow it that way though, and since then I turned toward the idea of code generation as an alternative.

Some of the type definition attempts I also commented out for just not working in their current state, so I suppose they're more like WIP ideas.

if it compiles it really doen't mean that it works

This is a fair point.

I suppose currently the question would be if there would be an effective way to conciliate tsc with the current $ExpectError tests. Not that we have many of those though, but there's also a more practical problem -- the rest of the tests that are still giving compilation errors. I wish I could make these magically go away, but I'd feel like just commenting them out would be to stick our heads in the sand as well.

Ensuring compliance with Ramda would definitely be nice as well, but the more of these concerns we're taking on, the harder it gets to successfully juggle all of them. Worse yet, we're already having trouble as-is.

Technically, we could create branches for each combination of Ramda version and TS version. And in fact, for some TS versions we already have branches. But even just doing one combination is already a lot here.

Also I would add onother point TS typings should fully (if possible) correspond official ramda's

This is one point where I'd beg to differ. The Hindley-Milner notations in Ramda are nice for human readers, and might have been taken straight from Haskell sources, but what TS typings do is infer what comes out of a function based on what you're putting in.

One has to look at what TS needs in order to accomplish that. This likely requires different numbers of generics (usually more), meaning it would likely become harder to mirror the Ramda type notations anyway. There's another, more minor difference as well: the HM notation uses lowercase a/b/c, while in TS, type parameters are generally expected to be uppercase.

That said, for the generics that do match, it's definitely a possibility to match their names, though it may become confusing if some generics appear to just use a/b/c, while others would not.
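To make the contrast concrete, here is a sketch (illustrative, not the repo's actual typing) of Ramda's map for the array case only, showing the uppercase generics and the extra curried overload that the HM line `map :: Functor f => (a → b) → f a → f b` leaves implicit:

```typescript
// HM:  map :: Functor f => (a → b) → f a → f b
// TS, array case only, with curried and uncurried overloads:
function map<A, B>(fn: (a: A) => B): (list: A[]) => B[];
function map<A, B>(fn: (a: A) => B, list: A[]): B[];
function map<A, B>(fn: (a: A) => B, list?: A[]): B[] | ((list: A[]) => B[]) {
  return list === undefined ? (l: A[]) => l.map(fn) : list.map(fn);
}

map((x: number) => x * 2, [1, 2, 3]); // → [2, 4, 6]
map((x: number) => x * 2)([1, 2, 3]); // → [2, 4, 6]
```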

If we find a place where something untyped (typed with any) that should be acually typed, it is quite easy to fix

Well, the challenge has been how to detect these, as ideally, when a big PR comes in, we'd need an effective way to judge whether something is improving inference or accidentally just making everything any. Currently, output for that can be found here. I'd intended for it to automatically trigger so any PRs would have their differences in test output show up in version control.

for now typings-checker approach seem to me as overkill.

It was a big undertaking, yeah, but it was addressing a primary concern -- opening our eyes so we'd have any way at all to judge whether edits were good or bad, as any could otherwise easily blind one to that.

By comparison, I haven't considered ensuring we're synced with Ramda as big of a concern -- manually keeping up with Ramda release notes is fairly doable, while people would quickly report these issues as well. But you seem to be of a different view, so perhaps I'm not fully seeing everything you're seeing. Feel free to comment.

On currying variations, I hope you're willing to take codegen into consideration there, as maintainability I'll agree should be among our top priorities.

To elaborate, code generation started before that, as some of the typings were just more effort to write manually than to generate. For the other functions it was about currying variations, but the larger concern was that this being a one-time thing seemed no more than an illusion. TS improves over time, and we'd keep editing the typings as well. The intention would be for the codegen to ensure that one edit would only need to be one edit, not ten.

@wclr
Contributor Author

wclr commented Apr 5, 2017

@tycho01 what do you think about replacing typings-checker with a simple custom script that does not require any special comments (requiring a specific comment format is definitely a source of potential mistakes/errors)?

It would just look for expressions like t.is(...) in the tests and compare the types of the arguments:

test('props: one argument', (t) => {
  const res = props(['str'], obj)
  // common tape test will check value equality
  // and type-check script will check types here
  t.is(res[0], 'str' as string) 
})
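For illustration, here is a hypothetical standalone helper in the spirit of t.is (names and shape assumed, not taken from the PR), showing how a single assertion can carry both the run-time and the compile-time check:

```typescript
// Hypothetical helper (not from the PR): the runtime side checks value
// equality, while the shared type parameter T lets the compiler relate
// the types of the two arguments.
function is<T>(actual: T, expected: T): void {
  if (actual !== expected) {
    throw new Error(`expected ${String(expected)}, got ${String(actual)}`);
  }
}

const res = ['str'];
is(res[0], 'str' as string); // passes at run time and type-checks
// is<number>(res[0], 0);    // compile-time error: string is not number
```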

@KiaraGrouwstra
Member

KiaraGrouwstra commented Apr 5, 2017

Yeah, I'm definitely open to input on both testing and codegen -- we'd probably need to find common ground on both of those.

I love the idea of having the same method cover both run-time and compile-time checks. Good job :), if it were agnostic to the testing framework it might actually catch on as a way of testing typings!

I'm confused by the redundancy of 'str' as string, but I suppose what you're saying is we could either specify the expected types by implicitly letting it infer them from the return values, or by manually overriding those so as to grant it some leniency as required. That sounds good to me.

Would you be able to commit a sample output file from your type-check script to show how it compares to the existing one?

@wclr
Contributor Author

wclr commented Apr 5, 2017

'str' as string is there because the available TS compiler methods determine the type of 'str' as 'str' (not string), and as far as I'm aware the available API currently has no method to determine assignability or identity of types with different names.

  1. So this could either be handled with custom code that can determine that the type 'str' is actually string, etc.
  2. Or we wait for an official API. Besides, I think 'str' as string is more explicit, and TS won't allow putting something like 'str' as number, so I don't think it is a problem.
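A small demonstration of the widening point (plain TS, no compiler API involved):

```typescript
// Without an assertion the compiler infers the narrower literal type,
// which is why the tests write 'str' as string explicitly.
const lit = 'str';               // inferred type: "str" (string literal)
const widened = 'str' as string; // explicitly widened to string
// const bad = 'str' as number; // rejected: types don't sufficiently overlap
```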

On the output: as you can see from the code, the script just throws when the first type mismatch is met, but it can be customized any way needed, for example:

  1. first check everything
  2. throw if there were errors found
  3. save the output to a file in the desired format (though I'm not sure why that is needed, or why it is currently committed to the repo?)

@KiaraGrouwstra
Member

Okay :), I prefer for it to err in the direction of precision by default, so fair enough there. In my mind, perfect type inference would mean inferring the exact return values, though I realize that would usually not be realistic.

On just having it error rather than log all results to put into version control, that sounds like it might not really work until we'd reach 0 imperfections. Unless we'd put our head in the sand over a bunch of issues, I fear we haven't quite reached that yet...

Until that point it would always inform us of only one issue, rather than the many we have. This would result in a huge loss of information, and as a result make it hard to judge whether any significant overhaul would be for the better or for worse...

@wclr
Contributor Author

wclr commented Apr 5, 2017

I've replaced tape with mocha, as it seems more appropriate for this case; the output can look like this:

[screenshot: mocha test run output]

The output can of course be saved to a file, though I think test output in a file is kind of redundant; everything should actually go through CI, where you can see the latest status and errors. I will tune it.

@KiaraGrouwstra
Member

That looks good, yeah.
I agree CI plays an important role as well.

The reason I liked having output in version control would be so as to be able to see diffs between commits (ideally the test would automatically be run before a commit could be pushed): whether the failed asserts would go up or down.

I mean, yeah, I've been among errors for long enough I'm taking their presence for granted... the primary problem there is it's not even so much an issue of just fixing all of them for a second, but rather that quite a few aren't even our fault, but rather issues with TypeScript.

@KiaraGrouwstra
Member

From the diff use-case, perhaps you'll also get more of an idea of how I structured the output there: it includes code snippets but not line numbers, which I suppose differs from the Mocha output here. That may seem backwards, especially as line numbers make for more elegant output, but it was intentional.

The reason there was that line numbers don't go with output diffs well. If one were to add/remove a line near the start of the test file, all the line numbers would change. If that happens, you get diffs on each line in the output file if only for the line numbers. If results had otherwise changed as well, this could mean the actual diffs would no longer be visible as they'd be drowned out by the line number changes, making it hard to see which inferred types changed, and how.

That said, the numbers passing / failing would remain as a means of judging whether things changed for better or for worse... But if this change would mean sacrificing information, then I'd wonder, might we not be better off leaving the type-check as-is, while still adding the run-time checks as you proposed?

I'm all for elegance, but the information may well be worth not throwing out.

test/props.ts Outdated
equal(res[1], 1 as number);
it('ts error: unknown props not allowed', () => {
typeFail(`
import props = require('ramda/src/props');
Contributor Author

@tycho01 this allows testing error cases that should be caught by TS.
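A hedged sketch of how such a typeFail helper might be implemented (the PR's actual helper may differ): compile the snippet from an in-memory source file and assert that the checker reports at least one diagnostic.

```typescript
import * as ts from 'typescript';

// Hypothetical implementation sketch of typeFail: throw unless the given
// snippet produces at least one compile-time diagnostic.
function typeFail(code: string): void {
  const fileName = 'snippet.ts';
  const options: ts.CompilerOptions = { strict: true, noEmit: true };
  const host = ts.createCompilerHost(options);
  const getSourceFile = host.getSourceFile.bind(host);
  // serve the snippet from memory; fall through for lib files on disk
  host.getSourceFile = (name, languageVersion) =>
    name === fileName
      ? ts.createSourceFile(name, code, languageVersion)
      : getSourceFile(name, languageVersion);
  const program = ts.createProgram([fileName], options, host);
  if (ts.getPreEmitDiagnostics(program).length === 0) {
    throw new Error('expected a type error, but the snippet compiled cleanly');
  }
}

typeFail(`const x: number = 'str';`); // passes: the assignment is a type error
```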

Member

Pretty cool! I'll gladly take that over $ExpectError then, I'll agree having that render the test files unable to compile (presuming other errors would be cleared out) was definitely a downside.

The output diffs and codegen points are still concerns to me, but I'm pretty positive about all the new ideas here!

@wclr
Contributor Author

wclr commented Apr 5, 2017

The output diffs and codegen points are still concerns to me

The reason I liked having output in version control would be so as to be able to see diffs between commits

  1. About test output: I don't really share that concern. Tests are there to be passed; in a normal situation they should simply always pass. If there is a change in the code and tests break (which will be seen in CI anyway), it should be fixed. I don't see much value in the ability to contemplate diffs of test results; the tests should just pass. Line numbers are there only so a developer can quickly find the place in the source file and fix the error, not to contemplate it.

  2. On codegen
    From my current perspective, I wouldn't care about it much either; I would go without it for now, as I don't see a pattern that really needs to be automated. What's needed is just some accurate manual work, plus tests that ensure everything is correct. This seems to be almost one-time work; if it turns out to be impossible or very hard to accomplish manually, then we may think about something like this. I may change my mind on this.

@KiaraGrouwstra
Member

KiaraGrouwstra commented Apr 5, 2017

Yeah, if we can get the tests to pass, I'll concur diffs definitely become a moot point. The obstacle until that point appears to be not all problems are really our fault (see generic degeneration on R.pipe(R.identity)).

I don't see a pattern that really needs to be automated

The pattern: someone edits a typing of a curried function

almost one time work

well, the one-time part for manual currying is done (same for the generation, minus an edge case). it's still 'once every edit', but yeah. If anyone's willing to put up with that though, then that's fine.

@wclr
Contributor Author

wclr commented Apr 5, 2017

Yeah, if we can get the tests to pass,

In a stable version there should be tests that can actually pass. If you are waiting for tests that are not passing today to start passing tomorrow because of some TypeScript change, such cases should be moved somewhere separate from the stable ones and run periodically -- maybe in another branch, to keep things clean.

@KiaraGrouwstra
Member

To confirm: should we move any unresolved issues (including outstanding ones here) to a separate branch, such that the tests in the main branch only confirm the status quo?

i.e., can I take this to the corollary, and say a version is deemed stable once it has kicked out all of its failing tests?

@wclr
Contributor Author

wclr commented Apr 6, 2017

@tycho01 btw, you may look at my version of codegen in the latest commit 11dab2e

src/props.d.ts is generated from tpl/props.ts

I consider such a codegen approach very useful when you need to make versions of params (as in the example above). It makes it easy to have docs for each version, etc. And most of all, it lets you be sure that if it is tested for two params it will work for eight params too. The only problem is formatting (which probably could be fixed using the TS compiler API), but anyway this is built output -- it shouldn't even be in source control.
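The arity pattern is easy to sketch. Below is a hypothetical generator (helper name assumed; not the actual tpl/props.ts) that emits props overloads for tuple lengths 1..n, so that one fix to the template propagates to every arity:

```typescript
// Hypothetical codegen sketch: emit one overload of `props` per tuple
// length, instead of maintaining each arity by hand.
function genProps(maxLen: number): string {
  const lines: string[] = [];
  for (let n = 1; n <= maxLen; n++) {
    const keys = Array.from({ length: n }, (_, i) => `K${i + 1}`);
    const generics = keys.map(k => `${k} extends keyof T`).join(', ');
    const args = `keys: [${keys.join(', ')}], obj: T`;
    const ret = `[${keys.map(k => `T[${k}]`).join(', ')}]`;
    lines.push(`export function props<T, ${generics}>(${args}): ${ret};`);
  }
  return lines.join('\n');
}

genProps(2);
// → export function props<T, K1 extends keyof T>(keys: [K1], obj: T): [T[K1]];
//   export function props<T, K1 extends keyof T, K2 extends keyof T>(keys: [K1, K2], obj: T): [T[K1], T[K2]];
```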

@KiaraGrouwstra
Member

KiaraGrouwstra commented Apr 7, 2017

Had you considered the original props codegen source? It looks somewhat less verbose than the template you used.

That said, there are differences of course:

  • You already incorporated comments (needed)
  • You incorporated imports, needed to support the use-case of importing separate functions in the absence of tree-shaking capabilities. These could potentially be abstracted by checking for string containment based on a list of our types/interfaces.
  • You already abstracted over input array length here, while I hadn't yet in that one. I suppose in that sense it may be fairer to compare it to something where I had.
  • You left curried interfaces separated, which is good here but problematic for functions where the first argument matches. This was one of the points I had already taken care of in my codegen there. The reason separation is a no-no for curried typings where the initial parameters match is that TS will just pick the first matching interface, ignoring any alternative options. This was among the issues I'd been trying to deal with there.

To further elaborate on the last point, things actually get convoluted here, since you'll need to ensure generics are declared only when the info becomes available, while disregarding any combinations where the generics info is not yet available at the moment it's needed (this problem plagues curried versions of typings utilizing keyof).

In case your version hadn't taken care of path lengths here yet (e.g. if we were dealing with a simpler function than props), I'd wonder if your codegen version might've become more verbose than the output result. Not that my codegen hasn't gotten cases like that, but the damage appeared more limited in general.

To be fair, I'm not even sure in which cases I'd had to deal with variations * currying already, so props on that. That said, we may need to ensure maintainability would be further improved. You may notice that I'd had this general tendency toward separating data and logic, such that for the most part the common logic could just be abstracted out. I hope we could somehow find a golden mean there.

@wclr
Contributor Author

wclr commented Apr 9, 2017

Had you considered the original props codegen source? It looks somewhat less verbose than the template you used.

Generally, I prefer to use TypeScript over JS even for simple scripts, as it is much less error prone. My version of the templates is more verbose because it is more robust.

You may notice that I'd had this general tendency toward separating data and logic,

I too prefer the functional approach, and there the data becomes the actual logic.

You may look at the flatten typings I made -- they work.

Also, I made an experimental version that types path with objects/lists, excluding tuples. I need to look at performance, as the typings are huge, though I didn't notice any problems while testing.
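For reference, with today's TypeScript (4.1+ recursive conditional types, which did not exist when this thread was written) a flatten typing can be sketched like this:

```typescript
// Recursive conditional type: peel array layers until a non-array remains.
type Flatten<T> = T extends readonly (infer U)[] ? Flatten<U> : T;

// Runtime counterpart matching that typing (illustrative sketch).
function flatten<T extends readonly unknown[]>(xs: T): Flatten<T[number]>[] {
  const out: unknown[] = [];
  for (const x of xs) {
    if (Array.isArray(x)) out.push(...flatten(x));
    else out.push(x);
  }
  return out as Flatten<T[number]>[];
}

flatten([1, [2, [3, [4]]]]); // → [1, 2, 3, 4]
```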

@KiaraGrouwstra
Member

Generally, I prefer to use TypeScript over JS even for simple scripts, as it is much less error prone.

That's fine, yeah. Context: I used JS because I ran it in the browser console, but running through Node (allowing TS) was definitely the next step. I definitely wouldn't insist on JS there.

Perhaps with a few different templates of your method we could further look into the patterns across typings, so as to further shrink the bits needed per individual typing. We may have use for reconciling our approaches there.

(On my approach, I'm primarily referring to the larger list there rather than the older custom ones with their own functions at the top there -- those older ones would probably get more elegant as well if generated such as to plug into the more general one.)

On path: my earlier experience was that it worked out in the test, but appeared to give more performance issues when I started trying to use it in an actual project. It did seem like cutting down on the permutations there should probably address that though.

P.S.: pretty interesting, good job on that flatten! I'd really struggled with how to handle NestedArray...

@KiaraGrouwstra
Member

In my current interpretation, the primary reason you felt forced to switch to a more verbose codegen method was the switch to outputting to separate files, which added new challenges (primarily imports).

What if instead, we'd enable import merge from 'ramda/src/merge' by flipping it around and having those reference the big file, instead of the big file reference the smaller files?

I know you wanted source in separate files as well to facilitate editing. I think that part has been possible either way though.

@KiaraGrouwstra
Member

Just saw your commit, guess our approaches are starting to look gradually more similar. Had you checked my above question though? I wonder if flipping it would suffice for your purposes, as I'd imagine it'd mean our approaches could be fully reconciled. It feels like we're just solving the same problems all over again.

@wclr
Contributor Author

wclr commented Apr 26, 2017

In my current interpretation, the primary reason you felt forced to switch to a more verbose codegen method was the switch to outputting to separate files, which added new challenges (primarily imports).

Actually, I don't really need the separate files that much myself, as so far I have always imported the whole of ramda, but:

  1. I think separate files are generally the more correct approach here, as they reflect the original structure of the lib.
  2. Separate files are cleaner and easier to maintain; TS enables easy refactoring if you need to rename objects, etc.
  3. SCM is always better with small files, etc.
  4. There is a need for clear, working test cases, ideally for all typed methods, especially the complex ones. This is a big piece of work, but I think it can be accomplished eventually.
  5. I don't want more verbose generation, but more consistent generation: https://github.com/whitecolor/typescript-ramda/blob/restructure/tpl/path.ts#L24

@KiaraGrouwstra
Member

KiaraGrouwstra commented Apr 26, 2017

I respect the desire to make the source more maintainable, yeah.

If we might consider the generated form to be a compiled bundle in the same vein as index.d.ts, with separate importing needs accounted for by re-exporting separate typings through separate files, then I think that simplification could address the need for handling imports.

At that point, might it not suffice to have the separate template files just be a split up version of the current scripts file, e.g. like this?

@wclr
Contributor Author

wclr commented Apr 26, 2017

assocPath: [
  ['T', 'U'],
  {
    path: 'Path',
    val: 'T',
    obj: 'U',
  },
  'U'
],

It is not completely correct to represent params as an object, since props in an object are not actually ordered, so when parsing you are not ensured that the keys path, val, obj will come in the same order. This is a rather theoretical issue -- usually the order is preserved -- but still.

So the curried version would be like this:

assocPath: [
  ['T'],
  [{path: 'Path'}],
  [{val: 'T'}],
  [
    ['U'],
    [{obj: 'U'}],
    'U'
  ]
],

Well, I think it is really quite compact and not so bracket-overfilled. We could probably go with this rather than util helpers.
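To show the spec is mechanically usable, here is a hypothetical renderer (names assumed) that turns the flat [generics, params, return] form of such a spec into a d.ts overload line; a curried renderer would recurse into the nested arrays the same way:

```typescript
// Hypothetical: render the flat spec into a declaration line.
// Param order relies on object key insertion order, as discussed.
type ParamSpec = Record<string, string>;
type FnSpec = [string[], ParamSpec, string];

function render(name: string, [generics, params, ret]: FnSpec): string {
  const gen = generics.length ? `<${generics.join(', ')}>` : '';
  const args = Object.entries(params)
    .map(([param, type]) => `${param}: ${type}`)
    .join(', ');
  return `export function ${name}${gen}(${args}): ${ret};`;
}

render('assocPath', [['T', 'U'], { path: 'Path', val: 'T', obj: 'U' }, 'U']);
// → export function assocPath<T, U>(path: Path, val: T, obj: U): U;
```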

At that point, might it not suffice to have the separate template files just be a split up version of the current scripts file, e.g. like this?

I think there should be separate template versions and separate test files for every function. I even think that for the simplest functions a template could still be used, rather than a plain d.ts file.

@KiaraGrouwstra
Member

Thanks for your response.

Yeah, I know you're right about the order not being guaranteed. I'm hoping we could put off the bracket pollution until some new Node actually breaks it though. I presume the chances should be fairly low, so hopefully this concern will remain theoretical.

On the generics, note that my version had already been checking such as to ensure that during currying generics would be added as late as possible. That's what had enabled me to allow specifying them in one go in that initial array here -- the point of this structure (as opposed to the generated typings) was that this input structure would be agnostic toward the effects of currying. I had put a bit of thought into it before getting to that. :)

On separated tests: no objections!

I even think that for the simplest functions a template could still be used, rather than a plain d.ts file.

Sorry, could you explain? The template files used for what instead of those? Is this about the tests still?

@wclr
Contributor Author

wclr commented Apr 27, 2017

I'm hoping we could put off the bracket pollution until some new Node actually breaks it though

I didn't hear about any such intentions, so we still need to ensure the order. Well, to minimize bracket pollution, instead of

  {path: 'Path', val: 'T', obj: 'U'},

could do:

  ['path:Path', 'val:T',  'obj:U']

though I'm not sure whether it's worth it.

On the generics, note that my version had already been checking such as to ensure that during currying generics would be added as late as possible. That's what had enabled me to allow specifying them in one go in that initial array here

Yeah, I suspect generation of curried versions could be made automatic in most cases.

Sorry, could you explain? The template files used for what instead of those? Is this about the tests still?

I mean that even simple definitions for methods such as add should probably be done through generation, without a manually created .d.ts. It may seem like overkill, but it would also help to structure the resulting code better, make the docs better, allow some analysis, etc.

@KiaraGrouwstra
Member

Yeah, I agree consistently using generation is probably the least confusing.

I didn't hear about such intentions. So still need to ensure order.

I'm not aware of planned implementation changes either -- afaik, for the foreseeable future it'll work, even if not technically guaranteed by spec. That said, is this really urgent already?

@wclr
Contributor Author

wclr commented Apr 27, 2017

I'm not aware of planned implementation changes either -- afaik, for the foreseeable future it'll work, even if not technically guaranteed by spec. That said, is this really urgent already?

You mean Node currently does preserve the order given in the code for Object.keys? Is there any confirmation of that?

@KiaraGrouwstra
Member

No guarantee, just implementation. Considering the syntax pollution from a cost perspective, I'd be inclined to adapt when needed, and otherwise wait it out, hoping V8 won't do a major overhaul there soon.
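For the record, what engines actually do can be checked directly. Own-property order has been specified since ES2015 (integer-like keys sorted numerically first, then string keys in insertion order); the open question at the time was whether Object.keys was bound to follow it:

```typescript
// Integer-like keys come first in ascending numeric order, then string
// keys in insertion order -- the behavior the spec-as-data idea relies on.
const spec = { b: 1, a: 2, 2: 3, 1: 4 };
const order = Object.keys(spec); // → ['1', '2', 'b', 'a']
```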

@goodmind

Any progress here?

@KiaraGrouwstra
Member

@goodmind: it looks like #173 has been incorporating the same features as proposed here.

@goodmind

@whitecolor @tycho01 I think you can close this?

@KiaraGrouwstra
Member

KiaraGrouwstra commented Aug 15, 2017

@goodmind: #190 just addressed this. Feel free to comment / open new issues if it turns out any of the concerns raised here are not adequately resolved with that.
