Token types - optional or required #63
Comments
I would vote for optional, as I currently prefer using one file per token type.
I don't like requiring maintainers to do more work to keep their tokens working, so my initial thought was optional. However, requiring it now will give consuming tools a more consistent and easier time implementing tokens, which to me is a big win. Not having types would mean each tool has to parse and type-check on its own, which could lead to inconsistent results from tool to tool. My vote is required.
I disagree. Why do I have to meet a specification for a tool I may not be using? If I am using it, that tool would impose its own requirement to have the type.
Note: there are tools like Theo that let you set a type globally for all the tokens in a file, so it doesn't have to be on each individual token. If we allowed that to fulfill the requirement, would that help? I think types are pretty important.
The question is: is the enforcement of the type for the betterment of the library, or for the ease of integration of a 3rd-party tool? Or, maybe better said, is this still usable without the mandatory type? While I agree typing can be helpful, what is the driver for this? Unless there is a direct benefit to the lib itself, making this required is inappropriate IMHO.
@blackfalcon I'm not 100% sure what you mean by "the lib" (the token output? the framework you're importing them into?). As aligned with our mission objectives, we want to be extensible. Design tokens are also meant to be agnostic so that they can be used by various libs and tools. Part of our goal is to give design tools/vendors a way to integrate tokens in a predictable way that makes sense and has agreement from the community. This was a big motivator in why we established this community group in the first place. Types seem necessary to make this work.

And in terms of "the lib", if you mean your design system codebase, how are you documenting that? I find that required types help display tokens in the appropriate way in the docs (colors vs. units vs. whatever else). Additionally, when using JS/TypeScript or other languages beyond CSS/Sass, I've found it can be very useful for linting/testing/validation (which hopefully you're doing!) and for testing certain attributes for accessibility.

I think from a standards perspective, as long as you can define them globally so you don't have to do each individual token (which seemed to be one of your concerns in your original comment), this shouldn't be too inconvenient to support? Not sure I'd find this "inappropriate"? :)
The additional context is helpful. I would look at how Style Dictionary works; AFAIK, the only thing it requires on a token is its value.

I'd ask @dbanksdesign where he sees the tradeoffs for this. I see your argument for linting/testing/validation based on data types, but if you are going to use that as a motivator and make types required, then there should be a list of allowed types. Without that, vendors who create linting/testing/validation tools will have a hard time doing so, IMHO. Either this body comes up with that list, or there is a specification for authors to maintain that list of types.

Regardless of all of this, allowing types to be set globally versus individually is a GREAT IDEA! Fully support that. I do something similar, but I wrote the code that appends the common data to the individual tokens. @dbanksdesign, I am not aware of a feature in Style Dictionary that supports that? If not, wow, that's a GREAT IDEA!
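To make the "list of allowed types" point concrete, here is a minimal sketch of the kind of validation a vendor could run against such a list. The `ALLOWED_TYPES` set and the flat token structure are illustrative assumptions, not anything defined by the draft spec.

```python
# Hypothetical validator: every token's "type" must come from an agreed list.
# ALLOWED_TYPES is an illustrative assumption, not the spec's actual list.
ALLOWED_TYPES = {"color", "dimension", "fontFamily", "fontWeight", "duration"}

def validate_types(tokens: dict) -> list:
    """Return a list of error strings for tokens with a missing or unknown type."""
    errors = []
    for name, token in tokens.items():
        token_type = token.get("type")
        if token_type is None:
            errors.append(f"{name}: missing 'type'")
        elif token_type not in ALLOWED_TYPES:
            errors.append(f"{name}: unknown type '{token_type}'")
    return errors

tokens = {
    "brand-primary": {"value": "#ff9900", "type": "color"},
    "spacing-small": {"value": "4px", "type": "spacing"},
}
print(validate_types(tokens))  # ["spacing-small: unknown type 'spacing'"]
```

Without an agreed list, each vendor would invent its own `ALLOWED_TYPES`, which is exactly the inconsistency being described.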
I brought up linting/testing/etc. to address your "Unless there is a direct benefit to the lib itself, making this required is inappropriate IMHO" comment. I do see that as a direct benefit.

For what it's worth, at Salesforce, where design tokens originated, we relied on types to do transforms. For example, a hex code in a color type would be converted to what we needed for Android or iOS (example: 8-digit hex for Android instead of rgba for CSS). I admit I'm not as familiar with how Style Dictionary does this, but global types (and other attributes) were a big part of how Theo worked.
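The kind of type-driven transform described here (rgba for CSS vs. 8-digit hex for Android) can be sketched roughly as follows. The function name and the details are illustrative assumptions, not Theo's actual implementation; Android color resources use the `#AARRGGBB` ordering, with the alpha byte first.

```python
import re

# Illustrative sketch: a transform a tool might apply to tokens typed "color",
# converting a CSS rgba() value to Android's 8-digit #AARRGGBB hex form.
def rgba_to_android_hex(value: str) -> str:
    """Convert 'rgba(r, g, b, a)' to Android's #AARRGGBB notation."""
    m = re.match(r"rgba\((\d+),\s*(\d+),\s*(\d+),\s*([\d.]+)\)", value)
    r, g, b = (int(m.group(i)) for i in (1, 2, 3))
    a = round(float(m.group(4)) * 255)  # scale 0..1 alpha to a 0..255 byte
    return f"#{a:02X}{r:02X}{g:02X}{b:02X}"

print(rgba_to_android_hex("rgba(255, 153, 0, 0.5)"))  # "#80FF9900"
```

The point is that the tool can only know to apply this transform at all if it knows the token is a color, which is what an explicit (or inherited) type provides.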
To elaborate more on the point I made above, making types required will set an expectation that any tool using the spec will have types present, allowing it to make better decisions while using the value, whether that's transforming tokens to other formats (like the Style Dictionaries and Theos), displaying them in documentation, etc. From the example above, the tool can expect a type to be present. That seems like a good precedent to set early.
Design tokens are always typed, explicitly or implicitly. Style Dictionary, for example, doesn't care how you type your tokens, but to transform tokens you need to tell it which transforms should apply to which tokens. The built-in transforms implicitly type tokens based on the token object structure, but that is not the only way to do it; you could explicitly add attributes to each token so they can be targeted by specific transforms.

One thing the format module editors have not tackled fully is separate/multiple files, which would be a necessity for a feature like "all tokens in this file are of type color". There were some very initial conversations around having some data inherit from a token group, like so:

```json
{
  "brand": {
    "type": "color",
    "primary": { "value": "#ff9900" },
    "secondary": { "value": "#00ff00" }
  }
}
```

Here the `type` gets applied to all tokens in the token group. I would still consider this explicit typing, because the user is specifying the types of tokens in the token file.

The case for explicit typing in the design tokens spec would be to simplify implementations for library maintainers and tool makers. Without explicit typing, the specification would need to define how libraries and tools should infer design token types from their values, which can get very complex. We have talked about not enforcing any kind of naming or grouping structure, so types could not be inferred from name or group. Another use-case to consider is component tokens.

The design token spec should define how tools understand the types of tokens so they can do something meaningful with them. Figma needs to know what a color is versus a padding or border width. In my mind, doing this explicitly will lead to fewer edge cases and potential bugs/discrepancies between tools. I do see it being more cumbersome than implicit typing, but I would much rather leave that implicit stuff to tool-makers.

An analogy might help as well: I think of the token spec like the CSS spec. Many people write plain CSS just fine. Some reach for something like Sass to improve the authoring experience, but it ultimately compiles to CSS. Libraries like Theo and Style Dictionary would be like Sass: they could output a valid, explicitly typed design token file from a syntax that infers design token types. Or, conceivably, Theo and Style Dictionary could understand a design-token-spec-compliant file as well (like how SCSS is a superset of CSS).

@blackfalcon there is nothing in Style Dictionary that applies attributes to all tokens in a file or in a group, but that could be achieved in a few ways. One would be to use a custom parser and modify the file's contents as they come into Style Dictionary. The other would be to use JS modules as the token source and apply the shared data with a JS function.
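A group-level type like the one in the JSON example above could be resolved by a consuming tool with a small recursive walk. This is only a sketch of the idea under the structure shown in this thread, not Style Dictionary's or Theo's actual behavior.

```python
# Sketch: flatten a nested token tree, letting tokens inherit "type"
# from the nearest enclosing group that declares one.
def flatten(node: dict, inherited_type=None, prefix=""):
    """Yield (name, value, type) triples, inheriting 'type' from groups."""
    group_type = node.get("type", inherited_type)
    for key, child in node.items():
        if key == "type":
            continue  # group-level metadata, not a token
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(child, dict) and "value" in child:
            # A token: its own "type" wins over the inherited group type.
            yield name, child["value"], child.get("type", group_type)
        elif isinstance(child, dict):
            yield from flatten(child, group_type, name)

tokens = {
    "brand": {
        "type": "color",
        "primary": {"value": "#ff9900"},
        "secondary": {"value": "#00ff00"},
    }
}
print(list(flatten(tokens)))
# [('brand.primary', '#ff9900', 'color'), ('brand.secondary', '#00ff00', 'color')]
```

Note that both tokens come out typed `color` even though neither declares a type itself, which is the "explicit but not repetitive" middle ground being discussed.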
It may be worth pointing out that in the current draft spec, the type property is optional: a token without one takes on the basic JSON type of its value.
So, a tool that supports this format is required to interpret things as follows:

```json
{
  "token with type": {
    "value": "#ff0000",
    "type": "color"
  },
  "token without type": {
    "value": "#00ff00"
  }
}
```

The first is a color token because we've set the type; the second is a string token because no type property is present.

In other words, every token in the file does have an unambiguous type (which, as others have already pointed out, is important for token files to be interoperable between different kinds of tools). However, the basic JSON types (string, number, boolean, etc.) probably aren't all that useful in the context of a design system (except perhaps numbers, which could be used for line-heights, aspect ratios, as inputs to modular scales, etc.). In practice, I therefore expect most tokens will include an explicit type.

For manually authored token files, typing out the type on every single token could get tedious, so it could also be set at the group or file level. The way it would work is that a token's type is determined as follows:

1. If the token has its own type property, use that.
2. Otherwise, if an enclosing group or the file sets a type, the token inherits it.
3. Otherwise, fall back to the basic JSON type of the token's value.
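The fallback behaviour described above (an explicit type wins, otherwise the basic JSON type of the value applies) can be sketched like this. It is an illustration of the draft-spec rule as described in this comment, not code from any real tool, and it omits group-level inheritance for brevity.

```python
# Sketch of the draft-spec fallback: explicit "type" if present,
# otherwise the basic JSON type of the token's value.
def resolve_type(token: dict) -> str:
    if "type" in token:
        return token["type"]
    value = token["value"]
    if isinstance(value, bool):      # check bool before number: bool is an int in Python
        return "boolean"
    if isinstance(value, (int, float)):
        return "number"
    if isinstance(value, str):
        return "string"
    if isinstance(value, dict):
        return "object"
    if isinstance(value, list):
        return "array"
    return "null"

print(resolve_type({"value": "#ff0000", "type": "color"}))  # "color"
print(resolve_type({"value": "#00ff00"}))                   # "string"
```

This is why the two tokens in the JSON example above come out as `color` and `string` respectively.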
So folks wanting to have files that only contain one kind of token (e.g. only colors) could set the type once at the file or group level instead of on every token.
Relevant to this conversation, I've created #72 to discuss file & group level properties. |
Reviewed by the spec editors on 2021-10-19.

Action to be taken: Update spec to reflect decision: #63 (comment)
* update(spec): Add `type` property to file and group levels. Update spec to reflect decision: #63 (comment)
* Fix prettier errors

Co-authored-by: Kevin Powell <[email protected]>
Closing this issue as the draft spec has been updated accordingly by PR #87. |
…86)

* Changes composite type to be pre-defined rather than user-defined
* Add border composite type
* Add typography composite type
* Add gradient composite type
* Adds font-weight type
* Removes font-weight from types to-do list
* Adds stroke-style composite type
* Adds transition type draft
* Adds issue links for all composite types
* Renames font to fontFamily

Co-authored-by: Kevin Powell <[email protected]>
Are token types optional or required?
Example:
Resolution: #63 (comment)