[Proposal] Additional fields to apply transforms to #909
@Sidnioulz thank you for this proposal! Token overrides, or tokens with context-specific values like you describe (light/dark or breakpoint-specific), are something I have thought a lot about. I have gone back and forth between the two methods for token overrides: the single-token approach, as you point out in this proposal, and an 'overrides' or file-based approach. Both have their merits and downsides. Do you think you would be able to include example code for how you would expect others to use this new API? Would it be like this?

```js
module.exports = {
  source: [],
  platforms: {
    css: {
      valueTransformFields: ['value', 'value_lang-el', 'value_lang-jp', 'value_hc', 'value_breakpoint-md']
      // ...
    }
  }
}
```

I would expect this array field to be the same for all platforms if the token source is the same (they would all have the same value attributes). One other potential route is to have a generic, and dangerous, `complete` transform type that takes the entire token as input and returns the entire transformed token. I could see people shoot themselves in the foot with this approach, but it would also be more generic.

```js
StyleDictionary.registerTransform({
  name: 'danger',
  type: 'complete',
  transformer: (token) => {
    return {
      ...token,
      value: /* some transformation */
    }
  }
});
```
Thanks for your feedback, @dbanksdesign! Your example of API usage is spot on; a simple array would be needed. I did forget to comment on it; I imagined calling it
I agree both approaches have pros and cons, and I think there are arguments in favour of a single file with multiple values. To me, the multi-file approach is excellent for white-label systems or themes, where a whole value set is pulled in but value sets from different files may not necessarily be mixed.

In contrast, some formats require all token values to be processed together. With Tailwind, we'd be looking at `addComponents` to generate a new class name that has conditional values, for instance. With multiple files, I'd need to output all my tokens to a JS or JSON output format and re-process them separately to build the Tailwind API. I'd completely lose the interest of having a

Having all values bundled in a single file doesn't prevent making outputs that use a specific value set, e.g. if I declared:

```json
{
  "token": {
    "value": "#111",
    "value_dark": "#ccc"
  }
}
```

I could then make a run with:
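The snippet that followed appears to have been lost in extraction. The idea, per the PoC described later in this thread, is to point a build at a specific value field with a fallback. A hypothetical config in that spirit, where `valueField` and `valueFieldFallback` are invented option names, not an existing Style Dictionary API:

```js
// Hypothetical dark-mode build: transform both fields, then output the
// `value_dark` field, falling back to `value` where it is absent.
// `valueField` and `valueFieldFallback` are invented names.
module.exports = {
  source: ['tokens/**/*.json'],
  platforms: {
    'css-dark': {
      transformGroup: 'css',
      valueTransformFields: ['value', 'value_dark'],
      files: [{
        destination: 'dark.css',
        format: 'css/variables',
        valueField: 'value_dark',
        valueFieldFallback: 'value'
      }]
    }
  }
}
```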
And if I only had some tokens with a

So I could technically replicate a multi-file approach with my proposal, though the opposite isn't true. From that, I infer there could be examples where multi-file is better suited, and others where multi-value is. I'd be happy to contribute extra examples to the codebase too, alongside the code to implement the proposal.

Re: the complete transforms, I had initially thought of storing the extra values as attributes, but if I did that, I'd lose the expressive power of declaring how to transform values; I'd have to redo all the value transforming manually in my formats. Similarly, if we had complete transforms, we wouldn't be able to directly use existing value transforms declaratively. For color, size, etc., we'd need to create complete transforms that re-apply both the value transform we seek (hsl, pxToRem, etc.) and the reference translation. That part is far, far from trivial to handle in transforms, and I believe it's best suited to the engine itself, especially as it may loop to handle multiple layers of references. That's what pushed me to write the proposal: you already have all the logic for handling value referencing and transforming in a very easy way, and new use cases can be unlocked by widening what that logic applies to. It took me some time to get used to transforms applying only to attributes, name and value, but I now clearly see the added value (!!) in doing this rather than whole transforms, and I'm grateful this structure is in place :)

For me, one particular advantage of allowing multiple values in a file is that it allows us to locally see all possible values for a token. It makes it easier to audit issues in the token system, as some tools (e.g. Figma) can't yet be connected to a source of truth. If designers can review a single JSON block to fully audit a spec, it's easier on them than opening many files.
On referencing, there's one issue I can see happening, best explained with an example. Let's say we start with:

```json
{
  "foo": {
    "value": "#111"
  },
  "bar": {
    "value": "{foo.value}"
  }
}
```

Everything is fine and our tokens are in sync. Then, we add:

```json
{
  "foo": {
    "value": "#111",
    "value_dark": "#444"
  }
}
```

Suddenly, `bar` only tracks `foo`'s default value: nothing says whether its reference should also cover `value_dark`, so the two tokens can silently fall out of sync in dark mode. I have an idea on how to solve this, which would lead to side effects on token referencing (good or bad, that remains to be seen). I could open a separate issue for that to help keep this conversation on a single topic, if that's ok with you?
@dbanksdesign is it ok with you if I start working on a PoC?
Still WIP, and not yet ready to present, but I'm writing the PoC on https://github.com/Sidnioulz/style-dictionary/tree/feat/transform-extra-fields, testable on an extra-value-fields example. It does transform multiple value fields, and adds the ability to use a specific value field (with or without fallbacks) for a format, which enables a good amount of workflows. I'll post again when I'm done porting the code, writing UTs, checking types and docs, etc., and I'll add code samples and sample outputs to this issue when that's done.
I'd like to propose a PR to allow applying all `value` transforms to additional fields.

### Underlying need
Design tokens are combinations of name-value pairs, plus rules that make it intuitive how tokens can be combined, often based on their names. In our DS, we chose to keep the same token names and deliver slightly adjusted token values to different markets, e.g. CJK font-family tokens. We use the same mechanism to adjust size tokens to device sizes, and we're looking into adding high-contrast support via this mechanism too.
This means we have tokens like:
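The example that followed seems to have been dropped during extraction. A hypothetical token in that shape, using the `value_lang-*` naming seen elsewhere in this thread (the token path and font stacks are invented):

```json
{
  "font": {
    "family": {
      "body": {
        "value": "Helvetica, Arial, sans-serif",
        "value_lang-jp": "'Hiragino Sans', 'Yu Gothic', sans-serif",
        "value_lang-el": "Roboto, 'Noto Sans', sans-serif"
      }
    }
  }
}
```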
The problem we're facing is that we cannot have the extra value fields transformed.
### The proposal
I'd like to open a PR that lets us define a `valueTransformFields` array (defaulting to `['value']`) at the platform options level. This array would then be read throughout the code where `value` is currently read, and transforms would apply to each property in the array when defined. If there are codepaths that throw when `value` is undefined for a token, they would then throw if no value field is transformed for that token. Alternatively, it'd also work for us to define `extraValueTransformFields` and keep `value` mandatory throughout.

### Why this proposal
To me, this approach better represents the fact that we want to adjust values to a context. We aren't trying to build a white-label DS here, and we need to deliver all the font-family values together, for instance.
I believe this proposal is a better fit than re-resolving refs and re-transforming in our formats, as it keeps transforms focused on outputting values and formats focused on outputting files.
I believe it also results in a cleaner token structure for our use case than defining multiple tokens with different names and a single value field. If we had sibling tokens encoding the "language-jp" bit in their name, our transforms would be significantly harder to write as we'd need to look for sibling tokens for every token being output, and then re-filter the sibling tokens out.
It also adds flexibility to the kind of operations transforms can do without impacting newcomers who do not need the option.
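As a sketch of what reading `valueTransformFields` throughout the transform code could look like (illustrative only — `applyValueTransforms` and the token shape are invented for this example, not actual Style Dictionary internals):

```javascript
// Sketch: run each value transform over every configured value field
// instead of hard-coding `value`. All names here are illustrative.
function applyValueTransforms(token, transforms, valueTransformFields = ['value']) {
  const result = { ...token };
  for (const field of valueTransformFields) {
    // Extra fields stay optional: skip fields this token doesn't define.
    if (result[field] === undefined) continue;
    for (const transform of transforms) {
      result[field] = transform(result[field]);
    }
  }
  return result;
}

// Example: a px-to-rem transform applied to the default and a
// breakpoint-specific value in one pass.
const pxToRem = (value) => `${parseFloat(value) / 16}rem`;
const token = { name: 'size-gap', value: '16px', 'value_breakpoint-md': '24px' };
const out = applyValueTransforms(token, [pxToRem], ['value', 'value_breakpoint-md']);
// out.value → '1rem'; out['value_breakpoint-md'] → '1.5rem'
```

The key point is that existing value transforms need no changes; only the loop that feeds them widens from one field to the configured list.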
### What this proposal isn't

Here, I'm not discussing composite tokens. They are a non-goal to me; I don't want to apply different transforms to the `value` fields I'm proposing. I'm perfectly okay with storing individual tokens in SD and then building my composite tokens in my formatters.

I'm hoping to receive feedback from the maintainers here, and happy to commit to a PR. Thanks.