[Spike] Validation and type conversion at REST/Model/Database layer #1057
For validation, I was thinking of letting users provide JSON Schemas to validate their controller parameters, and thought about using decorators for that purpose, like this:

```typescript
class Controller {
  createANumber(@validate(numberSchema) num: number) {
    return num;
  }
}
```

Unfortunately, I don't think it's possible to access the arguments from a parameter decorator, so the validation needs to be done in a method decorator. The question to answer, then, is: do we want this feature to be available on the OpenAPI operation decorators, or should it live in its own method decorator? In other words, which of the following two options should validation use?

```typescript
@get() // incorporate into `openapi-v3`
createANumber(@validate(schema) num: number) {}
```

```typescript
@validatable()
createANumber(@validate(schema) num: number) {}
```

For reference, the OpenAPI spec does not seem to support some useful fields that are present in JSON Schemas. Additionally, a very rough PoC on type coercion is up in #1256. Please let me know if you like the approach or have any questions about it. Thoughts @strongloop/lb-next-dev @raymondfeng @bajtos?
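As a rough illustration of the approach (names like `withValidation` are hypothetical, not the PoC code), a `@validate()`-style method decorator would effectively wrap the method in a proxy that checks each argument against its schema before delegating. Shown here as a plain wrapping function, since the decorator form only adds wiring on top; a real implementation would hand the schema to a full validator such as AJV:

```typescript
// Minimal JSON-Schema-subset check: only `type` is enforced here.
type Schema = { type: 'number' | 'string' | 'boolean' };

function validateAgainst(schema: Schema, value: unknown): void {
  if (typeof value !== schema.type) {
    throw new Error(`expected ${schema.type}, got ${typeof value}`);
  }
}

// What the method decorator would do under the hood: validate all
// arguments against their schemas, then call the original method.
function withValidation<R>(
  schemas: Schema[],
  fn: (...args: any[]) => R,
): (...args: any[]) => R {
  return (...args: any[]) => {
    schemas.forEach((s, i) => validateAgainst(s, args[i]));
    return fn(...args);
  };
}

const createANumber = withValidation([{type: 'number'}], (num: number) => num);
```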
@shimks IIUC we are talking about Tier 1 validation. I think your approach sounds reasonable, and I like the choice of JSON Schema as the validation representation. +1 for using method decorators for parameters; I'm not aware of any drawbacks to this. Given the lack of support for regex patterns for types in the OAI spec (https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.0.md#data-types), I'd go with option 2, a separate method decorator. Have you looked into LoopBack 3 and how it does this type of validation?
@b-admike This is what we have for LoopBack 3 so far: https://loopback.io/doc/en/lb2/Validating-model-data.html#using-validation-methods. Based on my short briefing with @raymondfeng, apparently this isn't good enough, and we should look to replace the validation functions with something more robust.
@shimks For the validation part, I think the jobs of the parameter decorator and the method decorator would be:
This is not related to the path decorator IMO, so let's keep it as a separate decorator, e.g.

```typescript
@validate()
createANumber(@validateLength(schema) num: number) {}
```

And +1 for @b-admike's comment.

Reading the Overview... is this spike for the whole validation Epic? I may need more time to catch up on the progress of type coercion.
@shimks OpenAPI spec supports And could you share more details like what's the usage of |
@jannyHou I can't I believe I missed that ( I think since |
Here is how I see OpenAPI parameter validation working:
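For context, an OpenAPI 3.0 parameter object carries its validation keywords under `schema`, which is a constrained subset of JSON Schema; this example is illustrative, not taken from the spike:

```typescript
// Example OpenAPI 3.0 parameter object: validation keywords such as
// `minimum`/`maximum` live under `schema`.
const limitParam = {
  name: 'limit',
  in: 'query',
  required: false,
  schema: {type: 'integer', minimum: 1, maximum: 100},
};
```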
As for Tier 1 validation, my understanding is that such validation rules will be declared at the controller method level. In that case, I'd prefer to let developers use transport-specific decorators, e.g. ones based on the OpenAPI spec. Upsides:
Downsides:
Regarding the enforcement of validation at runtime: as long as we are talking about validation of inputs for controller methods (or function handler based routes), I'd personally prefer to implement the validation in the REST layer, most likely as part of
While this is doable, I am concerned about the ramifications of generating proxy methods:
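One way to keep generated proxy methods debuggable (a sketch with hypothetical names, not the actual PoC code) is to wrap the original method while keeping a reference to it, so tests and troubleshooting can still reach the unwrapped implementation:

```typescript
// Replace a prototype method with a validating proxy while keeping the
// original function reachable for troubleshooting.
class NumberController {
  createANumber(num: number): number {
    return num;
  }
}

function proxyMethod(
  proto: any,
  name: string,
  validate: (args: unknown[]) => void,
): void {
  const original = proto[name];
  const proxied = function (this: any, ...args: unknown[]) {
    validate(args); // runs before the original method body
    return original.apply(this, args);
  };
  (proxied as any).original = original; // keep the unwrapped method reachable
  proto[name] = proxied;
}

proxyMethod(NumberController.prototype, 'createANumber', args => {
  if (typeof args[0] !== 'number') throw new Error('num must be a number');
});
```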
Here's what I have down so far for tier 1 validation: 6b40a33. Please look at the test file for the use case. The PoC does validation with JSON Schema, which I feel lines up better with our current support for the bottom-up approach; this way we'd also have access to popular validators such as AJV. I do agree that when looking at validation from the top-down approach, it's much more streamlined to do validation with an OpenAPI schema. We'd just need to either use an OpenAPI schema validator, or use an OpenAPI schema -> JSON Schema converter and then a JSON Schema validator. This would take care of both tier 1 and tier 2 validations. The PoC also shows validation done through a proxy method using decorators. (EDIT: I didn't know there was an actual
I don't think using decorators will impact performance more than it already does since all we're doing is running validation on the given arguments and then passing them back to the original method.
I also don't see problems with troubleshooting the proxied methods unless there are gaps in my knowledge.
I may need to investigate more for positional arguments stuff.
I think for JavaScript codebases, we can just offer simple validation functions that users can incorporate into their controller methods.
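For instance, such a helper could look like this (a sketch; the helper and controller names are made up):

```typescript
// A plain validation helper that JavaScript users can call at the top of
// a controller method, with no decorator support required.
function ensureNumber(value: unknown, name: string): number {
  if (typeof value !== 'number' || Number.isNaN(value)) {
    throw new Error(`${name} must be a number`);
  }
  return value;
}

class PlainController {
  createANumber(num: unknown): number {
    // explicit, imperative validation inside the method body
    return ensureNumber(num, 'num');
  }
}
```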
I'm about to start investigating tier 3 validation and was wondering if
An example that comes to my mind - I guess it would have to be implemented as a custom validator: Let's say we have a Maybe the example above is not a realistic one? Let's change it a bit: The
I did a bit of googling around SQL constraints and referential integrity, and could not find any other use cases besides the two we have already identified: uniqueness and referential integrity.
I think referential integrity can be accomplished at the tier 2 (model) level, as long as the schema provided to validate the model contains all of the information. Do you think the spike should still investigate implementing referential integrity as tier 3 validation? For validation of uniqueness, @raymondfeng and I have found that it should be possible to update the juggler definition to support uniqueness for the DBs that support the constraint. For DBs that don't support it, we think it's ok to defer the uniqueness validation feature; if users need it, they can use our tier 4 validation feature to do it themselves. Thinking about these validations in our described 3rd tier, I feel they can be accomplished at different tiers (2nd tier for referential integrity, 4th tier for non-native uniqueness support). Thoughts @bajtos and @raymondfeng? EDIT: we've found that support for the UNIQUE constraint cannot be added without access to
Hope I didn't misunderstand the whole spike: I think we want to apply validators when given the OpenAPI spec, not the other way around (generating the OpenAPI spec according to validation rules).
@bajtos I still feel that, as the initial implementation, it would be good to have a general validator instead of an OpenAPI-specific one. How about using composed decorators? e.g.

```typescript
async createUser(
  @validateEmailFormat()
  @requestBody()
  newUser: User
): Promise<User>
```

The OpenAPI decorator gets called first, so the validator decorator can consume the information in the spec. It guarantees that spec generation and validation target the same parameter, and also makes it easy to switch to other protocols. Moreover, we also get a chance to modify the spec in the validator. I am a little confused by tier 3: what does model collection level mean? What validation could tiers 1 and 2 not do that has to be passed to tier 3?
@jannyHou What we want to do is set a baseline for other potential APIs to do their validation on (or at least that was my approach). It should be possible to integrate
@raymondfeng @bajtos What kind of UX are we looking for in tier 4 validation? From what it sounds like, we need to give users access to the arguments of the controller method so they can add their own logic, but I feel the easiest solution, both for the users and for us, is for users to just do this type of validation in the method itself. What am I missing? Was the purpose to see if we can provide tools for extension devs to give users out-of-the-box validation functions (like uniqueness, presence of FK, etc.)?
Had a talk with @shimks regarding tiers 3 and 4. Taking email validation as an example, there could be two kinds of validations:

If we can pass the model and repository into the decorator function, then those two use cases don't differ; moreover, if we can also get access to the related models and repositories in the decorator function, that's even better: tiers 3 and 4 can be merged into tiers 1 and 2, and we only need two tiers, property level and method level. So the question would be: given a controller class constructor/prototype, can we get its context, like its models and repositories? Or should we just pass them in as inputs to the decorator function?
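A sketch of what merging the tiers could look like once the validator can see the repository (all names are hypothetical, and the repository interface is simplified to a synchronous `find`):

```typescript
// A uniqueness check that becomes possible at method level once the
// validator has access to the repository.
interface UserRecord { email: string }
interface UserRepository { find(filter: Partial<UserRecord>): UserRecord[] }

function validateUniqueEmail(repo: UserRepository, email: string): void {
  if (repo.find({email}).length > 0) {
    throw new Error(`email ${email} is already taken`);
  }
}

// In-memory stand-in for a real repository:
const repo: UserRepository = {
  find: f => [{email: 'taken@example.com'}].filter(u => u.email === f.email),
};
```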
Sounds great 👍
I think we actually need both. Consider the case where a controller method accepts model data. In that case, most (if not all) validation rules are already described in the LDL model definition. IMO, repository-json-schema should be able to take LDL/juggler validation rules and convert them to JSON Schema/OpenAPI spec. Implementation-wise, I agree the REST layer should apply validation based on the OpenAPI spec provided by individual endpoints (controller methods, route handlers, etc.)
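A rough sketch of what such a conversion could look like (the rule names are simplified stand-ins for juggler's validation settings, not the real repository-json-schema API):

```typescript
// Map a couple of LDL-style validation rules onto JSON Schema keywords.
interface LdlRules { required?: boolean; maxLength?: number; pattern?: string }

function ldlToJsonSchema(prop: string, rules: LdlRules) {
  const propSchema: Record<string, unknown> = {type: 'string'};
  if (rules.maxLength !== undefined) propSchema.maxLength = rules.maxLength;
  if (rules.pattern !== undefined) propSchema.pattern = rules.pattern;
  return {
    properties: {[prop]: propSchema},
    required: rules.required ? [prop] : [],
  };
}
```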
Some thoughts around the 4 tiers of validation: Tier 1:
Tier 2:
Tier 3:
Tier 4:
Validations can be declarative (declaring constraints) or imperative (writing custom validator functions). The overlap between parameter value validation and model instance validation is interesting. For example,
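To illustrate the declarative/imperative distinction with illustrative helpers (these are not LoopBack APIs):

```typescript
// Declarative: the constraint is stated as data and interpreted by a
// generic checker.
const emailConstraint = {type: 'string', pattern: '^[^@]+@[^@]+$'};

function checkDeclarative(value: string, c: typeof emailConstraint): boolean {
  return typeof value === c.type && new RegExp(c.pattern).test(value);
}

// Imperative: a custom validator function with arbitrary logic.
function checkImperative(value: string): boolean {
  const at = value.indexOf('@');
  return at > 0 && at < value.length - 1;
}
```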
Spike findings

Coercion:

```typescript
// in parseParams (pseudocode; `paramSpec` is the spec entry for `param`)
let params: any[];
// ... parameter parsing
const coercedParams: any[] = [];
for (const param of params) {
  const lbType = getLoopBackTypeBasedOnParamSpec(param, paramSpec);
  coercedParams.push(lbType.coerce(param, paramSpec));
}
return coercedParams;
```
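A runnable version of that loop under simplified assumptions (the type registry and the spec shape are hypothetical):

```typescript
// Each "LoopBack type" exposes a coerce() that turns the raw string value
// coming from the HTTP layer into the expected runtime type.
interface CoercibleType { coerce(raw: string): unknown }

const typeRegistry: Record<string, CoercibleType> = {
  number: {coerce: raw => Number(raw)},
  boolean: {coerce: raw => raw === 'true'},
  string: {coerce: raw => raw},
};

function coerceParams(
  rawParams: string[],
  paramSpecs: {type: string}[],
): unknown[] {
  const coerced: unknown[] = [];
  for (const [i, raw] of rawParams.entries()) {
    // fall back to string when the spec names an unknown type
    const lbType = typeRegistry[paramSpecs[i].type] ?? typeRegistry.string;
    coerced.push(lbType.coerce(raw));
  }
  return coerced;
}
```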
Validation:
Miscellaneous suggestions from the spike
Proposed discussion points regarding the spike results

Validation:
Proposed issues to come out of the spike
A follow-up issue for this spike has been created in #1306.
Timeboxed to 2 weeks
Overview
Validation

- `string` type; other constraints could be regex, length, email format
- `Customer` model

Type Conversion

- `Customer` stored in MongoDB, where customerId is stored using objectId.
- `Order` stored in MySQL, which doesn't have this native type. How to store customer using objectId as PK, and reference it from order in MySQL, where it has to be a string?

Acceptance Criteria
Questions to Answer
Validation
The goal is to be able to leverage legacy juggler as much as we can.
Do we need to open up extension points for this purpose? E.g. in LB3, we have operation hooks that users use for additional validation.
Note: Due to the different aspects of this spike, the spike owner can walk through the user experience and present it to the team before going further.
References
One of the user scenarios is described here: https://strongloop.com/strongblog/examples-of-validations-for-loopback-models/
cc @raymondfeng