
Request to add Item for checking JSON Interoperability related vulnerabilities to ASVS #1538

Closed
ImanSharaf opened this issue Feb 3, 2023 · 38 comments · Fixed by #1841
Labels
1) Discussion ongoing (Issue is opened and assigned but no clear proposal yet), _5.0 - prep (This needs to be addressed to prepare 5.0)

Comments

@ImanSharaf
Collaborator

I came across a research article that highlights the security risks associated with JSON interoperability and shows how JSON parsing inconsistencies can mask serious business logic vulnerabilities. The article explains how discrepancies across JSON parsers, combined with multi-stage request processing, can introduce such vulnerabilities. Here are four examples:

  • Inconsistent Duplicate Key Precedence (please see the Validate-Proxy Pattern example in the provided reference; an attacker can buy items for free)
  • Key Collision: Character Truncation and Comments (please see the Validate-Store Pattern example in the provided reference; this leads to privilege escalation)
  • Float and Integer Representation (please see the Inconsistent Large Number Decode example in the provided reference)
  • Permissive Parsing and Other Bugs (please see the Denial-of-Service: Segmentation Faults example in the provided reference)

Given the importance of JSON in web application communications and the security risks it poses, I believe it is crucial to add a new item to the ASVS to address these risks. This would ensure that web applications following the ASVS guidelines are secure against JSON interoperability vulnerabilities.
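
To make the duplicate key precedence issue concrete, here is a minimal sketch (my own illustration, not taken from the article) using Python's json module. The "first key wins" parser is emulated with object_pairs_hook, since real-world parsers differ in which precedence they apply:

```python
import json

# Duplicate "qty" key: which value wins depends on the parser.
payload = '{"item": "widget", "qty": 1, "qty": -1}'

# Python's json module keeps the *last* occurrence of a duplicate key.
last_wins = json.loads(payload)

# Emulate a hypothetical parser that keeps the *first* occurrence instead.
def first_wins_pairs(pairs):
    obj = {}
    for key, value in pairs:
        obj.setdefault(key, value)  # ignore later duplicates
    return obj

first_wins = json.loads(payload, object_pairs_hook=first_wins_pairs)

print(last_wins["qty"])   # -1: what a validating service might see
print(first_wins["qty"])  # 1: what a downstream service might act on
```

If the service that validates the order and the service that fulfils it disagree like this, the business logic check no longer covers the value that is actually used.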

@elarlang added the "1) Discussion ongoing" label on Feb 19, 2023
@tghosth added the "3) awaiting proposal" and "_5.0 - prep" labels and removed the "1) Discussion ongoing" label on Mar 14, 2023
@tghosth tghosth self-assigned this Mar 14, 2023
@tghosth
Collaborator

tghosth commented Mar 14, 2023

I agree this is an interesting question.

I need to review the attached research paper in the context of the items above.

Note that Jim suggested replacing:

5.3.6 [MODIFIED] Verify that the application protects against JSON injection attacks. (C4) 75

with

5.3.6 [MODIFIED] Verify that the application protects against JSON injection attacks by sanitizing JSON before JSON parsing. (C4) 75

Elar also noted that a sanitization requirement should be moved to 5.2.
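
As a hedged illustration of the "check before/while parsing" direction (my sketch, not Jim's proposed wording), a parser wrapper can simply reject objects containing duplicate keys instead of silently keeping one value; Python's json module exposes object_pairs_hook for exactly this kind of policy:

```python
import json

def reject_duplicate_keys(pairs):
    """Fail closed when a JSON object contains the same key twice."""
    obj = {}
    for key, value in pairs:
        if key in obj:
            raise ValueError(f"duplicate key in JSON object: {key!r}")
        obj[key] = value
    return obj

def strict_loads(raw: str):
    return json.loads(raw, object_pairs_hook=reject_duplicate_keys)

strict_loads('{"user_id": "fred"}')  # parses normally

try:
    strict_loads('{"user_id": "fred", "user_id": "fake"}')
except ValueError as exc:
    print("rejected:", exc)          # rejected: duplicate key in JSON object: 'user_id'
```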

@ImanSharaf
Collaborator Author

I believe that 5.3.6 doesn't fully cover JSON interoperability vulnerabilities. What do you think about this one:

  • Verify that the application consistently handles JSON interoperability issues, including inconsistent duplicate key precedence, key collisions (character truncation and comments), and permissive parsing.

@tghosth
Collaborator

tghosth commented May 31, 2023

This feels really hard to actually verify.

Again, I would prefer to avoid a list of attacks and rather focus on the defense, so maybe:

Verify that any different JSON parsers used in the application perform parsing in a consistent way to avoid JSON interoperability vulnerabilities.

@tghosth added the "2) Awaiting response" label and removed the "3) awaiting proposal" label on May 31, 2023
@tghosth
Collaborator

tghosth commented Sep 21, 2023

This feels really hard to actually verify.

Again, I would prefer to avoid a list of attacks and rather focus on the defense, so maybe:

Verify that any different JSON parsers used in the application perform parsing in a consistent way to avoid JSON interoperability vulnerabilities.

@elarlang @ImanSharaf @jmanico do you approve of this wording?

@ImanSharaf
Collaborator Author

ImanSharaf commented Sep 23, 2023

Verify that any different JSON parsers used in the application perform parsing in a consistent way to avoid JSON interoperability vulnerabilities.

This appears to be satisfactory. @tghosth

@elarlang
Collaborator

I just wonder how many people will understand what it means and why this requirement exists.

@jmanico
Member

jmanico commented Sep 25, 2023

I don't think there is anything wrong with using different JSON parsers in an app as long as they are updated. I actually... I actually..... agree... with Elar.

@tghosth
Collaborator

tghosth commented Sep 26, 2023

@elarlang we could add the suggested article as a footnote https://bishopfox.com/blog/json-interoperability-vulnerabilities

@jmanico the problem isn't using different parsers, the problem is if they have different parsing behaviours which lead to vulnerabilities.

My question here would be: do we want this to be more general than just JSON? I guess conceptually it could be a problem with any inconsistent parsing behaviour.

@elarlang
Collaborator

A link to materials makes sense - if someone reads the document.

Ok, so one parser gives one output and another parser gives another output.

But you don't use two parsers to decode the same message, do you? After decoding the JSON, you need to validate all items. Technically it's an interesting toy to play with, but maybe I have not connected all the dots to understand the risk in everyday practice. Anyway, if someone can prove it to be an issue, it's the way to go.

@jmanico
Member

jmanico commented Sep 28, 2023

Are you telling me that all JSON parsers are insecure? Let's be clear: having more parsers doesn't inherently make your system less secure. The real issue lies in using outdated or known insecure processors, and that's something we address separately.

Security isn't a one-size-fits-all game. It's about understanding the nuances and making informed decisions. So, let's not throw the baby out with the bathwater here. Instead, let's focus on keeping our parsers up-to-date and well-configured, and let's pay attention to known vulnerabilities. That's how you build a robust security posture, not by making sweeping generalizations.

@tghosth
Collaborator

tghosth commented Oct 1, 2023

Hi @jmanico, I think there is a misunderstanding here. We are not talking about vulnerable JSON parsers, we are talking about a situation where security assumptions are made based on the behaviour of one parser, but in fact a different parser behaves differently. Not necessarily insecurely, just differently.

For example, look at this one from the blog post linked above:

[image: example from the blog post]

@elarlang
Collaborator

elarlang commented Oct 1, 2023

@tghosth , @ImanSharaf

But you don't use two parsers to decode the same message, do you? After decoding the JSON, you need to validate all items. Technically it's an interesting toy to play with, but maybe I have not connected all the dots to understand the risk in everyday practice.

What are the preconditions for this vulnerability to be exploited in a successful attack?

@tghosth
Collaborator

tghosth commented Oct 3, 2023

@elarlang did you read through this link: https://bishopfox.com/blog/json-interoperability-vulnerabilities

It does give quite a few different examples.

@elarlang
Collaborator

elarlang commented Oct 3, 2023

I read it "diagonally". My point is, we should provide a TL;DR and a conclusion here - what is the attack scenario that we are trying to take down with this new potential requirement?

@tghosth
Collaborator

tghosth commented Oct 5, 2023

OK, say we have two micro-services. The first enforces a security control based on the "user_id" key in a JSON object; let's say it checks whether the user_id field matches the logged-in user, because we are going to do something sensitive for that user. It parses the JSON object, which contains two "user_id" keys: one has the value "fred", the other has the value "fake", and this particular parser's behaviour is to combine the key values. We are logged in as "fredfake", so this successfully matches and the check succeeds.

The JSON then gets passed to a different microservice. This one assumes that the check has already been done, so it can just process the JSON. It parses the JSON, but this parser's behaviour is to read the first value and ignore the duplicate, so it reads "fred" as the "user_id" value and performs a sensitive operation on "fred", even though they are not the logged-in user.

The overall concept is to avoid security assumptions being made based on parser behaviour. @elarlang
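
A simplified sketch of this scenario (both parser behaviours are emulated here with Python's object_pairs_hook purely for illustration; real services would be using different libraries or languages):

```python
import json

raw = '{"user_id": "fred", "user_id": "fake", "action": "delete_account"}'
logged_in_user = "fredfake"

# Service A (validation): hypothetical parser that concatenates duplicate string values.
def combine_duplicates(pairs):
    obj = {}
    for key, value in pairs:
        obj[key] = obj[key] + value if key in obj else value
    return obj

view_a = json.loads(raw, object_pairs_hook=combine_duplicates)
assert view_a["user_id"] == logged_in_user   # "fredfake" -> the check passes

# Service B (processing): hypothetical parser that keeps only the first occurrence.
def first_wins(pairs):
    obj = {}
    for key, value in pairs:
        obj.setdefault(key, value)
    return obj

view_b = json.loads(raw, object_pairs_hook=first_wins)
print(view_b["user_id"])  # "fred" - the sensitive action targets a different user
```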

@maizuka
Contributor

maizuka commented Oct 6, 2023

The architecture requirement 1.1.6 says that security controls should be centralized.

Verify implementation of centralized, simple (economy of design), vetted, secure, and reusable security controls to avoid duplicate, missing, ineffective, or insecure controls.

So are these recommended?

  • only one parser is used for user-input JSON
  • the input is not passed to other microservices in its raw format
  • if other microservices need JSON format, the parser reconstructs the JSON from the parsed result (see the sketch below)
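
A minimal sketch of that third point, assuming Python's json module: parse once at the boundary and forward only the re-serialized result, so downstream services never see duplicate keys or other quirks from the original bytes.

```python
import json

def canonicalize(raw: str) -> str:
    """Parse untrusted JSON once at the front and re-serialize it before
    forwarding, so all downstream services see the same normalized document."""
    parsed = json.loads(raw)   # the single, vetted parse
    return json.dumps(parsed)  # duplicate keys, comments, etc. cannot survive this

incoming = '{"user_id": "fred", "user_id": "fake"}'
print(canonicalize(incoming))  # {"user_id": "fake"} - whichever value this parser
                               # picks, every downstream service now agrees on it
```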

@tghosth
Collaborator

tghosth commented Oct 8, 2023

In principle, I think it is either about ensuring that the multiple parsers have the same behaviour, or about ensuring that the check and the use happen in the same parser.

@jmanico
Member

jmanico commented Oct 9, 2023

ChatGPT help:

Using two different JSON parsers in the same application can introduce a variety of security risks, particularly around inconsistency and complexity. Here's a more technical breakdown:

Attack Surface:
Increased Attack Surface: Two parsers mean two sets of vulnerabilities. If an attacker can't exploit one, they might be able to exploit the other.

Inconsistency:
Inconsistent Behavior: If the two parsers handle edge cases differently, it can create security vulnerabilities. For instance, if one parser is more lenient in accepting malformed JSON, it could be exploited for injection attacks.

Inconsistent Validation: It's possible for one parser to have a different validation mechanism than another. For example, one might automatically decode encoded entities, making it possible to bypass security controls like input validation filters.

Complexity:
Increased Complexity: More components generally mean more complexity, and complexity is the enemy of security. The more complex a system is, the harder it is to secure.

Configuration Challenges: Each parser might require its own set of configurations for secure operation. Managing these configurations, and ensuring they are securely implemented, becomes challenging.

Version Management: Keeping both parsers up to date is twice as much work. An outdated parser can have known vulnerabilities that are exploitable.

Data Flow: If JSON data is passed between the two parsers for some reason (e.g., one parser is used for an initial quick validation, and another is used for a more complex operation), you have to ensure that the security controls are consistent across the data flow, which is hard to manage.

Logging and Monitoring:
Disparate Logging: Two parsers may log differently, making it difficult to track malicious activity that spans both.

Error Handling: Inconsistent error messages between the two parsers could leak sensitive information or confuse administrators/developers.

Recommendations:
If you have to use two JSON parsers, consider the following:

Least Privilege: Configure both to operate on the principle of least privilege. They should only have the minimum necessary permissions to function.

Consistent Configuration: Ensure that both parsers are configured as similarly as possible to reduce the chances of inconsistent behavior.

Up-to-Date: Keep both parsers updated to their latest secure versions.

Security Auditing: Continuously audit and monitor both parsers for vulnerabilities and unexpected behavior.

Uniform Logging: Implement a unified logging mechanism that consolidates logs from both parsers in a consistent manner for easier analysis.

Thorough Testing: Both unit tests and security tests like fuzzing should be conducted to ensure that the parsers behave consistently and securely.

In summary, the main risk boils down to the increased complexity and inconsistency that come with using two different JSON parsers. You're not just doubling your work, but potentially squaring it, as you have to consider interactions between the two parsers as well. The "rule of least complexity" is usually a good principle to follow in security, and this situation is no exception.
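
For the "Thorough Testing" point above, here is a small differential-testing sketch (my own illustration; the second parser is emulated with object_pairs_hook and would in practice be the application's actual second library): run the same samples through both parsers and flag any disagreement.

```python
import json

def other_parser(raw):
    # Stand-in for a second parser whose duplicate-key precedence differs
    # (first occurrence wins instead of last).
    return json.loads(raw, object_pairs_hook=lambda pairs: dict(reversed(pairs)))

SAMPLES = [
    '{"a": 1, "a": 2}',   # duplicate keys
    '{"n": 1e400}',       # very large number
    '{"s": "\\ud800"}',   # lone surrogate escape
]

for sample in SAMPLES:
    try:
        a, b = json.loads(sample), other_parser(sample)
        verdict = "consistent" if a == b else f"MISMATCH: {a!r} vs {b!r}"
    except ValueError as exc:
        verdict = f"rejected: {exc}"
    print(f"{sample:25} -> {verdict}")
```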

@tghosth
Collaborator

tghosth commented Oct 22, 2023

@elarlang

So maybe I would update the wording to:

Verify that different JSON parsers used in the application perform parsing in a consistent way to avoid vulnerabilities caused by different parsers interpreting edge cases differently.

Do you think this makes it any easier to understand the reason here?

@elarlang
Collaborator

Yes, it is (at least for me) more understandable, and maybe even mention those "features" there as an example (JSON Interoperability).

@tghosth
Collaborator

tghosth commented Oct 23, 2023

@elarlang do you mean like:

Verify that different JSON parsers used in the application perform parsing in a consistent way to avoid vulnerabilities caused by different parsers interpreting edge cases differently. For example, how duplicate keys are processed.

@elarlang
Collaborator

By proposing "JSON Interoperability", my intent was to provide a "Google this keyword" handle for understanding the requirement.

In the proposed requirement text, maybe we don't need "edge cases" - if they just work differently, it's already problematic.

And now comes the next set of questions:

  • category?
  • cwe?
  • level?

@tghosth
Collaborator

tghosth commented Oct 23, 2023

Ah ok, I see what you mean.

So I would go with:

Verify that different JSON parsers used in the application perform parsing in a consistent way to avoid JSON Interoperability vulnerabilities caused by different parsers interpreting edge cases differently.

For category, V13.1, as this is about parsers and it is in fact similar to 13.1.1.

For CWE, according to this, it should potentially be CWE-436.

Level, I would say 1 for now.

@elarlang
Collaborator

In the proposed requirement text, maybe we don't need "edge cases" - if they just work differently, it's already problematic.

This "edge cases" wording is actually becoming more of an undefined thing.

Level - I was thinking more about something between Level 2 and Level 3, clearly not Level 1. Level 1 is not justified for me, based either on testability or on risk.

ok with CWE and category.

@tghosth
Collaborator

tghosth commented Oct 25, 2023

So like this?

Verify that different JSON parsers used in the application perform parsing in a consistent way to avoid JSON Interoperability vulnerabilities.

I could live with L2 since it is a little niche.

@elarlang
Collaborator

yes

@elarlang added the "5) awaiting PR" label and removed the "2) Awaiting response" label on Oct 25, 2023
@tghosth
Collaborator

tghosth commented Oct 25, 2023

Sorry to drive you crazy, @elarlang @ImanSharaf

Do you think we should specify JSON parsers or make it more general, like:

Verify that different parsers used in the application for the same data type (e.g. JSON parsers, XML parsers) perform parsing in a consistent way to avoid issues such as JSON Interoperability vulnerabilities.

@elarlang
Collaborator

this abstraction makes sense

@jmanico
Member

jmanico commented Oct 25, 2023 via email

@elarlang
Collaborator

elarlang commented Oct 25, 2023

As long as you keep your parsers up to date (which we already cover) this is not a security issue. You’re all chasing a functionality problem, not a security problem.

@jmanico - how is this aligned with your previous comment? #1538 (comment)

Even if it is "just a functionality problem" - incorrectly working functionality is a cause of integrity problems (if one interpreter "translates" a value one way and another interpreter translates it another way).

@maizuka
Contributor

maizuka commented Oct 25, 2023

I am asking these questions for my own understanding:

  1. How easy is it to guarantee that different parsers have the same behavior? The behavior of each parser is feature-dependent, so they should not be held responsible for behavior that is not defined in standards such as RFCs.

  2. Why would you need to allow multiple parsers for user JSON? In general, it's hard to imagine a situation where different parsers repeatedly decode the same object. There may be cases where multiple parses are performed in parallel for performance reasons, but this can be avoided by providing one parser at the front.

@elarlang
Collaborator

I agree with @maizuka's concerns and also with the previous comment (#1538 (comment)).

I can also understand technically how the problem may occur, but I find it such an edge case that I wouldn't mind keeping this requirement out of the ASVS. If it goes in, I'm ok with the last proposed wording.

@tghosth
Collaborator

tghosth commented Oct 31, 2023

  2. Why would you need to allow multiple parsers for user JSON? In general, it's hard to imagine a situation where different parsers repeatedly decode the same object. There may be cases where multiple parses are performed in parallel for performance reasons, but this can be avoided by providing one parser at the front.

The easiest example I can think of is if you have different microservices written in different languages which both need to parse JSON.

  1. How easy is it to guarantee that different parsers have the same behavior? The behavior of each parser is feature-dependent, so they should not be held responsible for behavior that is not defined in standards such as RFCs.

So this is a good question - how is that actually actionable?

I wonder if we should refocus this to require that only a single parser is used, on the assumption that being sure about parser behaviour in edge cases is problematic.

What do you think @maizuka @elarlang ?

@elarlang
Collaborator

The easiest example I can think of is if you have different microservices written in different languages which both need to parse JSON.

And in this case the client communicates directly with both microservices? Not like sending JSON to one service and then this service communicating with another service? And both services do input validation? Then we are already going against single vetted solutions (1.1.6).

So, yes, technically it's possible to do it that way, but... I'm not sure I could call it a nice architecture choice. Maybe the BFF (Backend for Frontend) idea suits here better.

@maizuka
Contributor

maizuka commented Oct 31, 2023

The first thing that comes to mind from your example is when two sections of a page retrieve data from different web APIs. The second is when two consecutive pages use different parsers.

The former can lead to inconsistent views within the page, similar to one of the HTTP parameter pollution issues. I can't think of any workaround other than using just one parser.

In the latter case, the problem is when different parsers receive a single user input in succession, so one parser must use the other parser's processing results.

@tghosth
Collaborator

tghosth commented Oct 31, 2023

Ok so, my question was:

I wonder if we should refocus this to require that only a single parser is used, on the assumption that being sure about parser behaviour in edge cases is problematic.

@maizuka
Contributor

maizuka commented Oct 31, 2023

I think so at the moment. I have found few materials discussing how to mitigate the risk.

@elarlang added the "1) Discussion ongoing" label and removed the "5) awaiting PR" label on Nov 11, 2023
@tghosth tghosth linked a pull request Jan 24, 2024 that will close this issue
@tghosth
Collaborator

tghosth commented Jan 24, 2024

Following this, #1538 (comment)

I think this requirement still makes sense and actually 13.1.1 should be merged into here.

I think L2 makes more sense.

Opened PR #1841

elarlang pushed a commit to elarlang/ASVS that referenced this issue May 9, 2024
elarlang pushed a commit to elarlang/ASVS that referenced this issue May 9, 2024
jmanico pushed a commit that referenced this issue May 9, 2024
* label correction for 13.1.1 + 5.5.5, #1538

* label correction for 11.1.7, 11.1.8 #1272

* label correction for 7.2.6 #1890, #1902

* label correction for 13.1.1 + 5.5.5, #1538

---------

Co-authored-by: Elar Lang <[email protected]>