
Constant increase of requests against Dependencytrack #8

Closed
jon-rei opened this issue Dec 5, 2023 · 7 comments


jon-rei commented Dec 5, 2023

We have been running sbomreport-to-dependencytrack on our clusters for some time, and we can see that after a Dependencytrack restart, the number of ingress requests increases constantly.
In the code I can see that readiness for uploads is checked every second. Would it be possible to make this interval configurable, or even to add a timeout? I suspect (but cannot really prove it) that the readiness checks sometimes run indefinitely.

We have seen the rate rise to 4,000 requests/s. For context, we manage ~3,000 projects in Dependencytrack.
[Screenshot: request-rate graph, 2023-12-05 09:16:44]

I'd also be happy to raise a PR for these features myself.

takumakume (Owner) commented

@jon-rei

Thank you for your report!

When Dependency Track is processing a large number of SBOMs, it can take quite a while from the upload of an SBOM until processing completes. In the meantime, sbomreport-to-dependencytrack polls every second after sending each SBOM, so these checks can pile up and produce a large number of requests. For example, if roughly 3,000 uploads were stuck waiting at once, the 1-second polling alone would account for ~3,000 requests/s, close to the rate you observed.

I will add the ability to specify a timeout and a retry interval:

#9
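
Roughly the behaviour I have in mind, as a sketch (the `waitForProcessing` helper and the configuration values below are illustrative, not the actual implementation in #9): poll at a configurable interval and give up after an overall timeout, instead of checking every second with no upper bound.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// waitForProcessing polls isProcessing (standing in for a call to
// Dependency-Track's BOM processing-status endpoint) until processing
// finishes, the context times out, or an error occurs.
func waitForProcessing(ctx context.Context, isProcessing func() (bool, error), interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("gave up waiting for SBOM processing: %w", ctx.Err())
		case <-ticker.C:
			busy, err := isProcessing()
			if err != nil {
				return err
			}
			if !busy {
				return nil // upload fully processed
			}
		}
	}
}

func main() {
	// Illustrative configuration: overall timeout 5m, retry interval 10s.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	stillBusy := func() (bool, error) { return false, nil } // fake endpoint for the example
	if err := waitForProcessing(ctx, stillBusy, 10*time.Second); err != nil {
		fmt.Println("upload check failed:", err)
	}
}
```

With a bound like this, a stuck upload stops generating requests once the timeout elapses instead of polling forever.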


jon-rei commented Dec 5, 2023

Thanks for implementing this so quickly.
I think the only thing missing now is exposing these new options through the helm chart, e.g. something like the values sketched below.
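
A guess at what that could look like in values.yaml (the key names here are hypothetical; the actual names depend on how the chart exposes the new flags):

```yaml
# Hypothetical values.yaml keys for the new settings:
uploadCheck:
  timeout: 5m        # give up waiting for SBOM processing after this long
  retryInterval: 10s # poll the processing status at this interval
```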

takumakume (Owner) commented

@jon-rei
Oops, I forgot. The helm chart has been released in 0.0.10!


jon-rei commented Dec 8, 2023

Hi @takumakume,
Thanks again for implementing the new feature. Unfortunately, I've been distracted by other things and haven't really been able to test or investigate this issue much over the last few days.
But I can definitely see that the Dependencytrack API server ingress traffic is still constantly going up; at the point where it drops, the sbomreport-to-dependencytrack pod was restarted.
In the pod logs I occasionally see the following:

2023/12/08 11:07:54 ERROR: server.uploadFunc: upload failed: template: :1:13: executing "" at <.sbomReport.report.artifact.tag>: map has no entry for key "tag".

Could it be that the SBOM readiness check is not being properly aborted?
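
For reference, the error looks like what Go's text/template produces when executed with the missingkey=error option against data that lacks the referenced key; a minimal sketch (the data shape is illustrative, presumably an image referenced only by digest and therefore without a tag):

```go
package main

import (
	"fmt"
	"os"
	"text/template"
)

func main() {
	// missingkey=error makes a lookup of an absent map key fail hard.
	tmpl := template.Must(template.New("").
		Option("missingkey=error").
		Parse(`{{ .sbomReport.report.artifact.tag }}`))

	// An artifact carrying only a digest, with no "tag" entry.
	data := map[string]any{
		"sbomReport": map[string]any{
			"report": map[string]any{
				"artifact": map[string]any{"digest": "sha256:..."},
			},
		},
	}

	if err := tmpl.Execute(os.Stdout, data); err != nil {
		// Prints an error of the same shape as the log line above:
		// ... at <.sbomReport.report.artifact.tag>: map has no entry for key "tag"
		fmt.Println(err)
	}
}
```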

[Screenshot: request-rate graph, 2023-12-08 12:29:18]

takumakume (Owner) commented

@jon-rei

Thank you for the report!

<.sbomReport.report.artifact.tag>: map has no entry for key "tag".

This doesn't occur in my environment.
Could you share the SBOMReport from when this error occurs? Please also tell me which version of trivy-operator you are using.

Could it be that the check for the SBOM readiness is not properly aborted?

  • When you look at the Dependency Track access log, can you see a breakdown per endpoint?
  • How many requests does this software make compared to the number of webhook requests from Trivy Operator after SBOMReport generation?

One more thing: this software sends the SBOM again whenever it receives a webhook, even if the same SBOM has already been sent. Preventing retransmission with a cache or something similar could therefore improve things to some extent, as sketched below.
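
A minimal sketch of that idea, assuming a cache keyed by the artifact digest (all names here are illustrative, not existing code in this project):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// sentCache remembers which digests were uploaded recently, so duplicate
// webhook deliveries do not trigger duplicate SBOM uploads.
type sentCache struct {
	mu   sync.Mutex
	seen map[string]time.Time
	ttl  time.Duration
}

func newSentCache(ttl time.Duration) *sentCache {
	return &sentCache{seen: make(map[string]time.Time), ttl: ttl}
}

// shouldSend reports whether the digest has not been sent within the TTL,
// and records it as sent if so.
func (c *sentCache) shouldSend(digest string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if t, ok := c.seen[digest]; ok && time.Since(t) < c.ttl {
		return false
	}
	c.seen[digest] = time.Now()
	return true
}

func main() {
	c := newSentCache(10 * time.Minute)
	fmt.Println(c.shouldSend("sha256:abc")) // true: first delivery, upload
	fmt.Println(c.shouldSend("sha256:abc")) // false: duplicate within TTL, skip
}
```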


By the way, is this used in DeepL's infrastructure? 3,000 projects is a lot, so I'm curious!


jon-rei commented Dec 21, 2023

Hi @takumakume,
Sorry for taking so long to get back to you.
We've now rebuilt our Dependency Track from scratch, and since then the constant increase in requests has gone away and has not recurred.
I think it was just odd behaviour in our old setup; if it happens again, I'd suspect Dependency Track itself.
Thanks again for helping out so quickly. By the way, we use Dependencytrack to collect the SBOMs of all images running in our clusters.

takumakume (Owner) commented

Thanks!!
