Generating azurerm provider for go exceeds 'hard' memory limit #1264

Closed
waxb opened this issue Nov 5, 2021 · 18 comments
Labels
bug Something isn't working language/golang provider/azure Issue shows up using Azure provider waiting-on-answer

Comments

@waxb
Contributor

waxb commented Nov 5, 2021

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

cdktf & Language Versions

waxb@mbp:cdktf-azure-go$ cdktf --version
0.7.0
waxb@mbp:cdktf-azure-go$ cat cdktf.json 
{
    "language": "go",
    "app": "go run main.go",
    "codeMakerOutput": "generated",
    "terraformProviders": ["hashicorp/azurerm@~> 2.74"],
    "terraformModules": [],
    "context": {
        "excludeStackIdFromLogicalIds": "true",
	"allowSepCharsInLogicalIds": "true"
    }
}
waxb@mbp:cdktf-azure-go$ go version
go version go1.17.2 darwin/amd64

Expected Behavior

waxb@mbp:cdktf-azure-go$ cdktf get
⠧ downloading and generating modules and providers...
Generated go constructs in the output directory: generated

The generated code depends on jsii-runtime-go. If you haven't yet installed it, you can run go mod tidy to automatically install it.
waxb@mbp:cdktf-azure-go$ 

Actual Behavior

https://gist.github.com/waxb/78ecff27cd265525e290043da7db6c71

  | <--- Last few GCs --->
  | [49924:0x7f8761900000]   197321 ms: Mark-sweep 10085.5 (10275.2) -> 10071.7 (10276.9) MB, 12795.4 / 0.0 ms  (average mu = 0.142, current mu = 0.002) allocation failure scavenge might not succeed
  | [49924:0x7f8761900000]   210142 ms: Mark-sweep 10087.5 (10276.9) -> 10073.8 (10279.2) MB, 12795.8 / 0.0 ms  (average mu = 0.077, current mu = 0.002) allocation failure scavenge might not succeed
  | <--- JS stacktrace --->
  | FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory

cdktf get fails because generating the azurerm provider needs more memory than the default limit. The CLI prints this warning:

found NODE_OPTIONS environment variable without a setting for --max-old-space-size. The provider generation needs a substantial amount of memory (~13GB) for some providers and languages. So cdktf-cli sets it to NODE_OPTIONS="--max-old-space-size=16384" by default. As your environment already contains a NODE_OPTIONS variable, we won't override it. Hence, the provider generation might fail with an out of memory error.

Important Factoids

⚠️ Note: use underscores instead of dashes, as in the workaround below, not the dashes shown in the warning message.
Workaround: export NODE_OPTIONS=--max_old_space_size=16384, since azurerm takes ~13 GB to generate.

The workaround only works if you are seeing the "...NODE_OPTIONS variable, we won't override it..." message shown above.
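
To check whether the export actually took effect, here is a minimal Node/TypeScript sketch (my own, not part of cdktf) that prints the resulting V8 heap limit via the built-in v8 module:

// Sketch: confirm that NODE_OPTIONS actually raised V8's old-space limit.
// Run with ts-node, or drop the types and run it with plain node.
import { getHeapStatistics } from "v8";

const limitMiB = getHeapStatistics().heap_size_limit / 1024 ** 2;
console.log(`heap_size_limit: ${limitMiB.toFixed(0)} MiB`);
// With NODE_OPTIONS=--max_old_space_size=16384 exported, this should report
// roughly 16000 MiB; without it you get Node's default, which is far lower.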

References

This seems to be the line in question

I got the idea from here

@waxb waxb added bug Something isn't working new Un-triaged issue labels Nov 5, 2021
@ansgarm
Member

ansgarm commented Nov 5, 2021

Hi @waxb!
Thank you for reporting the issue. We're currently setting the limit to 10GB because the default is only something along the lines of 2GB 😅 Interesting that the azurerm provider needs even more – in the past it was usually the AWS provider that popped up with this issue.

I fear that this is something we should fix upstream in JSII, as we're now approaching memory requirements that I'd rather not impose on users. Fixing the root cause here sounds more reasonable to me.

@ansgarm
Member

ansgarm commented Nov 5, 2021

Just a guess (to be confirmed): The memory usage could be a side effect of aws/jsii#3091

@ansgarm
Member

ansgarm commented Nov 8, 2021

Just a guess (to be confirmed): The memory usage could be a side effect of aws/jsii#3091

Could not confirm this – it still failed with an OOM error after commenting out that CPU-heavy code.

@skorfmann skorfmann added language/golang provider/azure Issue shows up using Azure provider labels Nov 8, 2021
@jcdan3

jcdan3 commented Nov 9, 2021

I am getting the same error but with the provider "hashicorp/aws@~> 3.64.2".
Setting export NODE_OPTIONS="--max_old_space_size=20384" did not work on my side.

The output is:
  | <--- Last few GCs --->
  | [933216:0x54d70a0]   397793 ms: Mark-sweep 10091.3 (10282.0) -> 10079.4 (10286.0) MB, 6808.6 / 0.0 ms  (average mu = 0.081, current mu = 0.002) allocation failure scavenge might not succeed
  | [933216:0x54d70a0]   404596 ms: Mark-sweep 10095.3 (10286.0) -> 10079.8 (10286.5) MB, 6787.7 / 0.0 ms  (average mu = 0.043, current mu = 0.002) allocation failure scavenge might not succeed
  | <--- JS stacktrace --->
  | FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory

@edmundloo

I am getting the same error but with the provider "hashicorp/aws@~> 3.64.2". Setting export NODE_OPTIONS="--max_old_space_size=20384" did not work on my side.

The output is:

  | <--- Last few GCs --->
  | [933216:0x54d70a0]   397793 ms: Mark-sweep 10091.3 (10282.0) -> 10079.4 (10286.0) MB, 6808.6 / 0.0 ms  (average mu = 0.081, current mu = 0.002) allocation failure scavenge might not succeed
  | [933216:0x54d70a0]   404596 ms: Mark-sweep 10095.3 (10286.0) -> 10079.8 (10286.5) MB, 6787.7 / 0.0 ms  (average mu = 0.043, current mu = 0.002) allocation failure scavenge might not succeed
  | <--- JS stacktrace --->
  | FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory

I had a similar failure. The mistake I made was not making sure I actually had the memory available for Node to use on my device. It ended up working when I closed everything I could that was taking up memory and ran the command again.
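
If it helps to check that up front, here is a small Node/TypeScript sketch (just the built-in os module, nothing cdktf-specific) that prints how much memory is actually free before retrying:

// Sketch: report total and free memory before running the generation step,
// which reportedly needs on the order of 10–13 GiB.
import { freemem, totalmem } from "os";

const toGiB = (bytes: number) => (bytes / 1024 ** 3).toFixed(1);
console.log(`total: ${toGiB(totalmem())} GiB, free: ${toGiB(freemem())} GiB`);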

@jcdan3

jcdan3 commented Nov 9, 2021

Thanks for the quick reply. I did make sure that I had sufficient RAM available and am still facing the same issue.
Any other ideas?
Did you have to reload node or npm so that the setting is applied?

@edmundloo

Thanks for the quick reply. I did make sure that I had sufficient RAM available and am still facing the same issue. Any other ideas? Did you have to reload node or npm so that the setting is applied?

I didn't need to, but I did monitor my RAM usage to make sure that the setting did apply to me. I didn't have to do anything extra to make it work.

@waxb
Contributor Author

waxb commented Nov 9, 2021

@edmundloo do you get this message when it has started generating?

found NODE_OPTIONS environment variable without a setting for --max-old-space-size.
The provider generation needs a substantial amount of memory (~6-7GB) for some providers and languages.
So cdktf-cli sets it to NODE_OPTIONS="--max-old-space-size=10240" by default. As your environment already contains
a NODE_OPTIONS variable, we won't override it. Hence, the provider generation might fail with an out of memory error.

If not, then you still have the 10GB limit, and from your crash output it seems like this is the case.

@jcdan3

jcdan3 commented Nov 9, 2021

@waxb In your original post, it looks like you talked about setting the variable:
NODE_OPTIONS=--max-old-space-size=13384
but also
NODE_OPTIONS=--max_old_space_size=13384

Not sure what the impact is, but from the documentation the right form is --max-old-space-size.

Also, from looking at the code, the logic seems to prevent this from being used.

I am no node expert, but my understanding is that if we set NODE_OPTIONS=--max-old-space-size=13384, then we fall into the else branch at line 319, which overrides NODE_OPTIONS.
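
For illustration, a rough TypeScript sketch of how I read that logic (hypothetical, not the actual cdktf-cli source; the names are made up):

// Hypothetical sketch of the check being discussed – not the real implementation.
const DEFAULT_NODE_OPTIONS = "--max-old-space-size=10240";

function nodeOptionsForGeneration(env: NodeJS.ProcessEnv): string {
  const current = env.NODE_OPTIONS;
  if (current && !current.includes("--max-old-space-size")) {
    // NODE_OPTIONS is set but the dash-spelled flag is missing: warn and keep it.
    // A value like "--max_old_space_size=16384" lands here untouched, and V8
    // also accepts the underscore spelling, which is why that workaround sticks.
    console.warn("found NODE_OPTIONS environment variable without a setting for --max-old-space-size ...");
    return current;
  }
  // Otherwise (unset, or already containing the dash-spelled flag) the CLI
  // sets NODE_OPTIONS to its own default, overriding whatever was exported.
  return DEFAULT_NODE_OPTIONS;
}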

@waxb
Contributor Author

waxb commented Nov 9, 2021

@jcdan3
Note that --max-old-space-size is copied from the console.warn output and is a direct flag for node. I tried exporting it as well and couldn't work out why it wasn't working, then realised from the source code that it is not actually picked up, so I tried it with underscores and, voilà, it worked.
You can see the workaround uses underscores; I'll add a note to the original post to highlight this.

@jcdan3

jcdan3 commented Nov 9, 2021

Thanks for the feedback. I was not able to succeed with --max_old_space_size on CDKTF-CLI 0.7.0.
I found another way around it, which was to update to CDKTF-CLI 0.8.0 (pre-release); it worked out of the box without having to set NODE_OPTIONS.

This solved my issue.

@waxb
Contributor Author

waxb commented Nov 9, 2021

@jcdan3
I guess because of this from here.

@jcdan3

jcdan3 commented Nov 9, 2021

@waxb Agreed, I monitored my RAM and it was roughly 12.5 GB.

@DanielMSchmidt DanielMSchmidt added waiting-on-answer and removed new Un-triaged issue labels Nov 25, 2021
@DanielMSchmidt
Contributor

@waxb Is this solved with 0.8 for you as well?

@waxb
Contributor Author

waxb commented Dec 7, 2021

@DanielMSchmidt
In theory, yes.

@DanielMSchmidt
Contributor

👍 Great, then I'll close the issue for now

@adeturner

adeturner commented Jun 20, 2022

+1 to prioritise the fix for go + azurerm. I didn't see any minimum specs in the docs, and this is my setup:

16GB Windows laptop
8GB WSL ubuntu instance + 2GB swap
go 1.8.3
node 18.4.0
Terraform v1.2.3
cdktf 0.11.2

By shutting down VS Code I managed to get "cdktf get" to complete, but "cdktf synth" keeps bombing.

@github-actions
Contributor

I'm going to lock this issue because it has been closed for 30 days. This helps our maintainers find and focus on the active issues. If you've found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 26, 2022