SFDX creates enormous log files on my boot drive #1408

Closed
klivian opened this issue Feb 13, 2022 · 35 comments · Fixed by forcedotcom/sfdx-core#818
Labels
bug · investigating · logger-replacement

Comments

@klivian

klivian commented Feb 13, 2022

Summary

SFDX creates many gigabytes of log files on my primary boot drive, causing the drive to run out of space.

Steps To Reproduce:

NB: This is not a repository-specific issue. Let me know if cli isn't the right repo for this.

  1. Use SFDX daily across multiple client projects on a Windows machine, averaging 10 hours a day :-(
  2. Look at %USERPROFILE%/.sfdx folder
  3. Check size of sfdx.log

Expected result

Should exist, but not be tens or hundreds of gigabytes in size

Actual result

This week it was 93.1 GB in size. When I first discovered this issue on Jan 19th, it was 103.5 GB.

System Information

{
"cliVersion": "sfdx-cli/7.136.2",
"architecture": "win32-x64",
"nodeVersion": "node-v16.13.2",
"pluginVersions": [
"@oclif/plugin-autocomplete 0.3.0 (core)",
"@oclif/plugin-commands 1.3.0 (core)",
"@oclif/plugin-help 3.3.1 (core)",
"@oclif/plugin-not-found 1.2.6 (core)",
"@oclif/plugin-plugins 1.10.11 (core)",
"@oclif/plugin-update 1.5.0 (core)",
"@oclif/plugin-warn-if-update-available 1.7.3 (core)",
"@oclif/plugin-which 1.0.4 (core)",
"@salesforce/sfdx-diff 0.0.6",
"@salesforce/sfdx-scanner 2.12.0",
"alias 1.2.0 (core)",
"auth 1.8.1 (core)",
"config 1.3.15 (core)",
"generator 1.2.1 (core)",
"info 1.2.0 (core)",
"salesforcedx 52.0.0",
"├─ salesforce-alm 53.7.6",
"├─ data 0.4.11",
"├─ org 1.6.6",
"├─ apex 0.2.2",
"├─ custom-metadata 1.0.12",
"├─ schema 1.0.7",
"├─ templates 51.5.0",
"├─ limits 1.2.1",
"├─ @salesforce/sfdx-plugin-lwc-test 0.1.7",
"└─ user 1.3.0",
"sfdx-cli 7.136.2 (core)",
"source 1.8.9 (core)",
"telemetry 1.4.0 (core)",
"trust 1.1.0 (core)"
],
"osVersion": "Windows_NT 10.0.22000"
}

Additional information

@klivian klivian added the investigating label Feb 13, 2022
@github-actions

Thank you for filing this issue. We appreciate your feedback and will review the issue as soon as possible. Remember, however, that GitHub isn't a mechanism for receiving support under any agreement or SLA. If you require immediate assistance, contact Salesforce Customer Support.

@ImJohnMDaniel

Yeah, I saw the same thing. My log file was 333 GB and I deleted it just yesterday. Granted it had been approximately 2 months since I last purged that log file, but it was surprising to see it grow that much in such a short time.

@iowillhoit
Contributor

iowillhoit commented Feb 14, 2022

Yeesh, thanks for bringing this up. Do either of you have SFDX_LOG_LEVEL set (docs)? Having this set to debug or trace would drastically increase the number of logs in that file.

@klivian Are you saying that you got a 93.1GB log file in a single week? How long have you been working with sfdx on your machine?

@ImJohnMDaniel Are you also on Windows? How long have you been aware of this file, and how often have you been clearing it?
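
A quick way to check this from a shell (Git Bash/WSL on Windows, or a macOS/Linux terminal); this is only a sketch, and the warn default below is assumed from the linked docs rather than confirmed in this thread:

# Print the current value; empty/unset means the default is in effect
echo "SFDX_LOG_LEVEL=${SFDX_LOG_LEVEL:-<not set>}"

# If it was set to debug or trace, drop back to the documented default
# (assumed to be warn) for this shell session
export SFDX_LOG_LEVEL=warn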

@ImJohnMDaniel

ImJohnMDaniel commented Feb 14, 2022

I am on macOS. I don't have SFDX_LOG_LEVEL set. I have been aware that the log file can get excessive for a year or more. I clear it out about once every 3 months or so. It routinely gets into the GB range quickly. Having said that, I have never seen it get that big before.

@iowillhoit
Contributor

Yea, 333GB is huge. Thanks for the additional info. I'll look into it.

@iowillhoit
Contributor

Hey @ImJohnMDaniel, could you run this to see if there is a single message that is especially noisy? It won't account for logs with small variations, but it's a good start to rule out something obvious. You'll need jq installed if you don't have it already.

@klivian, this might work for you too if you are using WSL or Git Bash on your machine. I've not tested it there, though.

jq -c -r '. | { name, hostname, log, msg }' ~/.sfdx/sfdx.log | uniq -c | sort --reverse | head -n 20
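
A variant of the same idea that may give truer counts, since uniq -c only collapses adjacent duplicates; grouping by msg alone is an assumption about what is most useful here:

# Count the most frequent msg values across the whole file
jq -r '.msg' ~/.sfdx/sfdx.log | sort | uniq -c | sort -rn | head -n 20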

@tprouvot

Same issue here on Windows (more than 98 GB). I tried updating sfdx (two versions), but nothing changed.

@klivian
Author

klivian commented Feb 16, 2022

Just back to this after a few days off. No SFDX_LOG_LEVEL set; here is the output from the jq command:

5 {"name":"sfdx","hostname":"WINDOWS-N6U49BG","log":null,"msg":""}

@ImJohnMDaniel

ImJohnMDaniel commented Feb 16, 2022

@iowillhoit, here is the jq output requested. Note that this was run on the current sfdx.log, not the large one that I deleted last week; I have already purged that from the recycle bin.

jq-review-of-imjohnmdaniel-sfdx.log

@iowillhoit
Contributor

Ok, thanks everyone. Was hoping we'd see something obvious. I will ask internally to see if I can find a large log file to look over.

@JonnyPower

Just came across this today as well: a 360 GB log file. I purged it before analyzing it (sorry), but I'll keep an eye on how it grows over the next while and provide better information if it happens again.

@iowillhoit
Contributor

Hey @JonnyPower, no worries about deleting it. It's looking likely that we will be implementing a rotating log file to prevent these huge files in the future. Will post back here once it's released or if our plans change 👍

@iowillhoit iowillhoit added the bug label Feb 23, 2022
@uip-robot-zz

This issue has been linked to a new work item: W-10745422

@klivian
Author

klivian commented Feb 23, 2022

@iowillhoit - as part of the work on this, could I ask (if it isn't already planned) whether there could be a way to configure the storage location for the log file(s)? I have ample space on my other drive but, to the best of my knowledge, there is no way of telling SFDX to use it.

@iowillhoit
Contributor

@klivian it doesn't look like this is currently possible (source). This would be a much larger change, since several commands rely on configs in this directory. However, the defaults on the log rotation should be reasonable enough that you won't ever get close to 360 GB again.

@bbarlowOpFocus

I had set up SFDX on my current machine back in July. At the beginning of February my log file was over 800 GB. I am glad to know that this is being addressed.

@iowillhoit
Contributor

A change to implement a rotating file log has been released in today's release candidate (7.144.0). By default, the log file will rotate daily and we will keep two backups. These defaults can be modified with environment variables. More info in today's release notes. Please install the RC and let us know if you run into any issues. Thanks!
• Update with npm: npm install -g sfdx-cli@latest-rc
• Update with Installer: sfdx update stable-rc (run sfdx update stable to rollback)
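
For reference, a sketch of overriding those rotation defaults in a shell profile or CI step. SFDX_LOG_ROTATION_PERIOD is confirmed further down in this thread; SFDX_LOG_ROTATION_COUNT is an assumed name, so check the release notes before relying on it:

# Assumed defaults per the comment above: rotate daily, keep two backups
export SFDX_LOG_ROTATION_PERIOD=1d   # e.g. 1d or 1w
export SFDX_LOG_ROTATION_COUNT=2     # number of rotated backups to keep (assumed variable name)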

@klivian
Author

klivian commented Mar 30, 2022

@iowillhoit Been testing since the 7.144.0 RC (now on 7.144.2 as of a few minutes ago) - log sizes are dramatically reduced. I have sfdx.log at 3 KB, sfdx.log.0 at 0 KB, and the new sf.log (in its new directory) also at 0 KB. Will continue to monitor until we go from RC to GA on this version.

@iowillhoit
Contributor

That's great! Thank you for testing it out @klivian

@mshanemc mshanemc closed this as completed Apr 1, 2022
@austin-convoy

I've had the same issue pop up again, this time in the ~/.sf folder. Again, sf logs are taking up upwards of 200 GB before my storage maxes out. Should I file a separate issue?

@jverelst

jverelst commented Sep 8, 2022

@iowillhoit I just updated SFDX in our Azure pipeline, which runs every night to run all tests in our partialcopy org. I fear that this change has caused an issue for long-running commands that pass the 00:00h time mark:

2022-09-07T20:03:42.6844274Z ##[section]Starting: Bash
2022-09-07T20:03:42.6851690Z ==============================================================================
2022-09-07T20:03:42.6851960Z Task         : Bash
2022-09-07T20:03:42.6852180Z Description  : Run a Bash script on macOS, Linux, or Windows
2022-09-07T20:03:42.6852450Z Version      : 3.201.1
2022-09-07T20:03:42.6852636Z Author       : Microsoft Corporation
2022-09-07T20:03:42.6852918Z Help         : https://docs.microsoft.com/azure/devops/pipelines/tasks/utility/bash
2022-09-07T20:03:42.6853271Z ==============================================================================
2022-09-07T20:03:42.9802645Z Generating script.
2022-09-07T20:03:42.9818573Z Script contents:
2022-09-07T20:03:42.9818956Z scripts/sf-run-testsuite.sh lightning-pc
2022-09-07T20:03:42.9827087Z ========================== Starting Command Output ===========================
2022-09-07T20:03:42.9841514Z [command]/bin/bash /__w/_temp/160b180a-b886-4515-9e79-0d226dc2a1fd.sh
2022-09-07T20:04:03.5696315Z WARNING: WARNING: In the Summer ’21 release, we'll rename or remove some of the output files that are created when you run force:apex:test:run or force:apex:test:report with the --outputdir (-d) parameter.  This breaking change could impact your CI/CD scripts if they reference specific filenames. See https://sfdc.co/apex-test-updates for details
2022-09-07T20:04:03.5697997Z 
2022-09-08T00:00:00.0139261Z [Error: ENOENT: no such file or directory, rename '/home/vsts_azpcontainer/.sf/sf.log' -> '/home/vsts_azpcontainer/.sf/sf.log.0'] {
2022-09-08T00:00:00.0140067Z   errno: -2,
2022-09-08T00:00:00.0140490Z   code: 'ENOENT',
2022-09-08T00:00:00.0140901Z   syscall: 'rename',
2022-09-08T00:00:00.0141374Z   path: '/home/vsts_azpcontainer/.sf/sf.log',
2022-09-08T00:00:00.0141889Z   dest: '/home/vsts_azpcontainer/.sf/sf.log.0'
2022-09-08T00:00:00.0142275Z }
2022-09-08T00:00:00.0161881Z [Error: ENOENT: no such file or directory, rename '/home/vsts_azpcontainer/.sf/sf.log' -> '/home/vsts_azpcontainer/.sf/sf.log.0'] {
2022-09-08T00:00:00.0162705Z   errno: -2,
2022-09-08T00:00:00.0163105Z   code: 'ENOENT',
2022-09-08T00:00:00.0163518Z   syscall: 'rename',
2022-09-08T00:00:00.0163983Z   path: '/home/vsts_azpcontainer/.sf/sf.log',
2022-09-08T00:00:00.0164514Z   dest: '/home/vsts_azpcontainer/.sf/sf.log.0'
2022-09-08T00:00:00.0164889Z }
2022-09-08T00:00:00.0179293Z ERROR running force:apex:test:run:  Unknown error in Apex Library: ENOENT: no such file or directory, rename '/home/vsts_azpcontainer/.sf/sf.log' -> '/home/vsts_azpcontainer/.sf/sf.log.0'

You can see that exactly at 00:00:00 errors are being generated, which results in the entire pipeline failing. This change in log rotation seems to be the cause.

The pipeline in question usually runs for about 5 hours, starting at 8pm GMT and finishing around 1am GMT. But now the pipeline immediately fails at 00:00.

@jverelst

jverelst commented Sep 8, 2022

The script sf-run-testsuite has the following commands:

#!/bin/bash

# Clean up the testoutput directory and create it
if [ -e testoutput ]
then
  rm -rf testoutput/
fi
mkdir testoutput

# Disabled code-coverage, to improve performance of the entire test run
# EXTRAFLAGS="-c "
EXTRAFLAGS="-c"

# Upgrade the amount of results that can be returned, the default is 10.000, we make it 100.000
export SFDX_MAX_QUERY_LIMIT="100000"

# Instead of doing a 'RunLocalTests' (which will also run classes from unlocked packages),
# we make a list of all the testclasses in the src/ directory and only run those
ALLCLASSES=$(sfdx sfpowerkit:source:apextest:list --resultasstring --json -p src 2>/dev/null | jq -r .result)

# Run all tests
sfdx force:apex:test:run -w 300 -u $1 -n "$ALLCLASSES" -d testoutput/ -r json $EXTRAFLAGS > testoutput/output.json

# Retrieve the junit.xml file, to be displayed in azure
runid=$(cat testoutput/test-run-id.txt)
sfdx force:apex:test:report -u $1 -i $runid $EXTRAFLAGS -d testoutput/ -r junit

# Get the human-readable testoutput, and generate an MD file from it
sfdx force:apex:test:report -u $1 -i $runid -r human > testoutput/human.txt

# Make sure that the junit.xml file is valid, it may contain doublequotes due to a SFDX bug
bash scripts/fix-junit-xml.sh

@KacperAugustyniak

So we have a similar problem, but with the sf/sf.log file.
The whole file consists of lines like this one:

{"name":"sf","hostname":"<redacted>","pid":15580,"level":60,"errno":-32,"code":"EPIPE","syscall":"write","msg":"","time":"2022-08-17T09:15:53.565Z","v":0}

And it consumes all available space.
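
A quick way to confirm how much of the file is that single entry (a sketch; adjust the path if your log lives elsewhere):

# Compare the number of EPIPE entries to the total line count
grep -c '"code":"EPIPE"' ~/.sf/sf.log
wc -l ~/.sf/sf.log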

@austin-convoy

Is there a follow-up to KacperAugustyniak's comment? We are also seeing this same bug in the .sf folder: multiple log files taking up 30+ GB until the hard drive is out of space. It happens multiple times per week.

@peternhale
Contributor

@austin-convoy did you try the workaround mentioned in issue #1706?

set SFDX_LOG_ROTATION_PERIOD=1w
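
That set ... syntax is Windows CMD; for a bash-based CI step (like the Azure pipeline above), a sketch of the equivalent would be:

# Lengthen the rotation period so a long-running job doesn't hit the midnight rename
export SFDX_LOG_ROTATION_PERIOD=1w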

@KacperAugustyniak

@austin-convoy We recently found out what was causing the enormous logs in our case.
We were having this problem when a job was cancelled in the GitLab runner.
It turns out that when you cancel a job in the GitLab runner, it sends SIGKILL to the job, which apparently leaves an orphaned process if sfdx force:source:deploy has already started, creating the logs I mentioned above.
The solution was to never cancel from GitLab. Instead, if we have to terminate a job, we do it from the "Deployment status" page in Salesforce Setup, or we just let the job finish.

I hope this will help you find a solution to your problem.
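
If it helps anyone debugging the same thing, a rough sketch for checking whether an orphaned CLI process is still writing to the log after a cancelled job (run on the Linux runner host):

# List any sfdx processes that survived the cancelled job
ps -eo pid,etime,cmd | grep '[s]fdx' || echo "no stray sfdx processes found"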

@couturecraigj

@peternhale will this help with .sf logs instead of .sfdx logs?

@peternhale
Contributor

@couturecraigj Not sure what you are asking?

@couturecraigj

This

@couturecraigj Not sure what you are asking?

This was originally about an issue with the .sfdx (%USERPROFILE%/.sfdx) log files. All of our issues going forward are with the .sf (%USERPROFILE%/.sf) log files exploding.

@peternhale
Contributor

@couturecraigj thank you. The sf command is using the same log rotation logic as sfdx. Which version of sf are you using? Have you eliminated the possibility of orphaned processes as a source of log file growth?

@couturecraigj

@peternhale I have not orphaned any processes that I can identify; I wait for all of them to complete.

@dagreatbrendino

I had this same issue. I was very confused as to how I was randomly running out of hard drive space. I've only been using SFDX to complete some trails for a week, and the log already ballooned to 33 GB. There definitely seems to be some erratic logging going on here. There should at least be a config option to set a max log file size.

@tehnrd

tehnrd commented Mar 6, 2023

FWIW, I just deleted a 335 GB log file on macOS. I never adjusted the log level from whatever the default is.

Unfortunately, I don't know if its growth has slowed recently. I deleted the file and will monitor.

@jackexu

jackexu commented Dec 6, 2024

Just deleted a 328.2 GB sfdx.log on macOS without adjusting the log level or anything - I had only used it for a day or two.

VERSION
  @salesforce/cli/2.67.7 darwin-arm64 node-v22.11.0

and plugins:

@oclif/plugin-autocomplete 3.2.8 (core)
@oclif/plugin-commands 4.1.8 (core)
@oclif/plugin-help 6.2.16 (core)
@oclif/plugin-not-found 3.2.25 (core)
@oclif/plugin-plugins 5.4.15 (core)
@oclif/plugin-search 1.2.14 (core)
@oclif/plugin-update 4.6.10 (core)
@oclif/plugin-version 2.2.15 (core)
@oclif/plugin-warn-if-update-available 3.1.21 (core)
@oclif/plugin-which 3.2.17 (core)
api 1.3.2 (core)
auth 3.6.73 (core)
deploy-retrieve 3.15.11 (core)
info 3.4.18 (core)
marketplace 1.3.4 (core)
packaging 2.9.0 (core)
salesforcedx 52.0.0
├─ apex 0.2.2
├─ salesforce-alm 52.0.0
├─ templates 51.5.0
├─ custom-metadata 1.0.12
├─ user 1.3.0
├─ @salesforce/sfdx-plugin-lwc-test 0.1.7
├─ org 1.6.6
├─ schema 1.0.7
├─ limits 1.2.1
└─ data 0.4.11
settings 2.4.4 (core)
sobject 1.4.45 (core)
telemetry 3.6.20 (core)
trust 3.7.41 (core)

Uninstalled JIT Plugins:
community 3.3.2
dev 2.5.0
devops-center 1.2.26
env 3.0.33
functions 1.23.0
signups 2.6.1
@salesforce/sfdx-scanner 4.7.0

@iowillhoit
Contributor

iowillhoit commented Dec 6, 2024

sf no longer logs to the sfdx.log file. It logs to ~/.sf/sf-YYYY-MM-DD (see PR). Prior to that, it would log to sf.log.0, sf.log.1, sf.log.2. And prior to that, it logged to sfdx.log or sf.log.

It is likely you are getting the old Logger from the salesforcedx plugin. That is very old and should be uninstalled.
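
For anyone following that advice, a sketch of the cleanup, assuming the salesforcedx plugin was user-installed rather than bundled; the dated log filename pattern is taken from the comment above and may differ by version:

# Remove the deprecated salesforcedx plugin, then check where sf is logging now
sf plugins uninstall salesforcedx
ls -lh ~/.sf/ | grep 'sf-'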
