x/build: add LUCI openbsd-ppc64 builder #63480
Change https://go.dev/cl/534976 mentions this issue.
Thanks. Here's the resulting certificate: openbsd-ppc64-n2vi-1697128325.cert.txt. I've mailed CLs to define your new builder in LUCI and will comment once that's done.
Thank you; I confirm that using the cert I get a plausible-looking luci_machine_tokend/token.json.
Since the list of BUILDER_TYPES is nearly sorted, keep that up, and sort (using 'Sort Lines' in $EDITOR) two of Linux run mods.

For golang/go#63480.
For golang/go#63481.
For golang/go#63482.

Change-Id: Icef633ab7a0d53b5807c2ab4a076d74c291dc0ea
Reviewed-on: https://go-review.googlesource.com/c/build/+/534976
TryBot-Bypass: Dmitri Shuralyov <[email protected]>
Reviewed-by: Carlos Amedee <[email protected]>
Auto-Submit: Dmitri Shuralyov <[email protected]>
Reviewed-by: Dmitri Shuralyov <[email protected]>
Reviewed-by: Heschi Kreinick <[email protected]>
I have not read the code yet to diagnose this; leaving assigned to me.
I don't see anything in the code or logs here that helps me diagnose. It just looks like the server didn't like the token.json that had been refreshed just a minute before. Maybe someone there can check server-side LUCI logs? Unable to reassign to dmitshur; hope someone there sees this.
Thanks for the update. I recall there was a similar looking error in #61666 (comment). We'll take a look.
In case it helps... I set both -token-file-path on the bootstrapswarm command line and also LUCI_MACHINE_TOKEN in the environment. The logs don't indicate any trouble reading the token.json file, though they're not very explicit. I appreciate that there have been serious security flaws in the past from too-detailed error messages. But I'd venture that it is safe for LUCI to say more than "403". I recognize I'm a guinea pig for the Go LUCI stuff, so happy to give you a login on t.n2vi.com if you would find it easier to debug directly or hop on a video call with screen sharing. Finally, I recognize I'm a newcomer to Go Builders. So it could well be user error here.
Thanks for your patience as we work through this and smooth out the builder onboarding process.
To confirm, are both of them set to the same value, which is the file path location of the token.json file? If you don't mind experimenting on your side, you can check whether anything is different if you leave LUCI_MACHINE_TOKEN unset and instead rely on the default location for your OS.

We'll keep looking into this on our side. Though next week we might be somewhat occupied by a team event, so please expect some delays. Thanks again.
Yes, both are set to the same value, /home/luci/luci_machine_tokend/token.json. (My OS doesn't have /var/lib, and anyway I'm not a fan of leaving cleartext credentials in obscure corners of the filesystem.) This morning I've retried the same invocation of bootstrapswarm as before and don't get the 403 Client Error. So maybe there was just a transient issue. Happy to set this effort on the shelf for a week or two; enjoy the team event!
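For readers following the setup, here is a minimal sketch of the invocation described in the last few comments. Only -token-file-path, LUCI_MACHINE_TOKEN, and the token path come from the thread; the -hostname flag and the use of nohup are assumptions on my part.

```sh
# Point both the flag and the environment variable at the same token file,
# as described above, then start the bot in the background.
export LUCI_MACHINE_TOKEN=/home/luci/luci_machine_tokend/token.json
nohup bootstrapswarm \
    -hostname openbsd-ppc64-n2vi \
    -token-file-path /home/luci/luci_machine_tokend/token.json &
# Output accumulates in ./nohup.out for later inspection.
```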
CC @golang/release.
Over the last week I tried swarm a few more times with no problems, so whatever issue I saw before indeed seems transient. I never saw swarm do any actual work, presumably because some server-side table is still pointing to my machine as in the old-builder state rather than new-builder. Fine by me. I'll have limited ability to work on it from November 8 - 20, but happy to work on it during the next few days if you're waiting on me.
The builder is currently in a "Quarantined—Had 6 consecutive BOT_DIED tasks" state. @n2vi Can you please restart the swarming bot on your side and see if that's enough to get it out of that state? We've applied changes on our side (e.g., CL 546715) that should help avoid this repeating, but it's possible more work will be needed. Let's see what happens after you restart it next time. Thanks.
Thanks. I think you should let the LUCI version of the builder run for some time, and when it seems stable, feel free to stop the coordinator instance on your side to free up the resources. The only reason to keep the coordinator instance is if you're not quite ready to switch yet, but it needs to happen at some point since the coordinator will be going away. I'll update CL 585217 to give it a timeout scale for now, especially since it's running builds for both LUCI and coordinator, and we can adjust it later on as it becomes more clear what the optimal value is.
As of 18:10 UTC, rebooted openbsd-ppc64-n2vi with datasize-max=8192M for swarming. If 8GB of RAM is not enough we have other problems.
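For context, this is roughly what that setting looks like in OpenBSD's login.conf(5). Only datasize-max=8192M comes from the comment above; the class name and the other capabilities are assumptions.

```sh
# Hypothetical /etc/login.conf entry for the login class the swarming user
# belongs to. Only datasize-max=8192M is taken from the comment above.
#
#   swarming:\
#       :datasize-max=8192M:\
#       :datasize-cur=8192M:\
#       :tc=default:
#
# If the system uses a capability database, rebuild it after editing:
cap_mkdb /etc/login.conf
```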
This eventually panicked the kernel with an allocation failure. But the tests are not automatically restarting; the "Retry Build" button on the Builder Dashboard is grayed out for me, so perhaps someone there can kick it?
The port wasn't added until Go 1.22, so no need to test it with Go 1.21.

Also set a timeout scale factor of 2 for now, while the LUCI builder is running alongside the coordinator builder on the same hardware. This is fine to adjust later as it becomes more clear what the optimal value is.

For golang/go#63480.
For golang/go#56001.

Change-Id: I707ffe7d15afa6a70d6d8789f959a5835259df3f
Reviewed-on: https://go-review.googlesource.com/c/build/+/585217
Reviewed-by: Cherry Mui <[email protected]>
Reviewed-by: Dmitri Shuralyov <[email protected]>
Auto-Submit: Dmitri Shuralyov <[email protected]>
LUCI-TryBot-Result: Go LUCI <[email protected]>
Still wasn't seeing anything running, so killed off the python and bootstrapswarm processes and restarted.
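A rough sketch of that restart; the pkill patterns are assumptions about the process names, not taken from the thread.

```sh
# Stop the python swarming bot and bootstrapswarm, then start bootstrapswarm
# again as in the earlier invocation sketch.
pkill -f swarming_bot
pkill -x bootstrapswarm
# ...re-run bootstrapswarm here with the same flags as before...
```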
Thanks for working on this.
I failed to realize this sooner, but our configuration intends to make it possible for you to see the machine pool (see "poolViewer" granted to group "all" here). I believe it currently requires you to sign in (any account will work); then you can view the contents of the "Machine Pool" links such as https://chromium-swarm.appspot.com/botlist?f=cipd_platform%3Aopenbsd-ppc64&f=pool%3Aluci.golang.shared-workers. You should see something like this (screenshot omitted), and clicking on the bot name will take you to https://chromium-swarm.appspot.com/bot?id=openbsd-ppc64-n2vi, where you'll find more information about its current state from LUCI's perspective. Apologies about the additional overhead at this time to get to this information.

Since you've done some restarts, it might help to confirm that the luci_machine_tokend process is still working as described in step 4 of https://go.dev/wiki/DashboardBuilders#how-to-set-up-a-builder-1, and that the token file it writes to has new content, which is propagated to bootstrapswarm.

If that isn't where the problem is, is there more information included in the status code 401 message, beyond "Downloading the swarming bot" and "status code 401"? Also, is there more useful context in the local swarming bot log?
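A few quick checks matching the suggestion above, using the token path given earlier in the thread; all of these are standard OpenBSD userland commands.

```sh
# Confirm luci_machine_tokend is still running and refreshing the token file.
pgrep -lf luci_machine_tokend
ls -l /home/luci/luci_machine_tokend/token.json   # mtime should be recent
date                                              # compare with the mtime above
```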
Now that the bot is getting work again, we'll see if we can reproduce the pagedaemon kernel panic. Not that I'm a kernel developer by any means, but gotta learn sometime! I recognize that this is a sufficiently unusual platform and workload that it is not inconceivable that we step on a new corner case.
No kernel crashes yet, just running all the way to Failure. :) I'm still trying to understand more about the build output, in particular what "resource temporarily unavailable" means here. Is it running into a user process limit for forking? The login.conf here sets maxproc-max=256, maxproc-cur=128. Do the tests need more processes than that?

One probably unrelated item caught my eye in /var/log/secure (log excerpt omitted): all those files are already owned by user "swarming", so why would the software be trying to become root?
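A few checks that could help tell whether "resource temporarily unavailable" is fork hitting a process limit; the login class name below is an assumption.

```sh
# System-wide process limit and current process count.
sysctl kern.maxproc kern.nprocs
# Per-class limits for the class the swarming user logs in under
# ("swarming" as a class name is an assumption).
grep -A 5 '^swarming:' /etc/login.conf
# Limits in effect for the current shell, for comparison.
ulimit -a
```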
Overnight, we captured another kernel panic that closely resembles the earlier one. I'll get back to you when I make progress on this; it may be quite a while. LUCI appropriately marks me as offline for the duration.
Status update; no need to respond...

Found a recent patch to openbsd powerpc64 pagedaemon pmac.c that may be relevant, so upgraded t.n2vi.net from -stable to -snapshot.

Now the previously-ok luci_machine_tokend dumps core with a pinsyscalls error on the console, so rebuilt with the nineteen-line install sequence from https://pkg.go.dev/go.chromium.org/luci and a freshly compiled go1.22.3. This now seems to be generating a new token.json ok.

Rebuilt and restarted bootstrapswarm. The LUCI Builders dashboard shows the machine now as Idle; based on past experience, in an hour or two it will actually start delivering work without further attention. I'll periodically monitor to be sure that happens, and then over the next couple days we'll see if the kernel panic re-occurs.
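A heavily abbreviated sketch of that rebuild. The go.chromium.org/luci module is named above, but the specific command paths below are my assumptions, and the wiki's nineteen-line install sequence is the authoritative reference.

```sh
# Rebuild the two binaries with the freshly compiled Go toolchain, then
# restart them. Command paths are assumptions; follow the wiki if in doubt.
go install go.chromium.org/luci/tokenserver/cmd/luci_machine_tokend@latest
go install golang.org/x/build/cmd/bootstrapswarm@latest
```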
I do suspect we're stepping on a pagedaemon bug that occasionally crashes the machine, but it is getting enough LUCI work done that perhaps Gophers can make their own independent progress while I pursue the OpenBSD issue.
Restarted swarm with twice the process ulimit. Let's see if that reduces the number of fork/exec failures. No recent kernel crashes.
My builder machine is fine, no crashes, but I see that the dashboard thinks it is offline. Here is a tail -50 nohup.out (output omitted). I believe the ball is back in your court...
The error message above includes "quota exceeded". It seems to have been temporary. Looking at https://ci.chromium.org/ui/p/golang/g/port-openbsd-ppc64/builders, the builder seems to be stable and passing in the main Go repo and all golang.org/x repos. Congratulations on reaching this point! Would you like to remove its known issue as the next step?
We got another pager daemon kernel crash last night. I'm glad we're getting substantial test runs done, but we're not out of the woods yet.
I see a "context deadline exceeded" failure in the latest build. Not sure how to interpret that, but FYI, as part of debugging the kernel crashes I've changed some kernel memory barriers that possibly slow page mapping changes a bit. I don't expect any large impact on system speed overall, but I'm unsure.
I've been able to reproduce a kernel panic without anything involving Go, so will be pursuing that and temporarily not running swarm. I'll update here when we've made progress with the kernel.
Change https://go.dev/cl/593736 mentions this issue.
Move the timeout scale closer to what's used by openbsd-riscv64 now. This was suggested by Eric who looked at their relative performance.

For golang/go#63480.

Change-Id: I1f28dd183c20b9b41c807296b5624ba0dcb10bee
Co-authored-by: Eric Grosse <[email protected]>
Reviewed-on: https://go-review.googlesource.com/c/build/+/593736
Auto-Submit: Dmitri Shuralyov <[email protected]>
Reviewed-by: Dmitri Shuralyov <[email protected]>
LUCI-TryBot-Result: Go LUCI <[email protected]>
Reviewed-by: Michael Knyszek <[email protected]>
Reviewed-by: Eric Grosse <[email protected]>
Although the kernel issue is not fully solved, I'm satisfied that it is sufficiently understood and being worked on in the "Mac Studio locking bug" thread. I now regard the LUCI migration as complete for the openbsd-ppc64 builder and am no longer running the buildlet there. As long as we keep the machine load at a reasonable level, we're rarely triggering the kernel lock issue. @dmitshur Thanks again for all your help with this. You may remove the known issue.
Change https://go.dev/cl/596817 mentions this issue.
I regret to say that my comment seems to have jinxed things. After the change, the openbsd-ppc64 builder is crashing more frequently again. Anyway, let's leave things be for a couple of weeks while y'all are at GopherCon and OpenBSD works on locks.
Sure. As my system kernel friends say, multicore MMU is an art. I believe there are remaining bugs encountered under high load, but I reboot the server when needed.
The builder has reached a point where it's considered added.

Fixes golang/go#63480.

Change-Id: I82985686fa1ac0f00d46c2b49fd8e2fc187fc5fa
Reviewed-on: https://go-review.googlesource.com/c/build/+/596817
LUCI-TryBot-Result: Go LUCI <[email protected]>
Auto-Submit: Dmitri Shuralyov <[email protected]>
Reviewed-by: Eric Grosse <[email protected]>
Reviewed-by: Dmitri Shuralyov <[email protected]>
Reviewed-by: Carlos Amedee <[email protected]>
Closed by merging CL 596817 (commit golang/build@4a73433).
Following the instructions at Dashboard builders:
hostname openbsd-ppc64-n2vi
The CSR is attached after renaming, since GitHub doesn't seem to allow attaching a file with the name you asked for, openbsd-ppc64-n2vi.csr.
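For anyone reproducing this step, a generic way to produce a CSR with the requested common name might look like the sketch below. The key type, key size, and filenames here are assumptions; the Dashboard Builders wiki linked above has the authoritative procedure.

```sh
# Generate a private key and a certificate signing request whose CN matches
# the builder hostname. Details are illustrative, not the wiki's exact steps.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout openbsd-ppc64-n2vi.key \
    -out openbsd-ppc64-n2vi.csr \
    -subj "/CN=openbsd-ppc64-n2vi"
```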