Automated Logpush challenge not working #1019
This looks to be similar to #954 (pending an upstream Terraform discussion at hashicorp/terraform-plugin-sdk#706), however yours is slightly different in that you get an error for an incorrect ownership challenge value; in other cases, this scenario surfaces as a different error.

A workaround I've previously used to fix the attached issue is to apply the change in two steps: one for the challenge and another for the logpush job. It's a brief but slightly different variant of https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs/resources/logpush_job#example-usage-manual-inspection-of-s3-bucket. This allows you to remove the dependency chain issue while still keeping your resources managed in code. I might have a play and see if there is a third way we can document for total automation, but I recall coming up empty last time I tried to find a better approach.

As for the documentation issues, I'm open to a PR updating those. For the S3 permissions, it might be best placed in the Logs documentation on developers.cloudflare.com, as that runs through all the integrations.
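For clarity, a sketch of the two-step shape I mean is below. The names here are illustrative (not from this issue), and `create_job` is a hypothetical flag used to split the two applies:

```hcl
variable "create_job" {
  description = "Leave false on the first apply; set true once the challenge file exists."
  type        = bool
  default     = false
}

# Step 1: ask Cloudflare to write the ownership challenge file into the bucket.
resource "cloudflare_logpush_ownership_challenge" "example" {
  zone_id          = var.zone_id
  destination_conf = "s3://example-bucket/logs?region=us-east-1"
}

# Read the challenge token back out of the bucket once it exists.
data "aws_s3_bucket_object" "challenge_file" {
  bucket = "example-bucket"
  key    = cloudflare_logpush_ownership_challenge.example.ownership_challenge_filename
}

# Step 2: only created on the second apply, after the challenge file is in place.
resource "cloudflare_logpush_job" "example" {
  count               = var.create_job ? 1 : 0
  zone_id             = var.zone_id
  enabled             = true
  dataset             = "http_requests"
  destination_conf    = "s3://example-bucket/logs?region=us-east-1"
  ownership_challenge = data.aws_s3_bucket_object.challenge_file.body
}
```

The first `terraform apply` creates the challenge and reads the file back; a second `terraform apply -var=create_job=true` then creates the job.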
I just spun up my old test case for this and it worked 🤷‍♂️ The only thing different is that I had an output at the bottom for debugging.
Steps I took:
```json
{
  "Version": "2012-10-17",
  "Id": "Policy1506627184792",
  "Statement": [
    {
      "Sid": "Stmt1506627150918",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::391854517948:user/cloudflare-logpush"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::jb-tf-repro-2/*"
    }
  ]
}
```
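The debugging output mentioned above isn't shown here, but it would have been something along these lines (a sketch only, assuming a `challenge_file` data source like the one in the registry example):

```hcl
# Surface the challenge token so it can be compared against the file in S3.
output "ownership_challenge_body" {
  value = data.aws_s3_bucket_object.challenge_file.body
}
```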
Could you try my test above?
Unfortunately the two-pass apply doesn't work either; it fails for the same reason.

Pass 1, just the challenge and the data source (applies successfully):

Pass 2, adding back in the job (fails with the same error as before):

Again, the value displayed in the error matches the contents of the challenge file.
Hmm, this feels like something small being missed that results in a weird error. Internally, the "incorrect ownership challenge" error message is used only for what you'd think it would be: when the value it gets via the API doesn't match what it is expecting. To help iron this out, are you able to try this on a fresh bucket with my example above and slowly add the components to it? I.e. subdirectory configuration, …
Just an observation: one of the differences here is that your `destination_conf` values don't match between the ownership challenge and the logpush job.

I wonder if this is causing an issue when attempting to validate the new logpush job? Could you make it `destination_conf = "s3://${var.logpush_bucket_name}/${local.zone_name}/{DATE}?region=${var.logpush_bucket_region}"` to match the other resource?
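Put differently, the challenge and the job should share a single destination string so the two can't drift apart. A sketch reusing the variables from this thread (`var.zone_id`, the `dataset`, and the data source are illustrative):

```hcl
locals {
  # Single source of truth for the destination; both resources reference it.
  logpush_destination = "s3://${var.logpush_bucket_name}/${local.zone_name}/{DATE}?region=${var.logpush_bucket_region}"
}

resource "cloudflare_logpush_ownership_challenge" "example" {
  zone_id          = var.zone_id
  destination_conf = local.logpush_destination
}

resource "cloudflare_logpush_job" "example" {
  zone_id             = var.zone_id
  enabled             = true
  dataset             = "http_requests"
  destination_conf    = local.logpush_destination
  ownership_challenge = data.aws_s3_bucket_object.challenge_file.body
}
```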
Ah! Bingo. That's it. The `destination_conf` values were different.
@jacobbednarz it does not work for me unless it's a 2-step approach still. I am using GCP and the logpush job still fails, since the dependency is still not respected and the file does not exist.
bugger 😢 S3 is now consistently working for me, so I wonder if the GCP side of things needs addressing there instead of in this provider 🤔
@jacobbednarz ah, good catch, that worked for me. I left … It's all working for me now, in a single pass. I don't have an explicit dependency order, but Terraform applied them in the correct order (whether coincidentally or correctly, I'm not sure).

It may be nice in that upcoming docs PR to also mention that the two values need to be identical. It's sort of obvious in hindsight, but there could be others caught up with this same thing.
sure, i've opened #1024 to explicitly call this out.
@jacobbednarz yep, just tried a few times; it still only works if I comment out the logpush job and create it during a second run. See the code below:

```hcl
// creating 3 separate buckets, one for each log type
resource "google_storage_bucket" "log-storage" {
for_each = var.logpush-schemas
project = var.gcp-project
name = "${replace(var.global_zonename,".","-")}-${each.key}"
location = "US"
force_destroy = true
lifecycle_rule {
condition {
age = 365
}
action {
type = "SetStorageClass"
storage_class = "COLDLINE"
}
}
}
//setting up iam for access to storage buckets
resource "google_project_iam_binding" "cf-storageadmin" {
project = var.gcp-project
role = "roles/storage.objectAdmin"
members = [
"serviceAccount:[email protected]",
]
}
data "google_iam_policy" "cf-storageadmin" {
depends_on = [google_project_iam_binding.cf-storageadmin]
binding {
role = "roles/storage.objectAdmin"
members = [
"serviceAccount:[email protected]",
]
}
}
resource "google_storage_bucket_iam_policy" "add-cf-storage-admin" {
depends_on = [google_storage_bucket.log-storage]
for_each = var.logpush-schemas
bucket = google_storage_bucket.log-storage[each.key].name
policy_data = data.google_iam_policy.cf-storageadmin.policy_data
}
resource "cloudflare_logpush_ownership_challenge" "ownership_challenge_logpush" {
depends_on = [google_storage_bucket_iam_policy.add-cf-storage-admin]
for_each = var.logpush-schemas
zone_id = var.zoneid
destination_conf = "gs://${replace(var.global_zonename,".","-")}-${each.key}"
}
data "google_storage_bucket_object_content" "challenge_data_logpush" {
for_each = var.logpush-schemas
bucket = google_storage_bucket.log-storage[each.key].name
name = cloudflare_logpush_ownership_challenge.ownership_challenge_logpush[each.key].ownership_challenge_filename
}
resource "cloudflare_logpush_job" "cf_logpush_http-requests" {
for_each = var.logpush-schemas
depends_on = [data.google_storage_bucket_object_content.challenge_data_logpush]
enabled = true
zone_id = var.zoneid
name = replace(each.key,"_","-")
logpull_options = "fields=${each.value[1]}"
destination_conf = "gs://${google_storage_bucket.log-storage[each.key].name}"
ownership_challenge = data.google_storage_bucket_object_content.challenge_data_logpush[each.key].content //lookup(var.challenges,each.key)
dataset = each.key
} |
If I try to execute all of the above in one run, I get the following:
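One way to encode that two-step approach in code, rather than commenting the job out between runs, is to gate the jobs behind a flag. A sketch against the configuration above, where `create_logpush_jobs` is a hypothetical variable:

```hcl
variable "create_logpush_jobs" {
  description = "Leave false on the first apply; set true once the challenge files exist."
  type        = bool
  default     = false
}

resource "cloudflare_logpush_job" "cf_logpush_http-requests" {
  // Empty map on the first pass means no jobs are planned at all.
  for_each = var.create_logpush_jobs ? var.logpush-schemas : {}

  enabled             = true
  zone_id             = var.zoneid
  name                = replace(each.key, "_", "-")
  logpull_options     = "fields=${each.value[1]}"
  destination_conf    = "gs://${google_storage_bucket.log-storage[each.key].name}"
  ownership_challenge = data.google_storage_bucket_object_content.challenge_data_logpush[each.key].content
  dataset             = each.key
}
```

The first `terraform apply` creates the buckets, IAM bindings, and challenges; `terraform apply -var=create_logpush_jobs=true` then creates the jobs.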
Confirmation

- My issue isn't already found on the issue tracker.
- I have replicated my issue using the latest version of the provider and it is still present.
Terraform version
Terraform version: 0.14.9
Cloudflare provider version: 2.19.2
Affected resource(s)

- cloudflare_logpush_ownership_challenge
- cloudflare_logpush_job
- aws_s3_bucket_object
Terraform configuration files
Debug output
Panic output
None
Expected output
The job to be created successfully.
Actual output
An error was thrown:
Steps to reproduce
Additional factoids
I have confirmed that the `ownership_challenge` value sent in the job is identical to the value in the ownership challenge file written to S3.

I've come across two minor documentation issues while working on this:
1. The example documentation uses a `data` resource, but references it as if it were a standard resource:
   `ownership_challenge = aws_s3_bucket_object.challenge_file.body`
   should really read:
   `ownership_challenge = data.aws_s3_bucket_object.challenge_file.body`
2. There is no mention of the required S3 permissions for the ownership challenge. I've found that this code requires `GetObject`, `PutObject`, and `GetObjectTagging`. It would be helpful if these were documented somewhere (see the policy sketch at the end of this issue).

References
No response
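As a footnote to the permissions point under Additional factoids, one way those three actions could be expressed is sketched below. The bucket name is illustrative, and which principal needs which action depends on the setup (Cloudflare writes the challenge file, while the Terraform runner reads it back):

```hcl
# Sketch only: the S3 actions this flow appears to need, in one statement.
data "aws_iam_policy_document" "logpush_challenge" {
  statement {
    effect    = "Allow"
    actions   = ["s3:GetObject", "s3:GetObjectTagging", "s3:PutObject"]
    resources = ["arn:aws:s3:::example-bucket/*"]
  }
}
```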