Migration to 2.0.0 fails to connect to Clickhouse when using IPv6 #3173

Closed
2 tasks done
BeryJu opened this issue Jul 21, 2023 · 2 comments · Fixed by #3179

Comments

BeryJu commented Jul 21, 2023

Past Issues Searched

  • I have searched open and closed issues to make sure that the bug has not yet been reported

Issue is a Bug Report

  • This is a bug report and not a feature request, nor asking for self-hosted support

Using official Plausible Cloud hosting or self-hosting?

Self-hosting

Describe the bug

When running bin/plausible rpc Plausible.DataMigration.NumericIDs.run with IPv6 enabled (ECTO_IPV6 and ECTO_CH_IPV6 set to true), the migration fails:

~ $ bin/plausible rpc Plausible.DataMigration.NumericIDs.run
** (MatchError) no match of right hand side value: {:error, %DBConnection.ConnectionError{message: "connection not available and request was dropped from queue after 4000ms. This means requests are coming in and your connection pool cannot serve them fast enough. You can address this by:\n\n  1. Ensuring your database is available and that you can connect to it\n  2. Tracking down slow queries and making sure they are running fast enough\n  3. Increasing the pool_size (although this increases resource consumption)\n  4. Allowing requests to wait longer by increasing :queue_target and :queue_interval\n\nSee DBConnection.start_link/2 for more information\n", severity: :error, reason: :queue_timeout}}
    lib/plausible/data_migration/numeric_ids.ex:6: Plausible.DataMigration.NumericIDs.do_run/2
    lib/plausible/data_migration/numeric_ids.ex:48: Plausible.DataMigration.NumericIDs.run/1
    nofile:1: (file)
    (stdlib 4.2) erl_eval.erl:748: :erl_eval.do_apply/7
    (elixir 1.14.3) lib/code.ex:425: Code.validated_eval_string/3
~ $

Expected behavior

The migration should connect to ClickHouse over IPv6 and complete successfully.


Having a dig around https://github.com/plausible/analytics/blob/16846b16c8e513f72d7c23459ee879d9249f556d/lib/plausible/data_migration/repo.ex, I found that adding

transport_opts: [
  inet6: true
]

on line 20 makes it work (this requires rebuilding the container).
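
For context, a minimal sketch (illustration only, not the project's actual code) of how the same option could be derived from the ECTO_CH_IPV6 environment variable instead of being hard-coded, which is what forces the container rebuild:

  # Hypothetical helper for lib/plausible/data_migration/repo.ex (sketch only):
  # enable inet6 only when ECTO_CH_IPV6 is set to "true" at runtime.
  defp transport_opts do
    if System.get_env("ECTO_CH_IPV6") == "true" do
      [inet6: true]
    else
      []
    end
  end

The start/2 call could then pass transport_opts: transport_opts() instead of a literal list.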

Screenshots

No response

Environment

- OS: n/a
- Browser: n/a
- Browser Version: n/a

ruslandoga (Contributor) commented Jul 21, 2023

👋 @BeryJu

Oh, great catch! Thank you!

Something like this would probably be a good enough fix:

  def start(url, max_threads) when is_binary(url) and is_integer(max_threads) do
    start_link(
      url: url,
      queue_target: 500,
      queue_interval: 2000,
      pool_size: 1,
      settings: [
        max_insert_threads: max_threads,
        send_progress_in_http_headers: 1
      ],
+     transport_opts: Plausible.ClickHouseRepo.config() |> Keyword.fetch!(:transport_opts)
    )
  end

I'll PR it on Monday morning :)
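
One note on the snippet above: Keyword.fetch!/2 raises a KeyError if :transport_opts is not set in the ClickHouseRepo config, so a more forgiving variant (a sketch, not necessarily what the eventual PR does) could fall back to an empty list:

  # Sketch: default to [] when the main ClickHouse repo has no :transport_opts configured.
  transport_opts: Keyword.get(Plausible.ClickHouseRepo.config(), :transport_opts, [])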

ruslandoga self-assigned this Jul 21, 2023

BeryJu (Author) commented Jul 21, 2023

Yeah, that's pretty close to my temporary hotfix, which was:

diff --git a/lib/plausible/data_migration/repo.ex b/lib/plausible/data_migration/repo.ex
index 3ceac17b..8dcfd225 100644
--- a/lib/plausible/data_migration/repo.ex
+++ b/lib/plausible/data_migration/repo.ex
@@ -16,6 +16,9 @@ defmodule Plausible.DataMigration.Repo do
       settings: [
         max_insert_threads: max_threads,
         send_progress_in_http_headers: 1
+      ],
+      transport_opts: [
+        inet6: true
       ]
     )
   end
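
For completeness, one way to double-check that the main ClickHouse repo's config actually carries the IPv6 transport options (so the proposed fix above can reuse them) is to inspect it from a remote console (a sketch, assuming ECTO_CH_IPV6=true is set for the release):

  # e.g. from bin/plausible remote (the standard Elixir release remote shell):
  Plausible.ClickHouseRepo.config() |> Keyword.get(:transport_opts)
  #=> [inet6: true]   # expected when ECTO_CH_IPV6=true (assumption)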
