
Evolution matrix formalism failing for large WC #118

Open
LucaMantani opened this issue Nov 22, 2024 · 9 comments

Comments

@LucaMantani

LucaMantani commented Nov 22, 2024

Dear authors,

I am using wilson with a monkey patch in order to linearise the RGEs. However, I am finding that while the approximation holds for small Wilson coefficient values, it fails when the values are large, e.g. if we run the following snippet

import wilson
import numpy as np
from functools import partial

original_beta = wilson.run.smeft.beta.beta


def beta_wrapper(C, HIGHSCALE=np.inf, *args, **kwargs):
    return original_beta(C, HIGHSCALE, *args, **kwargs)


wilson.run.smeft.beta.beta = beta_wrapper
wilson.run.smeft.beta.beta_array = partial(
    wilson.run.smeft.beta.beta_array, HIGHSCALE=np.inf
)

wc_init = wilson.Wilson({"phi": 1e-6}, scale=1000, eft="SMEFT", basis="Warsaw")

wc_final = wc_init.match_run(scale=100, eft="SMEFT", basis="Warsaw").dict

print(wc_final)

wc_init = wilson.Wilson({"phi": 10e-6}, scale=1000, eft="SMEFT", basis="Warsaw")

wc_final = wc_init.match_run(scale=100, eft="SMEFT", basis="Warsaw").dict

print(wc_final)

it returns

{'phi': (6.716328767472983e-07-6.873333994094644e-28j)}
{'phi': (8.041393150109633e-05+1.1457589123504864e-26j)}

where the second is not 10 times the first.

If I repeat the exercise with smaller values, it works much better, although not perfectly. Do you understand why it's not working? There must be something I am missing that happens in the code.

Could it be because this operator is redefining an input parameter? I find that other operators are less affected.

@peterstangl
Collaborator

I don't understand what should not be working here. If you use such an extremely large value for $C_\phi$, it has to seriously affect the values of the dim-4 parameters in the Higgs potential ($m^2$ and $\lambda$), which are determined from the Higgs mass and vev (which in turn depend on both dim-4 and dim-6 parameters, and in particular on $C_\phi$). A big change in the SM parameters affects the beta functions and the running of $C_\phi$.
You should see all of these effects whether or not you linearise the RGEs. But of course, with such an extreme value of $C_\phi$, the dim-6 contribution to the dim-4 running is significant, and you should find a clear difference between the solution of the full RGEs and the linearised ones.

Note that
$\lambda = M_H^2/v^2 + 3 C_\phi v^2$
and that $1/v^2 = 1.7 \times 10^{-5}~\text{GeV}^{-2}$, so your $C_\phi$ is essentially as big as $1/v^2$.
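To make the size of this effect concrete, here is a quick numeric check of the relation above, using standard illustrative values for the Higgs mass and vev (these are stand-ins, not the exact inputs wilson uses internally):

```python
# Illustrative electroweak inputs (approximate standard values)
MH = 125.25   # Higgs mass in GeV
v = 246.22    # Higgs vev in GeV

lam_sm = MH**2 / v**2            # dim-4 piece of lambda
for C_phi in (1e-6, 1e-5):       # the two values from the snippet above
    shift = 3 * C_phi * v**2     # dim-6 shift from lambda = MH^2/v^2 + 3 C_phi v^2
    print(f"C_phi = {C_phi:.0e} GeV^-2: lambda_SM = {lam_sm:.3f}, "
          f"dim-6 shift = {shift:.3f} ({100 * shift / lam_sm:.0f}% of lambda_SM)")
```

For $C_\phi = 10^{-5}~\text{GeV}^{-2}$ the dim-6 shift is several times larger than the dim-4 piece itself, which is why the SM parameters, and through them the running, are so strongly distorted.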

@LucaMantani
Author

Yes, I added a comment on that just a few minutes ago :) Indeed, I was finding that this happens only for some operators, namely the ones that modify the inputs! Thanks for the prompt answer; I will close the issue. I had overlooked the effect on the inputs :)

@peterstangl
Collaborator

peterstangl commented Nov 26, 2024

Maybe let me just add that if you fully linearised the RGEs, you would not see the effects mentioned above. Your code does not fully linearise the RGEs: it only removes the dim-6 contributions to the dim-4 running, but it does not remove the dim-6 contributions in the extraction of the dim-4 parameters. The current version of wilson has not been written to linearise the RGEs, so without further monkey patching you will always see some non-linear effects.
But if you work in a scenario in which these non-linear effects are actually not negligible, then linearising the RGEs is not a good approximation.
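Schematically, this residual non-linearity explains the observed breakdown of the factor-of-10 scaling. A toy model (illustrative numbers only, not wilson's actual beta functions) where the evolved coefficient picks up a quadratic feedback through the shifted dim-4 parameters:

```python
# Toy model: C(mu_low) ~ a * C(mu_high) + b * C(mu_high)**2, where the
# b-term mimics the dim-6 feedback through the dim-4 parameter extraction.
a, b = 0.67, 2.0e4   # illustrative numbers only

def run_toy(C):
    return a * C + b * C**2

small, large = 1e-8, 1e-5
print(run_toy(10 * small) / run_toy(small))  # close to 10: quadratic term negligible
print(run_toy(10 * large) / run_toy(large))  # far from 10: quadratic term dominates
```

This mirrors the behaviour in the original snippet: for small coefficients the scaling is nearly linear, while for large ones the quadratic feedback wins.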

@peterstangl peterstangl reopened this Nov 26, 2024
@LucaMantani
Author

Thanks, indeed I proceeded to monkey patch that part to fully linearise things. I also needed to switch off the flavour rotation at the end of the evolution; does that make sense to you?

@peterstangl
Collaborator

If you switch off the flavour rotation at the end of the evolution, you will end up in a non-canonical flavour basis in which neither the down- nor the up-Yukawa matrix is diagonal. I don't think this is what you want, and so you should not switch it off.

If you want to include the flavour rotation in an evolution matrix, i.e. you want to remove any non-linearities in the dim-6 Wilson coefficients, then I think what you should do is to remove the dim-6 contributions to the mass matrices that are diagonalized to obtain the flavour rotation matrices. So I think you would have to patch the following lines:

Mep = v/sqrt(2) * (C['Ge'] - C['ephi'] * v**2/2)
Mup = v/sqrt(2) * (C['Gu'] - C['uphi'] * v**2/2)
Mdp = v/sqrt(2) * (C['Gd'] - C['dphi'] * v**2/2)

Of course this is only a good approximation if the coefficients ephi, uphi, and dphi are small enough.
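A rough single-generation estimate of when that approximation is justified: dropping the dim-6 term is safe while $|C_{u\phi}| v^2/2$ is small compared to the Yukawa coupling itself. Using illustrative scalar stand-ins for the 3x3 matrices (the values below are assumptions, not wilson internals):

```python
# Single-generation size estimate: the dim-6 term in Mup is negligible
# when |C_uphi| * v**2 / 2 << |y_t|.
v = 246.22    # Higgs vev in GeV (illustrative)
y_t = 0.94    # rough top Yukawa coupling (illustrative)

for C_uphi in (1e-9, 1e-6):
    rel = abs(C_uphi) * v**2 / 2 / y_t
    print(f"C_uphi = {C_uphi:.0e} GeV^-2 -> relative shift of the top mass term: {rel:.1e}")
```

So at the per-mille level one needs $|C_{u\phi}|$ well below $\sim 10^{-7}~\text{GeV}^{-2}$ in this rough estimate.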

@LucaMantani
Author

Is the flavour rotation doing anything relevant for the WC running if the SM parameters run on their own (the dim-6 contributions there have been switched off) and there are no input redefinitions with dim-6 operators?

@peterstangl
Collaborator

Yes, it does. The RG evolution of the Yukawa matrices in the SM (without any dim-6 contributions) leads to a rotation in flavour space. This means that an initially diagonal Yukawa matrix will have non-zero off-diagonal entries after the RG evolution and has to be re-diagonalized. This re-diagonalization, also known as "back-rotation", rotates the fermion fields in flavour space, and so it affects all operators involving fermions, even if the RG evolution of the Yukawa matrices is only the SM RG evolution.

Strictly speaking, the back-rotation is not part of the RG evolution itself. But the RG evolution doesn't preserve a given flavour basis. So if you don't re-diagonalize, then you run from one flavour basis at some scale into a different flavour basis at another scale. Making the re-diagonalization part of the running, i.e. defining the running as a combination of RG evolution and back-rotation, allows you to actually run from a given flavour basis at some scale to the same flavour basis at another scale. So if you want to have an evolution matrix that evolves dim-6 coefficients from one scale to another in one and the same flavour basis, then this evolution matrix has to be a combination of the RG evolution matrix and the back-rotation matrix, which are both independent of the values of dim-6 coefficients if the dim-6 corrections to dim-4 parameters are neglected.

For some examples of the phenomenological relevance of the back-rotation, see e.g. https://arxiv.org/abs/2005.12283.
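The mechanics of the back-rotation can be sketched in a toy 2x2 example: the "evolved" Yukawa has an off-diagonal entry, and a single rotation re-diagonalises it; that same rotation then has to act on every operator with fermion indices. (Real Yukawas are general complex 3x3 matrices needing a bi-unitary diagonalisation; a real symmetric 2x2 is used here purely for illustration.)

```python
import math

# Toy symmetric 2x2 "Yukawa" after RG evolution: the off-diagonal
# entry has been generated by the running and must be rotated away.
Y = [[1.00, 0.05],
     [0.05, 0.10]]

# Rotation angle that re-diagonalises Y (the "back-rotation")
theta = 0.5 * math.atan2(2 * Y[0][1], Y[0][0] - Y[1][1])
c, s = math.cos(theta), math.sin(theta)

def back_rotate(M):
    """Return R^T M R with R = [[c, -s], [s, c]]."""
    R = [[c, -s], [s, c]]
    RTM = [[sum(R[k][i] * M[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
    return [[sum(RTM[i][k] * R[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

Yd = back_rotate(Y)
print(f"off-diagonal after back-rotation: {Yd[0][1]:.1e}")  # ~0
# The same back_rotate acts on all fermionic Wilson coefficients, so the full
# evolution matrix is (back-rotation matrix) x (RG evolution matrix).
```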

@LucaMantani
Author

I see, thanks for the explanation and the reference, very interesting! We are currently running with only the top Yukawa turned on, and in that case the RG preserves the structure (within all of our approximations), generating neither off-diagonal entries nor other Yukawas. So in this specific case, the rotation does not seem to have an impact.

@peterstangl
Collaborator

Yes, if the CKM matrix is taken to be the unit matrix, so that the up and down Yukawa matrices can be simultaneously diagonalised (which is the case in your scenario with only the top Yukawa turned on), then the Yukawa couplings preserve a global flavour symmetry that protects you from generating off-diagonal entries through RG evolution.
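This protection can be seen directly in the schematic form of the Yukawa beta-function terms, which are products like $Y Y^\dagger Y$: for a diagonal $Y$ these products stay diagonal. A toy check (the schematic cubic term is an assumption about the structure, not the full beta function):

```python
# Toy check: a diagonal Yukawa stays diagonal under cubic terms like Y Y^T Y.
yt = 0.94
Y = [[0, 0, 0],
     [0, 0, 0],
     [0, 0, yt]]   # only the top Yukawa turned on

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

YYY = matmul(matmul(Y, Y), Y)   # Y is diagonal and real here, so Y^T = Y
off_diag = [YYY[i][j] for i in range(3) for j in range(3) if i != j]
print(off_diag)   # all zero: no off-diagonal entries are generated
```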
