Allow handicap calculation for generic scoring systems #56

Closed · TomHall2020 opened this issue Feb 2, 2024 · 3 comments · Fixed by #59

TomHall2020 commented Feb 2, 2024

Presently the supported scoring systems are defined in the Target class and used to guide the calculation of the expected arrow score by dispatching to an inlined formula. As a result, only explicitly listed scoring systems are usable. It also makes the mathematical part of the code in arrow_score() harder to read and maintain.

It would be useful to make this generic so that the expected arrow_score, and therefore the handicap, can be calculated directly for any custom scoring system (e.g. 11-zone or Kings of Archery scoring for indoors) by providing the parameters defining the rings of a target face. In my intended use case it would also let me provide the target information directly, rather than maintaining a mapping to the archeryutils named scoring systems for different targets.

I've already got an implementation partly cooked up, having reverse engineered the formulas in handicap_equations, so the more interesting part for me is how to expose an API for it.
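
For illustration, here is a minimal sketch of the two styles of input. The custom spec is just the ring-diameter-to-points mapping used in the snippets below, not a settled API, and the Target import path is an assumption about the current archeryutils layout:

from archeryutils.targets import Target  # assumed import path

# Current: the face must be one of the named scoring systems known to Target
target = Target("10_zone_5_ring_compound", 40, 18.0, indoor=True)

# Proposed: describe any face directly as ring diameter [metres] -> points scored
# (this is the WA 18m 40cm compound triple spot, i.e. the same face as above)
custom_spec = {0.02: 10, 0.08: 9, 0.12: 8, 0.16: 7, 0.2: 6}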

TomHall2020 commented Feb 3, 2024

To explain what I've cooked up so far, here are some relevant snippets.

in handicap_equations.py:

...
import itertools as itr
# (numpy is assumed already imported as np elsewhere in this module)
...

def _s_bar(target_specs: dict[float, int], arw_rad: float, sig_r: float):
    """Calculate expected score directly from target ring sizes.

    Parameters
    ----------
    target_specs : dict[float, int]
        Mapping of target ring *diameters* in [metres], to points scored
    arw_rad : float
        arrow radius in [metres]
    sig_r : float
        standard deviation of group size [metres]

    Returns
    -------
    s_bar : float
        expected average score per arrow

    Notes
    -----
    May differ from previous implementations due to floating point errors when
    deriving ring radii directly from diameters, rather than multiplying by ring
    count and dividing by the overall diameter.
    E.g. 0.12 / 2 != 3 * 0.4 / 20

    Examples
    --------
    >>> #WA 18m compound triple spot
    >>> specs = {0.02: 10, 0.08: 9, 0.12: 8, 0.16: 7, 0.2: 6}
    >>> _s_bar(specs, 9.3e-3, 0.04)
    8.928787288284521 # differs from previous implementation by 1.7763568394002505e-15
    """

    # ring scores from innermost to outermost, with 0 appended for a miss
    ring_scores = [*itr.chain(target_specs.values(), [0])]
    # points lost when an arrow falls just outside each successive ring
    score_drops = [inner - outer for inner, outer in itr.pairwise(ring_scores)]
    max_score = max(ring_scores)

    # expected score: maximum score minus the probability-weighted score drops
    s_bar = max_score - sum(
        score_drop * np.exp(-(((arw_rad + (ring_diam / 2)) / sig_r) ** 2))
        for ring_diam, score_drop in zip(target_specs, score_drops)
    )
    return s_bar
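
For reference, the closed form the snippet above implements is

$$\bar{s} \;=\; S_1 \;-\; \sum_{i=1}^{n} \left(S_i - S_{i+1}\right)\exp\!\left[-\left(\frac{r_a + R_i}{\sigma_r}\right)^{2}\right], \qquad S_{n+1} = 0,$$

where $S_i$ is the score of ring $i$ counted from the centre, $R_i$ its outer radius (half the diameter key in target_specs), $r_a$ the arrow radius, and $\sigma_r$ the standard deviation of the group.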

example test:

import pytest

# Assumed imports/fixtures from the rest of the test module; the exact import
# paths and the hc_params construction may need adjusting to match the existing tests.
from archeryutils.targets import Target
from archeryutils.handicaps import handicap_equations as hc_eq

hc_params = hc_eq.HcParams()


class Test_S_Bar:

    @pytest.mark.parametrize(
        "distance, arrow_diam, handicap",
        [
            (20.0, 9.3e-3, 20.0),
            (18.0, 9.3e-3, 25.0),
            (18.0, 9.3e-3, -10.0),
            (18.0, 9.3e-3, 200.0),
            (18.0, 10.7e-3, 5.0),
            (10.0, 7.5e-3, 60.0),
            (50.0, 5.5e-3, 25.0),
        ],
    )
    def test_compare_vs_40cm_compound(self, distance, arrow_diam, handicap):
        # named 40cm 5-ring compound face vs the equivalent explicit ring spec
        target = Target("10_zone_5_ring_compound", 40, distance, indoor=True)
        spec = {0.02: 10, 0.08: 9, 0.12: 8, 0.16: 7, 0.2: 6}

        arrow_score = hc_eq.arrow_score(
            target=target,
            handicap=handicap,
            hc_sys="AGB",
            hc_dat=hc_params,
            arw_d=arrow_diam,
        )
        sigma_r = hc_eq.sigma_r(
            handicap=handicap,
            hc_sys="AGB",
            dist=distance,
            hc_dat=hc_params,
        )
        s_bar = hc_eq._s_bar(
            spec,
            arw_rad=arrow_diam / 2,
            sig_r=sigma_r,
        )

        assert arrow_score == pytest.approx(s_bar)

TomHall2020 commented:

I'm pretty far in with this now and will open a PR to show what I've been able to do with the generic implementation. There are still choices to make on the API that I could use input on, mainly with regards to constructing target/pass instances. I think it may also have fixed a mistake in the calculations for WA field faces.

jatkinson1000 (Owner) commented:

Yep, I am in support of this.
Please do open a PR - if you can open it as a WIP or draft I can try to take a look and provide some input, and we can record the development conversation there.
I am currently working on #58 and #2.

Comments based on the above:

  • It would be good to preserve providing the target type as a string and covering most target types, with user-defined faces being a fairly specialised option. My guess at the best way to do this would be to have some kind of private dictionary that maps the default face types to instances of your target_specs dictionary (see the sketch after this list for one possible shape).
  • Rather than creating a private s_bar function I think it'd be better to just modify arrow_score? But I appreciate having both in place to compare for now.
  • With changes in precision we'd need to be careful and see how the tests fare, as the official numbers now set by AGB use the old code - which is the same as the formulation derived by D. Lane. If there are differences then I think there would be a way to wrangle the numbers, it'd just be a bit verbose/ugly. Hopefully this won't be an issue however... I'll think on it.
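
A minimal sketch of what that private mapping could look like; the name _FACE_SPECS, the helper function, and the key string are illustrative assumptions rather than an agreed design, and the only entry shown is the 40cm 5-ring compound spec from the snippets above:

# Hypothetical private registry: named face type -> {ring diameter [m]: points}
_FACE_SPECS: dict[str, dict[float, int]] = {
    "10_zone_5_ring_compound_40cm": {0.02: 10, 0.08: 9, 0.12: 8, 0.16: 7, 0.2: 6},
}

def _get_face_spec(face_type: str) -> dict[float, int]:
    """Look up a built-in face definition, raising a clear error for unknown names."""
    try:
        return _FACE_SPECS[face_type]
    except KeyError as err:
        raise ValueError(f"Unknown face type: {face_type!r}") from err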
