SIMD distributions aren't well documented #1227
This also might be related to #496, but that issue hasn't had any comments for 4 years and a lot has been done since then, so I'm not sure what its status is.
Thanks for the detailed issue. I won't dig into the specifics now, but I do plan to go over SIMD Uniform distributions as part of #1196 (and would welcome relevant comments on the report there).
I should also add that before I wrote this, I didn't know about the …
I really should have read this earlier, apologies.
This is a line inside the plane (or space). I'm surprised you'd think we might have a sampler for that, and especially surprised you might think SIMD algorithms would do that. Granted, we do have UnitCircle which samples from a line in the plane, but... SIMD is Single Instruction Multiple Data, not interpolation. You can think of a SIMD sampler as sampling in a (hyper-)box defined by two points, or you can think of it as independently sampling N values from N independent ranges. This applies to both int and float variants. SIMD stuff isn't well documented because it's experimental and not really stable, but... maybe we should add some basic docs.
This is a different problem solved by UnitBall.
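For reference, a minimal sketch of sampling from the ball interior with rand_distr (assuming rand_distr is added as a dependency):

```rust
use rand::prelude::*;
use rand_distr::UnitBall;

fn main() {
    // UnitBall is Distribution<[f64; 3]>: a point uniformly distributed
    // inside the unit ball in three dimensions.
    let p: [f64; 3] = rand::thread_rng().sample(UnitBall);
    println!("p = {p:?}");
}
```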
This works (using rand master, since the previous release used packed_simd_2):

```toml
# Cargo.toml
[package]
name = "simd-tests"
version = "0.1.0"
edition = "2021"

[dependencies.rand]
git = "https://github.com/rust-random/rand.git"
rev = "7d73990096890960dbc086e5ad93c453e4435b25"
features = ["simd_support"]
```

```rust
// src/main.rs (requires a nightly toolchain for portable SIMD)
#![feature(portable_simd)]

use rand::prelude::*;
use std::simd::Simd;

fn main() {
    let mut rng = rand::thread_rng();

    // Standard distribution: one independent sample per lane.
    let x: Simd<i8, 4> = rng.gen();
    println!("x = {x:?}");

    // Uniform range with SIMD endpoints (lower bound splatted, upper bound per lane).
    let y: Simd<f32, 4> = rng.gen_range(
        Simd::<f32, 4>::splat(0.0) ..=
        Simd::from_array([1.0, 2.0, 3.0, 1.0])
    );
    println!("y = {y:?}");

    let z: Simd<i32, 4> = rng.gen_range(
        Simd::<i32, 4>::splat(0) ..=
        Simd::from_array([10, 100, 1000, 10000])
    );
    println!("z = {z:?}");
}
```
Sounds like you are talking about …
Action: we should add a little documentation clarifying exactly what SIMD stuff is good for. Maybe we should also support something like …
Problem: …
When I ported to std::simd I added a small bit in Lines 156 to 159 in 7d73990
Though this issue still deserves more thought.
Some documentation was added in #1526.
Right now, the exact definitions of the distributions for SIMD types aren't documented at all, and I personally had to delve into the source code to discover (after much difficulty) which implementation was actually provided. I'm going to separate my thoughts into sections so this is a bit easier to read.
Uniform distributions
Uniform distributions for SIMD types have two natural definitions: either as a linear interpolation between the two values, or as a random value inside the box enclosed by the two provided points.
For floating-point SIMD vectors, both definitions would make logical sense to include somewhere, even though the bounding-box version can easily be computed from existing float distributions, with one caveat: does "excluding" the end point from the distribution mean that all the facets touching the final point are excluded (no component may equal the corresponding component of the final point), or only that the final point itself is excluded? The former is easily composable with existing distributions, but the latter is not, although I fail to find a case where the latter is that useful.
For integers, the bounding-box definition is still natural, but the linear interpolation is not, since it would require computing the distance between the points and removing common factors from all components before adding. And unlike with floats, where precision can be lost, linear interpolation on integers could easily be done losslessly.
As expected, integers use the bounding-box approach. However, floats use the linear-interpolation approach, at least from what I can see. While linear interpolation is (IMHO) the objectively more useful version to provide, I think both could be useful in their own right, and there isn't really any documentation anywhere describing which version is provided, whereas the non-SIMD float distributions have entire pages written about what they do.
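To make the two readings concrete, here is a rough sketch (not rand's actual implementation; lerp_sample and box_sample are made-up helper names, and this needs nightly for portable SIMD) of how they differ for a pair of f32x4 endpoints:

```rust
#![feature(portable_simd)]

use rand::prelude::*;
use std::simd::Simd;

type F32x4 = Simd<f32, 4>;

// "Line" reading: one shared t in [0, 1), so the sample always lies on the
// segment between `lo` and `hi`.
fn lerp_sample<R: Rng>(rng: &mut R, lo: F32x4, hi: F32x4) -> F32x4 {
    let t = F32x4::splat(rng.gen::<f32>());
    lo + (hi - lo) * t
}

// "Box" reading: an independent t per lane, so the sample lies anywhere in
// the axis-aligned box spanned by `lo` and `hi`.
fn box_sample<R: Rng>(rng: &mut R, lo: F32x4, hi: F32x4) -> F32x4 {
    let t = F32x4::from_array(rng.gen::<[f32; 4]>());
    lo + (hi - lo) * t
}

fn main() {
    let mut rng = rand::thread_rng();
    let lo = F32x4::splat(0.0);
    let hi = F32x4::from_array([1.0, 2.0, 3.0, 4.0]);
    println!("line: {:?}", lerp_sample(&mut rng, lo, hi));
    println!("box:  {:?}", box_sample(&mut rng, lo, hi));
}
```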
Standard distributions
While it seems obvious, for floats the standard distribution (and the open and open-closed variants) simply generates a random number for each lane of the vector. However, a mathematically useful alternative could be generating a random point inside a unit hypersphere, which is not trivial, but definitely a reasonable distribution to add.
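There is a standard construction for this (sketched below with rand_distr's StandardNormal; unit_ball_point is a hypothetical helper, not an existing rand API): draw a direction from independent normals, then scale the radius by U^(1/N).

```rust
use rand::prelude::*;
use rand_distr::StandardNormal;

// Uniform point inside the N-dimensional unit ball: independent standard
// normals give a rotation-invariant direction; scaling the radius by
// U^(1/N) spreads samples uniformly over the volume.
fn unit_ball_point<R: Rng, const N: usize>(rng: &mut R) -> [f64; N] {
    let mut v = [0.0f64; N];
    for x in &mut v {
        *x = rng.sample(StandardNormal);
    }
    let norm = v.iter().map(|x| x * x).sum::<f64>().sqrt();
    let r = rng.gen::<f64>().powf(1.0 / N as f64);
    for x in &mut v {
        *x *= r / norm;
    }
    v
}

fn main() {
    let p: [f64; 4] = unit_ball_point(&mut rand::thread_rng());
    println!("p = {p:?}");
}
```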
Const generics
One thing that would be useful alongside the SIMD types' distributions is const-generic versions of the existing gen_iter methods that produce a set number of values. This could also mitigate concerns about the distributions' implementations by letting users generate values as f64x4::from_array(rng.gen_array::<4>()). While not absolutely required for SIMD support, it's something else I'd like to see that's probably mentioned in another issue, so I won't focus too much on it.
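A rough sketch of what such a helper could look like against the current Rng/Standard API (gen_array here is a hypothetical free function, not an existing rand method):

```rust
use rand::distributions::{Distribution, Standard};
use rand::prelude::*;

// Hypothetical helper in the spirit of the suggested rng.gen_array::<N>():
// fill a fixed-size array with samples from the Standard distribution.
fn gen_array<T, R: Rng, const N: usize>(rng: &mut R) -> [T; N]
where
    Standard: Distribution<T>,
{
    std::array::from_fn(|_| rng.gen())
}

fn main() {
    let mut rng = rand::thread_rng();
    let a: [f64; 4] = gen_array(&mut rng);
    println!("a = {a:?}");
    // With portable SIMD enabled, this could then feed a vector constructor,
    // e.g. std::simd::Simd::from_array(a).
}
```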
Non-SIMD versions
And finally, it would be useful to have these distributions available outside of the SIMD implementations. Although it would require extra computation, it would be useful to be able to compute uniform ranges along a line without the loss of accuracy that could accumulate via multiplication. I'm not sure what this API would look like, but IMHO the SIMD versions should just be wrappers around these distributions, and the distributions themselves might be optimised to use SIMD operations internally regardless of the end result (once SIMD support is stabilised, that is).
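For illustration only, a rough sketch of what such a scalar distribution could look like under the current Distribution trait (Segment is a hypothetical name, and this naive lerp doesn't address the precision concern; it only shows the API shape):

```rust
use rand::distributions::Distribution;
use rand::prelude::*;

// Hypothetical scalar "segment" distribution: samples points on the line
// segment between `a` and `b`, with no SIMD types involved.
struct Segment<const N: usize> {
    a: [f64; N],
    b: [f64; N],
}

impl<const N: usize> Distribution<[f64; N]> for Segment<N> {
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> [f64; N] {
        // One t shared by all components keeps the sample on the segment.
        let t = rng.gen::<f64>();
        std::array::from_fn(|i| self.a[i] + (self.b[i] - self.a[i]) * t)
    }
}

fn main() {
    let seg = Segment { a: [0.0, 0.0, 0.0], b: [1.0, 2.0, 3.0] };
    let p = seg.sample(&mut rand::thread_rng());
    println!("p = {p:?}");
}
```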