
[Draft] Add generic logic structure and backends functionalities #11

Open · wants to merge 22 commits into base: main
Conversation

@Doomsk (Member) commented Jun 27, 2024

Design a logic and structure for the backends so that they follow a set of defined rules and share the same standard approach.

According to the qadence 2 structure design (Notion page):
[x] Add Dialect, Bytecode and Interface generic API classes to accommodate current and future backends
[x] Adjust the pulser and pyqtorch logic and data structures to work with the APIs above
[ ] Use Embedding from pyqtorch v1.3.0

@Doomsk Doomsk requested a review from kaosmicadei June 27, 2024 13:35
@Doomsk Doomsk self-assigned this Jun 27, 2024
@Doomsk Doomsk requested review from dominikandreasseitz and removed request for kaosmicadei June 27, 2024 13:40
@kaosmicadei (Contributor) commented:

Should we add the QuInstruct extra attributes to this PR, too?

class QuInstruct:
    def __init__(self, name: str, support: Support, *args: Any, **attributes: Any) -> None:
        self.name = name
        self.support = support
        self.args = args
        self.attr = attributes

    def get(self, attribute: str, default_value: Any | None = None) -> Any:
        return self.attr.get(attribute, default_value)
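For context, a minimal usage sketch of the class above (the instruction name, the Support arguments and the duration attribute are purely illustrative, not taken from this PR):

rx = QuInstruct("RX", Support(target=(0,)), "theta", duration=100)  # hypothetical instruction
rx.get("duration")     # -> 100
rx.get("phase", 0.0)   # -> 0.0, falls back to the default since "phase" was never set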

@Doomsk (Member, Author) commented Jul 1, 2024, replying to the question above:

Definitely, we can use it for that.

Comment on lines +58 to +64
match direction:
    case "x":
        phase = 0.0
    case "y":
        phase = np.pi / 2
    case _:
        phase = direction
Member:

Remember to drop support for Python 3.9 if we do this.

I think that's fine, since 3.8 was the difficult one (it shipped with Ubuntu 20), and 3.9 is already outside NumPy support.

Member Author:

good point!

Member Author:

It's almost fully dropped; I'm taking care of the remaining parts of 3.9.
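(For reference, if 3.9 did have to stick around a bit longer, the match statement above reduces to a plain if/elif chain with the same behaviour:)

if direction == "x":
    phase = 0.0
elif direction == "y":
    phase = np.pi / 2
else:
    phase = direction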

pyproject.toml Outdated
@@ -26,7 +26,9 @@ classifiers = [

# always specify a version for each package
# to maintain consistency
dependencies = ["pyqtorch",
dependencies = [
"pulser",
Member:

I think pulser-core and pulser-simulation (and, later, torch) will be sufficient, and we can avoid the pulser-pasqal module.

@awennersteen (Member) commented:

I'm getting a bit confused here. Can you link me to any written idea for the backend platform?
I will continue this in Slack.

Comment on lines +51 to +52
Fresnel = replace(PulserAnalogDevice.to_virtual(), dmm_objects=(_dmm,))
FresnelEOM = Fresnel
Member:

Shouldn't we call this OrionAlpha?

Member Author:

I'm just using the names that I'm aware of; if you have some extra information, let me know and I'll change it!

Comment on lines +54 to +56
simulation = QutipEmulator.from_sequence(
    sequence=self.sequence.evaluate(self.embedding, values),
    with_modulation=True,
Member:

So, this is the "old fashioned" API for pulser simulation.
I understand the more "modern" way is to create the emulator separately and then pass it the sequence.
That way is also more compatible with tensor-network backends, where you want to adjust settings. Perhaps that should be made possible in the API?

Member Author:

That's the intention. For now, we are testing functionalities.
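(A rough sketch of the "separate emulator" pattern mentioned above; the QutipBackend and EmulatorConfig names and import paths are quoted from memory of recent pulser releases and should be checked against the version pinned here:)

from pulser.backend import EmulatorConfig
from pulser_simulation import QutipBackend

# The emulator is built as its own object, so backend-specific settings
# (e.g. for tensor-network emulators) can be adjusted before running.
backend = QutipBackend(sequence, config=EmulatorConfig())
results = backend.run()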

    with_modulation=True,
)
result = simulation.run()
return result.sample_final_state(N_samples=num_shots)
Member:

Run should return the state, no?
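(As far as I understand, run() returns a results object rather than the state itself; both the state and the samples are then extracted from it, roughly like this, with method names to be double-checked against pulser-simulation:)

result = simulation.run()                                   # results object, not the raw state
state = result.get_final_state()                            # final quantum state
counts = result.sample_final_state(N_samples=num_shots)     # measurement samples, as used here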

np.ndarray,
]
):
def __init__(
Member:

For running on the real device (and proposed as the default in upcoming Pulser releases), usage of the Batch concept is required.
Since we're redesigning the API anyway, it might be good to make it batch-aware or at least batch-compatible.

I imagine this would be similar to what I say a few lines below about instantiating the emulator separately.

Member Author:

Definitely, I will take that into account!
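(A purely hypothetical sketch of what a batch-aware entry point could look like; none of these names exist in the PR, and only sequence.evaluate(...) is taken from the snippet above:)

def run_batch(self, values_batch: list[dict], num_shots: int) -> list:
    """Build one sequence per parameter set and submit them together as a single Batch."""
    sequences = [
        self.sequence.evaluate(self.embedding, values) for values in values_batch
    ]
    return self._submit_as_batch(sequences, num_shots)  # hypothetical device/emulator-specific call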

Comment on lines 13 to 23
def compile(
    model: Model,
    backend_name: str,
    device: Optional[str] = None,
    output_bytecode: bool = False,
) -> Union[RuntimeInterfaceApi, BytecodeApi]:
    dialect_module: ModuleType = get_dialect_instance(backend_name)
    dialect: DialectApi = dialect_module.Dialect(backend_name, model, device)
    if output_bytecode:
        return dialect.compile_bytecode()
    return dialect.compile()
Member:

This might be a premature comment, but I'd expect the backend to be able to hook in and provide its own compiler passes.

I'm also a bit confused by the bytecode and dialect, but I don't see anything written down anywhere.

Member Author:

@awennersteen once I add the docstrings, things will (hopefully) be clearer.
But long story short: the return type RuntimeInterface has all the relevant methods for the specific backend, and it simply uses whatever implementation that backend supports for each particular method.
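(For illustration, a minimal call into the entry point quoted above; the backend names come from this PR, while the model object is assumed to exist already:)

interface = compile(model, backend_name="pyqtorch")                      # -> RuntimeInterfaceApi
bytecode = compile(model, backend_name="pulser", output_bytecode=True)   # -> BytecodeApi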

f_params = {"x": torch.rand(1, requires_grad=True)}
wf = compiled_model(pyq.zero_state(2), f_params)
dfdx = torch.autograd.grad(wf, f_params["x"], torch.ones_like(wf))[0]
assert not torch.all(torch.isnan(dfdx))
Member:

Suggested change:
-assert not torch.all(torch.isnan(dfdx))
+assert not torch.any(torch.isnan(dfdx))
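(A tiny illustration of why any is the right check here: with a single NaN present, not all(...) still passes, while not any(...) fails as intended:)

import torch

x = torch.tensor([0.1, float("nan")])
not torch.all(torch.isnan(x))   # True  -> the original assert passes despite the NaN
not torch.any(torch.isnan(x))   # False -> the suggested assert correctly catches it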

@awennersteen (Member) commented:

People on the Hamiltonian or Simulation teams generally request access to intermediate-stage information from the emulator, and I understand this is crucial for their workflow.

I fear that the current approach here will not enable that, but I think it should be fairly easy to make it available.
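(One possible way to expose intermediate-stage information with the current QutipEmulator path is sketched below; the evaluation_times argument and the results attributes are written from memory of pulser-simulation and should be double-checked:)

simulation = QutipEmulator.from_sequence(
    sequence=self.sequence.evaluate(self.embedding, values),
    with_modulation=True,
    evaluation_times="Full",       # keep every simulated time step, not just the final one
)
results = simulation.run()
intermediate_states = results.states       # one state per evaluation time
final_state = results.get_final_state()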

@RolandMacDoland (Contributor) left a comment:

Thanks @Doomsk. This is a massive PR and I just noticed that I didn't press the submit button after my previous review. Sorry about that, but here it is. In all fairness, I haven't gone through all of it yet; I will review in steps.

@@ -13,14 +13,14 @@ repos:
- id: black

  - repo: https://github.com/charliermarsh/ruff-pre-commit
-   rev: "v0.4.2"
+   rev: "v0.4.10"
Contributor:

Could you please check whether the template-python-project is up to date with the pre-commit hooks?

docstring-quotes = "double"

[tool.mypy]
python_version = "3.10"
-warn_return_any = true
+warn_return_any = false
Contributor:

Why?

Member Author:

It doesn't like certain kinds of return values, namely when you write a function factory.
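(A minimal example of the kind of code that trips warn_return_any: in a function factory the inner expression is typed as Any, so returning it from a function annotated with a concrete type triggers the warning:)

from typing import Any, Callable

def make_adder(offset: Any) -> Callable[[int], int]:
    def add(x: int) -> int:
        # `offset + x` has type Any, so mypy with warn_return_any = true
        # complains about returning Any from a function declared to return int.
        return offset + x
    return add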

@@ -26,7 +26,10 @@ classifiers = [

# always specify a version for each package
# to maintain consistency
dependencies = ["pyqtorch",
dependencies = [
Contributor:

Do these dependencies have to be installed by default? Or could they be added as extras?

Member Author:

I'll change it once I refactor parts of the Embedding.

def __init__(
    self,
    backend: str,
    sequence: InstructionsObjectType,
Contributor:

It is highly confusing that sequence is of instruction type (sort of) and instructions is of bytecode type.


def __init__(
    self,
    backend: str,
Contributor:

Shouldn't that be a backend name?

"""

model: Model
param_buffer: ParameterBufferApi
Contributor:

Suggested change:
-param_buffer: ParameterBufferApi
+param_buffer: ParameterBuffer


model: Model
param_buffer: ParameterBufferApi
mapped_vars: EmbeddingMappingResultType
Contributor:

Suggested change:
-mapped_vars: EmbeddingMappingResultType
+mapped_vars: EmbeddingMapType

)


class RuntimeInterfaceApi(
Contributor:

Suggested change:
-class RuntimeInterfaceApi(
+class RuntimeInterface(

Interface is redundant.


if model.register.grid_type == "triangular":
    layout = TriangularLatticeLayout(n_traps=61, spacing=spacing_unit)
    transform = np.array([[1.0, 0.0], [0.5, 0.8660254037844386]])
Contributor:

Is this some unit conversion?

Member Author:

It transforms coordinates from a regular square grid into the triangular shape.
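(Illustration of what the matrix presumably does: its rows are the triangular-lattice basis vectors (1, 0) and (1/2, sqrt(3)/2), with sqrt(3)/2 ≈ 0.8660254037844386, so multiplying square-grid coordinates by it shears them onto the triangular lattice:)

import numpy as np

transform = np.array([[1.0, 0.0], [0.5, np.sqrt(3) / 2]])

square_coords = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])   # points on a regular square grid
triangular_coords = square_coords @ transform                 # the same points on the triangular lattice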

# TODO: move `from_model` and `name_mapping` to a new `ModelParser` class and
# make `Embedding` independent of `Model`; also no need to have `ParameterBuffer`

def np_call(call: Call) -> Callable[[dict, dict], np.ndarray]:
Contributor:

Isn't that covered in Embedding?

@RolandMacDoland (Contributor) commented:

Hey @Doomsk, you mentioned you'll be carrying on with this, right?
