Since #4782 we've been pulling the BackendProperties unconditionally whenever they're available for a backend. The only provider that actually provides them (besides the fake backends) is the IBMQ provider, where fetching them involves an expensive API call that parses a large JSON blob into the BackendProperties object. While the result is cached for everything after the first call, there can be significant overhead on that first call to properties(), which can often be slower than the entire transpile. What feels unnecessary here is that no qubit remapping ever happens (I've never seen it in practice), so we waste time querying the IBMQ API and never use the data we sit waiting for.
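To illustrate why only the first properties() call is slow, here's a minimal sketch of the cache-on-first-call pattern described above. The class and method names (FakeIBMQBackend, _api_fetch) are made up for illustration and are not the real provider code:

```python
import json

class FakeIBMQBackend:
    """Toy backend illustrating cache-on-first-call for properties()."""

    def __init__(self):
        self._properties = None
        self.api_calls = 0  # count expensive API round-trips

    def _api_fetch(self):
        # Stand-in for the expensive API call returning a large JSON blob
        self.api_calls += 1
        return json.dumps({"qubits": [{"T1": 100.0}], "gates": []})

    def properties(self):
        # Only the first call pays the fetch/parse cost;
        # later calls return the cached object
        if self._properties is None:
            self._properties = json.loads(self._api_fetch())
        return self._properties

backend = FakeIBMQBackend()
backend.properties()
backend.properties()
print(backend.api_calls)  # → 1: the expensive fetch ran only once
```

The cost the issue complains about is that transpile() triggers that first, uncached call even when the properties are never used.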
Steps to reproduce the problem
Run transpile() with a backend that has a BackendProperties available.
What is the expected behavior?
That we don't actually call backend.properties() unless it's needed.
Suggested solutions
Honestly, between issues like this and #5113, I think we should revert the faulty qubit functionality; it doesn't seem to ever be used. More fundamentally, this isn't something we should be doing in the transpiler. If a qubit is faulty and the qubits should be remapped, that's something the provider should handle, since it's backend specific; hacking this inside transpile() feels like the wrong place for it.
Longer term, when we have a new backends abstract API (like what's being proposed in #5885), the provider will actually be providing a CouplingMap object, which makes this a bit easier. But there's nothing stopping the IBMQ provider from doing this with the coupling map edge list today.
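To sketch what provider-side filtering could look like: the provider can drop faulty qubits from the coupling map edge list before the transpiler ever sees it, instead of transpile() doing the remapping. The edge list and faulty-qubit set below are invented for illustration, and plain tuples stand in for the CouplingMap edge list:

```python
# Hypothetical linear-chain coupling map edge list from a backend
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]

# Hypothetical faulty qubit reported by the backend
faulty = {2}

# Keep only edges that touch no faulty qubit; the result could be fed
# to CouplingMap(couplinglist=...) and handed to the transpiler as-is
filtered = [(a, b) for (a, b) in edges if a not in faulty and b not in faulty]
print(filtered)  # → [(0, 1), (3, 4)]
```

With this approach the transpiler just sees a smaller coupling map and never needs to know a qubit was faulty.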
So this is no longer relevant. The faulty qubit functionality isn't part of the BackendV2 interface, and providers will need to do any faulty qubit filtering themselves. But also, more of the transpiler is noise aware now (VF2 layout passes, dense layout passes, etc.), so we can't avoid loading the backend properties. Especially with BackendV2/Target, this data is all integrated into what we pass to the compiler now, so there is no way to avoid it anymore.