STFT processor #2
I am not sure one can get the required bandwidth with only a 1024-point FFT. I want to do an MS-type gate with motional sidebands at f_c +- 5 MHz, with a gate duration of 1 ms. Q1) Is this many points in the FFT possible? Or is the expectation that one would need 2 STFT processors, one for the upper and one for the lower sideband (in this example)? Q2) There is a reasonable amount of data to be transferred into the STFT engine (2 sidebands * 20 points * 2 coefficients = 160 words that are non-zero) - how do you foresee this loading being implemented so as not to be a bottleneck?
Q1: If you want to cover

Q2: In the worst case you have to transfer those 160 words every pulse (1 ms). That's ok. And what do you mean by 20 points? Also, what do you compare this with? A 4-tone pulse in the old architecture (not even shaped in amplitude) has 4 tones times (2 words for frequency, one for amplitude, one for phase) = 16 words. In an STFT it has (4 components times 2 quadratures) plus 2 words for the carrier, one for duration, and one for interpolation = 12 words.
Other ways to phrase the constraints:
But I see that due to simple coupling strength considerations one would want multiple (blue and red per mode) clearly separate spectral windows, i.e. either multiple carriers or even multiple independent STFT processors. That would allow large
@jordens I haven't had much time to think about this these past weeks, but I would say it sounds quite interesting. In terms of generating the tones, what kind of distortion/noise floor can one reasonably expect from the FFT with (I am assuming CIC) interpolation? As long as it is below what the DAC can manage we are fine, but I don't have a sense for how this scales with e.g. the number of tones.

Didi has for a long time wanted to express transport waveforms as Fourier series rather than as splines (as you know!), so it seems this would allow that to be tried out as well. We would set

For microwave gates, we have sideband Rabi frequencies in the 1-2 kHz regime, so we need to adjust our sideband detunings at the ~10s of Hz resolution level to get good loop closure for really high fidelity gates. For this, given the ~10-15 MHz splitting between red and blue sideband frequencies, we would need to have two separate small spectral windows (keeping

A couple of questions for now:
Ack.
So what would be? It depends on your application, and the type of distortion. If it's broadband noise that you end up with, that's different from harmonic distortion which is (probably) less critical as long as one is careful about where the
I am not pushing hard for Fourier decomposition being the best/right thing to do, but it is a different way to represent an arbitrary signal over a finite period of time. The advantage (why it would be "better") would be relative ease in adjusting parameters to compensate for non-flat frequency characteristics of (DAC, amplifier, trap filters) between the digital waveform and the ion. Are the waveforms periodic? In general, no. And I have not spent time thinking about this in detail, or doing calculations (transport is on the far back burner for me these days); this was merely a point/question since this STFT parameterization would enable one to try out the idea for transport.
Roughly speaking, the answer is "yes", but there would probably be the need for an overall amplitude scaling and time offset between red/blue sidebands to make sure the amplitudes and phases are matched at the ion, due to amplifier gain mismatch/frequency-dependent line loss/channel-dependent time delay. If these are not matched it degrades the quality of the gates, seen experimentally.
Yes, thanks :) Not thinking freely enough...
That would also be a nice option, especially for multitone gates, if one is compensating for a drifting secular frequency -- moves all sideband detunings uniformly.
The NIST ones do :)
The NIST/LLNL theory paper here (open access) explains exactly why we care about pulse ramps on and off for the microwaves -- it boils down to being able to account properly for off-resonant carrier effects on the gate, which matter much more for microwave gates than for laser gates in general. In the language of the paper, we describe this as ensuring that the transform between the lab frame and the "bichromatic interaction picture" where we analyze the gate (an approach borrowed from Christian Roos's work on Ca+ laser gates) approaches the identity.

In practice, in the experiment we implement a 5 us Blackman rise/fall shape with a flat top (~50-100 us) in between. The specific shape doesn't matter enormously (i.e. nothing magical about Blackman), but the whole thing must meet the criterion that the value in Eq. (21) is the identity. Experimentally, we can see that small (<100 ns) tweaks of the rise and fall times lead to measurable changes in the final Bell state fidelity at the few tenths of a percent level (a level that we care about now). In other words, it is not enough just to do a smooth ramp off and on; we are rather sensitive to the exact timing if the ramp is not super slow.

Although I haven't done the math explicitly, it must be equivalent to making sure there is a minimum in the frequency spectrum at the carrier frequency; in other words, we are trying to put a sharp "null" at the qubit frequency in the frequency-domain representation of the BSB + RSB pulses, rather than just narrowing their spectrum more (to avoid the qubit frequency) by slowing the rise and fall times. One can do this kind of "null" with a square pulse too; with a shaped rise/fall, you're basically making miscalibrations around the "null" in frequency space more forgiving.
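A minimal numerical sketch of that criterion, assuming the 5 us Blackman rise/fall and a flat top in the stated range; the time step, the 20 kHz detuning, and the simple "integral of envelope times cos(delta t)" figure of merit are illustrative assumptions, not the calculation from the referenced paper:

```python
import numpy as np

dt = 1e-8                        # 10 ns time step (illustrative)
t_rise = 5e-6                    # 5 us Blackman rise/fall, as in the comment
t_flat = 80e-6                   # flat top in the ~50-100 us range
delta = 2 * np.pi * 20e3         # illustrative sideband detuning (rad/s)

n_rise = int(round(t_rise / dt))
n_flat = int(round(t_flat / dt))
ramp = np.blackman(2 * n_rise)   # full Blackman window, split into rise and fall
envelope = np.concatenate([ramp[:n_rise], np.ones(n_flat), ramp[n_rise:]])
t = np.arange(len(envelope)) * dt

# In a frame rotating at the qubit frequency, an equal-amplitude BSB+RSB drive
# is ~ envelope(t) * cos(delta * t); its spectral weight at the qubit frequency
# (zero frequency in this frame) is the plain time integral below. Tweaking
# t_rise or t_flat by small amounts moves this value through zero.
residual = np.sum(envelope * np.cos(delta * t)) * dt
print(f"spectral weight at the carrier: {residual:.3e}")
```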
It seems so! :) At least for some applications. As long as the "several tones with spline-defined amplitude/phase/freq" parameterization continues to be available, that is probably a better match for us at present. I was unclear whether this STFT parameterization was intended as a substitute or a complement.
Exactly. Distortion is neither the whole story nor does it originate from DAC only. There is the upconverter, and the entire rest of the RF chain as well. At least in the digital domain, "distortion" is typically a set of input-signal-dependent spurs and not uncorrelated noise.
Yes. But you don't need the phaser gateware to try expressing transport waveforms in Fourier series. Just do it on your computer first and see whether it makes sense.
Those would all be still within the factorization. Compensating for narrowband transfer function differences would not be. I.e. if there is something that breaks the hierarchy carrier-sidebands-narrowband then that becomes tricky.
Absolutely. The hierarchy is universal and mapping it is a good idea.
I'm very surprised. From what I can see in fig. 1 of the preprint (already from (b) and knowing the trajectory-STFT correspondence), this pulse is not flat top at all. Do you have a time domain plot of that pulse to indulge me? I'd actually bet a couple of beers on it not being flat top. The presence of multiple tones (that coincidentally do not correspond to phase modulation) contradicts flat top.
Yes! Why do you look (in the published paper) at your pulse shapes
Yes yes yes. Do the math in frequency space!
Yes! I know. Why on earth then stick with

This is exactly why I came up with this idea. The condition of phase space loop closure for a (any) mode at delta is exactly
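(A hedged reading of the truncated condition above, in the standard MS-gate notation where Omega(t) is the complex pulse envelope driving a mode detuned by delta: loop closure of the phase-space trajectory over the gate duration T requires

\alpha(T) \propto \int_0^{T} \Omega(t)\, e^{i \delta t}\, dt = 0,

i.e. the Fourier component of the gated pulse at delta vanishes, which in the STFT representation is a direct linear constraint on the tone amplitudes a_m.)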
No! I still think that, exactly for the pulses in your paper, this is just the right thing. You just have to let go of thinking in the time domain and of temporal pulse shaping.
tl;dr - I like this idea, I want to understand more about how to make it work, when I have some time (goodness knows when) I will play with some Fourier transforms and pulses on my computer to figure out more :)
I am generally more worried about close-in distortion (e.g. non-harmonic spurs or elevated noise floor) than harmonic distortion; the analog parts of the chain tend to give more of the latter than the former, the exception being IMD from amps and from the upconverter (as well as feedthrough from imperfect mixer balancing). We jump through various hoops to try to keep IMD to a minimum (e.g. parallel amplifier chains, summed afterwards on hybrids), and we use homodyne mixing to avoid issues with low-level carrier and sideband feedthrough that one gets with SSB mixing, even with careful tweaking of offset bias. Anyway, the DAC will probably be the eventual limit on distortion (we currently try hard to make it so for close-in distortion), but I wanted to get a sense of where this STFT method might have its own noise floor. Whatever spurs arise need to be controlled carefully so they don't end up in the wrong place (this seems like a tractable issue, but one that we need to at least be aware of).
True. However, if you are already building an engine to turn frequency samples into time-domain output waveforms, then these frequency amplitudes can be tweaked and the rest of the math done in gateware, while if one changes predistortion on temporal samples one has to recalculate -- which in the absence of gateware means math on the PC (or possibly on a Zynq core device), thus slower. I agree, there is no free lunch, but I was just trying to harness the power of something you're already planning on doing....
Thanks for the beers :) You think I don't know what's in my own paper? Read the bottom right corner of p.3. It is three flat top pulses (in the lab frame), at different frequencies (
It's a good thought; the main (stupid) reason is that the knob we have in the lab is time domain pulse shaping with PDQ splines, so we have been thinking in terms of that. I will go back and think about shaping and coefficients in the frequency domain.
I am not giving up on the STFT idea! I think it's neat and am still sorting through things, trying to understand what the limits might be and where the big wins are. I haven't thought about this in great detail, as stated, due to my own bandwidth limitations. I still don't see a way around wanting a flat-top pulse, though, simply for reasons of gate speed. When one uses a pulse that is windowed over its whole duration (e.g. the LUH design), it means the entire gate is slower by a factor of ~2 when one is working in the regime where the maximum Rabi frequency is limited by available power (this is always the case for microwave gates), and one of the issues with microwave gates is speed.
Whether you look at the third harmonic of a single tone or close-in third order intermodulation doesn't matter much. They have the same origin, same mechanism, and same scaling, only the stimulus is different. I don't see how an analog component can have more (odd) harmonic distortion than IMD. Especially since harmonic distortion is typically lowered due to bandwidth limits and filters while IMD is not.
This has been done since the early days more than a decade ago IIRC.
Gateware to do temporal transfer function compensation is likely significantly simpler than gateware to handle the variable-speed tweaking of Fourier series components. Why should there be an advantage? This still looks like a wild but inefficient idea to me. But maybe I don't understand it.
I expect that you know your paper very well. That's not what I meant. Individually, the tones of a pulse are always "flat top". Even if you shape the pulse, each component in the Fourier expansion is "flat top" and has a "steep and well defined rise and fall". With multiple tones in a Fourier series the notion of a (temporal) "flat top" is not that meaningful. By the way, in your case the pulse (in the bichromatic interaction picture) looks like it might be equivalent to a phase (or frequency?) modulated pulse. The Jacobi-Anger expansion and the Bessel functions are a typical tell-tale of FM/PM.
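For reference, the Jacobi-Anger expansion mentioned here: a phase-modulated tone decomposes into a comb of sidebands with Bessel-function amplitudes,

e^{i \beta \sin(\omega_m t)} = \sum_{n=-\infty}^{\infty} J_n(\beta)\, e^{i n \omega_m t},

so tone amplitudes appearing in the ratios of J_n(beta) are the tell-tale of FM/PM referred to above.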
IIRC dynamical decoupling comes with a slow-down of similar magnitude. Given constant microwave power, is your scheme just as fast as the pure MS gate? I see a relative gate speed of only 0.49 in your paper. That would be worse than a factor of two slow-down in the LUH design. It's even funnier: with the STFT representation you can easily look for pulses that optimize speed over power while keeping other constraints (insensitivities).
Taken up by #14
Proposal for a different way to generate "pulses with multiple tones" and equivalently "pulse shaping".
Background
Virtually all current proposals or experiments on gates (e.g. LUH, NIST, Sussex, Weizmann, and others) use multiple tones.
Layout
Drawing the consequences, it appears that a very efficient parametrization of a gate pulse would be just:

- the tone amplitudes a_m = i_m + j*q_m, with m the tone index, an integer in [-M/2*cutoff, M/2*cutoff]; M = 64, 256, or 1024 the FFT width; cutoff = 0.8 the interpolation passband
- f_delta = 1/(M*r*t_sample), with t_sample = 2 ns (for Phaser) and r in [1, 2, 4, 8, 12, ..., infty] the interpolation factor
- t_pulse = n*r*t_sample, with n the integer pulse length; multiples of powers of two and of M are interesting values
- f_carrier, with mHz resolution

The output tones are then at m*f_delta + f_carrier with amplitude a_m, times the shape of the frequency-shifted gate (square in t, i.e. sinc in f), times the carrier interpolation window (the typical steep lowpass) shape. No other magic required. You can skip/clear as many of the M*cutoff tones as you want (this is obviously unlike time domain samples, where you can't skip). The above would completely describe a pulse. There would be no need for any other data or features ("spline interpolation" or any additional interpolation, "amplitude modulation", "temporal pulse shaping", "pre-distortion").

Technically speaking: you take the tone amplitudes, do an FFT, gate the output, interpolate to the pulse length, and upconvert to the carrier. The block diagram of the path is simply: FFT, time gate, interpolator, upconverter. No point in sketching it.
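A minimal numerical sketch of this chain (FFT, time gate, interpolator, upconverter), using numpy/scipy as stand-ins for the gateware blocks; the tone indices, the carrier value, and the polyphase resampler (in place of the actual hardware interpolator) are illustrative assumptions:

```python
import numpy as np
from scipy.signal import resample_poly

t_sample = 2e-9                  # DAC sample period (Phaser), from the text
M = 1024                         # FFT width
r = 8                            # interpolation factor
f_carrier = 100e6                # illustrative carrier for this baseband model
f_delta = 1 / (M * r * t_sample)

# Tone amplitudes a_m: two tones at m = +-10, i.e. at f_carrier +- 10*f_delta.
a = np.zeros(M, dtype=complex)
a[10] = 0.5
a[-10] = 0.5

# FFT stage: one frame of the multitone baseband waveform.
frame = np.fft.ifft(a) * M

# Time gate: repeat the frame to the desired pulse length (square gate).
n_frames = 4
gated = np.tile(frame, n_frames)

# Interpolator stage (polyphase resampler as a software stand-in).
interp = resample_poly(gated, up=r, down=1)

# Upconverter stage.
t = np.arange(len(interp)) * t_sample
rf = np.real(interp * np.exp(2j * np.pi * f_carrier * t))

print(f"f_delta = {f_delta/1e3:.1f} kHz; tones at f_carrier +- {10*f_delta/1e3:.1f} kHz")
```

The same a array, with more entries filled in, gives an arbitrary multitone/shaped pulse without any other machinery.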
With an FFT in FPGA you can easily get a very respectable number of tones out of this.
And you could have more of those chains added together before the DAC.
Analysis
For Phaser, this would give much more than the "4 tones in 20 MHz" limitation and is much more scalable. This may be the ultimate toolkit to do gates and it may actually make the entire design significantly simpler (on top of making it much more scalable and better than e.g. a fixed set of tones like SAWG or old-Phaser). The representation/parametrization is extremely efficient compared to time domain or multiple modulated NCOs.
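For a rough feel for the numbers behind that claim, a quick sketch assuming M = 1024, t_sample = 2 ns, and cutoff = 0.8 from the proposal (the chosen values of r are arbitrary):

```python
# Tone spacing f_delta and usable one-sided span versus interpolation factor r,
# assuming M = 1024, t_sample = 2 ns, cutoff = 0.8 as in the proposal above.
M, t_sample, cutoff = 1024, 2e-9, 0.8
for r in (1, 8, 64, 512):
    f_delta = 1 / (M * r * t_sample)
    span = cutoff * (M / 2) * f_delta
    print(f"r = {r:4d}: f_delta = {f_delta:12.2f} Hz, usable span = +-{span / 1e6:8.3f} MHz")
```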
The technique is also analogous to OFDM/QAM.
Some IMD and non-linearities can and should be addressed outside this proposal. E.g. AOM/qubit chirps, IMD, and amplifier non-linearity should see something like an extended RFPAL approach (https://github.com/sinara-hw/meta/issues/50#issuecomment-582374158). Some features (like chirping the carrier or amplitude scaling + phase shifting it) could still be integrated into this proposal. That would handle the usual qubit "carrier phase tracking" requirement.
You also get free compensation of non-flat frequency/phase responses (AOM, amplifiers, filters etc) out of this. It doesn't need added functionality.
You can also use this to null away spectral leakage (due to the gate window) at multiple frequencies.
It's easy to solve for "smooth" pulses (equivalent to spectral narrowness) still under the constraints imposed by the gate.
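A minimal sketch of what solving for such pulses could look like numerically, assuming a plain square gate (so each tone leaks as a sinc, taken real here as if the gate were centered at t = 0) and using the minimum-norm solution as a crude stand-in for "smoothness"; the constraint frequencies and targets are illustrative:

```python
import numpy as np

M, r, t_sample = 1024, 8, 2e-9
f_delta = 1 / (M * r * t_sample)
t_pulse = M * r * t_sample            # one FFT frame, for simplicity
m = np.arange(-M // 2, M // 2)        # tone indices

def leakage_row(f):
    # Square gate of length t_pulse: a tone at m*f_delta contributes to
    # frequency f with a sinc((f - m*f_delta) * t_pulse) weight.
    return np.sinc((f - m * f_delta) * t_pulse)

# Constraints: unit amplitude at the +-5 MHz sidebands, nulls at the carrier
# and at two additional frequencies where leakage must vanish.
constraint_freqs = [+5e6, -5e6, 0.0, +2.5e6, -2.5e6]
targets = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
C = np.array([leakage_row(f) for f in constraint_freqs])

# Minimum-norm solution of the underdetermined system C @ a = targets.
a, *_ = np.linalg.lstsq(C, targets, rcond=None)
print("constraint residuals:", C @ a - targets)
```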
This approach also solves the problem of aliasing of spectral content within the interpolation filter transition bands which the temporal representation does not handle at all.
The question that needs to be discussed is whether the constraints on f_delta and t_pulse are a problem in practice, and whether there is a need for more than one such STFT processor per channel.

/cc @hartytp @dtcallcock @dhslichter @chanlists @cjbe @dnadlinger @jbqubit and whoever is interested.