There has been research and exploration around extensions to CAGRA that use product quantization to compress the input dataset, shrinking CAGRA's memory footprint and enabling support for larger datasets.
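For context, here's a minimal sketch of what PQ encoding does to the input vectors. It assumes the per-subspace codebooks have already been trained (e.g. via k-means); the flat memory layout, 256-entry codebooks, and function name are illustrative choices, not RAFT's actual API.

```cpp
#include <cstddef>
#include <cstdint>
#include <limits>
#include <vector>

// Encode one d-dimensional float vector into m uint8_t codes: the vector is
// split into m sub-vectors of length d/m, and each sub-vector is replaced by
// the index of its nearest centroid in the corresponding 256-entry codebook.
// codebooks is laid out as m * 256 * (d/m) floats.
std::vector<uint8_t> pq_encode(const std::vector<float>& vec,
                               const std::vector<float>& codebooks,
                               size_t d, size_t m) {
  const size_t sub_dim = d / m;
  std::vector<uint8_t> codes(m);
  for (size_t s = 0; s < m; ++s) {
    float best_dist   = std::numeric_limits<float>::max();
    uint8_t best_code = 0;
    for (size_t c = 0; c < 256; ++c) {
      const float* centroid = &codebooks[(s * 256 + c) * sub_dim];
      float dist = 0.f;
      for (size_t j = 0; j < sub_dim; ++j) {
        const float diff = vec[s * sub_dim + j] - centroid[j];
        dist += diff * diff;
      }
      if (dist < best_dist) { best_dist = dist; best_code = static_cast<uint8_t>(c); }
    }
    codes[s] = best_code;
  }
  return codes;  // d * 4 bytes compressed down to m bytes per vector
}
```

With this scheme, distances during search are approximated against the compressed codes, so only the codebooks and the m-byte codes (plus the graph) need to stay resident.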
This is a placeholder issue to be taken up once the CAGRA-Q research has reached a point where we feel it's ready to be ported to RAFT. Currently, CAGRA-Q is showing great performance on Grace Hopper (GH) when huge-page pinned host memory is used for the underlying optimized graph, yielding search times comparable to storing the graph fully in device memory. A rough sketch of that memory setup is below.
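The sketch below shows one way to get a huge-page-backed, pinned host allocation that GPU kernels can read directly (Linux + CUDA runtime). The 2 MiB page size and 1 GiB graph size are placeholder values, and the exact mechanism used in the CAGRA-Q experiments is an assumption on my part.

```cpp
#include <sys/mman.h>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
  const size_t huge_page   = 2UL << 20;   // 2 MiB huge pages
  const size_t graph_bytes = 1UL << 30;   // e.g. a 1 GiB optimized graph
  const size_t len = ((graph_bytes + huge_page - 1) / huge_page) * huge_page;

  // Back the allocation with huge pages (requires vm.nr_hugepages to be set).
  void* graph = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
  if (graph == MAP_FAILED) { perror("mmap(MAP_HUGETLB)"); return 1; }

  // Pin the region so device kernels can access it over NVLink-C2C / PCIe.
  if (cudaHostRegister(graph, len, cudaHostRegisterDefault) != cudaSuccess) {
    fprintf(stderr, "cudaHostRegister failed\n");
    return 1;
  }

  // ... build / copy the optimized CAGRA graph into `graph` and run search ...

  cudaHostUnregister(graph);
  munmap(graph, len);
  return 0;
}
```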