Support `inq` splitting in findByForeignKeys #3444
Comments
Discussion: Any thoughts? @strongloop/loopback-maintainers
I was considering this use case: from my understanding and my experience, it's the developer's responsibility to decide how many queries to send.
@agnes512, are we ready to estimate this task?
In LoopBack 3, we split the queries for the user automatically - see https://github.com/strongloop/loopback-datasource-juggler/blob/814c55c7cd7ac49f3a52729949a5d0aaeb91853d/lib/include.js#L237-L264. I am fine to put this feature on hold for now and wait to see if any users ask for it. If we decide to do so, then I'd like to ensure that the enforced inq limit is large enough to support most users and that a helpful error message is reported when the limit is reached. Since #3443 has already been finished, I am proposing to open a new story where we will investigate what errors are reported now, add acceptance-level tests to trigger the scenario & verify the reported error, and ensure consistent behavior for all supported databases.
According to #1352, this is out of scope for Q4. |
I'm asking for this, if possible. I have some queries where I am not interested in paging and just want the API to return all of the rows, even if that's more than 1000 in Oracle.
Re-opening the issue for further discussion. |
For further info on why we need this, please see the bug report I submitted #8773 |
Under the hood, inclusion resolvers are implemented using the `inq` operator: a query is executed with `inq` and the PK/FK values from step 1. This can be problematic when the number of source instances is large; we don't know if all databases support `inq` with an arbitrary number of items.

To address this issue, LB3 implements "inq splitting", where a single query with an arbitrary-sized `inq` condition is split into multiple queries, each with a reasonably-sized `inq` condition.

Connectors are allowed to specify the maximum `inq` size supported by the database via the `dataSource.settings.inqLimit` option. By default, `inqLimit` is set to 256.
In this task, we need to improve `findByForeignKeys` (see #3443) to handle the maximum size of the `inq` parameter supported by the target database (data source). When the list of provided FK values is too long, we should split it into smaller chunks and execute multiple queries.

However, because our `Repository` interface is generic and does not assume that a repository has to be backed by a data source, I am proposing to expose `inqLimit` via a new property of the `Repository` interface instead of accessing the parameter via DataSource settings.

To preserve backwards compatibility with existing repository implementations, we cannot add `RepositoryCapabilities` directly to the `Repository` class. We need to introduce a new interface instead that Repositories can (or may not) implement.

See #3387 for more details & a prototype implementation.
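A minimal sketch of what these interfaces and helpers could look like, following the names used in this issue; the exact shapes are an assumption here, not the final API from the spike:

```typescript
// Sketch of the proposed capability interfaces (names from the issue text,
// details assumed). `inqLimit` advertises the maximum `inq` size the
// backing database supports.
export interface RepositoryCapabilities {
  inqLimit?: number;
}

export interface RepositoryWithCapabilities {
  capabilities: RepositoryCapabilities;
}

// Type guard: does this repository advertise capabilities?
export function isRepositoryWithCapabilities(
  repo: object,
): repo is RepositoryWithCapabilities {
  const caps = (repo as RepositoryWithCapabilities).capabilities;
  return caps != null && typeof caps === 'object';
}

// Helper: return the advertised capabilities, or an empty object
// (callers then fall back to their own defaults, e.g. inqLimit = 256).
export function getRepositoryCapabilities(
  repo: object,
): RepositoryCapabilities {
  return isRepositoryWithCapabilities(repo) ? repo.capabilities : {};
}
```

Because the guard is purely structural, existing repository classes remain valid without changes; only repositories that opt in by defining `capabilities` are treated as capability-aware.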
Acceptance criteria

To allow the helper to detect `inqLimit`, we need to extend the Repository interfaces:

- Define a `RepositoryCapabilities` interface (called `ConnectorCapabilities` in the spike); this interface will have a single property `inqLimit` (for now).
- Define a `RepositoryWithCapabilities` interface (called `WithCapabilities` in the spike); this interface should define a `capabilities` property.
- Add an `isRepositoryWithCapabilities` type guard.
- Add a `getRepositoryCapabilities` helper.

The rest should be straightforward:

- Improve `findByForeignKeys` to obtain `inqLimit` from the repository capabilities and implement query splitting (see the spike implementation).
- Add unit tests for `findByForeignKeys`.
- Add shared tests (in `repository-tests`) to verify that connectors can handle the `inqLimit` they are advertising. For example, create a test that runs a query returning 1000 records.
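The splitting step above could be sketched as follows. This is a simplified illustration with an assumed minimal repository shape, not the real `findByForeignKeys` signature; the real helper also deals with scopes and stricter typing (see the spike in #3387):

```typescript
// Simplified sketch: split the FK values by inqLimit and merge the results
// of the per-chunk queries. The repository is reduced to a bare `find`
// method here (an assumption for illustration).
interface MiniRepository<T> {
  find(filter: {where: object}): Promise<T[]>;
}

async function findByForeignKeys<T>(
  repo: MiniRepository<T>,
  fkName: string,
  fkValues: unknown[],
  inqLimit = 256,
): Promise<T[]> {
  const results: T[] = [];
  for (let i = 0; i < fkValues.length; i += inqLimit) {
    const chunk = fkValues.slice(i, i + inqLimit);
    // One query per chunk, each with a reasonably-sized `inq` condition.
    results.push(...(await repo.find({where: {[fkName]: {inq: chunk}}})));
  }
  return results;
}
```

With `inqLimit = 256`, a list of 1000 FK values would be executed as four queries whose results are concatenated, which is exactly the behavior the shared `repository-tests` suite should exercise against each connector.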