Is your feature request related to a problem? Please describe.
This is a feature request. Currently, users are only able to submit one text input at a time.
Describe the solution you'd like
While the current design works well in certain scenarios, it would also be valuable if users could submit a batch of texts as a single input to the framework. In certain use cases, this would be more efficient and would reduce the number of API calls required.
The new endpoint (or an update to the existing endpoint) should allow multiple text inputs to be passed in a single API call. Each input would be processed and scored individually, and the response would flag only the toxic/disruptive messages.
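A minimal sketch of the proposed batch-scoring logic. The `score_text` stub, the response shape, and the 0.8 threshold are all assumptions for illustration; they are not part of the existing framework's API.

```python
def score_text(text: str) -> float:
    """Stand-in for the framework's per-text toxicity scorer.

    Toy heuristic for illustration only; a real deployment would
    call the actual model or service.
    """
    return 1.0 if "toxic" in text.lower() else 0.0


def score_batch(texts: list[str], threshold: float = 0.8) -> list[dict]:
    """Score each input in the batch individually and return entries
    only for texts whose score crosses the toxicity threshold, as
    described in the request above."""
    flagged = []
    for index, text in enumerate(texts):
        score = score_text(text)
        if score >= threshold:
            flagged.append({"index": index, "score": score})
    return flagged
```

Under this sketch, a call like `score_batch(["hello there", "a toxic rant"])` would return only the flagged entry `[{"index": 1, "score": 1.0}]`, keeping the response small when most messages are benign.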
What else have you considered
I've also considered returning the scores as a bulk list corresponding to each text input in the batch. However, that design may cause unnecessary overhead in the system, and we'd prefer to send a signal only for toxic or disruptive messages.
Additional context
N/A