add-eb-global-support #400

Open · jgilbert01 wants to merge 3 commits into master

Conversation

jgilbert01 (Owner)

No description provided.


return {
  ...batchUow,
  [publishRequestField]: endpointId ? /* istanbul ignore next */ {

(A review comment on this snippet was marked as outdated.)

@@ -47,6 +48,9 @@ export const publishToEventBridge = ({ // eslint-disable-line import/prefer-defa
      Entries: batchUow.batch
        .filter((uow) => uow[publishRequestEntryField])
        .map((uow) => uow[publishRequestEntryField]),
+     ...(endpointId && {
+       EndpointId: endpointId,
+     }),
jgilbert01 (Owner, Author) commented on this diff:

  • jumping through a hoop here with this condition to keep the unit tests backwards compatible (see the sketch below)
  • otherwise it would be necessary to assert EndpointId: undefined in many tests
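
A minimal sketch of the conditional-spread trick, using a hypothetical toPutEventsParams helper and a plain publishRequestEntry field in place of the library's configurable field names:

```js
// Hypothetical helper, not the library's code: EndpointId is only included in the
// PutEvents params when endpointId is truthy, so existing tests that assert on
// Entries alone keep passing unchanged.
const toPutEventsParams = (batch, endpointId) => ({
  Entries: batch
    .filter((uow) => uow.publishRequestEntry)
    .map((uow) => uow.publishRequestEntry),
  // `endpointId && { EndpointId: endpointId }` evaluates to false when endpointId
  // is undefined, and spreading false adds no keys, so the field is simply omitted.
  ...(endpointId && {
    EndpointId: endpointId,
  }),
});

// toPutEventsParams(batch)                -> { Entries: [...] }
// toPutEventsParams(batch, 'my-endpt-id') -> { Entries: [...], EndpointId: 'my-endpt-id' }
```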

@petermyers (Collaborator)

If we're in the middle of a failover and the publishing is failing, is that considered a retryable error? So in theory it would bypass faulting and retry until the failover is complete and we can continue publishing to the global EB?
I think right now it's checking based on the smithy error classifications, and we use:
(isThrottlingError(err) || isTransientError(err) || isServerError(err))
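
For reference, a sketch of that check using the @smithy/service-error-classification helpers (the exact predicate in the library may differ):

```js
import {
  isThrottlingError,
  isTransientError,
  isServerError,
} from '@smithy/service-error-classification';

// Treat throttling, transient, and 5xx errors as retryable so the pipeline
// retries them instead of publishing faults.
const isRetryableError = (err) =>
  isThrottlingError(err) || isTransientError(err) || isServerError(err);
```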

@jgilbert01 (Owner, Author)

> If we're in the middle of a failover and the publishing is failing, is that considered a retryable error? […]

  • right
  • the global endpoint is powered by an R53 primary/secondary failover (sketched below)
  • our retry feature is not enabled by default, for backwards compatibility
  • when enabled, the lambda will retry retryable errors, such as 5xx, instead of publishing faults
  • the regional failover will usually take about 5 minutes, to avoid premature failover
  • then the events will flow to the secondary region
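
For illustration, a rough sketch (assumed names and wiring, not the library's code) of publishing through a global endpoint with the AWS SDK v3 EventBridge client:

```js
import { EventBridgeClient, PutEventsCommand } from '@aws-sdk/client-eventbridge';

const client = new EventBridgeClient({});

// Same PutEvents call as for a regional bus, plus the EndpointId of the
// Route 53-backed global endpoint when one is configured.
export const publish = async (entries, endpointId) => client.send(new PutEventsCommand({
  Entries: entries,
  ...(endpointId && { EndpointId: endpointId }),
}));

// Note (assumption, not from this thread): requests that carry EndpointId are
// signed with SigV4a, which in the JS SDK typically requires an additional
// CRT-based signing package.
```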
