docs(batch): explain record type discrepancy in failure and success handler #2868

Merged
17 changes: 11 additions & 6 deletions docs/utilities/batch.md
@@ -522,14 +522,19 @@ You might want to bring custom logic to the existing `BatchProcessor` to slightly
 
 For these scenarios, you can subclass `BatchProcessor` and quickly override `success_handler` and `failure_handler` methods:
 
-* **`success_handler()`** – Keeps track of successful batch records
-* **`failure_handler()`** – Keeps track of failed batch records
+* **`success_handler()`** is called for each successfully processed record
+* **`failure_handler()`** is called for each failed record
 
-???+ example
-    Let's suppose you'd like to add a metric named `BatchRecordFailures` for each batch record that failed processing
+???+ note
+    These functions have a common `record` argument. For backward compatibility reasons, their type is not the same:
+
+    - `success_handler`: `record` type is `dict[str, Any]`, the raw record data.
+    - `failure_handler`: `record` type can be an Event Source Data Class or your [Pydantic model](#pydantic-integration). During Pydantic validation errors, we fall back and serialize `record` to Event Source Data Class to not break the processing pipeline.
+
+Let's suppose you'd like to add metrics to track successes and failures of your batch records.
 
-```python hl_lines="8 9 16-19 22 38" title="Extending failure handling mechanism in BatchProcessor"
---8<-- "examples/batch_processing/src/extending_failure.py"
+```python hl_lines="8-10 18-25 28 44" title="Extending failure handling mechanism in BatchProcessor"
+--8<-- "examples/batch_processing/src/extending_processor_handlers.py"
 ```
 
 ### Create your own partial processor
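Before the second file's diff, the note above is easier to grasp side by side. The sketch below is illustrative and not part of this PR: it assumes a plain SQS-backed `BatchProcessor` with no Pydantic model registered, and contrasts subscript access on the raw `dict` in `success_handler` with attribute access on the `SQSRecord` data class in `failure_handler`.

```python
# Illustrative sketch only (not part of this PR's diff): the same SQS message
# arrives as a raw dict in success_handler but as an Event Source Data Class
# (or your Pydantic model, when one is registered) in failure_handler.
from typing import Any

from aws_lambda_powertools.utilities.batch import (
    BatchProcessor,
    EventType,
    ExceptionInfo,
    FailureResponse,
)
from aws_lambda_powertools.utilities.batch.base import SuccessResponse
from aws_lambda_powertools.utilities.data_classes.sqs_event import SQSRecord


class TypeAwareProcessor(BatchProcessor):
    def success_handler(self, record: dict[str, Any], result: Any) -> SuccessResponse:
        # Raw record data: subscript access, using the original event's key names
        print(f"processed {record['messageId']}")
        return super().success_handler(record, result)

    def failure_handler(self, record: SQSRecord, exception: ExceptionInfo) -> FailureResponse:
        # Data class: attribute access, using snake_case property names
        print(f"failed {record.message_id}")
        return super().failure_handler(record, exception)


processor = TypeAwareProcessor(event_type=EventType.SQS)
```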
examples/batch_processing/src/extending_processor_handlers.py
@@ -1,4 +1,5 @@
 import json
+from typing import Any
 
 from aws_lambda_powertools import Logger, Metrics, Tracer
 from aws_lambda_powertools.metrics import MetricUnit
@@ -9,11 +10,16 @@
     FailureResponse,
     process_partial_response,
 )
+from aws_lambda_powertools.utilities.batch.base import SuccessResponse
 from aws_lambda_powertools.utilities.data_classes.sqs_event import SQSRecord
 from aws_lambda_powertools.utilities.typing import LambdaContext
 
 
 class MyProcessor(BatchProcessor):
+    def success_handler(self, record: dict[str, Any], result: Any) -> SuccessResponse:
+        metrics.add_metric(name="BatchRecordSuccesses", unit=MetricUnit.Count, value=1)
+        return super().success_handler(record, result)
+
     def failure_handler(self, record: SQSRecord, exception: ExceptionInfo) -> FailureResponse:
         metrics.add_metric(name="BatchRecordFailures", unit=MetricUnit.Count, value=1)
         return super().failure_handler(record, exception)
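For orientation, here is a minimal sketch of how the extended `MyProcessor` from the hunks above is typically wired into a handler. Everything outside the two overridden methods, including `record_handler`, `lambda_handler`, and the `Metrics` namespace, is an assumption rather than the example file's actual contents, since the diff only shows the changed hunks.

```python
# Hypothetical wiring for MyProcessor; names and namespace below are assumed,
# not taken from this PR. Requires the MyProcessor class from the diff above.
from aws_lambda_powertools import Logger, Metrics, Tracer
from aws_lambda_powertools.utilities.batch import EventType, process_partial_response
from aws_lambda_powertools.utilities.data_classes.sqs_event import SQSRecord
from aws_lambda_powertools.utilities.typing import LambdaContext

logger = Logger()
metrics = Metrics(namespace="example-namespace")  # namespace is an assumption
tracer = Tracer()

processor = MyProcessor(event_type=EventType.SQS)


def record_handler(record: SQSRecord) -> None:
    # Any exception raised here routes the record to failure_handler;
    # records that return normally go through success_handler.
    logger.info(record.json_body)


@tracer.capture_lambda_handler
@metrics.log_metrics
def lambda_handler(event: dict, context: LambdaContext):
    return process_partial_response(
        event=event,
        record_handler=record_handler,
        processor=processor,
        context=context,
    )
```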