docs(batch): explain record type discrepancy in failure and success handler (#2868)

Co-authored-by: Heitor Lessa <[email protected]>
duc00 and heitorlessa authored Jul 28, 2023
1 parent d97d176 commit 920d70e
Showing 2 changed files with 17 additions and 6 deletions.
17 changes: 11 additions & 6 deletions docs/utilities/batch.md
@@ -522,14 +522,19 @@ You might want to bring custom logic to the existing `BatchProcessor` to slightl
 
 For these scenarios, you can subclass `BatchProcessor` and quickly override `success_handler` and `failure_handler` methods:
 
-* **`success_handler()`** – Keeps track of successful batch records
-* **`failure_handler()`** – Keeps track of failed batch records
+* **`success_handler()`** is called for each successfully processed record
+* **`failure_handler()`** is called for each failed record
 
-???+ example
-    Let's suppose you'd like to add a metric named `BatchRecordFailures` for each batch record that failed processing
+???+ note
+    These functions have a common `record` argument. For backward compatibility reasons, their type is not the same:
 
-```python hl_lines="8 9 16-19 22 38" title="Extending failure handling mechanism in BatchProcessor"
---8<-- "examples/batch_processing/src/extending_failure.py"
+    - `success_handler`: `record` type is `dict[str, Any]`, the raw record data.
+    - `failure_handler`: `record` type can be an Event Source Data Class or your [Pydantic model](#pydantic-integration). During Pydantic validation errors, we fall back and serialize `record` to Event Source Data Class to not break the processing pipeline.
+
+Let's suppose you'd like to add metrics to track successes and failures of your batch records.
+
+```python hl_lines="8-10 18-25 28 44" title="Extending failure handling mechanism in BatchProcessor"
+--8<-- "examples/batch_processing/src/extending_processor_handlers.py"
 ```
 
 ### Create your own partial processor
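The new note explains that `success_handler` receives the raw `dict` while `failure_handler` receives a wrapped record. The following is a library-free sketch of that discrepancy: `SQSRecordStub` and `MiniBatchProcessor` are illustrative stand-ins invented for this example, not part of Powertools, and only mimic the hook shape the docs describe.

```python
from typing import Any, Callable

class SQSRecordStub:
    """Illustrative stand-in for an Event Source Data Class wrapper."""

    def __init__(self, raw: dict):
        self._raw = raw

    @property
    def body(self) -> str:
        return self._raw["body"]

class MiniBatchProcessor:
    """Toy processor showing the two handler hooks and their record types."""

    def __init__(self, record_handler: Callable[[SQSRecordStub], Any]):
        self.record_handler = record_handler
        self.successes: list[dict] = []
        self.failures: list[SQSRecordStub] = []

    def success_handler(self, record: dict, result: Any) -> None:
        # Receives the *raw* dict, mirroring the documented discrepancy.
        self.successes.append(record)

    def failure_handler(self, record: SQSRecordStub, exception: Exception) -> None:
        # Receives the wrapped record (Event Source Data Class analogue).
        self.failures.append(record)

    def process(self, raw_records: list[dict]) -> None:
        for raw in raw_records:
            wrapped = SQSRecordStub(raw)
            try:
                result = self.record_handler(wrapped)
            except Exception as exc:
                self.failure_handler(wrapped, exc)
            else:
                self.success_handler(raw, result)

def record_handler(record: SQSRecordStub) -> str:
    if record.body == "bad":
        raise ValueError("cannot process")
    return record.body.upper()

processor = MiniBatchProcessor(record_handler)
processor.process([{"body": "ok"}, {"body": "bad"}])
print(len(processor.successes), len(processor.failures))  # prints: 1 1
```

Note how the success path stores a plain `dict` while the failure path stores the wrapper; overriding the real handlers in `BatchProcessor` follows the same asymmetry.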
examples/batch_processing/src/extending_processor_handlers.py
@@ -1,4 +1,5 @@
 import json
+from typing import Any
 
 from aws_lambda_powertools import Logger, Metrics, Tracer
 from aws_lambda_powertools.metrics import MetricUnit
@@ -9,11 +10,16 @@
     FailureResponse,
     process_partial_response,
 )
+from aws_lambda_powertools.utilities.batch.base import SuccessResponse
 from aws_lambda_powertools.utilities.data_classes.sqs_event import SQSRecord
 from aws_lambda_powertools.utilities.typing import LambdaContext
 
 
 class MyProcessor(BatchProcessor):
+    def success_handler(self, record: dict[str, Any], result: Any) -> SuccessResponse:
+        metrics.add_metric(name="BatchRecordSuccesses", unit=MetricUnit.Count, value=1)
+        return super().success_handler(record, result)
+
     def failure_handler(self, record: SQSRecord, exception: ExceptionInfo) -> FailureResponse:
         metrics.add_metric(name="BatchRecordFailures", unit=MetricUnit.Count, value=1)
         return super().failure_handler(record, exception)
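Whatever bookkeeping the overridden handlers do, for SQS the processor ultimately reports failures back to Lambda via the partial batch response contract (`batchItemFailures` with `itemIdentifier` entries). A stdlib-only sketch of that JSON shape follows; `build_partial_response` is an illustrative helper invented here, not a library API — `process_partial_response` builds this structure for you.

```python
import json

def build_partial_response(failed_message_ids: list[str]) -> dict:
    # The shape Lambda expects from an SQS partial-batch-response handler:
    # each failed record is identified by its messageId so only those
    # records are retried.
    return {"batchItemFailures": [{"itemIdentifier": mid} for mid in failed_message_ids]}

# Example: one record out of the batch failed.
print(json.dumps(build_partial_response(["msg-2"])))
```

An empty list (`{"batchItemFailures": []}`) tells Lambda the whole batch succeeded.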
