docs(batch): explain record type discrepancy in failure and success handler #2868

Merged
16 changes: 11 additions & 5 deletions docs/utilities/batch.md
@@ -522,14 +522,20 @@ You might want to bring custom logic to the existing `BatchProcessor` to slightl

For these scenarios, you can subclass `BatchProcessor` and quickly override `success_handler` and `failure_handler` methods:

-* **`success_handler()`** – Keeps track of successful batch records
-* **`failure_handler()`** – Keeps track of failed batch records
+* **`success_handler()`** – is called for each successfully processed record
+* **`failure_handler()`** – is called for each failed record

+???+ warning
+These functions have a common `record` argument. For backward compatibility reasons, their type is not the same:
+
+- `success_handler`: `record` type is `dict[str, Any]`, the raw record data
+- `failure_handler`: `record` type is a Pydantic model, either the parsed record or the `EventSourceDataClass` record in case of validation error. Refer to [Accessing processed messages](#accessing-processed-messages) for more details

???+ example
-Let's suppose you'd like to add a metric named `BatchRecordFailures` for each batch record that failed processing
+Let's suppose you'd like to add metrics tracking processing successes and failures of your batch records.

```python hl_lines="8 9 16-19 22 38" title="Extending failure handling mechanism in BatchProcessor"
--8<-- "examples/batch_processing/src/extending_failure.py"
```python hl_lines="8 9 18-25 28 44" title="Extending failure handling mechanism in BatchProcessor"
+--8<-- "examples/batch_processing/src/extending_processor_handlers.py"
```

### Create your own partial processor
@@ -1,4 +1,5 @@
import json
+from typing import Any

from aws_lambda_powertools import Logger, Metrics, Tracer
from aws_lambda_powertools.metrics import MetricUnit
@@ -9,11 +10,16 @@
    FailureResponse,
    process_partial_response,
)
+from aws_lambda_powertools.utilities.batch.base import SuccessResponse
from aws_lambda_powertools.utilities.data_classes.sqs_event import SQSRecord
from aws_lambda_powertools.utilities.typing import LambdaContext


class MyProcessor(BatchProcessor):
+    def success_handler(self, record: dict[str, Any], result: Any) -> SuccessResponse:
+        metrics.add_metric(name="BatchRecordSuccesses", unit=MetricUnit.Count, value=1)
+        return super().success_handler(record, result)
+
    def failure_handler(self, record: SQSRecord, exception: ExceptionInfo) -> FailureResponse:
        metrics.add_metric(name="BatchRecordFailures", unit=MetricUnit.Count, value=1)
        return super().failure_handler(record, exception)
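
For context, here is a minimal, self-contained sketch of the pattern this PR documents: subclassing `BatchProcessor`, overriding both handlers, and wiring the processor into an SQS-triggered handler with `process_partial_response`. The metric namespace, service name, `record_handler` logic, and the `"fail"` body check are illustrative only and not part of this diff; the handler signatures follow the example above.

```python
from typing import Any

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit
from aws_lambda_powertools.utilities.batch import (
    BatchProcessor,
    EventType,
    ExceptionInfo,
    FailureResponse,
    process_partial_response,
)
from aws_lambda_powertools.utilities.batch.base import SuccessResponse
from aws_lambda_powertools.utilities.data_classes.sqs_event import SQSRecord
from aws_lambda_powertools.utilities.typing import LambdaContext

# Namespace and service are placeholders; configure them for your own application.
metrics = Metrics(namespace="MyApp", service="batch-demo")


class MyProcessor(BatchProcessor):
    # Per the warning in this PR, `record` is the raw dict here...
    def success_handler(self, record: dict[str, Any], result: Any) -> SuccessResponse:
        metrics.add_metric(name="BatchRecordSuccesses", unit=MetricUnit.Count, value=1)
        return super().success_handler(record, result)

    # ...while here it is the parsed record (for SQS, an SQSRecord data class).
    def failure_handler(self, record: SQSRecord, exception: ExceptionInfo) -> FailureResponse:
        metrics.add_metric(name="BatchRecordFailures", unit=MetricUnit.Count, value=1)
        return super().failure_handler(record, exception)


processor = MyProcessor(event_type=EventType.SQS)


def record_handler(record: SQSRecord) -> None:
    # Illustrative handler: raise to exercise failure_handler.
    if record.body == "fail":
        raise ValueError("simulated processing error")


@metrics.log_metrics
def lambda_handler(event: dict, context: LambdaContext) -> dict:
    return process_partial_response(
        event=event,
        record_handler=record_handler,
        processor=processor,
        context=context,
    )
```

Delegating back to `super().success_handler()` / `super().failure_handler()` keeps the processor's own bookkeeping of successful and failed messages intact, so partial batch responses still work as usual.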