chore: fix typos, docstrings and type hints #154

Merged · 1 commit · Sep 2, 2020
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
```diff
@@ -1,5 +1,5 @@
 # We use poetry to run formatting and linting before commit/push
-# Longers checks such as tests, security and complexity baseline
+# Longer checks such as tests, security and complexity baseline
 # are run as part of CI to prevent slower feedback loop
 # All checks can be run locally via `make pr`
```
8 changes: 5 additions & 3 deletions aws_lambda_powertools/metrics/base.py
```diff
@@ -112,7 +112,7 @@ def add_metric(self, name: str, unit: MetricUnit, value: Union[float, int]):
             Metric name
         unit : MetricUnit
             `aws_lambda_powertools.helper.models.MetricUnit`
-        value : float
+        value : Union[float, int]
             Metric value

         Raises
@@ -146,6 +146,8 @@ def serialize_metric_set(self, metrics: Dict = None, dimensions: Dict = None, me
             Dictionary of metrics to serialize, by default None
         dimensions : Dict, optional
             Dictionary of dimensions to serialize, by default None
+        metadata: Dict, optional
+            Dictionary of metadata to serialize, by default None

         Example
         -------
@@ -183,7 +185,7 @@ def serialize_metric_set(self, metrics: Dict = None, dimensions: Dict = None, me
         metric_names_and_values: Dict[str, str] = {}  # { "metric_name": 1.0 }

         for metric_name in metrics:
-            metric: str = metrics[metric_name]
+            metric: dict = metrics[metric_name]
             metric_value: int = metric.get("Value", 0)
             metric_unit: str = metric.get("Unit", "")

@@ -257,7 +259,7 @@ def add_metadata(self, key: str, value: Any):

         Parameters
         ----------
-        name : str
+        key : str
             Metadata key
         value : any
             Metadata value
```
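The corrected hint (`value : Union[float, int]`) can be illustrated with a minimal, self-contained sketch. This is not the library's actual implementation; the `metric_set` dictionary and its shape are assumptions for illustration only:

```python
from enum import Enum
from typing import Dict, Union


class MetricUnit(Enum):
    Count = "Count"
    Seconds = "Seconds"


def add_metric(metric_set: Dict, name: str, unit: MetricUnit, value: Union[float, int]) -> None:
    """Accumulate a metric; value may be an int or a float, matching the corrected hint."""
    entry = metric_set.setdefault(name, {"Unit": unit.value, "Value": []})
    entry["Value"].append(value)


metrics: Dict = {}
add_metric(metrics, "SuccessfulLocations", MetricUnit.Count, 1)    # int is accepted
add_metric(metrics, "SuccessfulLocations", MetricUnit.Count, 2.5)  # so is float
```

Both calls type-check under `Union[float, int]`, which is why the docstring's `value : float` was stale.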
2 changes: 1 addition & 1 deletion aws_lambda_powertools/utilities/batch/sqs.py
```diff
@@ -55,7 +55,7 @@ def __init__(self, config: Optional[Config] = None):

         super().__init__()

-    def _get_queue_url(self) -> str:
+    def _get_queue_url(self) -> Optional[str]:
         """
         Format QueueUrl from first records entry
         """
```
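The widened return type reflects that an empty batch yields no URL. A minimal sketch of the pattern, assuming a hypothetical record shape and URL format (the real method's parsing details are not shown in this diff):

```python
from typing import List, Optional


def get_queue_url(records: List[dict]) -> Optional[str]:
    """Return the queue URL derived from the first record, or None when the batch is empty."""
    if not records:
        return None  # this path is why the hint is Optional[str], not str
    # hypothetical record shape: {"eventSourceARN": "arn:aws:sqs:region:account-id:queue-name"}
    *_, account_id, queue_name = records[0]["eventSourceARN"].split(":")
    return f"https://queue.amazonaws.com/{account_id}/{queue_name}"
```

Annotating the `None` branch explicitly lets type checkers such as mypy flag callers that forget to handle an empty batch.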
2 changes: 1 addition & 1 deletion docs/content/utilities/batch.mdx
```diff
@@ -28,7 +28,7 @@ SQS integration with Lambda is one of the most well established ones and pretty

 As any function call, you may face errors during execution, in one or more records belonging to a batch. SQS's native behavior is to redrive the **whole** batch to the queue again, reprocessing all of them again, including successful ones. This cycle can happen multiple times depending on your [configuration][3], until the whole batch succeeds or the maximum number of attempts is reached. Your application may face some problems with such behavior, especially if there's no idempotency.

-A *naive* approach to solving this problem is to delete successful records from the queue before redriving's phase. The `PartialSQSProcessor` class offers this solution both as context manager and middleware, removing all successful messages from the queue case one or more failures ocurred during lambda's execution. Two examples are provided below, displaying the behavior of this class.
+A *naive* approach to solving this problem is to delete successful records from the queue before redriving's phase. The `PartialSQSProcessor` class offers this solution both as context manager and middleware, removing all successful messages from the queue case one or more failures occurred during lambda's execution. Two examples are provided below, displaying the behavior of this class.

 **Examples:**
```
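The partial-failure strategy those docs describe can be sketched in plain Python. This is a simplified illustration of the idea only, not `PartialSQSProcessor`'s actual code; `handler` and `delete_messages` are hypothetical callables standing in for the record handler and the SQS delete call:

```python
from typing import Callable, List, Tuple


def process_partially(
    records: List,
    handler: Callable,
    delete_messages: Callable[[List], None],
) -> Tuple[List, List]:
    """Process each record; on partial failure, delete the successes from the queue
    so an SQS redrive replays only the failed records."""
    successes, failures = [], []
    for record in records:
        try:
            handler(record)
            successes.append(record)
        except Exception as exc:
            failures.append((record, exc))
    if failures and successes:
        # remove successful messages before the redrive; only failures come back
        delete_messages(successes)
    return successes, failures
```

When every record succeeds there is nothing to delete, and when every record fails the whole batch is redriven anyway, so the delete call is only needed in the mixed case.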
6 changes: 3 additions & 3 deletions example/hello_world/app.py
```diff
@@ -14,7 +14,7 @@
 set_package_logger()  # Enable package diagnostics (DEBUG log)

 # tracer = Tracer() # patches all available modules # noqa: E800
-tracer = Tracer(patch_modules=("aioboto3", "boto3", "requests"))  # ~90-100ms faster in perf depending on set of libs
+tracer = Tracer(patch_modules=["aioboto3", "boto3", "requests"])  # ~90-100ms faster in perf depending on set of libs
 logger = Logger()
 metrics = Metrics()
```

```diff
@@ -114,13 +114,13 @@ def lambda_handler(event, context):

     try:
         ip = requests.get("http://checkip.amazonaws.com/")
-        metrics.add_metric(name="SuccessfulLocations", unit="Count", value=1)
+        metrics.add_metric(name="SuccessfulLocations", unit=MetricUnit.Count, value=1)
     except requests.RequestException as e:
         # Send some context about this error to Lambda Logs
         logger.exception(e)
         raise

-    with single_metric(name="UniqueMetricDimension", unit="Seconds", value=1) as metric:
+    with single_metric(name="UniqueMetricDimension", unit=MetricUnit.Seconds, value=1) as metric:
         metric.add_dimension(name="unique_dimension", value="for_unique_metric")

     resp = {"message": "hello world", "location": ip.text.replace("\n", ""), "async_http": async_http_ret}
```
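Switching the example from bare strings to `MetricUnit` members catches unit typos at call time rather than after the metric reaches CloudWatch. A minimal sketch of that validation idea, illustrative only; the library's own validation may differ:

```python
from enum import Enum
from typing import Union


class MetricUnit(Enum):
    Count = "Count"
    Seconds = "Seconds"


def resolve_unit(unit: Union[MetricUnit, str]) -> str:
    """Accept either a MetricUnit member or its exact string name, rejecting typos early."""
    if isinstance(unit, MetricUnit):
        return unit.value
    if unit in MetricUnit.__members__:
        return MetricUnit[unit].value
    raise ValueError(f"Invalid metric unit: {unit}")


print(resolve_unit(MetricUnit.Count))  # prints "Count"
```

A misspelled string such as `"Secondz"` raises immediately, while the enum member is also autocomplete-friendly in editors.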