Do escape/unescape for flow inputs only #3514

GitHub Actions / Executor E2E Test Result [user/yalu4/escape_2](https://github.com/microsoft/promptflow/actions/workflows/promptflow-executor-e2e-test.yml?query=branch:user/yalu4/escape_2++) failed Apr 22, 2024 in 0s

13 fail, 5 skipped, 222 pass in 5m 12s

240 tests ±0   222 ✅ −13   5m 12s ⏱️ −1s
  1 suites ±0     5 💤 ±0
  1 files  ±0    13 ❌ +13

Results for commit 74d3370. Comparison against earlier commit d0d496c.

Annotations

Check warning on line 0 in tests.executor.e2etests.test_batch_engine.TestBatch

test_batch_storage (tests.executor.e2etests.test_batch_engine.TestBatch) failed

artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results.xml [took 8s]
Raw output
self = <executor.e2etests.test_batch_engine.TestBatch object at 0x7f06dad14e80>
dev_connections = {'aoai_assistant_connection': {'module': 'promptflow.connections', 'name': 'aoai_assistant_connection', 'type': 'Azure...ai.azure.com/', 'api_key': 'ac21d4a97d044b28990a740132f40d35', 'api_type': 'azure', 'api_version': '2024-02-01'}}, ...}

    def test_batch_storage(self, dev_connections):
        mem_run_storage = MemoryRunStorage()
        run_id = str(uuid.uuid4())
        inputs_mapping = {"url": "${data.url}"}
        batch_result = submit_batch_run(
            SAMPLE_FLOW, inputs_mapping, run_id=run_id, connections=dev_connections, storage=mem_run_storage
        )
    
        nlines = get_batch_inputs_line(SAMPLE_FLOW)
        assert batch_result.total_lines == nlines
>       assert batch_result.completed_lines == nlines
E       assert 0 == 2
E        +  where 0 = BatchResult(status=<Status.Completed: 'Completed'>, total_lines=2, completed_lines=0, failed_lines=2, node_status={'fetch_text_content_from_url.completed': 2, 'prepare_examples.completed': 2, 'summarize_text_content.failed': 2}, start_time=datetime.datetime(2024, 4, 22, 12, 24, 10, 629523), end_time=datetime.datetime(2024, 4, 22, 12, 24, 18, 595671), metrics={}, system_metrics=SystemMetrics(total_tokens=0, prompt_tokens=0, completion_tokens=0, duration=7.966148), error_summary=ErrorSummary(failed_user_error_lines=2, failed_system_error_lines=0, error_list=[LineError(line_number=0, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}}), LineError(line_number=1, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most 
recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}})], aggr_error_dict={}, batch_error_dict=None)).completed_lines

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_batch_engine.py:120: AssertionError
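All 13 failures reduce to the root cause visible at the bottom of this stack trace: `_should_escape` ends with `k in inputs_to_escape`, and `inputs_to_escape` arrives as `None` when the chat tool is invoked without flow inputs to escape. A membership test against `None` raises exactly the `TypeError` reported above. A minimal, self-contained repro (the function body is paraphrased from the traceback, not imported from promptflow; the `PromptResult` branch is omitted since it is not what fails here):

```python
def should_escape(k: str, v, inputs_to_escape) -> bool:
    # Paraphrase of the failing expression from promptflow/tools/common.py:615.
    # When inputs_to_escape is None, `k in None` raises TypeError.
    return k in inputs_to_escape

try:
    # kwargs such as {'echo': 'False'} are checked against inputs_to_escape=None
    should_escape("echo", "False", None)
except TypeError as e:
    print(e)  # argument of type 'NoneType' is not iterable
```

This matches the `innerException` recorded for both failed lines: the tool-level `LLMError` is just the generic wrapper around this `TypeError`.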

Check warning on line 0 in tests.executor.e2etests.test_executor_happypath.TestExecutor

test_executor_exec_line[web_classification_no_variants] (tests.executor.e2etests.test_executor_happypath.TestExecutor) failed

artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results.xml [took 1s]
Raw output
promptflow.tools.exception.LLMError: OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable
args = (<promptflow.tools.aoai.AzureOpenAI object at 0x7fb85f15a1c0>,)
kwargs = {'best_of': '1', 'deployment_name': 'gpt-35-turbo', 'echo': 'False', 'frequency_penalty': 0.0, ...}
i = 0
error_message = "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable"

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for i in range(tries + 1):
            try:
>               return func(*args, **kwargs)

/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py:523: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py:124: in chat
    messages = build_messages(prompt, **kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py:719: in build_messages
    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py:622: in build_escape_dict
    if _should_escape(k, v, inputs_to_escape):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

k = 'echo', v = 'False', inputs_to_escape = None

    def _should_escape(k, v, inputs_to_escape: list):
>       return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape
E       TypeError: argument of type 'NoneType' is not iterable

/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py:615: TypeError

During handling of the above exception, another exception occurred:

self = <executor.e2etests.test_executor_happypath.TestExecutor object at 0x7fb85fec29a0>
flow_folder = 'web_classification_no_variants'
dev_connections = {'aoai_assistant_connection': {'module': 'promptflow.connections', 'name': 'aoai_assistant_connection', 'type': 'Azure...ai.azure.com/', 'api_key': 'ac21d4a97d044b28990a740132f40d35', 'api_type': 'azure', 'api_version': '2024-02-01'}}, ...}

    @pytest.mark.parametrize(
        "flow_folder",
        [
            SAMPLE_FLOW,
            "prompt_tools",
            "script_with___file__",
            "script_with_import",
            "package_tools",
            "connection_as_input",
            "async_tools",
            "async_tools_with_sync_tools",
            "tool_with_assistant_definition",
        ],
    )
    def test_executor_exec_line(self, flow_folder, dev_connections):
        self.skip_serp(flow_folder, dev_connections)
        os.chdir(get_flow_folder(flow_folder))
        executor = FlowExecutor.create(get_yaml_file(flow_folder), dev_connections)
>       flow_result = executor.exec_line(self.get_line_inputs())

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_executor_happypath.py:78: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py:703: in exec_line
    line_result = self._exec(
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py:964: in _exec
    output, aggregation_inputs = self._exec_inner_with_trace(
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py:864: in _exec_inner_with_trace
    output, aggregation_inputs = self._exec_inner(
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py:895: in _exec_inner
    output, nodes_outputs = self._traverse_nodes(inputs, context)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py:1146: in _traverse_nodes
    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py:1183: in _submit_to_scheduler
    return scheduler.execute(self._line_timeout_sec)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py:131: in execute
    raise e
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py:113: in execute
    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py:160: in _collect_outputs
    each_node_result = each_future.result()
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py:439: in result
    return self.__get_result()
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py:391: in __get_result
    raise self._exception
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py:58: in run
    result = self.fn(*self.args, **self.kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py:181: in _exec_single_node_in_thread
    result = context.invoke_tool(node, f, kwargs=kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py:90: in invoke_tool
    result = self._invoke_tool_inner(node, f, kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py:201: in _invoke_tool_inner
    raise e
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py:182: in _invoke_tool_inner
    return f(**kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py:469: in wrapped
    output = func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

args = (<promptflow.tools.aoai.AzureOpenAI object at 0x7fb85f15a1c0>,)
kwargs = {'best_of': '1', 'deployment_name': 'gpt-35-turbo', 'echo': 'False', 'frequency_penalty': 0.0, ...}
i = 0
error_message = "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable"

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for i in range(tries + 1):
            try:
                return func(*args, **kwargs)
            except (SystemErrorException, UserErrorException) as e:
                # Throw inner wrapped exception directly
                raise e
            except (APIStatusError, APIConnectionError) as e:
                #  Handle retriable exception, please refer to
                #  https://platform.openai.com/docs/guides/error-codes/api-errors
                print(f"Exception occurs: {type(e).__name__}: {str(e)}", file=sys.stderr)
                # Vision model does not support all chat api parameters, e.g. response_format and function_call.
                # Recommend user to use vision model in vision tools, rather than LLM tool.
                # Related issue https://github.com/microsoft/promptflow/issues/1683
                if isinstance(e, BadRequestError) and "extra fields not permitted" in str(e).lower():
                    refined_error_message = \
                        refine_extra_fields_not_permitted_error(args[0].connection,
                                                                kwargs.get("deployment_name", ""),
                                                                kwargs.get("model", ""))
                    if refined_error_message:
                        raise LLMError(message=f"{str(e)} {refined_error_message}")
                    else:
                        raise WrappedOpenAIError(e)
    
                if isinstance(e, APIConnectionError) and not isinstance(e, APITimeoutError) \
                        and not is_retriable_api_connection_error(e):
                    raise WrappedOpenAIError(e)
                # Retry InternalServerError(>=500), RateLimitError(429), UnprocessableEntityError(422)
                if isinstance(e, APIStatusError):
                    status_code = e.response.status_code
                    if status_code < 500 and status_code not in [429, 422]:
                        raise WrappedOpenAIError(e)
                if isinstance(e, RateLimitError) and getattr(e, "type", None) == "insufficient_quota":
                    # Exit retry if this is quota insufficient error
                    print(f"{type(e).__name__} with insufficient quota. Throw user error.", file=sys.stderr)
                    raise WrappedOpenAIError(e)
                if i == tries:
                    # Exit retry if max retry reached
                    print(f"{type(e).__name__} reached max retry. Exit retry with user error.", file=sys.stderr)
                    raise ExceedMaxRetryTimes(e)
    
                if hasattr(e, 'response') and e.response is not None:
                    retry_after_in_header = e.response.headers.get("retry-after", None)
                else:
                    retry_after_in_header = None
    
                if not retry_after_in_header:
                    retry_after_seconds = generate_retry_interval(i)
                    msg = (
                        f"{type(e).__name__} #{i}, but no Retry-After header, "
                        + f"Back off {retry_after_seconds} seconds for retry."
                    )
                    print(msg, file=sys.stderr)
                else:
                    retry_after_seconds = float(retry_after_in_header)
                    msg = (
                        f"{type(e).__name__} #{i}, Retry-After={retry_after_in_header}, "
                        f"Back off {retry_after_seconds} seconds for retry."
                    )
                    print(msg, file=sys.stderr)
                time.sleep(retry_after_seconds)
            except OpenAIError as e:
                # For other non-retriable errors from OpenAIError,
                # For example, AuthenticationError, APIConnectionError, BadRequestError, NotFoundError
                # Mark UserError for all the non-retriable OpenAIError
                print(f"Exception occurs: {type(e).__name__}: {str(e)}", file=sys.stderr)
                raise WrappedOpenAIError(e)
            except Exception as e:
                print(f"Exception occurs: {type(e).__name__}: {str(e)}", file=sys.stderr)
                error_message = f"OpenAI API hits exception: {type(e).__name__}: {str(e)}"
>               raise LLMError(message=error_message)
E               promptflow.tools.exception.LLMError: OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable

/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py:590: LLMError
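For reference, the backoff step in the wrapper above (prefer the server's `Retry-After` header, otherwise fall back to a locally generated interval) can be sketched as a small standalone helper. This is an illustrative sketch only: `generate_retry_interval` and `pick_backoff` here are hypothetical names modeled on the quoted snippet, not the actual promptflow implementation, and the sketch assumes a numeric `Retry-After` value (the header may also carry an HTTP date, which real code would need to parse).

```python
import random

def generate_retry_interval(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Hypothetical exponential backoff with jitter, standing in for the
    generate_retry_interval call in the quoted wrapper."""
    return min(cap, base * (2 ** attempt)) * random.uniform(0.5, 1.0)

def pick_backoff(attempt: int, headers: dict) -> float:
    """Mirror the wrapper's logic: honor a numeric Retry-After header when
    present, otherwise back off with a locally generated interval."""
    retry_after = headers.get("retry-after")
    if retry_after:
        return float(retry_after)
    return generate_retry_interval(attempt)
```

With a `Retry-After: 2.5` header, `pick_backoff(0, {"retry-after": "2.5"})` sleeps exactly 2.5 seconds; without the header, the delay grows with the attempt number up to the cap.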

Check warning on line 0 in tests.executor.e2etests.test_batch_engine.TestBatch


test_batch_run[web_classification_no_variants-inputs_mapping0] (tests.executor.e2etests.test_batch_engine.TestBatch) failed

artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results.xml [took 6s]
Raw output
assert 0 == 2
 +  where 0 = BatchResult(status=<Status.Completed: 'Completed'>, total_lines=2, completed_lines=0, failed_lines=2, node_status={'fetch_text_content_from_url.completed': 2, 'prepare_examples.completed': 2, 'summarize_text_content.failed': 2}, start_time=datetime.datetime(2024, 4, 22, 12, 24, 18, 933390), end_time=datetime.datetime(2024, 4, 22, 12, 24, 25, 112724), metrics={}, system_metrics=SystemMetrics(total_tokens=0, prompt_tokens=0, completion_tokens=0, duration=6.179334), error_summary=ErrorSummary(failed_user_error_lines=2, failed_system_error_lines=0, error_list=[LineError(line_number=0, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}}), LineError(line_number=1, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most 
recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}})], aggr_error_dict={}, batch_error_dict=None)).completed_lines
self = <executor.e2etests.test_batch_engine.TestBatch object at 0x7f06dad14d30>
flow_folder = 'web_classification_no_variants'
inputs_mapping = {'url': '${data.url}'}
dev_connections = {'aoai_assistant_connection': {'module': 'promptflow.connections', 'name': 'aoai_assistant_connection', 'type': 'Azure...ai.azure.com/', 'api_key': 'ac21d4a97d044b28990a740132f40d35', 'api_type': 'azure', 'api_version': '2024-02-01'}}, ...}

    @pytest.mark.parametrize(
        "flow_folder, inputs_mapping",
        [
            (
                SAMPLE_FLOW,
                {"url": "${data.url}"},
            ),
            (
                "prompt_tools",
                {"text": "${data.text}"},
            ),
            (
                "script_with___file__",
                {"text": "${data.text}"},
            ),
            (
                "sample_flow_with_functions",
                {"question": "${data.question}"},
            ),
        ],
    )
    def test_batch_run(self, flow_folder, inputs_mapping, dev_connections):
        batch_result, output_dir = submit_batch_run(
            flow_folder, inputs_mapping, connections=dev_connections, return_output_dir=True
        )
    
        assert isinstance(batch_result, BatchResult)
        nlines = get_batch_inputs_line(flow_folder)
        assert batch_result.total_lines == nlines
>       assert batch_result.completed_lines == nlines
E       assert 0 == 2
E        +  where 0 = BatchResult(status=<Status.Completed: 'Completed'>, total_lines=2, completed_lines=0, failed_lines=2, node_status={'fetch_text_content_from_url.completed': 2, 'prepare_examples.completed': 2, 'summarize_text_content.failed': 2}, start_time=datetime.datetime(2024, 4, 22, 12, 24, 18, 933390), end_time=datetime.datetime(2024, 4, 22, 12, 24, 25, 112724), metrics={}, system_metrics=SystemMetrics(total_tokens=0, prompt_tokens=0, completion_tokens=0, duration=6.179334), error_summary=ErrorSummary(failed_user_error_lines=2, failed_system_error_lines=0, error_list=[LineError(line_number=0, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}}), LineError(line_number=1, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most 
recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}})], aggr_error_dict={}, batch_error_dict=None)).completed_lines

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_batch_engine.py:154: AssertionError

Check warning on line 0 in tests.executor.e2etests.test_executor_happypath.TestExecutor


test_executor_exec_node[web_classification_no_variants-summarize_text_content-flow_inputs0-dependency_nodes_outputs0] (tests.executor.e2etests.test_executor_happypath.TestExecutor) failed

artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results.xml [took 0s]
Raw output
promptflow.tools.exception.LLMError: OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable
args = (<promptflow.tools.aoai.AzureOpenAI object at 0x7fb85f035070>,)
kwargs = {'best_of': '1', 'deployment_name': 'gpt-35-turbo', 'echo': 'False', 'frequency_penalty': 0.0, ...}
i = 0
error_message = "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable"

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for i in range(tries + 1):
            try:
>               return func(*args, **kwargs)

/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py:523: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py:124: in chat
    messages = build_messages(prompt, **kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py:719: in build_messages
    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py:622: in build_escape_dict
    if _should_escape(k, v, inputs_to_escape):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

k = 'echo', v = 'False', inputs_to_escape = None

    def _should_escape(k, v, inputs_to_escape: list):
>       return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape
E       TypeError: argument of type 'NoneType' is not iterable

/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py:615: TypeError

During handling of the above exception, another exception occurred:

self = <executor.e2etests.test_executor_happypath.TestExecutor object at 0x7fb860032bb0>
flow_folder = 'web_classification_no_variants'
node_name = 'summarize_text_content', flow_inputs = {}
dependency_nodes_outputs = {'fetch_text_content_from_url': 'Hello'}
dev_connections = {'aoai_assistant_connection': {'module': 'promptflow.connections', 'name': 'aoai_assistant_connection', 'type': 'Azure...ai.azure.com/', 'api_key': 'ac21d4a97d044b28990a740132f40d35', 'api_type': 'azure', 'api_version': '2024-02-01'}}, ...}

    @pytest.mark.parametrize(
        "flow_folder, node_name, flow_inputs, dependency_nodes_outputs",
        [
            ("web_classification_no_variants", "summarize_text_content", {}, {"fetch_text_content_from_url": "Hello"}),
            ("prompt_tools", "summarize_text_content_prompt", {"text": "text"}, {}),
            ("script_with___file__", "node1", {"text": "text"}, None),
            ("script_with___file__", "node2", None, {"node1": "text"}),
            ("script_with___file__", "node3", None, None),
            ("package_tools", "search_by_text", {"text": "elon mask"}, None),  # Skip since no api key in CI
            ("connection_as_input", "conn_node", None, None),
            ("simple_aggregation", "accuracy", {"text": "A"}, {"passthrough": "B"}),
            ("script_with_import", "node1", {"text": "text"}, None),
        ],
    )
    def test_executor_exec_node(self, flow_folder, node_name, flow_inputs, dependency_nodes_outputs, dev_connections):
        self.skip_serp(flow_folder, dev_connections)
        yaml_file = get_yaml_file(flow_folder)
>       run_info = FlowExecutor.load_and_exec_node(
            yaml_file,
            node_name,
            flow_inputs=flow_inputs,
            dependency_nodes_outputs=dependency_nodes_outputs,
            connections=dev_connections,
            raise_ex=True,
        )

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_executor_happypath.py:145: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py:440: in load_and_exec_node
    context.invoke_tool(resolved_node.node, resolved_node.callable, kwargs=resolved_inputs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py:90: in invoke_tool
    result = self._invoke_tool_inner(node, f, kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py:201: in _invoke_tool_inner
    raise e
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py:182: in _invoke_tool_inner
    return f(**kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py:469: in wrapped
    output = func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

args = (<promptflow.tools.aoai.AzureOpenAI object at 0x7fb85f035070>,)
kwargs = {'best_of': '1', 'deployment_name': 'gpt-35-turbo', 'echo': 'False', 'frequency_penalty': 0.0, ...}
i = 0
error_message = "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable"

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for i in range(tries + 1):
            try:
                return func(*args, **kwargs)
            except (SystemErrorException, UserErrorException) as e:
                # Throw inner wrapped exception directly
                raise e
            except (APIStatusError, APIConnectionError) as e:
                #  Handle retriable exception, please refer to
                #  https://platform.openai.com/docs/guides/error-codes/api-errors
                print(f"Exception occurs: {type(e).__name__}: {str(e)}", file=sys.stderr)
                # Vision model does not support all chat api parameters, e.g. response_format and function_call.
                # Recommend user to use vision model in vision tools, rather than LLM tool.
                # Related issue https://github.com/microsoft/promptflow/issues/1683
                if isinstance(e, BadRequestError) and "extra fields not permitted" in str(e).lower():
                    refined_error_message = \
                        refine_extra_fields_not_permitted_error(args[0].connection,
                                                                kwargs.get("deployment_name", ""),
                                                                kwargs.get("model", ""))
                    if refined_error_message:
                        raise LLMError(message=f"{str(e)} {refined_error_message}")
                    else:
                        raise WrappedOpenAIError(e)
    
                if isinstance(e, APIConnectionError) and not isinstance(e, APITimeoutError) \
                        and not is_retriable_api_connection_error(e):
                    raise WrappedOpenAIError(e)
                # Retry InternalServerError(>=500), RateLimitError(429), UnprocessableEntityError(422)
                if isinstance(e, APIStatusError):
                    status_code = e.response.status_code
                    if status_code < 500 and status_code not in [429, 422]:
                        raise WrappedOpenAIError(e)
                if isinstance(e, RateLimitError) and getattr(e, "type", None) == "insufficient_quota":
                    # Exit retry if this is quota insufficient error
                    print(f"{type(e).__name__} with insufficient quota. Throw user error.", file=sys.stderr)
                    raise WrappedOpenAIError(e)
                if i == tries:
                    # Exit retry if max retry reached
                    print(f"{type(e).__name__} reached max retry. Exit retry with user error.", file=sys.stderr)
                    raise ExceedMaxRetryTimes(e)
    
                if hasattr(e, 'response') and e.response is not None:
                    retry_after_in_header = e.response.headers.get("retry-after", None)
                else:
                    retry_after_in_header = None
    
                if not retry_after_in_header:
                    retry_after_seconds = generate_retry_interval(i)
                    msg = (
                        f"{type(e).__name__} #{i}, but no Retry-After header, "
                        + f"Back off {retry_after_seconds} seconds for retry."
                    )
                    print(msg, file=sys.stderr)
                else:
                    retry_after_seconds = float(retry_after_in_header)
                    msg = (
                        f"{type(e).__name__} #{i}, Retry-After={retry_after_in_header}, "
                        f"Back off {retry_after_seconds} seconds for retry."
                    )
                    print(msg, file=sys.stderr)
                time.sleep(retry_after_seconds)
            except OpenAIError as e:
                # For other non-retriable errors from OpenAIError,
                # For example, AuthenticationError, APIConnectionError, BadRequestError, NotFoundError
                # Mark UserError for all the non-retriable OpenAIError
                print(f"Exception occurs: {type(e).__name__}: {str(e)}", file=sys.stderr)
                raise WrappedOpenAIError(e)
            except Exception as e:
                print(f"Exception occurs: {type(e).__name__}: {str(e)}", file=sys.stderr)
                error_message = f"OpenAI API hits exception: {type(e).__name__}: {str(e)}"
>               raise LLMError(message=error_message)
E               promptflow.tools.exception.LLMError: OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable

/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py:590: LLMError
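The innermost frames of the traces below point at `_should_escape` in `promptflow/tools/common.py`, which evaluates `k in inputs_to_escape` — raising exactly this `TypeError: argument of type 'NoneType' is not iterable` when `inputs_to_escape` arrives as `None` rather than a list. A minimal sketch reproducing the failure, with a hypothetical `PromptResult` stand-in and an illustrative defensive default (not the actual promptflow fix):

```python
class PromptResult(str):
    """Stand-in for promptflow's PromptResult type (assumed shape)."""

    def get_escape_mapping(self):
        return {}


def should_escape_buggy(k, v, inputs_to_escape):
    # Mirrors the expression in the trace: `k in inputs_to_escape`
    # raises TypeError when inputs_to_escape is None.
    return (isinstance(v, PromptResult) and bool(v.get_escape_mapping())) \
        or k in inputs_to_escape


def should_escape_guarded(k, v, inputs_to_escape):
    # Defensive variant: treat a missing list as "escape nothing".
    inputs_to_escape = inputs_to_escape or []
    return (isinstance(v, PromptResult) and bool(v.get_escape_mapping())) \
        or k in inputs_to_escape


try:
    should_escape_buggy("question", "hello", None)
except TypeError as e:
    print(f"TypeError: {e}")  # matches the error message in the log

print(should_escape_guarded("question", "hello", None))  # False
```

This matches the pattern of every failed line in the run: the batch itself completes, but each `summarize_text_content` node fails inside `build_escape_dict` before the OpenAI call is made.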

Check warning on line 0 in tests.executor.e2etests.test_batch_engine.TestBatch



test_spawn_mode_batch_run[web_classification_no_variants-inputs_mapping0] (tests.executor.e2etests.test_batch_engine.TestBatch) failed

artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results.xml [took 8s]
Raw output
Exception: Hit exception: assert 0 == 2
 +  where 0 = BatchResult(status=<Status.Completed: 'Completed'>, total_lines=2, completed_lines=0, failed_lines=2, node_status={'fetch_text_content_from_url.completed': 2, 'prepare_examples.completed': 2, 'summarize_text_content.failed': 2}, start_time=datetime.datetime(2024, 4, 22, 12, 24, 44, 8778), end_time=datetime.datetime(2024, 4, 22, 12, 24, 51, 900236), metrics={}, system_metrics=SystemMetrics(total_tokens=0, prompt_tokens=0, completion_tokens=0, duration=7.891458), error_summary=ErrorSummary(failed_user_error_lines=2, failed_system_error_lines=0, error_list=[LineError(line_number=0, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}}), LineError(line_number=1, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most 
recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}})], aggr_error_dict={}, batch_error_dict=None)).completed_lines
Stack trace: Traceback (most recent call last):
  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_batch_engine.py", line 62, in run_batch_with_start_method
    _run_batch_with_start_method(multiprocessing_start_method, flow_folder, inputs_mapping, dev_connections)
  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_batch_engine.py", line 86, in _run_batch_with_start_method
    assert batch_result.completed_lines == nlines
AssertionError: assert 0 == 2
 +  where 0 = BatchResult(status=<Status.Completed: 'Completed'>, total_lines=2, completed_lines=0, failed_lines=2, node_status={'fetch_text_content_from_url.completed': 2, 'prepare_examples.completed': 2, 'summarize_text_content.failed': 2}, start_time=datetime.datetime(2024, 4, 22, 12, 24, 44, 8778), end_time=datetime.datetime(2024, 4, 22, 12, 24, 51, 900236), metrics={}, system_metrics=SystemMetrics(total_tokens=0, prompt_tokens=0, completion_tokens=0, duration=7.891458), error_summary=ErrorSummary(failed_user_error_lines=2, failed_system_error_lines=0, error_list=[LineError(line_number=0, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}}), LineError(line_number=1, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most 
recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}})], aggr_error_dict={}, batch_error_dict=None)).completed_lines
self = <executor.e2etests.test_batch_engine.TestBatch object at 0x7f06dacfe910>
flow_folder = 'web_classification_no_variants'
inputs_mapping = {'url': '${data.url}'}
dev_connections = {'aoai_assistant_connection': {'module': 'promptflow.connections', 'type': 'AzureOpenAIConnection', 'value': {'api_bas...ai.azure.com/', 'api_key': 'ac21d4a97d044b28990a740132f40d35', 'api_type': 'azure', 'api_version': '2024-02-01'}}, ...}

    @pytest.mark.parametrize(
        "flow_folder, inputs_mapping",
        [
            (
                SAMPLE_FLOW,
                {"url": "${data.url}"},
            ),
            (
                "prompt_tools",
                {"text": "${data.text}"},
            ),
            (
                "script_with___file__",
                {"text": "${data.text}"},
            ),
            (
                "sample_flow_with_functions",
                {"question": "${data.question}"},
            ),
        ],
    )
    def test_spawn_mode_batch_run(self, flow_folder, inputs_mapping, dev_connections):
        if "spawn" not in multiprocessing.get_all_start_methods():
            pytest.skip("Unsupported start method: spawn")
        exception_queue = multiprocessing.Queue()
        p = multiprocessing.Process(
            target=run_batch_with_start_method,
            args=("spawn", flow_folder, inputs_mapping, dev_connections, exception_queue),
        )
        p.start()
        p.join()
        if p.exitcode != 0:
            ex = exception_queue.get(timeout=1)
>           raise ex
E           Exception: Hit exception: assert 0 == 2
E            +  where 0 = BatchResult(status=<Status.Completed: 'Completed'>, total_lines=2, completed_lines=0, failed_lines=2, node_status={'fetch_text_content_from_url.completed': 2, 'prepare_examples.completed': 2, 'summarize_text_content.failed': 2}, start_time=datetime.datetime(2024, 4, 22, 12, 24, 44, 8778), end_time=datetime.datetime(2024, 4, 22, 12, 24, 51, 900236), metrics={}, system_metrics=SystemMetrics(total_tokens=0, prompt_tokens=0, completion_tokens=0, duration=7.891458), error_summary=ErrorSummary(failed_user_error_lines=2, failed_system_error_lines=0, error_list=[LineError(line_number=0, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}}), LineError(line_number=1, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most 
recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}})], aggr_error_dict={}, batch_error_dict=None)).completed_lines
E           Stack trace: Traceback (most recent call last):
E             File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_batch_engine.py", line 62, in run_batch_with_start_method
E               _run_batch_with_start_method(multiprocessing_start_method, flow_folder, inputs_mapping, dev_connections)
E             File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_batch_engine.py", line 86, in _run_batch_with_start_method
E               assert batch_result.completed_lines == nlines
E           AssertionError: assert 0 == 2
E            +  where 0 = BatchResult(status=<Status.Completed: 'Completed'>, total_lines=2, completed_lines=0, failed_lines=2, node_status={'fetch_text_content_from_url.completed': 2, 'prepare_examples.completed': 2, 'summarize_text_content.failed': 2}, start_time=datetime.datetime(2024, 4, 22, 12, 24, 44, 8778), end_time=datetime.datetime(2024, 4, 22, 12, 24, 51, 900236), metrics={}, system_metrics=SystemMetrics(total_tokens=0, prompt_tokens=0, completion_tokens=0, duration=7.891458), error_summary=ErrorSummary(failed_user_error_lines=2, failed_system_error_lines=0, error_list=[LineError(line_number=0, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}}), LineError(line_number=1, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most 
recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}})], aggr_error_dict={}, batch_error_dict=None)).completed_lines

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_batch_engine.py:198: Exception

Check warning on line 0 in tests.executor.e2etests.test_executor_happypath.TestExecutor


test_executor_node_overrides (tests.executor.e2etests.test_executor_happypath.TestExecutor) failed

artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results.xml [took 0s]
Raw output
AssertionError: assert 'LLMError' == 'WrappedOpenAIError'
  
  - WrappedOpenAIError
  + LLMError
self = <executor.e2etests.test_executor_happypath.TestExecutor object at 0x7fb8600e4ac0>
dev_connections = {'aoai_assistant_connection': {'module': 'promptflow.connections', 'name': 'aoai_assistant_connection', 'type': 'Azure...ai.azure.com/', 'api_key': 'ac21d4a97d044b28990a740132f40d35', 'api_type': 'azure', 'api_version': '2024-02-01'}}, ...}

    def test_executor_node_overrides(self, dev_connections):
        inputs = self.get_line_inputs()
        executor = FlowExecutor.create(
            get_yaml_file(SAMPLE_FLOW),
            dev_connections,
            node_override={"classify_with_llm.deployment_name": "dummy_deployment"},
            raise_ex=True,
        )
        with pytest.raises(UserErrorException) as e:
            executor.exec_line(inputs)
>       assert type(e.value).__name__ == "WrappedOpenAIError"
E       AssertionError: assert 'LLMError' == 'WrappedOpenAIError'
E         
E         - WrappedOpenAIError
E         + LLMError

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_executor_happypath.py:183: AssertionError

Check warning on line 0 in tests.executor.e2etests.test_executor_happypath.TestExecutor


test_executor_creation_with_default_variants[web_classification] (tests.executor.e2etests.test_executor_happypath.TestExecutor) failed

artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results.xml [took 0s]
Raw output
promptflow.tools.exception.LLMError: OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable
args = (<promptflow.tools.aoai.AzureOpenAI object at 0x7fb85cd51820>,)
kwargs = {'best_of': '1', 'deployment_name': 'gpt-35-turbo', 'echo': 'False', 'frequency_penalty': 0.0, ...}
i = 0
error_message = "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable"

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for i in range(tries + 1):
            try:
>               return func(*args, **kwargs)

/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py:523: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py:124: in chat
    messages = build_messages(prompt, **kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py:719: in build_messages
    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py:622: in build_escape_dict
    if _should_escape(k, v, inputs_to_escape):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

k = 'echo', v = 'False', inputs_to_escape = None

    def _should_escape(k, v, inputs_to_escape: list):
>       return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape
E       TypeError: argument of type 'NoneType' is not iterable

/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py:615: TypeError

During handling of the above exception, another exception occurred:

self = <executor.e2etests.test_executor_happypath.TestExecutor object at 0x7fb860048e20>
flow_folder = 'web_classification'
dev_connections = {'aoai_assistant_connection': {'module': 'promptflow.connections', 'name': 'aoai_assistant_connection', 'type': 'Azure...ai.azure.com/', 'api_key': 'ac21d4a97d044b28990a740132f40d35', 'api_type': 'azure', 'api_version': '2024-02-01'}}, ...}

    @pytest.mark.parametrize(
        "flow_folder",
        [
            "web_classification",
        ],
    )
    def test_executor_creation_with_default_variants(self, flow_folder, dev_connections):
        executor = FlowExecutor.create(get_yaml_file(flow_folder), dev_connections)
>       flow_result = executor.exec_line(self.get_line_inputs())

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_executor_happypath.py:262: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py:703: in exec_line
    line_result = self._exec(
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py:964: in _exec
    output, aggregation_inputs = self._exec_inner_with_trace(
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py:864: in _exec_inner_with_trace
    output, aggregation_inputs = self._exec_inner(
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py:895: in _exec_inner
    output, nodes_outputs = self._traverse_nodes(inputs, context)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py:1146: in _traverse_nodes
    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py:1183: in _submit_to_scheduler
    return scheduler.execute(self._line_timeout_sec)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py:131: in execute
    raise e
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py:113: in execute
    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py:160: in _collect_outputs
    each_node_result = each_future.result()
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py:439: in result
    return self.__get_result()
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py:391: in __get_result
    raise self._exception
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py:58: in run
    result = self.fn(*self.args, **self.kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py:181: in _exec_single_node_in_thread
    result = context.invoke_tool(node, f, kwargs=kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py:90: in invoke_tool
    result = self._invoke_tool_inner(node, f, kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py:201: in _invoke_tool_inner
    raise e
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py:182: in _invoke_tool_inner
    return f(**kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py:469: in wrapped
    output = func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

args = (<promptflow.tools.aoai.AzureOpenAI object at 0x7fb85cd51820>,)
kwargs = {'best_of': '1', 'deployment_name': 'gpt-35-turbo', 'echo': 'False', 'frequency_penalty': 0.0, ...}
i = 0
error_message = "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable"

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for i in range(tries + 1):
            try:
                return func(*args, **kwargs)
            except (SystemErrorException, UserErrorException) as e:
                # Throw inner wrapped exception directly
                raise e
            except (APIStatusError, APIConnectionError) as e:
                #  Handle retriable exception, please refer to
                #  https://platform.openai.com/docs/guides/error-codes/api-errors
                print(f"Exception occurs: {type(e).__name__}: {str(e)}", file=sys.stderr)
                # Vision model does not support all chat api parameters, e.g. response_format and function_call.
                # Recommend user to use vision model in vision tools, rather than LLM tool.
                # Related issue https://github.com/microsoft/promptflow/issues/1683
                if isinstance(e, BadRequestError) and "extra fields not permitted" in str(e).lower():
                    refined_error_message = \
                        refine_extra_fields_not_permitted_error(args[0].connection,
                                                                kwargs.get("deployment_name", ""),
                                                                kwargs.get("model", ""))
                    if refined_error_message:
                        raise LLMError(message=f"{str(e)} {refined_error_message}")
                    else:
                        raise WrappedOpenAIError(e)
    
                if isinstance(e, APIConnectionError) and not isinstance(e, APITimeoutError) \
                        and not is_retriable_api_connection_error(e):
                    raise WrappedOpenAIError(e)
                # Retry InternalServerError(>=500), RateLimitError(429), UnprocessableEntityError(422)
                if isinstance(e, APIStatusError):
                    status_code = e.response.status_code
                    if status_code < 500 and status_code not in [429, 422]:
                        raise WrappedOpenAIError(e)
                if isinstance(e, RateLimitError) and getattr(e, "type", None) == "insufficient_quota":
                    # Exit retry if this is quota insufficient error
                    print(f"{type(e).__name__} with insufficient quota. Throw user error.", file=sys.stderr)
                    raise WrappedOpenAIError(e)
                if i == tries:
                    # Exit retry if max retry reached
                    print(f"{type(e).__name__} reached max retry. Exit retry with user error.", file=sys.stderr)
                    raise ExceedMaxRetryTimes(e)
    
                if hasattr(e, 'response') and e.response is not None:
                    retry_after_in_header = e.response.headers.get("retry-after", None)
                else:
                    retry_after_in_header = None
    
                if not retry_after_in_header:
                    retry_after_seconds = generate_retry_interval(i)
                    msg = (
                        f"{type(e).__name__} #{i}, but no Retry-After header, "
                        + f"Back off {retry_after_seconds} seconds for retry."
                    )
                    print(msg, file=sys.stderr)
                else:
                    retry_after_seconds = float(retry_after_in_header)
                    msg = (
                        f"{type(e).__name__} #{i}, Retry-After={retry_after_in_header}, "
                        f"Back off {retry_after_seconds} seconds for retry."
                    )
                    print(msg, file=sys.stderr)
                time.sleep(retry_after_seconds)
            except OpenAIError as e:
                # For other non-retriable errors from OpenAIError,
                # For example, AuthenticationError, APIConnectionError, BadRequestError, NotFoundError
                # Mark UserError for all the non-retriable OpenAIError
                print(f"Exception occurs: {type(e).__name__}: {str(e)}", file=sys.stderr)
                raise WrappedOpenAIError(e)
            except Exception as e:
                print(f"Exception occurs: {type(e).__name__}: {str(e)}", file=sys.stderr)
                error_message = f"OpenAI API hits exception: {type(e).__name__}: {str(e)}"
>               raise LLMError(message=error_message)
E               promptflow.tools.exception.LLMError: OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable

/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py:590: LLMError
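
This also explains the `LLMError` vs `WrappedOpenAIError` mismatch in `test_executor_node_overrides`: the `TypeError` is raised before any OpenAI call is made, so it is caught by the final generic `except Exception` branch of the retry wrapper and re-raised as `LLMError`. A minimal reproduction sketch of that wrapping behavior (not the promptflow code itself):

```python
class LLMError(Exception):
    """Stand-in for promptflow.tools.exception.LLMError."""


def handle_errors(func):
    # Mirrors the final branch of the retry wrapper above: any exception not
    # matched by an earlier, more specific branch is rewrapped as LLMError.
    def inner(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            raise LLMError(f"OpenAI API hits exception: {type(e).__name__}: {e}")
    return inner


@handle_errors
def chat(inputs_to_escape=None):
    # Fails before reaching OpenAI, just like build_escape_dict in the log.
    return "echo" in inputs_to_escape


try:
    chat()
except LLMError as err:
    msg = str(err)
```

So the tests that expected `WrappedOpenAIError` now observe `LLMError` carrying the underlying `TypeError` message.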

Check warning on line 0 in tests.executor.e2etests.test_activate.TestExecutorActivate


test_flow_run_activate[conditional_flow_with_activate] (tests.executor.e2etests.test_activate.TestExecutorActivate) failed

artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results.xml [took 0s]
Raw output
promptflow.tools.exception.LLMError: OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable
args = (<promptflow.tools.aoai.AzureOpenAI object at 0x7f8d7bc485e0>,)
kwargs = {'deployment_name': 'gpt-35-turbo', 'first_method': None, 'frequency_penalty': 0.0, 'max_tokens': 256, ...}
i = 0
error_message = "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable"

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for i in range(tries + 1):
            try:
>               return func(*args, **kwargs)

/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py:523: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py:124: in chat
    messages = build_messages(prompt, **kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py:719: in build_messages
    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py:622: in build_escape_dict
    if _should_escape(k, v, inputs_to_escape):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

k = 'first_method', v = None, inputs_to_escape = None

    def _should_escape(k, v, inputs_to_escape: list):
>       return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape
E       TypeError: argument of type 'NoneType' is not iterable

/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py:615: TypeError

During handling of the above exception, another exception occurred:

self = <executor.e2etests.test_activate.TestExecutorActivate object at 0x7f8d7d609a60>
dev_connections = {'aoai_assistant_connection': {'module': 'promptflow.connections', 'name': 'aoai_assistant_connection', 'type': 'Azure...ai.azure.com/', 'api_key': 'ac21d4a97d044b28990a740132f40d35', 'api_type': 'azure', 'api_version': '2024-02-01'}}, ...}
flow_folder = 'conditional_flow_with_activate'

    @pytest.mark.parametrize("flow_folder", ACTIVATE_FLOW_TEST_CASES)
    def test_flow_run_activate(self, dev_connections, flow_folder):
        executor = FlowExecutor.create(get_yaml_file(flow_folder), dev_connections)
>       results = executor.exec_line(get_flow_inputs(flow_folder))

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_activate.py:43: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py:703: in exec_line
    line_result = self._exec(
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py:964: in _exec
    output, aggregation_inputs = self._exec_inner_with_trace(
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py:864: in _exec_inner_with_trace
    output, aggregation_inputs = self._exec_inner(
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py:895: in _exec_inner
    output, nodes_outputs = self._traverse_nodes(inputs, context)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py:1146: in _traverse_nodes
    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py:1183: in _submit_to_scheduler
    return scheduler.execute(self._line_timeout_sec)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py:131: in execute
    raise e
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py:113: in execute
    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py:160: in _collect_outputs
    each_node_result = each_future.result()
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py:439: in result
    return self.__get_result()
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py:391: in __get_result
    raise self._exception
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py:58: in run
    result = self.fn(*self.args, **self.kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py:181: in _exec_single_node_in_thread
    result = context.invoke_tool(node, f, kwargs=kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py:90: in invoke_tool
    result = self._invoke_tool_inner(node, f, kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py:201: in _invoke_tool_inner
    raise e
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py:182: in _invoke_tool_inner
    return f(**kwargs)
/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py:469: in wrapped
    output = func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

args = (<promptflow.tools.aoai.AzureOpenAI object at 0x7f8d7bc485e0>,)
kwargs = {'deployment_name': 'gpt-35-turbo', 'first_method': None, 'frequency_penalty': 0.0, 'max_tokens': 256, ...}
i = 0
error_message = "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable"

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for i in range(tries + 1):
            try:
                return func(*args, **kwargs)
            except (SystemErrorException, UserErrorException) as e:
                # Throw inner wrapped exception directly
                raise e
            except (APIStatusError, APIConnectionError) as e:
                #  Handle retriable exception, please refer to
                #  https://platform.openai.com/docs/guides/error-codes/api-errors
                print(f"Exception occurs: {type(e).__name__}: {str(e)}", file=sys.stderr)
                # Vision model does not support all chat api parameters, e.g. response_format and function_call.
                # Recommend user to use vision model in vision tools, rather than LLM tool.
                # Related issue https://github.com/microsoft/promptflow/issues/1683
                if isinstance(e, BadRequestError) and "extra fields not permitted" in str(e).lower():
                    refined_error_message = \
                        refine_extra_fields_not_permitted_error(args[0].connection,
                                                                kwargs.get("deployment_name", ""),
                                                                kwargs.get("model", ""))
                    if refined_error_message:
                        raise LLMError(message=f"{str(e)} {refined_error_message}")
                    else:
                        raise WrappedOpenAIError(e)
    
                if isinstance(e, APIConnectionError) and not isinstance(e, APITimeoutError) \
                        and not is_retriable_api_connection_error(e):
                    raise WrappedOpenAIError(e)
                # Retry InternalServerError(>=500), RateLimitError(429), UnprocessableEntityError(422)
                if isinstance(e, APIStatusError):
                    status_code = e.response.status_code
                    if status_code < 500 and status_code not in [429, 422]:
                        raise WrappedOpenAIError(e)
                if isinstance(e, RateLimitError) and getattr(e, "type", None) == "insufficient_quota":
                    # Exit retry if this is quota insufficient error
                    print(f"{type(e).__name__} with insufficient quota. Throw user error.", file=sys.stderr)
                    raise WrappedOpenAIError(e)
                if i == tries:
                    # Exit retry if max retry reached
                    print(f"{type(e).__name__} reached max retry. Exit retry with user error.", file=sys.stderr)
                    raise ExceedMaxRetryTimes(e)
    
                if hasattr(e, 'response') and e.response is not None:
                    retry_after_in_header = e.response.headers.get("retry-after", None)
                else:
                    retry_after_in_header = None
    
                if not retry_after_in_header:
                    retry_after_seconds = generate_retry_interval(i)
                    msg = (
                        f"{type(e).__name__} #{i}, but no Retry-After header, "
                        + f"Back off {retry_after_seconds} seconds for retry."
                    )
                    print(msg, file=sys.stderr)
                else:
                    retry_after_seconds = float(retry_after_in_header)
                    msg = (
                        f"{type(e).__name__} #{i}, Retry-After={retry_after_in_header}, "
                        f"Back off {retry_after_seconds} seconds for retry."
                    )
                    print(msg, file=sys.stderr)
                time.sleep(retry_after_seconds)
            except OpenAIError as e:
                # For other non-retriable errors from OpenAIError,
                # For example, AuthenticationError, APIConnectionError, BadRequestError, NotFoundError
                # Mark UserError for all the non-retriable OpenAIError
                print(f"Exception occurs: {type(e).__name__}: {str(e)}", file=sys.stderr)
                raise WrappedOpenAIError(e)
            except Exception as e:
                print(f"Exception occurs: {type(e).__name__}: {str(e)}", file=sys.stderr)
                error_message = f"OpenAI API hits exception: {type(e).__name__}: {str(e)}"
>               raise LLMError(message=error_message)
E               promptflow.tools.exception.LLMError: OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable

/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py:590: LLMError

Check warning on line 0 in tests.executor.e2etests.test_activate.TestExecutorActivate


test_batch_run_activate (tests.executor.e2etests.test_activate.TestExecutorActivate) failed

artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results.xml [took 7s]
Raw output
AssertionError: assert {'line_number': 1, 'investigation_method': {'first': 'Execute job info extractor', 'second': 'Skip incident info extractor'}} == {'investigation_method': {'first': 'Skip job info extractor', 'second': 'Execute incident info extractor'}, 'line_number': 0}
  
  Differing items:
  {'line_number': 1} != {'line_number': 0}
  {'investigation_method': {'first': 'Execute job info extractor', 'second': 'Skip incident info extractor'}} != {'investigation_method': {'first': 'Skip job info extractor', 'second': 'Execute incident info extractor'}}
  
  Full diff:
    {
        'investigation_method': {
  -         'first': 'Skip job info extractor',
  ?                   ^^^^
  +         'first': 'Execute job info extractor',
  ?                   ^^^^^^^
  -         'second': 'Execute incident info extractor',
  ?                    ^^^^^^^
  +         'second': 'Skip incident info extractor',
  ?                    ^^^^
        },
  -     'line_number': 0,
  ?                    ^
  +     'line_number': 1,
  ?                    ^
    }
self = <executor.e2etests.test_activate.TestExecutorActivate object at 0x7f8d7d03f490>
dev_connections = {'aoai_assistant_connection': {'module': 'promptflow.connections', 'name': 'aoai_assistant_connection', 'type': 'Azure...ai.azure.com/', 'api_key': 'ac21d4a97d044b28990a740132f40d35', 'api_type': 'azure', 'api_version': '2024-02-01'}}, ...}

    def test_batch_run_activate(self, dev_connections):
        flow_folder = "conditional_flow_with_activate"
        mem_run_storage = MemoryRunStorage()
        batch_engine = BatchEngine(
            get_yaml_file(flow_folder),
            get_flow_folder(flow_folder),
            connections=dev_connections,
            storage=mem_run_storage,
        )
        input_dirs = {"data": get_flow_inputs_file(flow_folder, file_name="inputs.json")}
        inputs_mapping = {"incident_id": "${data.incident_id}", "incident_content": "${data.incident_content}"}
        output_dir = Path(mkdtemp())
        batch_results = batch_engine.run(input_dirs, inputs_mapping, output_dir)
    
        expected_result = get_flow_expected_result(flow_folder)
        expected_status_summary = get_flow_expected_status_summary(flow_folder)
>       self.assert_activate_bulk_run_result(
            output_dir, mem_run_storage, batch_results, expected_result, expected_status_summary
        )

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_activate.py:66: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <executor.e2etests.test_activate.TestExecutorActivate object at 0x7f8d7d03f490>
output_dir = PosixPath('/tmp/tmpkos2f8dz')
mem_run_storage = <executor.utils.MemoryRunStorage object at 0x7f8d7b8a8d30>
batch_result = BatchResult(status=<Status.Completed: 'Completed'>, total_lines=3, completed_lines=1, failed_lines=2, node_status={'in...escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}})], aggr_error_dict={}, batch_error_dict=None))
expected_result = [{'expected_bypassed_nodes': ['job_info_extractor', 'icm_retriever'], 'expected_node_count': 9, 'expected_outputs': {'...outputs': {'investigation_method': {'first': 'Skip job info extractor', 'second': 'Execute incident info extractor'}}}]
expected_status_summary = {'icm_retriever.bypassed': 2, 'icm_retriever.completed': 1, 'incident_id_extractor.completed': 3, 'incident_info_extractor.bypassed': 1, ...}

    def assert_activate_bulk_run_result(
        self,
        output_dir: Path,
        mem_run_storage: MemoryRunStorage,
        batch_result: BatchResult,
        expected_result,
        expected_status_summary,
    ):
        # Validate the flow outputs
        outputs = load_jsonl(output_dir / OUTPUT_FILE_NAME)
        for i, output in enumerate(outputs):
            expected_outputs = expected_result[i]["expected_outputs"].copy()
            expected_outputs.update({"line_number": i})
>           assert output == expected_outputs
E           AssertionError: assert {'line_number': 1, 'investigation_method': {'first': 'Execute job info extractor', 'second': 'Skip incident info extractor'}} == {'investigation_method': {'first': 'Skip job info extractor', 'second': 'Execute incident info extractor'}, 'line_number': 0}
E             
E             Differing items:
E             {'line_number': 1} != {'line_number': 0}
E             {'investigation_method': {'first': 'Execute job info extractor', 'second': 'Skip incident info extractor'}} != {'investigation_method': {'first': 'Skip job info extractor', 'second': 'Execute incident info extractor'}}
E             
E             Full diff:
E               {
E                   'investigation_method': {
E             -         'first': 'Skip job info extractor',
E             ?                   ^^^^
E             +         'first': 'Execute job info extractor',
E             ?                   ^^^^^^^
E             -         'second': 'Execute incident info extractor',
E             ?                    ^^^^^^^
E             +         'second': 'Skip incident info extractor',
E             ?                    ^^^^
E                   },
E             -     'line_number': 0,
E             ?                    ^
E             +     'line_number': 1,
E             ?                    ^
E               }

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_activate.py:129: AssertionError
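Because line 0 failed in this run, the surviving outputs are shifted relative to `expected_result`, so the `enumerate`-based loop pairs line 1's output with line 0's expectation. A hedged sketch of matching by the recorded `line_number` instead (field names assumed to mirror the test code above; this is an illustration, not the repository's actual assertion helper):

```python
def assert_outputs_by_line_number(outputs: list, expected_result: list) -> None:
    """Pair each output with its expectation via line_number, not list position."""
    for output in outputs:
        line_no = output["line_number"]
        expected = dict(expected_result[line_no]["expected_outputs"])
        expected["line_number"] = line_no
        assert output == expected, f"mismatch at line {line_no}"


# Example: line 0 failed, so only line 1 produced an output.
expected_result = [
    {"expected_outputs": {"investigation_method": "a"}},
    {"expected_outputs": {"investigation_method": "b"}},
]
outputs = [{"line_number": 1, "investigation_method": "b"}]
assert_outputs_by_line_number(outputs, expected_result)  # passes
```

This only changes how the mismatch is reported; the underlying failure here is still the `NoneType` escape bug causing line 0 to fail in the first place.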

Check warning on line 0 in tests.executor.e2etests.test_batch_engine.TestBatch


test_forkserver_mode_batch_run[web_classification_no_variants-inputs_mapping0] (tests.executor.e2etests.test_batch_engine.TestBatch) failed

artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results.xml [took 7s]
Raw output
Exception: Hit exception: assert 0 == 2
 +  where 0 = BatchResult(status=<Status.Completed: 'Completed'>, total_lines=2, completed_lines=0, failed_lines=2, node_status={'fetch_text_content_from_url.completed': 2, 'prepare_examples.completed': 2, 'summarize_text_content.failed': 2}, start_time=datetime.datetime(2024, 4, 22, 12, 25, 17, 215297), end_time=datetime.datetime(2024, 4, 22, 12, 25, 24, 524753), metrics={}, system_metrics=SystemMetrics(total_tokens=0, prompt_tokens=0, completion_tokens=0, duration=7.309456), error_summary=ErrorSummary(failed_user_error_lines=2, failed_system_error_lines=0, error_list=[LineError(line_number=0, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}}), LineError(line_number=1, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most 
recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}})], aggr_error_dict={}, batch_error_dict=None)).completed_lines
Stack trace: Traceback (most recent call last):
  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_batch_engine.py", line 62, in run_batch_with_start_method
    _run_batch_with_start_method(multiprocessing_start_method, flow_folder, inputs_mapping, dev_connections)
  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_batch_engine.py", line 86, in _run_batch_with_start_method
    assert batch_result.completed_lines == nlines
AssertionError: assert 0 == 2
 +  where 0 = BatchResult(status=<Status.Completed: 'Completed'>, total_lines=2, completed_lines=0, failed_lines=2, node_status={'fetch_text_content_from_url.completed': 2, 'prepare_examples.completed': 2, 'summarize_text_content.failed': 2}, start_time=datetime.datetime(2024, 4, 22, 12, 25, 17, 215297), end_time=datetime.datetime(2024, 4, 22, 12, 25, 24, 524753), metrics={}, system_metrics=SystemMetrics(total_tokens=0, prompt_tokens=0, completion_tokens=0, duration=7.309456), error_summary=ErrorSummary(failed_user_error_lines=2, failed_system_error_lines=0, error_list=[LineError(line_number=0, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}}), LineError(line_number=1, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most 
recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}})], aggr_error_dict={}, batch_error_dict=None)).completed_lines
self = <executor.e2etests.test_batch_engine.TestBatch object at 0x7f06dad14490>
flow_folder = 'web_classification_no_variants'
inputs_mapping = {'url': '${data.url}'}
dev_connections = {'aoai_assistant_connection': {'module': 'promptflow.connections', 'type': 'AzureOpenAIConnection', 'value': {'api_bas...ai.azure.com/', 'api_key': 'ac21d4a97d044b28990a740132f40d35', 'api_type': 'azure', 'api_version': '2024-02-01'}}, ...}

    @pytest.mark.parametrize(
        "flow_folder, inputs_mapping",
        [
            (
                SAMPLE_FLOW,
                {"url": "${data.url}"},
            ),
            (
                "prompt_tools",
                {"text": "${data.text}"},
            ),
            (
                "script_with___file__",
                {"text": "${data.text}"},
            ),
            (
                "sample_flow_with_functions",
                {"question": "${data.question}"},
            ),
        ],
    )
    def test_forkserver_mode_batch_run(self, flow_folder, inputs_mapping, dev_connections):
        if "forkserver" not in multiprocessing.get_all_start_methods():
            pytest.skip("Unsupported start method: forkserver")
        exception_queue = multiprocessing.Queue()
        p = multiprocessing.Process(
            target=run_batch_with_start_method,
            args=("forkserver", flow_folder, inputs_mapping, dev_connections, exception_queue),
        )
        p.start()
        p.join()
        if p.exitcode != 0:
            ex = exception_queue.get(timeout=1)
>           raise ex
E           Exception: Hit exception: assert 0 == 2
E            +  where 0 = BatchResult(status=<Status.Completed: 'Completed'>, total_lines=2, completed_lines=0, failed_lines=2, node_status={'fetch_text_content_from_url.completed': 2, 'prepare_examples.completed': 2, 'summarize_text_content.failed': 2}, start_time=datetime.datetime(2024, 4, 22, 12, 25, 17, 215297), end_time=datetime.datetime(2024, 4, 22, 12, 25, 24, 524753), metrics={}, system_metrics=SystemMetrics(total_tokens=0, prompt_tokens=0, completion_tokens=0, duration=7.309456), error_summary=ErrorSummary(failed_user_error_lines=2, failed_system_error_lines=0, error_list=[LineError(line_number=0, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}}), LineError(line_number=1, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most 
recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}})], aggr_error_dict={}, batch_error_dict=None)).completed_lines
E           Stack trace: Traceback (most recent call last):
E             File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_batch_engine.py", line 62, in run_batch_with_start_method
E               _run_batch_with_start_method(multiprocessing_start_method, flow_folder, inputs_mapping, dev_connections)
E             File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_batch_engine.py", line 86, in _run_batch_with_start_method
E               assert batch_result.completed_lines == nlines
E           AssertionError: assert 0 == 2
E            +  where 0 = BatchResult(status=<Status.Completed: 'Completed'>, total_lines=2, completed_lines=0, failed_lines=2, node_status={'fetch_text_content_from_url.completed': 2, 'prepare_examples.completed': 2, 'summarize_text_content.failed': 2}, start_time=datetime.datetime(2024, 4, 22, 12, 25, 17, 215297), end_time=datetime.datetime(2024, 4, 22, 12, 25, 24, 524753), metrics={}, system_metrics=SystemMetrics(total_tokens=0, prompt_tokens=0, completion_tokens=0, duration=7.309456), error_summary=ErrorSummary(failed_user_error_lines=2, failed_system_error_lines=0, error_list=[LineError(line_number=0, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}}), LineError(line_number=1, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most 
recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}})], aggr_error_dict={}, batch_error_dict=None)).completed_lines

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_batch_engine.py:233: Exception

Check warning on line 0 in tests.executor.e2etests.test_batch_engine.TestBatch



test_batch_run_then_eval (tests.executor.e2etests.test_batch_engine.TestBatch) failed

artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results.xml [took 7s]
Raw output
assert 0 == 2
 +  where 0 = BatchResult(status=<Status.Completed: 'Completed'>, total_lines=2, completed_lines=0, failed_lines=2, node_status={'fetch_text_content_from_url.completed': 2, 'prepare_examples.completed': 2, 'summarize_text_content.failed': 2}, start_time=datetime.datetime(2024, 4, 22, 12, 25, 43, 789040), end_time=datetime.datetime(2024, 4, 22, 12, 25, 50, 960587), metrics={}, system_metrics=SystemMetrics(total_tokens=0, prompt_tokens=0, completion_tokens=0, duration=7.171547), error_summary=ErrorSummary(failed_user_error_lines=2, failed_system_error_lines=0, error_list=[LineError(line_number=0, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}}), LineError(line_number=1, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most 
recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}})], aggr_error_dict={}, batch_error_dict=None)).completed_lines
self = <executor.e2etests.test_batch_engine.TestBatch object at 0x7f06dacb7e20>
dev_connections = {'aoai_assistant_connection': {'module': 'promptflow.connections', 'name': 'aoai_assistant_connection', 'type': 'Azure...ai.azure.com/', 'api_key': 'ac21d4a97d044b28990a740132f40d35', 'api_type': 'azure', 'api_version': '2024-02-01'}}, ...}

    def test_batch_run_then_eval(self, dev_connections):
        batch_results, output_dir = submit_batch_run(
            SAMPLE_FLOW, {"url": "${data.url}"}, connections=dev_connections, return_output_dir=True
        )
        nlines = get_batch_inputs_line(SAMPLE_FLOW)
>       assert batch_results.completed_lines == nlines
E       assert 0 == 2
E        +  where 0 = BatchResult(status=<Status.Completed: 'Completed'>, total_lines=2, completed_lines=0, failed_lines=2, node_status={'fetch_text_content_from_url.completed': 2, 'prepare_examples.completed': 2, 'summarize_text_content.failed': 2}, start_time=datetime.datetime(2024, 4, 22, 12, 25, 43, 789040), end_time=datetime.datetime(2024, 4, 22, 12, 25, 50, 960587), metrics={}, system_metrics=SystemMetrics(total_tokens=0, prompt_tokens=0, completion_tokens=0, duration=7.171547), error_summary=ErrorSummary(failed_user_error_lines=2, failed_system_error_lines=0, error_list=[LineError(line_number=0, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}}), LineError(line_number=1, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most 
recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}})], aggr_error_dict={}, batch_error_dict=None)).completed_lines

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_batch_engine.py:240: AssertionError

Check warning on line 0 in tests.executor.e2etests.test_batch_engine.TestBatch


test_batch_with_openai_metrics (tests.executor.e2etests.test_batch_engine.TestBatch) failed

artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results.xml [took 6s]
Raw output
assert 0 == 2
 +  where 0 = len([])
self = <executor.e2etests.test_batch_engine.TestBatch object at 0x7f06dacbe7c0>
dev_connections = {'aoai_assistant_connection': {'module': 'promptflow.connections', 'name': 'aoai_assistant_connection', 'type': 'Azure...ai.azure.com/', 'api_key': 'ac21d4a97d044b28990a740132f40d35', 'api_type': 'azure', 'api_version': '2024-02-01'}}, ...}

    def test_batch_with_openai_metrics(self, dev_connections):
        inputs_mapping = {"url": "${data.url}"}
        batch_result, output_dir = submit_batch_run(
            SAMPLE_FLOW, inputs_mapping, connections=dev_connections, return_output_dir=True
        )
        nlines = get_batch_inputs_line(SAMPLE_FLOW)
        outputs = load_jsonl(output_dir / OUTPUT_FILE_NAME)
>       assert len(outputs) == nlines
E       assert 0 == 2
E        +  where 0 = len([])

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_batch_engine.py:299: AssertionError

Check warning on line 0 in tests.executor.e2etests.test_batch_engine.TestBatch


test_batch_resume[web_classification-web_classification_default_20240207_165606_643000] (tests.executor.e2etests.test_batch_engine.TestBatch) failed

artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results.xml [took 6s]
Raw output
assert 2 == 3
 +  where 2 = BatchResult(status=<Status.Completed: 'Completed'>, total_lines=3, completed_lines=2, failed_lines=1, node_status={'prepare_examples.completed': 3, 'classify_with_llm.completed': 2, 'summarize_text_content.completed': 2, 'convert_to_dict.completed': 2, 'fetch_text_content_from_url.completed': 3, 'summarize_text_content.failed': 1}, start_time=datetime.datetime(2024, 4, 22, 12, 26, 47, 293542), end_time=datetime.datetime(2024, 4, 22, 12, 26, 53, 628456), metrics={}, system_metrics=SystemMetrics(total_tokens=2599, prompt_tokens=2122, completion_tokens=477, duration=6.334914), error_summary=ErrorSummary(failed_user_error_lines=1, failed_system_error_lines=0, error_list=[LineError(line_number=1, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in _traverse_nodes\n    
nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}})], aggr_error_dict={}, batch_error_dict=None)).completed_lines
self = <executor.e2etests.test_batch_engine.TestBatch object at 0x7f06dacb75e0>
flow_folder = 'web_classification'
resume_from_run_name = 'web_classification_default_20240207_165606_643000'
dev_connections = {'aoai_assistant_connection': {'module': 'promptflow.connections', 'name': 'aoai_assistant_connection', 'type': 'Azure...ai.azure.com/', 'api_key': 'ac21d4a97d044b28990a740132f40d35', 'api_type': 'azure', 'api_version': '2024-02-01'}}, ...}

    @pytest.mark.parametrize(
        "flow_folder, resume_from_run_name",
        [("web_classification", "web_classification_default_20240207_165606_643000")],
    )
    def test_batch_resume(self, flow_folder, resume_from_run_name, dev_connections):
        run_storage = LocalStorageOperations(Run(flow="web_classification"))
        batch_engine = BatchEngine(
            get_yaml_file(flow_folder),
            get_flow_folder(flow_folder),
            connections=dev_connections,
            storage=run_storage,
        )
        input_dirs = {"data": get_flow_inputs_file(flow_folder, file_name="data.jsonl")}
        output_dir = Path(mkdtemp())
        inputs_mapping = {"url": "${data.url}"}
    
        run_folder = RUNS_ROOT / resume_from_run_name
        mock_resume_from_run = MockRun(resume_from_run_name, run_folder)
        resume_from_run_storage = LocalStorageOperations(mock_resume_from_run)
        resume_from_run_output_dir = resume_from_run_storage.outputs_folder
        resume_run_id = mock_resume_from_run.name + "_resume"
        resume_run_batch_results = batch_engine.run(
            input_dirs,
            inputs_mapping,
            output_dir,
            resume_run_id,
            resume_from_run_storage=resume_from_run_storage,
            resume_from_run_output_dir=resume_from_run_output_dir,
        )
    
        nlines = 3
        assert resume_run_batch_results.total_lines == nlines
>       assert resume_run_batch_results.completed_lines == nlines
E       assert 2 == 3
E        +  where 2 = BatchResult(status=<Status.Completed: 'Completed'>, total_lines=3, completed_lines=2, failed_lines=1, node_status={'prepare_examples.completed': 3, 'classify_with_llm.completed': 2, 'summarize_text_content.completed': 2, 'convert_to_dict.completed': 2, 'fetch_text_content_from_url.completed': 3, 'summarize_text_content.failed': 1}, start_time=datetime.datetime(2024, 4, 22, 12, 26, 47, 293542), end_time=datetime.datetime(2024, 4, 22, 12, 26, 53, 628456), metrics={}, system_metrics=SystemMetrics(total_tokens=2599, prompt_tokens=2122, completion_tokens=477, duration=6.334914), error_summary=ErrorSummary(failed_user_error_lines=1, failed_system_error_lines=0, error_list=[LineError(line_number=1, error={'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': "OpenAI API hits exception: TypeError: argument of type 'NoneType' is not iterable", 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 964, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 864, in _exec_inner_with_trace\n    output, aggregation_inputs = self._exec_inner(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 895, in _exec_inner\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1146, in 
_traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1183, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 469, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 590, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'TypeError', 'message': "argument of type 'NoneType' is not iterable", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 523, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 124, in chat\n    messages = build_messages(prompt, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 719, in build_messages\n    escape_dict = build_escape_dict(inputs_to_escape=inputs_to_escape, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 622, in build_escape_dict\n    if _should_escape(k, v, inputs_to_escape):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 615, in _should_escape\n    return (isinstance(v, PromptResult) and v.get_escape_mapping()) or k in inputs_to_escape\n', 'innerException': None}}})], aggr_error_dict={}, batch_error_dict=None)).completed_lines

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_batch_engine.py:418: AssertionError

Check notice on line 0 in .github


5 skipped tests found

There are 5 skipped tests, see "Raw output" for the full list of skipped tests.
Raw output
tests.executor.e2etests.test_assistant.TestAssistant ‑ test_assistant_package_tool_with_conn[assistant-with-package-tool]
tests.executor.e2etests.test_assistant.TestAssistant ‑ test_assistant_tool_with_connection[assistant-tool-with-connection-line_input0]
tests.executor.e2etests.test_assistant.TestAssistant ‑ test_assistant_with_image[food-calorie-assistant-line_input0]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[package_tools]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[package_tools-search_by_text-flow_inputs5-None]

Check notice on line 0 in .github


@github-actions github-actions / Executor E2E Test Result [user/yalu4/escape_2](https://github.com/microsoft/promptflow/actions/workflows/promptflow-executor-e2e-test.yml?query=branch:user/yalu4/escape_2++)

240 tests found

There are 240 tests, see "Raw output" for the full list of tests.
Raw output
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_aggregate_bypassed_nodes
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_all_nodes_bypassed
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_batch_run_activate
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_flow_run_activate[activate_condition_always_met]
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_flow_run_activate[activate_with_no_inputs]
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_flow_run_activate[all_depedencies_bypassed_with_activate_met]
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_flow_run_activate[conditional_flow_with_activate]
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_invalid_activate_config
tests.executor.e2etests.test_assistant.TestAssistant ‑ test_assistant_package_tool_with_conn[assistant-with-package-tool]
tests.executor.e2etests.test_assistant.TestAssistant ‑ test_assistant_tool_with_connection[assistant-tool-with-connection-line_input0]
tests.executor.e2etests.test_assistant.TestAssistant ‑ test_assistant_with_image[food-calorie-assistant-line_input0]
tests.executor.e2etests.test_async.TestAsync ‑ test_exec_line_async[async_tools-expected_result0]
tests.executor.e2etests.test_async.TestAsync ‑ test_exec_line_async[async_tools_with_sync_tools-expected_result1]
tests.executor.e2etests.test_async.TestAsync ‑ test_executor_node_concurrency[async_tools-concurrency_levels0-expected_concurrency0]
tests.executor.e2etests.test_async.TestAsync ‑ test_executor_node_concurrency[async_tools_with_sync_tools-concurrency_levels1-expected_concurrency1]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_resume[web_classification-web_classification_default_20240207_165606_643000]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_resume_aggregation[classification_accuracy_evaluation-classification_accuracy_evaluation_default_20240208_152402_694000]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_resume_aggregation_with_image[eval_flow_with_image_resume-eval_flow_with_image_resume_default_20240305_111258_103000]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run[prompt_tools-inputs_mapping1]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run[sample_flow_with_functions-inputs_mapping3]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run[script_with___file__-inputs_mapping2]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run[web_classification_no_variants-inputs_mapping0]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_failure[connection_as_input-input_mapping0-InputNotFound-The input for flow cannot be empty in batch mode. Please review your flow and provide valid inputs.]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_failure[script_with___file__-input_mapping1-EmptyInputsData-Couldn't find any inputs data at the given input paths. Please review the provided path and consider resubmitting.]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_in_existing_loop
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_line_result[simple_aggregation-batch_input0-str]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_line_result[simple_aggregation-batch_input1-str]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_line_result[simple_aggregation-batch_input2-str]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_then_eval
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_with_aggregation_failure
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_storage
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_with_default_input
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_with_line_number
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_with_metrics
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_with_openai_metrics
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_with_partial_failure
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_chat_group_batch_run[chat_group/cloud_batch_runs/chat_group_simulation-chat_group/cloud_batch_runs/chat_group_copilot-5-inputs.json]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_chat_group_batch_run[chat_group/cloud_batch_runs/chat_group_simulation-chat_group/cloud_batch_runs/chat_group_copilot-5-inputs_using_default_value.json]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_chat_group_batch_run_early_stop[chat_group/cloud_batch_runs/chat_group_copilot-chat_group/cloud_batch_runs/chat_group_simulation_error-5-inputs.json]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_chat_group_batch_run_early_stop[chat_group/cloud_batch_runs/chat_group_simulation_error-chat_group/cloud_batch_runs/chat_group_copilot-5-inputs.json]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_chat_group_batch_run_multi_inputs[chat_group/cloud_batch_runs/chat_group_simulation-chat_group/cloud_batch_runs/chat_group_copilot-5-simulation_input.json-copilot_input.json]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_chat_group_batch_run_stop_signal[chat_group/cloud_batch_runs/chat_group_simulation_stop_signal-chat_group/cloud_batch_runs/chat_group_copilot-5-inputs.json]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_forkserver_mode_batch_run[prompt_tools-inputs_mapping1]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_forkserver_mode_batch_run[sample_flow_with_functions-inputs_mapping3]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_forkserver_mode_batch_run[script_with___file__-inputs_mapping2]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_forkserver_mode_batch_run[web_classification_no_variants-inputs_mapping0]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_spawn_mode_batch_run[prompt_tools-inputs_mapping1]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_spawn_mode_batch_run[sample_flow_with_functions-inputs_mapping3]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_spawn_mode_batch_run[script_with___file__-inputs_mapping2]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_spawn_mode_batch_run[web_classification_no_variants-inputs_mapping0]
tests.executor.e2etests.test_batch_server.TestBatchServer ‑ test_batch_run_in_server_mode
tests.executor.e2etests.test_batch_timeout.TestBatchTimeout ‑ test_batch_timeout[one_line_of_bulktest_timeout-3-600-Line 2 execution timeout for exceeding 3 seconds-Status.Completed]
tests.executor.e2etests.test_batch_timeout.TestBatchTimeout ‑ test_batch_timeout[one_line_of_bulktest_timeout-600-5-Line 2 execution timeout for exceeding-Status.Failed]
tests.executor.e2etests.test_batch_timeout.TestBatchTimeout ‑ test_batch_with_line_timeout[one_line_of_bulktest_timeout]
tests.executor.e2etests.test_batch_timeout.TestBatchTimeout ‑ test_batch_with_one_line_timeout[one_line_of_bulktest_timeout]
tests.executor.e2etests.test_concurent_execution.TestConcurrentExecution ‑ test_concurrent_run
tests.executor.e2etests.test_concurent_execution.TestConcurrentExecution ‑ test_concurrent_run_with_exception
tests.executor.e2etests.test_concurent_execution.TestConcurrentExecution ‑ test_linear_run
tests.executor.e2etests.test_csharp_executor_proxy.TestCSharpExecutorProxy ‑ test_batch
tests.executor.e2etests.test_csharp_executor_proxy.TestCSharpExecutorProxy ‑ test_batch_cancel
tests.executor.e2etests.test_csharp_executor_proxy.TestCSharpExecutorProxy ‑ test_batch_execution_error
tests.executor.e2etests.test_csharp_executor_proxy.TestCSharpExecutorProxy ‑ test_batch_validation_error
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run[basic_callable_class-inputs_mapping2-<lambda>-init_kwargs2]
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run[dummy_flow_with_trace-inputs_mapping0-<lambda>-None]
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run[flow_with_dataclass_output-inputs_mapping1-<lambda>-None]
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run_with_init_multiple_workers[1-<lambda>]
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run_with_init_multiple_workers[2-<lambda>]
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run_with_invalid_case
tests.executor.e2etests.test_executor_execution_failures.TestExecutorFailures ‑ test_executor_exec_line_fail[async_tools_failures-async_fail-In tool raise_an_exception_async: dummy_input]
tests.executor.e2etests.test_executor_execution_failures.TestExecutorFailures ‑ test_executor_exec_line_fail[sync_tools_failures-sync_fail-In tool raise_an_exception: dummy_input]
tests.executor.e2etests.test_executor_execution_failures.TestExecutorFailures ‑ test_executor_exec_line_fail_with_exception[async_tools_failures-async_fail-In tool raise_an_exception_async: dummy_input]
tests.executor.e2etests.test_executor_execution_failures.TestExecutorFailures ‑ test_executor_exec_line_fail_with_exception[sync_tools_failures-sync_fail-In tool raise_an_exception: dummy_input]
tests.executor.e2etests.test_executor_execution_failures.TestExecutorFailures ‑ test_executor_exec_node_fail[async_tools_failures-async_fail-In tool raise_an_exception_async: dummy_input]
tests.executor.e2etests.test_executor_execution_failures.TestExecutorFailures ‑ test_executor_exec_node_fail[sync_tools_failures-sync_fail-In tool raise_an_exception: dummy_input]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_chat_flow_stream_mode
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_convert_flow_input_types[simple_flow_with_python_tool]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_execute_flow[output-intermediate-True-2]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_execute_flow[output_1-intermediate_1-False-1]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_creation_with_default_input
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_creation_with_default_variants[web_classification]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[async_tools]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[async_tools_with_sync_tools]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[connection_as_input]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[package_tools]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[prompt_tools]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[script_with___file__]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[script_with_import]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[tool_with_assistant_definition]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[web_classification_no_variants]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[connection_as_input-conn_node-None-None]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[package_tools-search_by_text-flow_inputs5-None]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[prompt_tools-summarize_text_content_prompt-flow_inputs1-dependency_nodes_outputs1]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[script_with___file__-node1-flow_inputs2-None]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[script_with___file__-node2-None-dependency_nodes_outputs3]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[script_with___file__-node3-None-None]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[script_with_import-node1-flow_inputs8-None]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[simple_aggregation-accuracy-flow_inputs7-dependency_nodes_outputs7]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[web_classification_no_variants-summarize_text_content-flow_inputs0-dependency_nodes_outputs0]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node_with_llm_node
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_for_script_tool_with_init
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_node_overrides
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_flow_with_no_inputs_and_output[no_inputs_outputs]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_long_running_log
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_batch_run_input_type_invalid[simple_flow_with_python_tool-inputs_mapping0-The input for flow is incorrect. The value for flow input 'num' in line 0 of input data does not match the expected type 'int'. Please change flow input type or adjust the input value in your input data.-InputTypeError]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_batch_run_raise_on_line_failure[simple_flow_with_python_tool-batch_input0-True-Exception]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_batch_run_raise_on_line_failure[simple_flow_with_python_tool-batch_input1-False-InputTypeError]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_batch_run_raise_on_line_failure[simple_flow_with_python_tool-batch_input2-True-None]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_batch_run_raise_on_line_failure[simple_flow_with_python_tool-batch_input3-False-None]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type[source_file_missing-flow.dag.python.yaml-ResolveToolError-InvalidSource]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[flow_input_reference_invalid-flow.dag.yaml-InputReferenceNotFound-None-Invalid node definitions found in the flow graph. Node 'divide_num' references flow input 'num_1' which is not defined in your flow. To resolve this issue, please review your flow, ensuring that you either add the missing flow inputs or adjust node reference to the correct flow input.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[flow_llm_with_wrong_conn-flow.dag.yaml-ResolveToolError-InvalidConnectionType-Tool load failed in 'wrong_llm': (InvalidConnectionType) Connection type CustomConnection is not supported for LLM.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[flow_output_reference_invalid-flow.dag.yaml-EmptyOutputReference-None-The output 'content' for flow is incorrect. The reference is not specified for the output 'content' in the flow. To rectify this, ensure that you accurately specify the reference in the flow.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[node_circular_dependency-flow.dag.yaml-NodeCircularDependency-None-Invalid node definitions found in the flow graph. Node circular dependency has been detected among the nodes in your flow. Kindly review the reference relationships for the nodes ['divide_num', 'divide_num_1', 'divide_num_2'] and resolve the circular reference issue in the flow.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[node_reference_not_found-flow.dag.yaml-NodeReferenceNotFound-None-Invalid node definitions found in the flow graph. Node 'divide_num_2' references a non-existent node 'divide_num_3' in your flow. Please review your flow to ensure that the node name is accurately specified.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[nodes_names_duplicated-flow.dag.yaml-DuplicateNodeName-None-Invalid node definitions found in the flow graph. Node with name 'stringify_num' appears more than once in the node definitions in your flow, which is not allowed. To address this issue, please review your flow and either rename or remove nodes with identical names.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[outputs_reference_not_valid-flow.dag.yaml-OutputReferenceNotFound-None-The output 'content' for flow is incorrect. The output 'content' references non-existent node 'another_stringify_num' in your flow. To resolve this issue, please carefully review your flow and correct the reference definition for the output in question.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[outputs_with_invalid_flow_inputs_ref-flow.dag.yaml-OutputReferenceNotFound-None-The output 'num' for flow is incorrect. The output 'num' references non-existent flow input 'num11' in your flow. Please carefully review your flow and correct the reference definition for the output in question.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[source_file_missing-flow.dag.jinja.yaml-ResolveToolError-InvalidSource-Tool load failed in 'summarize_text_content': (InvalidSource) Node source path 'summarize_text_content__variant_1.jinja2' is invalid on node 'summarize_text_content'.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_flow_run_execution_errors[flow_output_unserializable-line_input0-FlowOutputUnserializable-The output 'content' for flow is incorrect. The output value is not JSON serializable. JSON dump failed: (TypeError) Object of type UnserializableClass is not JSON serializable. Please verify your flow output and make sure the value serializable.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_flow_run_input_type_invalid[python_tool_with_simple_image_without_default-line_input2-InputNotFound]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_flow_run_input_type_invalid[simple_flow_with_python_tool-line_input0-InputNotFound]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_flow_run_input_type_invalid[simple_flow_with_python_tool-line_input1-InputTypeError]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_flow_run_with_duplicated_inputs[llm_tool_with_duplicated_inputs-Invalid inputs {'prompt'} in prompt template of node llm_tool_with_duplicated_inputs. These inputs are duplicated with the parameters of AzureOpenAI.completion.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_flow_run_with_duplicated_inputs[prompt_tool_with_duplicated_inputs-Invalid inputs {'template'} in prompt template of node prompt_tool_with_duplicated_inputs. These inputs are duplicated with the reserved parameters of prompt tool.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_invalid_flow_dag[invalid_connection-ResolveToolError-ConnectionNotFound]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_invalid_flow_dag[tool_type_missing-ResolveToolError-NotImplementedError]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_invalid_flow_dag[wrong_api-ResolveToolError-APINotFound]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_invalid_flow_dag[wrong_module-FailedToImportModule-None]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_invalid_flow_run_inputs_should_not_saved_to_run_info
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_node_topology_in_order[web_classification_no_variants-web_classification_no_variants_unordered]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_single_node_input_type_invalid[path_root0-simple_flow_with_python_tool-divide_num-line_input0-InputNotFound-The input for node is incorrect. Node input 'num' is not found in input data for node 'divide_num'. Please verify the inputs data for the node.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_single_node_input_type_invalid[path_root1-simple_flow_with_python_tool-divide_num-line_input1-InputTypeError-The input for node is incorrect. Value for input 'num' of node 'divide_num' is not type 'int'. Please review and rectify the input data.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_single_node_input_type_invalid[path_root2-flow_input_reference_invalid-divide_num-line_input2-InputNotFound-The input for node is incorrect. Node input 'num_1' is not found from flow inputs of node 'divide_num'. Please review the node definition in your flow.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_single_node_input_type_invalid[path_root3-simple_flow_with_python_tool-bad_node_name-line_input3-SingleNodeValidationError-Validation failed when attempting to execute the node. Node 'bad_node_name' is not found in flow 'flow.dag.yaml'. Please change node name or correct the flow file.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_single_node_input_type_invalid[path_root4-node_missing_type_or_source-divide_num-line_input4-SingleNodeValidationError-Validation failed when attempting to execute the node. Properties 'source' or 'type' are not specified for Node 'divide_num' in flow 'flow.dag.yaml'. Please make sure these properties are in place and try again.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_valid_flow_run_inpust_should_saved_to_run_info
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_batch_engine_with_image[chat_flow_with_image-input_dirs3-inputs_mapping3-answer-2-False]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_batch_engine_with_image[eval_flow_with_composite_image-input_dirs5-inputs_mapping5-output-2-True]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_batch_engine_with_image[eval_flow_with_simple_image-input_dirs4-inputs_mapping4-output-2-True]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_batch_engine_with_image[python_tool_with_composite_image-input_dirs2-inputs_mapping2-output-2-False]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_batch_engine_with_image[python_tool_with_simple_image-input_dirs0-inputs_mapping0-output-4-False]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_batch_engine_with_image[python_tool_with_simple_image_with_default-input_dirs1-inputs_mapping1-output-4-False]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_batch_run_then_eval_with_image
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_composite_image-inputs6]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_composite_image-inputs7]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_composite_image-inputs8]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_simple_image-inputs0]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_simple_image-inputs1]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_simple_image-inputs2]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_simple_image-inputs3]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_simple_image-inputs4]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_simple_image-inputs5]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[chat_flow_with_image-inputs9]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_composite_image-inputs6]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_composite_image-inputs7]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_composite_image-inputs8]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_image_nested_api_calls-inputs10]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_simple_image-inputs0]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_simple_image-inputs1]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_simple_image-inputs2]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_simple_image-inputs3]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_simple_image-inputs4]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_simple_image-inputs5]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image[python_tool_with_composite_image-python_node-flow_inputs2-None]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image[python_tool_with_composite_image-python_node_2-flow_inputs3-None]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image[python_tool_with_composite_image-python_node_3-flow_inputs4-dependency_nodes_outputs4]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image[python_tool_with_simple_image-python_node-flow_inputs0-None]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image[python_tool_with_simple_image-python_node_2-flow_inputs1-dependency_nodes_outputs1]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image_storage_and_path[None-False-.]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image_storage_and_path[None-True-test_storage]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image_storage_and_path[test_path-False-test_path]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image_storage_and_path[test_path-True-test_storage]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_invalid_default_value[python_tool_with_invalid_default_value-python_node_2-flow_inputs0-dependency_nodes_outputs0]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_batch_engine_with_image[chat_flow_with_openai_vision_image-input_dirs1-inputs_mapping1-answer-2]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_batch_engine_with_image[python_tool_with_openai_vision_image-input_dirs0-inputs_mapping0-output-4]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_line_with_image[chat_flow_with_openai_vision_image-inputs6]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_line_with_image[python_tool_with_openai_vision_image-inputs0]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_line_with_image[python_tool_with_openai_vision_image-inputs1]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_line_with_image[python_tool_with_openai_vision_image-inputs2]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_line_with_image[python_tool_with_openai_vision_image-inputs3]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_line_with_image[python_tool_with_openai_vision_image-inputs4]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_line_with_image[python_tool_with_openai_vision_image-inputs5]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_node_with_image[python_tool_with_openai_vision_image-python_node-flow_inputs0-None]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_node_with_image[python_tool_with_openai_vision_image-python_node_2-flow_inputs1-dependency_nodes_outputs1]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_node_with_image_storage_and_path[None-False-.]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_node_with_image_storage_and_path[None-True-test_storage]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_node_with_image_storage_and_path[test_path-False-test_path]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_node_with_image_storage_and_path[test_path-True-test_storage]
tests.executor.e2etests.test_langchain.TestLangchain ‑ test_batch_with_langchain[flow_with_langchain_traces-inputs_mapping0]
tests.executor.e2etests.test_langchain.TestLangchain ‑ test_batch_with_langchain[openai_chat_api_flow-inputs_mapping1]
tests.executor.e2etests.test_langchain.TestLangchain ‑ test_batch_with_langchain[openai_completion_api_flow-inputs_mapping2]
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_activate_config_log
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_async_log_in_worker_thread
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_batch_run_flow_logs[flow_root_dir0-print_input_flow-8]
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_batch_run_flow_logs[flow_root_dir1-print_input_flex-2]
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_executor_logs[print_input_flow]
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_log_progress[simple_flow_with_ten_inputs-inputs_mapping0]
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_long_run_log
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_node_logs[print_input_flow]
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_node_logs_in_executor_logs[print_input_flow]
tests.executor.e2etests.test_package_tool.TestPackageTool ‑ test_custom_llm_tool_with_duplicated_inputs
tests.executor.e2etests.test_package_tool.TestPackageTool ‑ test_executor_package_tool_with_conn
tests.executor.e2etests.test_package_tool.TestPackageTool ‑ test_executor_package_with_prompt_tool
tests.executor.e2etests.test_package_tool.TestPackageTool ‑ test_package_tool_execution[wrong_package_in_package_tools-ResolveToolError-PackageToolNotFoundError-Tool load failed in 'search_by_text': (PackageToolNotFoundError) Package tool 'promptflow.tools.serpapi11.SerpAPI.search' is not found in the current environment. All available package tools are: ['promptflow.tools.azure_content_safety.AzureContentSafety.analyze_text', 'promptflow.tools.azure_detect.AzureDetect.get_language'].]
tests.executor.e2etests.test_package_tool.TestPackageTool ‑ test_package_tool_execution[wrong_tool_in_package_tools-ResolveToolError-PackageToolNotFoundError-Tool load failed in 'search_by_text': (PackageToolNotFoundError) Package tool 'promptflow.tools.serpapi.SerpAPI.search_11' is not found in the current environment. All available package tools are: ['promptflow.tools.azure_content_safety.AzureContentSafety.analyze_text', 'promptflow.tools.azure_detect.AzureDetect.get_language'].]
tests.executor.e2etests.test_package_tool.TestPackageTool ‑ test_package_tool_load_error[tool_with_init_error-Tool load failed in 'tool_with_init_error': (ToolLoadError) Failed to load package tool 'Tool with init error': (Exception) Tool load error.]
tests.executor.e2etests.test_script_tool_generator.TestScriptToolGenerator ‑ test_generate_script_tool_meta_with_dynamic_list
tests.executor.e2etests.test_script_tool_generator.TestScriptToolGenerator ‑ test_generate_script_tool_meta_with_enabled_by_value
tests.executor.e2etests.test_script_tool_generator.TestScriptToolGenerator ‑ test_generate_script_tool_meta_with_generated_by
tests.executor.e2etests.test_script_tool_generator.TestScriptToolGenerator ‑ test_generate_script_tool_meta_with_invalid_dynamic_list
tests.executor.e2etests.test_script_tool_generator.TestScriptToolGenerator ‑ test_generate_script_tool_meta_with_invalid_enabled_by
tests.executor.e2etests.test_script_tool_generator.TestScriptToolGenerator ‑ test_generate_script_tool_meta_with_invalid_icon
tests.executor.e2etests.test_script_tool_generator.TestScriptToolGenerator ‑ test_generate_script_tool_meta_with_invalid_schema
tests.executor.e2etests.test_telemetry.TestExecutorTelemetry ‑ test_executor_openai_telemetry
tests.executor.e2etests.test_telemetry.TestExecutorTelemetry ‑ test_executor_openai_telemetry_with_batch_run
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_executor_generator_tools
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_executor_openai_api_flow[llm_tool-inputs4]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_executor_openai_api_flow[llm_tool-inputs5]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_executor_openai_api_flow[openai_chat_api_flow-inputs0]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_executor_openai_api_flow[openai_chat_api_flow-inputs1]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_executor_openai_api_flow[openai_completion_api_flow-inputs2]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_executor_openai_api_flow[openai_completion_api_flow-inputs3]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_flow_with_trace[flow_with_trace]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_flow_with_trace[flow_with_trace_async]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_trace_behavior_with_generator_node[False]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_trace_behavior_with_generator_node[True]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_flow_with_nested_tool
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_flow_with_traced_function
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace[flow_with_trace-inputs0-5]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace[flow_with_trace_async-inputs1-5]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_batch
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_embedding[openai_embedding_api_flow-inputs0-3]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_embedding[openai_embedding_api_flow_with_token-inputs1-3]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_llm[flow_with_async_llm_tasks-inputs5-False-6]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_llm[llm_tool-inputs4-False-4]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_llm[openai_chat_api_flow-inputs0-False-3]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_llm[openai_chat_api_flow-inputs1-True-4]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_llm[openai_completion_api_flow-inputs2-False-3]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_llm[openai_completion_api_flow-inputs3-True-4]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_prompt[llm_tool-inputs0-joke.jinja2]