Llamafile connection error breaks application start-up #7508

Closed
1 task done
SelvaKumarDevPy opened this issue Jul 19, 2024 · 7 comments
Assignees
Labels
bug Something isn't working Classic AutoGPT Agent

Comments

@SelvaKumarDevPy

⚠️ Search for existing issues first ⚠️

  • I have searched the existing issues, and there is no existing issue for my problem

Which Operating System are you using?

Windows

Which version of AutoGPT are you using?

Latest Release

Do you use OpenAI GPT-3 or GPT-4?

GPT-4

Which area covers your issue best?

Installation and setup

Describe your issue.

I get the error below when starting AutoGPT after a fresh install.

I have manually tested the API key, which works fine.

@SelvaKumarDevPy ➜ /workspaces/codespaces-jupyter/AutoGPT/autogpt (master) $ ./autogpt.sh 
Missing packages:
autogpt-forge (*) @ file:///workspaces/codespaces-jupyter/AutoGPT/forge, click, distro (>=1.8.0,<2.0.0), fastapi (>=0.109.1,<0.110.0), hypercorn (>=0.14.4,<0.15.0), openai (>=1.7.2,<2.0.0), orjson (>=3.8.10,<4.0.0), pydantic (>=2.7.2,<3.0.0), python-dotenv (>=1.0.0,<2.0.0), sentry-sdk (>=1.40.4,<2.0.0)

Installing dependencies from lock file

No dependencies to install or update

Installing the current project: agpt (0.5.0)

Finished installing packages! Starting AutoGPT...

2024-07-19 14:26:45,443 INFO  HTTP Request: GET https://api.openai.com/v1/models "HTTP/1.1 200 OK"
2024-07-19 14:26:45,831 INFO  HTTP Request: GET https://api.openai.com/v1/models "HTTP/1.1 200 OK"
2024-07-19 14:26:46,237 INFO  HTTP Request: GET https://api.groq.com/openai/v1/models "HTTP/1.1 200 OK"
  | exceptiongroup.ExceptionGroup: multiple connection attempts failed (2 sub-exceptions)
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/anyio/_core/_sockets.py", line 170, in try_connect
    |     stream = await asynclib.connect_tcp(remote_host, remote_port, local_address)
    |   File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2258, in connect_tcp
    |     await get_running_loop().create_connection(
    |   File "/usr/local/python/3.10.13/lib/python3.10/asyncio/base_events.py", line 1076, in create_connection
    |     raise exceptions[0]
    |   File "/usr/local/python/3.10.13/lib/python3.10/asyncio/base_events.py", line 1060, in create_connection
    |     sock = await self._connect_sock(
    |   File "/usr/local/python/3.10.13/lib/python3.10/asyncio/base_events.py", line 969, in _connect_sock
    |     await self.sock_connect(sock, address)
    |   File "/usr/local/python/3.10.13/lib/python3.10/asyncio/selector_events.py", line 501, in sock_connect
    |     return await fut
    |   File "/usr/local/python/3.10.13/lib/python3.10/asyncio/selector_events.py", line 541, in _sock_connect_cb
    |     raise OSError(err, f'Connect call failed {address}')
    | ConnectionRefusedError: [Errno 111] Connect call failed ('::1', 8080, 0, 0)
    +---------------- 2 ----------------
    | Traceback (most recent call last):
    |   File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/anyio/_core/_sockets.py", line 170, in try_connect
    |     stream = await asynclib.connect_tcp(remote_host, remote_port, local_address)
    |   File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2258, in connect_tcp
    |     await get_running_loop().create_connection(
    |   File "/usr/local/python/3.10.13/lib/python3.10/asyncio/base_events.py", line 1076, in create_connection
    |     raise exceptions[0]
    |   File "/usr/local/python/3.10.13/lib/python3.10/asyncio/base_events.py", line 1060, in create_connection
    |     sock = await self._connect_sock(
    |   File "/usr/local/python/3.10.13/lib/python3.10/asyncio/base_events.py", line 969, in _connect_sock
    |     await self.sock_connect(sock, address)
    |   File "/usr/local/python/3.10.13/lib/python3.10/asyncio/selector_events.py", line 501, in sock_connect
    |     return await fut
    |   File "/usr/local/python/3.10.13/lib/python3.10/asyncio/selector_events.py", line 541, in _sock_connect_cb
    |     raise OSError(err, f'Connect call failed {address}')
    | ConnectionRefusedError: [Errno 111] Connect call failed ('127.0.0.1', 8080)
    +------------------------------------

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/httpcore/_exceptions.py", line 10, in map_exceptions
    yield
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/httpcore/_backends/anyio.py", line 114, in connect_tcp
    stream: anyio.abc.ByteStream = await anyio.connect_tcp(
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/anyio/_core/_sockets.py", line 232, in connect_tcp
    raise OSError("All connection attempts failed") from cause
OSError: All connection attempts failed

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/httpx/_transports/default.py", line 60, in map_httpcore_exceptions
    yield
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/httpx/_transports/default.py", line 353, in handle_async_request
    resp = await self._pool.handle_async_request(req)
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 262, in handle_async_request
    raise exc
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 245, in handle_async_request
    response = await connection.handle_async_request(request)
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/httpcore/_async/connection.py", line 92, in handle_async_request
    raise exc
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/httpcore/_async/connection.py", line 69, in handle_async_request
    stream = await self._connect(request)
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/httpcore/_async/connection.py", line 117, in _connect
    stream = await self._network_backend.connect_tcp(**kwargs)
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/httpcore/_backends/auto.py", line 31, in connect_tcp
    return await self._backend.connect_tcp(
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/httpcore/_backends/anyio.py", line 112, in connect_tcp
    with map_exceptions(exc_map):
  File "/usr/local/python/3.10.13/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ConnectError: All connection attempts failed

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/openai/_base_client.py", line 1423, in _request
    response = await self._client.send(
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/httpx/_client.py", line 1617, in send
    response = await self._send_handling_auth(
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/httpx/_client.py", line 1645, in _send_handling_auth
    response = await self._send_handling_redirects(
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/httpx/_client.py", line 1682, in _send_handling_redirects
    response = await self._send_single_request(request)
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/httpx/_client.py", line 1719, in _send_single_request
    response = await transport.handle_async_request(request)
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/httpx/_transports/default.py", line 352, in handle_async_request
    with map_httpcore_exceptions():
  File "/usr/local/python/3.10.13/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/httpx/_transports/default.py", line 77, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectError: All connection attempts failed

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/click/core.py", line 1666, in invoke
    rv = super().invoke(ctx)
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/click/decorators.py", line 33, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/workspaces/codespaces-jupyter/AutoGPT/autogpt/autogpt/app/cli.py", line 19, in cli
    ctx.invoke(run)
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/workspaces/codespaces-jupyter/AutoGPT/autogpt/autogpt/app/cli.py", line 144, in run
    run_auto_gpt(
  File "/workspaces/codespaces-jupyter/AutoGPT/autogpt/autogpt/app/utils.py", line 245, in wrapper
    return asyncio.run(f(*args, **kwargs))
  File "/usr/local/python/3.10.13/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/python/3.10.13/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/workspaces/codespaces-jupyter/AutoGPT/autogpt/autogpt/app/main.py", line 105, in run_auto_gpt
    await apply_overrides_to_config(
  File "/workspaces/codespaces-jupyter/AutoGPT/autogpt/autogpt/app/configurator.py", line 54, in apply_overrides_to_config
    config.fast_llm = await check_model(config.fast_llm, "fast_llm")
  File "/workspaces/codespaces-jupyter/AutoGPT/autogpt/autogpt/app/configurator.py", line 69, in check_model
    models = await multi_provider.get_available_chat_models()
  File "/workspaces/codespaces-jupyter/AutoGPT/forge/forge/llm/providers/multi.py", line 72, in get_available_chat_models
    models.extend(await provider.get_available_chat_models())
  File "/workspaces/codespaces-jupyter/AutoGPT/forge/forge/llm/providers/_openai_base.py", line 135, in get_available_chat_models
    all_available_models = await self.get_available_models()
  File "/workspaces/codespaces-jupyter/AutoGPT/forge/forge/llm/providers/llamafile/llamafile.py", line 131, in get_available_models
    _models = (await self._client.models.list()).data
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/openai/_base_client.py", line 269, in _get_page
    return await self._client.request(self._page_cls, self._options)
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/openai/_base_client.py", line 1394, in request
    return await self._request(
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/openai/_base_client.py", line 1447, in _request
    return await self._retry_request(
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/openai/_base_client.py", line 1516, in _retry_request
    return await self._request(
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/openai/_base_client.py", line 1447, in _request
    return await self._retry_request(
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/openai/_base_client.py", line 1516, in _retry_request
    return await self._request(
  File "/home/codespace/.cache/pypoetry/virtualenvs/agpt-qG7m1V9w-py3.10/lib/python3.10/site-packages/openai/_base_client.py", line 1457, in _request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.
@SelvaKumarDevPy ➜ /workspaces/codespaces-jupyter/AutoGPT/autogpt (master) $ 

Upload Activity Log Content

No response

Upload Error Log Content

No response

@maximus1127

After dealing with package conflicts for three days across Windows, WSL, and Ubuntu, I too am stuck here. Any help would be greatly appreciated. The install process was supremely convoluted and frustrating (though I'll admit some of that is my own unfamiliarity with Python and its ecosystem).

@QuinRiva

I'm also encountering this error. For context, I have successfully used AutoGPT in the past (about 12 months ago), so I'm familiar with the setup process.

@QuinRiva

I've run it in debug mode, so hopefully this is helpful. My sense is that it's an issue connecting to the local server rather than anything to do with OpenAI.

 ~/A/autogpt (master) [1]> ./autogpt.sh run --debug
Missing packages:
autogpt-forge (*) @ file:///home/quin/AutoGPT/forge, colorama (>=0.4.6,<0.5.0), distro (>=1.8.0,<2.0.0), fastapi (>=0.109.1,<0.110.0), gitpython (>=3.1.32,<4.0.0), hypercorn (>=0.14.4,<0.15.0), orjson (>=3.8.10,<4.0.0), python-dotenv (>=1.0.0,<2.0.0), sentry-sdk (>=1.40.4,<2.0.0)

Installing dependencies from lock file

No dependencies to install or update

Installing the current project: agpt (0.5.0)

Finished installing packages! Starting AutoGPT...

2024-07-20 21:13:18,984 DEBUG _config.py:79  load_ssl_context verify=True cert=None trust_env=True http2=False
2024-07-20 21:13:18,985 DEBUG _config.py:146  load_verify_locations cafile='/home/quin/.cache/pypoetry/virtualenvs/agpt-iLIIu-Uy-py3.10/lib/python3.10/site-packages/certifi/cacert.pem'
2024-07-20 21:13:19,025 DEBUG _config.py:79  load_ssl_context verify=True cert=None trust_env=True http2=False
2024-07-20 21:13:19,026 DEBUG _config.py:146  load_verify_locations cafile='/home/quin/.cache/pypoetry/virtualenvs/agpt-iLIIu-Uy-py3.10/lib/python3.10/site-packages/certifi/cacert.pem'
2024-07-20 21:13:19,072 DEBUG _trace.py:85  connect_tcp.started host='api.openai.com' port=443 local_address=None timeout=5.0 socket_options=None
2024-07-20 21:13:19,389 DEBUG _trace.py:85  connect_tcp.complete return_value=<httpcore._backends.anyio.AnyIOStream object at 0x7f7020597910>
2024-07-20 21:13:19,390 DEBUG _trace.py:85  start_tls.started ssl_context=<ssl.SSLContext object at 0x7f706e42a6c0> server_hostname='api.openai.com' timeout=5.0
2024-07-20 21:13:19,559 DEBUG _trace.py:85  start_tls.complete return_value=<httpcore._backends.anyio.AnyIOStream object at 0x7f7020597a00>
2024-07-20 21:13:19,559 DEBUG _trace.py:85  send_request_headers.started request=<Request [b'GET']>
2024-07-20 21:13:19,560 DEBUG _trace.py:85  send_request_headers.complete
2024-07-20 21:13:19,560 DEBUG _trace.py:85  send_request_body.started request=<Request [b'GET']>
2024-07-20 21:13:19,560 DEBUG _trace.py:85  send_request_body.complete
2024-07-20 21:13:19,560 DEBUG _trace.py:85  receive_response_headers.started request=<Request [b'GET']>
2024-07-20 21:13:19,894 DEBUG _trace.py:85  receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Sat, 20 Jul 2024 11:13:18 GMT'), (b'Content-Type', b'application/json'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'openai-version', b'2020-10-01'), (b'openai-organization', b'user-REDACTED'), (b'x-request-id', b'req_3761db147962e17168cbdab33feb5059'), (b'openai-processing-ms', b'113'), (b'access-control-allow-origin', b'*'), (b'strict-transport-security', b'max-age=15552000; includeSubDomains; preload'), (b'CF-Cache-Status', b'DYNAMIC'), (b'Set-Cookie', b'__cf_bm=hbl9IW.REDACTED.0gfdt.YsRt8tpp3t55ikyY_9JdGcekzw; path=/; expires=Sat, 20-Jul-24 11:43:18 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None'), (b'X-Content-Type-Options', b'nosniff'), (b'Set-Cookie', b'_cfuvid=REDACTED; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None'), (b'Server', b'cloudflare'), (b'CF-RAY', b'8a62856bdf38f99f-SJC'), (b'Content-Encoding', b'gzip'), (b'alt-svc', b'h3=":443"; ma=86400')])
2024-07-20 21:13:19,895 INFO _client.py:1729  HTTP Request: GET https://api.openai.com/v1/models "HTTP/1.1 200 OK"
2024-07-20 21:13:19,896 DEBUG _trace.py:85  receive_response_body.started request=<Request [b'GET']>
2024-07-20 21:13:19,896 DEBUG _trace.py:85  receive_response_body.complete
2024-07-20 21:13:19,896 DEBUG _trace.py:85  response_closed.started
2024-07-20 21:13:19,896 DEBUG _trace.py:85  response_closed.complete
2024-07-20 21:13:19,898 DEBUG multi.py:125  Checking if openai is available...
2024-07-20 21:13:19,898 DEBUG multi.py:136  OpenAIProvider not yet in cache, trying to init...
2024-07-20 21:13:19,898 DEBUG multi.py:150  Loading OpenAICredentials...
2024-07-20 21:13:19,899 DEBUG _config.py:79  load_ssl_context verify=True cert=None trust_env=True http2=False
2024-07-20 21:13:19,899 DEBUG _config.py:146  load_verify_locations cafile='/home/quin/.cache/pypoetry/virtualenvs/agpt-iLIIu-Uy-py3.10/lib/python3.10/site-packages/certifi/cacert.pem'
2024-07-20 21:13:19,937 DEBUG _config.py:79  load_ssl_context verify=True cert=None trust_env=True http2=False
2024-07-20 21:13:19,938 DEBUG _config.py:146  load_verify_locations cafile='/home/quin/.cache/pypoetry/virtualenvs/agpt-iLIIu-Uy-py3.10/lib/python3.10/site-packages/certifi/cacert.pem'
2024-07-20 21:13:19,978 DEBUG multi.py:170  Initialized OpenAIProvider!
2024-07-20 21:13:19,979 DEBUG _trace.py:85  connect_tcp.started host='api.openai.com' port=443 local_address=None timeout=5.0 socket_options=None
2024-07-20 21:13:20,295 DEBUG _trace.py:85  connect_tcp.complete return_value=<httpcore._backends.anyio.AnyIOStream object at 0x7f701fc486a0>
2024-07-20 21:13:20,296 DEBUG _trace.py:85  start_tls.started ssl_context=<ssl.SSLContext object at 0x7f702082dac0> server_hostname='api.openai.com' timeout=5.0
2024-07-20 21:13:20,463 DEBUG _trace.py:85  start_tls.complete return_value=<httpcore._backends.anyio.AnyIOStream object at 0x7f70205c22c0>
2024-07-20 21:13:20,463 DEBUG _trace.py:85  send_request_headers.started request=<Request [b'GET']>
2024-07-20 21:13:20,463 DEBUG _trace.py:85  send_request_headers.complete
2024-07-20 21:13:20,463 DEBUG _trace.py:85  send_request_body.started request=<Request [b'GET']>
2024-07-20 21:13:20,464 DEBUG _trace.py:85  send_request_body.complete
2024-07-20 21:13:20,464 DEBUG _trace.py:85  receive_response_headers.started request=<Request [b'GET']>
2024-07-20 21:13:21,492 DEBUG _trace.py:85  receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Sat, 20 Jul 2024 11:13:20 GMT'), (b'Content-Type', b'application/json'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'openai-version', b'2020-10-01'),  ### Redacted ###, (b'openai-processing-ms', b'811'), (b'access-control-allow-origin', b'*'), (b'strict-transport-security', b'max-age=15552000; includeSubDomains; preload'), (b'CF-Cache-Status', b'DYNAMIC'),  ### Redacted ### ; path=/; expires=Sat, 20-Jul-24 11:43:20 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None'), (b'X-Content-Type-Options', b'nosniff'),  ### Redacted ### domain=.api.openai.com; HttpOnly; Secure; SameSite=None'), (b'Server', b'cloudflare'), (b'CF-RAY', b'8a6285717dd47ae0-SJC'), (b'Content-Encoding', b'gzip'), (b'alt-svc', b'h3=":443"; ma=86400')])
2024-07-20 21:13:21,493 INFO _client.py:1729  HTTP Request: GET https://api.openai.com/v1/models "HTTP/1.1 200 OK"
2024-07-20 21:13:21,493 DEBUG _trace.py:85  receive_response_body.started request=<Request [b'GET']>
2024-07-20 21:13:21,493 DEBUG _trace.py:85  receive_response_body.complete
2024-07-20 21:13:21,494 DEBUG _trace.py:85  response_closed.started
2024-07-20 21:13:21,494 DEBUG _trace.py:85  response_closed.complete
2024-07-20 21:13:21,495 DEBUG multi.py:128  openai is available!
2024-07-20 21:13:21,495 DEBUG multi.py:125  Checking if anthropic is available...
2024-07-20 21:13:21,496 DEBUG multi.py:136  AnthropicProvider not yet in cache, trying to init...
2024-07-20 21:13:21,496 DEBUG multi.py:150  Loading AnthropicCredentials...
2024-07-20 21:13:21,496 DEBUG multi.py:155  Could not load (required) AnthropicCredentials
2024-07-20 21:13:21,496 DEBUG multi.py:125  Checking if groq is available...
2024-07-20 21:13:21,496 DEBUG multi.py:136  GroqProvider not yet in cache, trying to init...
2024-07-20 21:13:21,496 DEBUG multi.py:150  Loading GroqCredentials...
2024-07-20 21:13:21,497 DEBUG multi.py:155  Could not load (required) GroqCredentials
2024-07-20 21:13:21,497 DEBUG multi.py:125  Checking if llamafile is available...
2024-07-20 21:13:21,497 DEBUG multi.py:136  LlamafileProvider not yet in cache, trying to init...
2024-07-20 21:13:21,497 DEBUG multi.py:150  Loading LlamafileCredentials...
2024-07-20 21:13:21,497 DEBUG _config.py:79  load_ssl_context verify=True cert=None trust_env=True http2=False
2024-07-20 21:13:21,498 DEBUG _config.py:146  load_verify_locations cafile='/home/quin/.cache/pypoetry/virtualenvs/agpt-iLIIu-Uy-py3.10/lib/python3.10/site-packages/certifi/cacert.pem'
2024-07-20 21:13:21,535 DEBUG multi.py:170  Initialized LlamafileProvider!
2024-07-20 21:13:21,536 DEBUG _trace.py:85  connect_tcp.started host='localhost' port=8080 local_address=None timeout=5.0 socket_options=None
2024-07-20 21:13:21,538 DEBUG _trace.py:85  connect_tcp.failed exception=ConnectError(OSError('All connection attempts failed'))
2024-07-20 21:13:22,348 DEBUG _trace.py:85  connect_tcp.started host='localhost' port=8080 local_address=None timeout=5.0 socket_options=None
2024-07-20 21:13:22,349 DEBUG _trace.py:85  connect_tcp.failed exception=ConnectError(OSError('All connection attempts failed'))
2024-07-20 21:13:24,074 DEBUG _trace.py:85  connect_tcp.started host='localhost' port=8080 local_address=None timeout=5.0 socket_options=None
2024-07-20 21:13:24,075 DEBUG _trace.py:85  connect_tcp.failed exception=ConnectError(OSError('All connection attempts failed'))
Traceback (most recent call last):
  File "/home/quin/.cache/pypoetry/virtualenvs/agpt-iLIIu-Uy-py3.10/lib/python3.10/site-packages/anyio/_core/_sockets.py", line 170, in try_connect
    stream = await asynclib.connect_tcp(remote_host, remote_port, local_address)
  File "/home/quin/.cache/pypoetry/virtualenvs/agpt-iLIIu-Uy-py3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2258, in connect_tcp
    await get_running_loop().create_connection(
  File "/usr/lib/python3.10/asyncio/base_events.py", line 1076, in create_connection
    raise exceptions[0]
  File "/usr/lib/python3.10/asyncio/base_events.py", line 1060, in create_connection
    sock = await self._connect_sock(
  File "/usr/lib/python3.10/asyncio/base_events.py", line 969, in _connect_sock
    await self.sock_connect(sock, address)
  File "/usr/lib/python3.10/asyncio/selector_events.py", line 501, in sock_connect
    return await fut
  File "/usr/lib/python3.10/asyncio/selector_events.py", line 541, in _sock_connect_cb
    raise OSError(err, f'Connect call failed {address}')
ConnectionRefusedError: [Errno 111] Connect call failed ('127.0.0.1', 8080)
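As an aside, the two sub-exceptions in the earlier ExceptionGroup are expected: "localhost" typically resolves to both ::1 and 127.0.0.1, and anyio attempts each address in turn. A minimal sketch that reproduces the same pair of refused connections (assuming nothing is listening on port 8080; addresses and errno values will vary by system):

```python
import socket

# "localhost" usually resolves to an IPv6 (::1) and an IPv4 (127.0.0.1)
# address; a connect attempt is made per address, which is why the
# ExceptionGroup above contains two ConnectionRefusedError entries.
errors = []
for family, type_, proto, _, addr in socket.getaddrinfo(
    "localhost", 8080, type=socket.SOCK_STREAM
):
    with socket.socket(family, type_, proto) as s:
        s.settimeout(1)
        try:
            s.connect(addr)
        except OSError as e:
            errors.append((addr[0], e.errno))

print(errors)  # e.g. [('::1', 111), ('127.0.0.1', 111)] on Linux when nothing listens
```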

@maximus1127

I agree that it seems to be a local server issue, but I'm still not sure how to fix it. A directory search for "8080" only yields results related to the benchmarking feature or to things unrelated to OpenAI, such as llamafiles. I don't have llamafile set up; I'm just using my OpenAI key, so those configuration files don't seem to be what I need. I've also confirmed that nothing is occupying port 8080 on my machine, so there isn't anything obvious that would block the connection.
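For anyone else diagnosing this: a quick generic probe (not an AutoGPT command) to confirm whether anything is actually answering on port 8080. A running llamafile server exposes an OpenAI-style /v1/models endpoint, so a successful response means a server is up:

```shell
# Probe localhost:8080 with a short timeout; -s suppresses progress output
# and -f makes curl exit non-zero on HTTP errors or connection failure.
if curl -sf --max-time 2 http://localhost:8080/v1/models >/dev/null 2>&1; then
  status="listening"
else
  status="nothing listening"
fi
echo "port 8080: $status"
```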

@izk8

izk8 commented Jul 23, 2024

Ran into this issue on two machines. I modded llamafile.py, based on the fix plus some work of my own, to get it to run:

class LlamafileProvider(
    BaseOpenAIChatProvider[LlamafileModelName, LlamafileSettings],
):
    EMBEDDING_MODELS = LLAMAFILE_EMBEDDING_MODELS
    CHAT_MODELS = LLAMAFILE_CHAT_MODELS
    MODELS = {**CHAT_MODELS, **EMBEDDING_MODELS}

    default_settings = LlamafileSettings(
        name="llamafile_provider",
        description=(
            "Provides chat completion and embedding services "
            "through a llamafile instance"
        ),
        configuration=LlamafileConfiguration(),
    )

    _settings: LlamafileSettings
    _credentials: LlamafileCredentials
    _configuration: LlamafileConfiguration
    name: str = "LlamafileProvider"  # Add this line

    async def get_available_models(self) -> Sequence[ChatModelInfo[LlamafileModelName]]:
        try:
            _models = (await self._client.models.list()).data
            self._logger.debug(f"Retrieved llamafile models: {_models}")

            clean_model_ids = [clean_model_name(m.id) for m in _models]
            self._logger.debug(f"Cleaned llamafile model IDs: {clean_model_ids}")

            return [
                LLAMAFILE_CHAT_MODELS[id]
                for id in clean_model_ids
                if id in LLAMAFILE_CHAT_MODELS
            ]
        except Exception as e:
            self._logger.warning(f"Failed to get available models from provider {self.name}: {e}")
            return []

    def get_tokenizer(self, model_name: LlamafileModelName) -> LlamafileTokenizer:
        return LlamafileTokenizer(self._credentials)

    # (Keep the rest unchanged)


Pwuts commented Jul 23, 2024

Thank you very much for reporting this, and sorry for breaking it (#7091). Fix coming ASAP.

@Pwuts Pwuts self-assigned this Jul 23, 2024
@Pwuts Pwuts added bug Something isn't working Classic AutoGPT Agent labels Jul 23, 2024 — with Linear
@Pwuts Pwuts changed the title Connection Error with OpenAI Llamafile connection error breaks application start-up Jul 23, 2024
@Pwuts Pwuts closed this as completed in e7885f9 Jul 23, 2024
@izk8

izk8 commented Jul 23, 2024

Thank you @Pwuts

elvinmahmudov pushed a commit to elvinmahmudov/AutoGPT that referenced this issue Jul 24, 2024
Fixes Significant-Gravitas#7508

- Amend `app/configurator.py:check_model(..)` to check multiple models at once and save duplicate API calls
- Amend `MultiProvider.get_available_providers()` to verify availability by fetching models and handle failure
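The fix described in that commit message can be sketched roughly as follows. This is a simplified stand-in, not the actual AutoGPT code: the `FakeProvider` class and `get_available_providers` function here are hypothetical, but they illustrate the pattern of counting a provider as available only if listing its models succeeds, and catching connection failures instead of letting them abort start-up:

```python
import asyncio

class FakeProvider:
    """Hypothetical stand-in for an LLM provider (OpenAI, llamafile, ...)."""

    def __init__(self, name, models, fail=False):
        self.name = name
        self._models = models
        self._fail = fail  # simulate an unreachable local server

    async def get_available_chat_models(self):
        if self._fail:
            raise ConnectionError(f"{self.name}: all connection attempts failed")
        return self._models

async def get_available_providers(providers):
    """Keep only providers whose model listing actually succeeds."""
    available = []
    for provider in providers:
        try:
            await provider.get_available_chat_models()
        except Exception as e:
            # Before the fix, this exception crashed start-up; now the
            # provider is simply skipped.
            print(f"Skipping {provider.name}: {e}")
            continue
        available.append(provider)
    return available

providers = [
    FakeProvider("openai", ["gpt-4"]),
    FakeProvider("llamafile", [], fail=True),  # nothing running on :8080
]
usable = asyncio.run(get_available_providers(providers))
print([p.name for p in usable])  # → ['openai']
```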
5 participants