
Convert test_sql to pytest idiom #54936

Merged (5 commits) Sep 7, 2023
Conversation

@WillAyd (Member) commented Sep 1, 2023

Should make onboarding the new ADBC drivers a little easier.

This can be done in a few phases/PRs

@jorisvandenbossche

@@ -476,17 +491,25 @@ def sqlite_conn(sqlite_engine):


 @pytest.fixture
-def sqlite_iris_str(sqlite_str, iris_path):
+def sqlite_iris_str(sqlite_str, iris_path, types_data):
Member Author:

I think in the long run it would be better not to have separate _iris fixtures. The challenge is that there is currently inconsistency across the drivers in whether they work with the iris loading function (sqlite_buildin and sqlite_str being two that come to mind). So I didn't try to change that much in this PR; I'm leaving it to a future enhancement.

Member:

Hmm, personally I would prefer to have separate fixtures for the connection and connection + iris:

@pytest.fixture
def connectable():
    with connection as conn:
        yield conn
        # teardown: runs after the test finishes, even if it failed
        conn.dump_all_tables_and_dispose()


@pytest.fixture
def connectable_with_iris(connectable):
    connectable.insert_iris_data()
    yield connectable
    connectable.delete_iris_data()

It makes it clear which tests specifically need iris data and ensures that there's no leftover state for subsequent tests if there's a failure
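
For context, a minimal usage sketch of how tests would opt in to the iris data under this split (fixture names taken from the sketch above; the pandas calls are only illustrative):

import pandas as pd

def test_read_iris(connectable_with_iris):
    # this test declares that it needs the iris table loaded
    frame = pd.read_sql_query("SELECT * FROM iris", connectable_with_iris)
    assert not frame.empty

def test_round_trip(connectable):
    # this test only needs a bare connection, no iris data
    pd.DataFrame({"a": [1, 2]}).to_sql("round_trip", connectable, index=False)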

Member Author:

Sounds good. No objection to that either - just will take a little more effort to get us there, which I don't plan on doing in this PR

Member Author:

There's also the types table - do you think that should be a separate fixture or rolled into the iris one?

Member:

> Sounds good. No objection to that either - just will take a little more effort to get us there, which I don't plan on doing in this PR

No worries, I can help out with this when I have spare cycles

> There's also the types table - do you think that should be a separate fixture or rolled into the iris one?

I don't remember how the types tables are used, but my first reaction is that they should be separate

Member Author:

I think we can also make it so the iris / types table fixtures re-use the all_connectable list, instead of duplicating each connection for iris / types. Loosely it would look something like this:

@pytest.fixture
def postgres_conn():
    ...

@pytest.fixture
def sqlite_conn():
    ...

all_connectable = [postgres_conn, sqlite_conn, ...]

@pytest.fixture
def iris_data(all_connectable):
    ...

@pytest.fixture
def types_data(all_connectable):
    ...

Today we have sqlite_conn_iris, postgres_conn_iris iterations and the types tables are loosely mixed in somewhere
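
For reference, the parametrize-over-fixture-names idiom the converted tests use (visible in the test_api_to_sql_index_label_multiindex hunk below) loosely works like this; the concrete fixture names and test body here are assumptions, not the PR's exact code:

import pandas as pd
import pytest

# fixture *names*, not fixture objects; each refers to a connection fixture defined elsewhere
all_connectable = ["sqlite_conn", "postgres_conn"]

@pytest.mark.parametrize("conn", all_connectable)
def test_read_sql_smoke(conn, request):
    # resolve the named fixture at run time so one test body covers every driver
    conn = request.getfixturevalue(conn)
    result = pd.read_sql_query("SELECT 1 AS one", conn)
    assert result["one"].iloc[0] == 1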

@@ -138,7 +138,7 @@ def _parse_date_columns(data_frame, parse_dates):
         if isinstance(df_col.dtype, DatetimeTZDtype) or col_name in parse_dates:
             try:
                 fmt = parse_dates[col_name]
-            except TypeError:
+            except (KeyError, TypeError):
Member Author:

I couldn't reproduce this locally, but it seems like this code didn't like the case where DateColWithTz was inferred to be temporal and was not included in parse_dates.
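
A minimal illustration (my own example, not the pandas source) of why both exceptions can occur: parse_dates may be a dict keyed by column name, a list of names, or None, so looking up a tz-aware column that was not listed raises KeyError in the dict case and TypeError otherwise:

col_name = "DateColWithTz"  # tz-aware column inferred as temporal

for parse_dates in ({"DateCol": "%Y-%m-%d"}, ["DateCol"], None):
    try:
        fmt = parse_dates[col_name]
    except (KeyError, TypeError):
        # dict without the key -> KeyError; list or None indexed by a string -> TypeError
        fmt = None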

@pytest.mark.parametrize("conn", all_connectable)
def test_api_to_sql_index_label_multiindex(conn, request):
    conn_name = conn
    if "mysql" in conn_name:
Member Author:

I wasn't able to reproduce this locally using MariaDB but kept seeing it on CI. Not sure if it is a version thing or a MySQL <> MariaDB difference.
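
For illustration, the xfail commit presumably marks the MySQL case along these lines; the marker reason, strictness, and exact placement are assumptions rather than the PR's literal diff:

import pytest

@pytest.mark.parametrize("conn", all_connectable)  # list of fixture names, as above
def test_api_to_sql_index_label_multiindex(conn, request):
    conn_name = conn
    if "mysql" in conn_name:
        # expected failure only reproducible on CI; reason text is a placeholder
        request.applymarker(pytest.mark.xfail(reason="fails on CI MySQL/MariaDB", strict=False))
    conn = request.getfixturevalue(conn)
    ...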

@WillAyd (Member Author) commented Sep 7, 2023

I think this is mergeable. The plan is to keep getting rid of these classes in follow-ups; let me know of any feedback

@mroeschke added the Testing (pandas testing functions or related to the test suite) and IO SQL (to_sql, read_sql, read_sql_query) labels on Sep 7, 2023
@mroeschke added this to the 2.2 milestone on Sep 7, 2023
@mroeschke merged commit cea0cc0 into pandas-dev:main on Sep 7, 2023
39 checks passed
@mroeschke (Member)

Thanks @WillAyd

@WillAyd deleted the refactor-test-sql branch September 7, 2023 18:19
mroeschke pushed a commit to mroeschke/pandas that referenced this pull request Sep 11, 2023
* Convert test_sql to pytest idiom

* Try KeyError catch

* Added drop_view to existing test method

* xfail MySQL issue