Inspired by this issue, sparklyr/sparklyr#537, it would be useful to have a dplyr function to retrieve the type of each column.
For local tibbles, this is not so important, since there are already easy ways of doing that. However, for remote data sources, it becomes tricky. For example, with a tibble whose data is in a database table, you need to write something like this:
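The snippet the issue refers to was not captured. A sketch of the kind of workaround it alludes to, assuming a dbplyr-backed tbl: collect zero rows and inspect the resulting local tibble, which yields the *R* classes after translation, not the remote database types.

```r
library(dplyr)

# Illustrative setup: an in-memory SQLite table accessed as a remote tbl.
con <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")
DBI::dbWriteTable(con, "mtcars", mtcars)
remote_tbl <- tbl(con, "mtcars")

# head(0) translates to LIMIT 0, so no rows are transferred; vapply then
# reports the R class each column was mapped to on collection.
remote_tbl %>%
  head(0) %>%
  collect() %>%
  vapply(function(x) class(x)[1], character(1))
```

This is awkward to write and, because the types are the post-translation R classes, it loses the underlying database types entirely.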
It would be useful to have a function (tbl_schema?) that returns the remote database/Spark data type of each field. To make it easy to work with programmatically, the return value should be a tibble with two columns: name and type.
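A hypothetical implementation sketch of the proposed function (`tbl_schema` is the suggested name, not an existing API). It renders the tbl's query, runs it with `LIMIT 0`, and asks the DBI driver for column metadata; note that what `DBI::dbColumnInfo()` reports as the "type" varies by backend, with some drivers exposing the database type and others the mapped R type.

```r
library(dplyr)

# Hypothetical sketch, assuming a dbplyr-backed tbl.
tbl_schema <- function(tbl) {
  con <- dbplyr::remote_con(tbl)
  sql <- dbplyr::sql_render(tbl)
  # Wrap the tbl's query so no rows are fetched, only column metadata.
  res <- DBI::dbSendQuery(con, paste0("SELECT * FROM (", sql, ") AS q LIMIT 0"))
  on.exit(DBI::dbClearResult(res))
  info <- DBI::dbColumnInfo(res)
  tibble::tibble(name = info$name, type = info$type)
}
```

For Spark, an equivalent would presumably need to go through the backend's own schema facilities rather than DBI, but the two-column name/type tibble return shape could stay the same.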