Implement Table.replace for the database backend #8986

Merged Feb 29, 2024 (94 commits; changes shown from 63 commits)

Commits
756dfa9
one test
GregoryTravis Jan 30, 2024
58da2fd
hack
GregoryTravis Jan 31, 2024
f2cd94a
Merge branch 'develop' into wip/gmt/8578-Table.replace
GregoryTravis Jan 31, 2024
0501d05
revert hack
GregoryTravis Jan 31, 2024
fcab4e0
use merge
GregoryTravis Jan 31, 2024
f2a58f6
unhack
GregoryTravis Jan 31, 2024
5ed3fed
example
GregoryTravis Jan 31, 2024
8caac00
tests
GregoryTravis Jan 31, 2024
74bc949
tests
GregoryTravis Jan 31, 2024
69753fa
scramble lookup table order in tests
GregoryTravis Jan 31, 2024
4cf1309
duplicate inputs
GregoryTravis Jan 31, 2024
aae2bd2
self-lookup
GregoryTravis Jan 31, 2024
d458ed8
type test
GregoryTravis Jan 31, 2024
f924690
remove incorrect test
GregoryTravis Feb 1, 2024
bf142c0
remove materialize
GregoryTravis Feb 1, 2024
b97841c
db stub
GregoryTravis Feb 1, 2024
b88633e
unused imports, widgets
GregoryTravis Feb 1, 2024
2a02eb0
cleanup
GregoryTravis Feb 1, 2024
2b7ad13
Merge branch 'develop' into wip/gmt/8578-Table.replace
GregoryTravis Feb 1, 2024
8b7c361
rename replace_column to column
GregoryTravis Feb 1, 2024
9a084d0
docs, comment, cleanup
GregoryTravis Feb 1, 2024
64d43f3
comment
GregoryTravis Feb 1, 2024
4acb007
fix docs
GregoryTravis Feb 1, 2024
df35418
convert from Map
GregoryTravis Feb 1, 2024
5ac57ce
changelog
GregoryTravis Feb 1, 2024
d01152d
cleanup
GregoryTravis Feb 1, 2024
69cdb47
Merge branch 'develop' into wip/gmt/8578-Table.replace
GregoryTravis Feb 2, 2024
76368e7
review
GregoryTravis Feb 2, 2024
4ea9c22
review
GregoryTravis Feb 2, 2024
a269166
db except map
GregoryTravis Feb 2, 2024
b6cb8cc
wip
GregoryTravis Feb 2, 2024
940bfd3
move implementation to Replace_Helpers
GregoryTravis Feb 2, 2024
bb7cb2f
make_table_from_map method
GregoryTravis Feb 2, 2024
6c1d530
wip
GregoryTravis Feb 2, 2024
a475905
wip
GregoryTravis Feb 2, 2024
f9aaf71
review
GregoryTravis Feb 5, 2024
7ad9723
Merge branch 'develop' into wip/gmt/8578-Table.replace
GregoryTravis Feb 5, 2024
fe389f8
review
GregoryTravis Feb 5, 2024
645805b
update docs
GregoryTravis Feb 5, 2024
5450561
merge
GregoryTravis Feb 5, 2024
a6523c0
one col
GregoryTravis Feb 5, 2024
4dec328
wip
GregoryTravis Feb 5, 2024
d56bb09
two columns
GregoryTravis Feb 5, 2024
947ebcf
tests pass
GregoryTravis Feb 5, 2024
8ad298d
vector.transpose
GregoryTravis Feb 5, 2024
e9c1a2c
cleanup
GregoryTravis Feb 5, 2024
324cbef
wip
GregoryTravis Feb 5, 2024
7a5854c
parser error
GregoryTravis Feb 6, 2024
02a0d54
Merge branch 'develop' into wip/gmt/8578-Table.replace
GregoryTravis Feb 6, 2024
de5f91e
Merge branch 'wip/gmt/8578-Table.replace' into wip/gmt/8578-Table.rep…
GregoryTravis Feb 6, 2024
ea2876d
fix parser failure
GregoryTravis Feb 6, 2024
6fff368
wip
GregoryTravis Feb 6, 2024
80dbc76
Merge branch 'develop' into wip/gmt/8578-Table.replace
GregoryTravis Feb 6, 2024
8afbb74
Merge branch 'wip/gmt/8578-Table.replace' into wip/gmt/8578-Table.rep…
GregoryTravis Feb 6, 2024
e74b67d
wip
GregoryTravis Feb 6, 2024
4b34631
one test passes
GregoryTravis Feb 6, 2024
b241fbe
more tests
GregoryTravis Feb 6, 2024
21b6b80
wip
GregoryTravis Feb 6, 2024
fd9212a
max size tests
GregoryTravis Feb 6, 2024
fa7c024
from map tests
GregoryTravis Feb 6, 2024
fbea5a9
docs
GregoryTravis Feb 6, 2024
0073e46
Literal_Values
GregoryTravis Feb 6, 2024
a5bf58a
cleanup
GregoryTravis Feb 6, 2024
5a1b6d0
no table_builder
GregoryTravis Feb 6, 2024
ebf5087
Merge branch 'wip/gmt/8578-Table.replace' into wip/gmt/8578-Table.rep…
GregoryTravis Feb 6, 2024
cdddcc8
merge
GregoryTravis Feb 7, 2024
76ff386
wip
GregoryTravis Feb 7, 2024
22a76bb
wip
GregoryTravis Feb 7, 2024
0090a20
changelog
GregoryTravis Feb 7, 2024
b99627c
merge
GregoryTravis Feb 8, 2024
0028687
use proxy
GregoryTravis Feb 8, 2024
5eaf502
Merge branch 'develop' into wip/gmt/8578-Table.replace-db
GregoryTravis Feb 9, 2024
5a772f0
better error for length mismatch
GregoryTravis Feb 9, 2024
2c47cfe
wip
GregoryTravis Feb 9, 2024
32452e8
move transpose into Vector
GregoryTravis Feb 9, 2024
9bce18d
vector/array docs match
GregoryTravis Feb 9, 2024
bd3a300
merge
GregoryTravis Feb 12, 2024
4174e9b
warning on empty lookup table, tests
GregoryTravis Feb 12, 2024
ae66716
merge
GregoryTravis Feb 12, 2024
a1558b7
edge cases in empty lookup table
GregoryTravis Feb 12, 2024
3aeb792
merge
GregoryTravis Feb 13, 2024
5c0a54a
enable empty-table tests for db backend
GregoryTravis Feb 13, 2024
6af697b
review
GregoryTravis Feb 14, 2024
12a4c8e
Merge branch 'develop' into wip/gmt/8578-Table.replace-db
GregoryTravis Feb 14, 2024
88e022f
merge
GregoryTravis Feb 14, 2024
cd3fab6
no widgets for from/to columns
GregoryTravis Feb 16, 2024
046488c
use parameter name, not variable name
GregoryTravis Feb 16, 2024
2a7d8ae
Merge branch 'develop' into wip/gmt/8578-Table.replace-db
GregoryTravis Feb 16, 2024
fe72e3b
merge
GregoryTravis Feb 20, 2024
ea47095
fix merge
GregoryTravis Feb 20, 2024
7df7499
Merge branch 'develop' into wip/gmt/8578-Table.replace-db
GregoryTravis Feb 26, 2024
7cd0bb8
Merge branch 'develop' into wip/gmt/8578-Table.replace-db
GregoryTravis Feb 29, 2024
6a4ad4b
move implementation to Array_Like_Helpers
GregoryTravis Feb 29, 2024
72f36c7
fix tests
GregoryTravis Feb 29, 2024
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -611,6 +611,7 @@
`Filter_Condition`.][8865]
- [Added `File_By_Line` type allowing processing a file line by line. New faster
JSON parser based off Jackson.][8719]
- [Implemented `Table.replace` for the in-memory backend.][8935]

[debug-shortcuts]:
https://github.com/enso-org/enso/blob/develop/app/gui/docs/product/shortcuts.md#debug
@@ -878,6 +879,7 @@
[8816]: https://github.com/enso-org/enso/pull/8816
[8849]: https://github.com/enso-org/enso/pull/8849
[8865]: https://github.com/enso-org/enso/pull/8865
[8935]: https://github.com/enso-org/enso/pull/8935

#### Enso Compiler

@@ -0,0 +1,55 @@
import project.Any.Any
import project.Data.Array.Array
import project.Data.Vector.Vector
import project.Error.Error
import project.Errors.Illegal_Argument.Illegal_Argument
import project.Runtime

## GROUP Selections
Swaps the rows and columns of a matrix represented by a `Vector` of `Vectors`.

! Error Conditions

- If the rows (subvectors) do not all have the same length, an
`Illegal_Argument` error is raised.

> Example
Transpose a `Vector` of `Vectors`.

matrix = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
# +---+---+---+
# | 0 | 1 | 2 |
# +---+---+---+
# | 3 | 4 | 5 |
# +---+---+---+
# | 6 | 7 | 8 |
# +---+---+---+

transposed = [[0, 3, 6], [1, 4, 7], [2, 5, 8]]
# +---+---+---+
# | 0 | 3 | 6 |
# +---+---+---+
# | 1 | 4 | 7 |
# +---+---+---+
# | 2 | 5 | 8 |
# +---+---+---+

matrix.transpose == transposed
# => True
Vector.transpose : Vector (Vector Any) ! Illegal_Argument
Vector.transpose self =
if self.is_empty then [] else
length = self.length
first_subvector_length = self.at 0 . length
builders = Vector.new first_subvector_length (_-> Vector.new_builder length)
result = self.map v->
if v.length != first_subvector_length then Error.throw (Illegal_Argument.Error "Transpose requires that all rows be the same length") else
v.map_with_index i-> x->
builders.at i . append x
result.if_not_error <|
builders.map .to_vector

Array.transpose : Vector (Vector Any) ! Illegal_Argument
Contributor Author:

Should Array.transpose have the same docs as Vector.transpose? Or is it not exposed to the user?

Member:

We should have it on both, since we are making the APIs the same.

Contributor Author:

Done. How does an end-user actually make use of Arrays? I changed the docs for Array to say Array instead of Vector, assuming that end-users are actually aware of Array as a distinct type, but is that right?

Array.transpose self = Vector.transpose self
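For readers unfamiliar with Enso, the transpose logic above can be sketched in Python (a minimal illustration of the documented behavior; the function name and the use of `ValueError` in place of `Illegal_Argument` are assumptions, not the Enso API):

```python
def transpose(matrix):
    """Swap the rows and columns of a list of lists.

    Mirrors the documented behavior: an empty input yields an empty
    result, and ragged rows raise an error (the Enso version raises
    Illegal_Argument).
    """
    if not matrix:
        return []
    width = len(matrix[0])
    if any(len(row) != width for row in matrix):
        raise ValueError("Transpose requires that all rows be the same length")
    # Column i of the input becomes row i of the output.
    return [[row[i] for row in matrix] for i in range(width)]

print(transpose([[0, 1, 2], [3, 4, 5], [6, 7, 8]]))
# → [[0, 3, 6], [1, 4, 7], [2, 5, 8]]
```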
138 changes: 136 additions & 2 deletions distribution/lib/Standard/Database/0.0.0-dev/src/Data/Table.enso
@@ -34,6 +34,7 @@ import Standard.Table.Internal.Aggregate_Column_Helper
import Standard.Table.Internal.Column_Naming_Helper.Column_Naming_Helper
import Standard.Table.Internal.Constant_Column.Constant_Column
import Standard.Table.Internal.Problem_Builder.Problem_Builder
import Standard.Table.Internal.Replace_Helpers
import Standard.Table.Internal.Table_Helpers
import Standard.Table.Internal.Table_Helpers.Table_Column_Helper
import Standard.Table.Internal.Unique_Name_Strategy.Unique_Name_Strategy
@@ -926,6 +927,54 @@ type Table
on_problems.attach_problems_before problems <|
Warning.set result []

## PRIVATE
A helper that creates a two-column literal table from a `Map`.
make_table_from_map : Map Any Any -> Text -> Text -> Table
make_table_from_map self map key_column_name value_column_name =
total_size = map.size * 2

if map.is_empty then Error.throw (Illegal_Argument.Error "Map argument cannot be empty") else
if total_size > MAX_LITERAL_ELEMENT_COUNT then Error.throw (Illegal_Argument.Error "Map argument is too large ("+map.size.to_text+" entries): materialize a table into the database instead") else
keys_and_values = map.to_vector
self.make_table_from_vectors [keys_and_values.map .first, keys_and_values.map .second] [key_column_name, value_column_name]

## PRIVATE
A helper that creates a literal table from `Vector`s.
make_table_from_vectors : Vector (Vector Any) -> Vector Text -> Table
make_table_from_vectors self column_vectors column_names =
Runtime.assert (column_vectors.length == column_names.length) "Vectors and column names must have the same length"

# Assume the columns are all the same length; if not, it will be an error anyway.
total_size = if column_vectors.is_empty || column_vectors.at 0 . is_empty then 0 else
column_vectors.length * (column_vectors.at 0 . length)

if total_size == 0 then Error.throw (Illegal_Argument.Error "Vectors cannot be empty") else
if total_size > MAX_LITERAL_ELEMENT_COUNT then Error.throw (Illegal_Argument.Error "Too many elements for table literal ("+total_size.to_text+"): materialize a table into the database instead") else
type_mapping = self.connection.dialect.get_type_mapping

values_to_type_ref column_vector =
value_type = Value_Type_Helpers.find_common_type_for_arguments column_vector
sql_type = case value_type of
Nothing -> SQL_Type.null
_ -> type_mapping.value_type_to_sql value_type Problem_Behavior.Ignore
SQL_Type_Reference.from_constant sql_type

literal_table_name = self.connection.base_connection.table_naming_helper.generate_random_table_name "enso-literal-"

from_spec = From_Spec.Literal_Values column_vectors column_names literal_table_name
context = Context.for_subquery from_spec

internal_columns = 0.up_to column_vectors.length . map i->
column_vector = column_vectors.at i
column_name = column_names.at i

type_ref = values_to_type_ref column_vector.to_vector
generated_literal_column_name = "column"+(i+1).to_text
sql_expression = SQL_Expression.Column literal_table_name generated_literal_column_name
Internal_Column.Value column_name type_ref sql_expression

Table.Value literal_table_name self.connection internal_columns context
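The map-splitting and size check in `make_table_from_map` can be sketched in Python (an illustration under assumptions: `table_from_map` and the dict-of-columns return shape are hypothetical, and `ValueError` stands in for `Illegal_Argument`):

```python
# Mirrors the constant defined later in this diff.
MAX_LITERAL_ELEMENT_COUNT = 256

def table_from_map(mapping, key_column_name, value_column_name):
    """Split a dict into two parallel column vectors, applying the
    same emptiness and size checks as make_table_from_map."""
    if not mapping:
        raise ValueError("Map argument cannot be empty")
    # Two columns, one element per entry each.
    if len(mapping) * 2 > MAX_LITERAL_ELEMENT_COUNT:
        raise ValueError(
            f"Map argument is too large ({len(mapping)} entries): "
            "materialize a table into the database instead")
    keys = list(mapping.keys())
    values = [mapping[k] for k in keys]
    return {key_column_name: keys, value_column_name: values}
```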

## PRIVATE

Create a constant column from a value.
@@ -1392,7 +1441,7 @@ type Table
In the Database backend, there are no guarantees related to ordering of
results.

? Error Conditions
! Error Conditions

- If this table or the lookup table is lacking any of the columns
specified in `key_columns`, a `Missing_Input_Columns` error is raised.
@@ -1403,7 +1452,7 @@
- If a column that is being updated from the lookup table has a type
that is not compatible with the type of the corresponding column in
this table, a `No_Common_Type` error is raised.
- If a key column contains `Nothing` values, either in the lookup table,
- If a key column contains `Nothing` values in the lookup table,
a `Null_Values_In_Key_Columns` error is raised.
- If `allow_unmatched_rows` is `False` and there are rows in this table
that do not have a matching row in the lookup table, an
@@ -1420,6 +1469,87 @@
Helpers.ensure_same_connection "table" [self, lookup_table] <|
Lookup_Query_Helper.build_lookup_query self lookup_table key_columns add_new_columns allow_unmatched_rows on_problems

## ALIAS find replace
GROUP Standard.Base.Calculations
ICON join
Replaces values in `column` using `lookup_table` to specify a
mapping from old to new values.

Arguments:
- lookup_table: the table to use as a mapping from old to new values. A
`Map` can also be used here (in which case passing `from_column` or
`to_column` is disallowed and will throw an `Illegal_Argument` error).
- column: the column within `self` to perform the replace on.
- from_column: the column within `lookup_table` to match against `column`
in `self`.
- to_column: the column within `lookup_table` to get new values from.
- allow_unmatched_rows: Specifies how to handle rows with no match in the
lookup table. If `True` (the default), unmatched rows are left unchanged.
If `False`, an `Unmatched_Rows_In_Lookup` error is raised. Any new
columns will be filled with `Nothing`.
- on_problems: Specifies how to handle problems if they occur, reporting
them as warnings by default.

? Result Ordering

When operating in-memory, this operation preserves the order of rows
from this table (unlike `join`).
In the Database backend, there are no guarantees related to ordering of
results.

! Error Conditions

- If this table or the lookup table is lacking any of the columns
specified by `from_column`, `to_column`, or `column`, a
`Missing_Input_Columns` error is raised.
- If a single row is matched by multiple entries in the lookup table,
a `Non_Unique_Key` error is raised.
- If a column that is being updated from the lookup table has a type
that is not compatible with the type of the corresponding column in
this table, a `No_Common_Type` error is raised.
- If a key column contains `Nothing` values in the lookup table,
a `Null_Values_In_Key_Columns` error is raised.
- If `allow_unmatched_rows` is `False` and there are rows in this table
that do not have a matching row in the lookup table, an
`Unmatched_Rows_In_Lookup` error is raised.
- The following problems may be reported according to the `on_problems`
setting:
- If any of the key columns is of a floating-point type,
a `Floating_Point_Equality` problem is reported.

> Example
Replace values in column 'x' using a lookup table.

table = Table.new [['x', [1, 2, 3, 4]], ['y', ['a', 'b', 'c', 'd']], ['z', ['e', 'f', 'g', 'h']]]
# | x | y | z
# ---+---+---+---
# 0 | 1 | a | e
# 1 | 2 | b | f
# 2 | 3 | c | g
# 3 | 4 | d | h

lookup_table = Table.new [['x', [1, 2, 3, 4]], ['new_x', [10, 20, 30, 40]]]
# |   x   | new_x
# ---+-------+-------
# 0 | 1 | 10
# 1 | 2 | 20
# 2 | 3 | 30
# 3 | 4 | 40

result = table.replace lookup_table 'x'
# | x | y | z
# ---+----+---+---
# 0 | 10 | a | e
# 1 | 20 | b | f
# 2 | 30 | c | g
# 3 | 40 | d | h
@column Widget_Helpers.make_column_name_selector
@from_column Widget_Helpers.make_column_name_selector
@to_column Widget_Helpers.make_column_name_selector
Member:

aren't these from the lookup_table so the selector is wrong?
We need to have widgets derived from first argument for this I think.

Contributor Author:

I'm not sure what the notation would be for this -- is there documentation for the @ clauses? Or, where is it implemented? I don't see an example of a widget attached to a value other than self.

Contributor Author:

Removed.

replace : Table | Map -> (Text | Integer) -> (Text | Integer | Nothing) -> (Text | Integer | Nothing) -> Boolean -> Problem_Behavior -> Table ! Missing_Input_Columns | Non_Unique_Key | Unmatched_Rows_In_Lookup
replace self lookup_table:(Table | Map) column:(Text | Integer) from_column:(Text | Integer | Nothing)=Nothing to_column:(Text | Integer | Nothing)=Nothing allow_unmatched_rows:Boolean=True on_problems:Problem_Behavior=Problem_Behavior.Report_Warning =
Replace_Helpers.replace self lookup_table column from_column to_column allow_unmatched_rows on_problems
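The replace semantics documented above can be sketched in Python with a plain dict (an illustration only; `replace_column` is a hypothetical name, and the real work is delegated to `Replace_Helpers.replace`):

```python
def replace_column(table, lookup, column, allow_unmatched_rows=True):
    """Replace values in table[column] using lookup (old -> new).

    Other columns are left untouched. With allow_unmatched_rows=True
    (the default), values missing from the lookup pass through
    unchanged; with False, an error is raised, analogous to
    Unmatched_Rows_In_Lookup.
    """
    def substitute(value):
        if value in lookup:
            return lookup[value]
        if allow_unmatched_rows:
            return value
        raise KeyError(f"Unmatched_Rows_In_Lookup: {value!r}")

    result = dict(table)  # shallow copy; only `column` is rebuilt
    result[column] = [substitute(v) for v in table[column]]
    return result

table = {'x': [1, 2, 3, 4], 'y': ['a', 'b', 'c', 'd']}
print(replace_column(table, {1: 10, 2: 20, 3: 30, 4: 40}, 'x'))
# → {'x': [10, 20, 30, 40], 'y': ['a', 'b', 'c', 'd']}
```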

## ALIAS join by row position
GROUP Standard.Base.Calculations
ICON dataframes_join
@@ -2706,3 +2836,7 @@ Table.from (that:Materialized_Table) =

## PRIVATE
Table_Ref.from (that:Table) = Table_Ref.Value that

## PRIVATE
The largest dataset that can be used to make a literal table, expressed in number of elements.
MAX_LITERAL_ELEMENT_COUNT = 256
Contributor Author:

Perhaps this should be bigger?

Member:

Is there any DB limit? I guess it is limited by the query size... I wonder if such literal table will be able to use hashjoins or will it always fall back to a linear scan. I guess for <256 it doesn't matter much. For larger values the size of the SQL query may start being quite problematic.

I think ideally we shouldn't cut off but instead we should be creating temporary tables - but definitely an improvement for a separate PR.
(I also wonder, given how often currently Enso re-evaluates the expressions, if we could create too much garbage in such way)

Contributor Author:

Yes, that was my concern. My first thought was to create a temporary table, but that will be happening repeatedly during each evaluation.

I was able to create tables with ~18000 rows this way in both Postgres and SQLite, and at that point I stopped testing. I'm more concerned about the size of the query being sent over the wire, but I don't really know at what point that becomes a problem. I figured that giving it a small limit would be a good first step. I think it covers the majority of use-cases.

Member:

Interesting that it worked for such large sizes.

I guess the primary concern indeed is the query size, especially as a temp table is likely sent in a more compact binary format.

Anyway, seems all ok for now.

Member:

256 is a reasonable literal default. Agree with @radeusgd: it should be a temp table when it gets bigger!

Contributor Author:

Will this result in a lot of temporary tables being created and not eagerly cleaned up? That was my concern when doing it the literal way.

Member:

Will this result in a lot of temporary tables being created and not eagerly cleaned up? That was my concern when doing it the literal way.

Yes indeed that is a concern.

I think we should implement such a feature as a separate PR, as it has non-trivial complexity.

Also, I think the concern of 'too many' temporary tables may also be increased due to the fact that currently the IDE seems to recompute unrelated nodes 'too often'. I think that if we resolve that, the amount of re-computations could be lower and thus the problem would not be as big.

As for implementing these temporary tables efficiently, we can leverage a few tricks:

  1. I expect that very often when the operation is re-run, the in-memory lookup table will actually be the same between re-runs. We can exploit that and try keeping a cache of already uploaded temporary tables, indexed by the hashcode of the table's contents (uploading the table requires us to scan its whole contents anyway, so the additional cost of computing the hashcode is negligible in this case). This way we can avoid uploading a new temporary table on each run, if we can detect that a 'matching' table was already uploaded before.
  2. We can exploit the Managed_Resource framework to try to clean up the tables once the references to them are GCed. We actually already implement a very similar feature - Hidden_Table_Registry. It is used to be able to re-use temporary hidden tables for dry-run operations, and clean them up once they are no longer needed. We could extend this registry to also support such temporary tables that are not used for dry-run (and accessed by name) but accessed by e.g. content's hash.

In fact we can merge the 2 approaches. If between runs the tables are the same we can avoid re-uploading them. Once all references to them are GCed we can clean them up.
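The hash-keyed cache proposed in point 1 could be sketched in Python like this (all names are hypothetical; `upload` stands in for the actual database upload, and cleanup via something like `Managed_Resource` is omitted):

```python
import hashlib

class TempTableCache:
    """Cache uploaded temporary tables by a hash of their contents, so
    re-running an operation with an identical in-memory lookup table
    skips the upload."""

    def __init__(self, upload):
        self.upload = upload          # callback: rows -> table handle
        self._by_hash = {}            # content hash -> table handle

    @staticmethod
    def content_hash(rows):
        # Uploading already scans every row, so hashing alongside the
        # scan adds negligible cost.
        h = hashlib.sha256()
        for row in rows:
            h.update(repr(row).encode("utf-8"))
        return h.hexdigest()

    def get_or_upload(self, rows):
        key = self.content_hash(rows)
        if key not in self._by_hash:
            self._by_hash[key] = self.upload(rows)
        return self._by_hash[key]
```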

@@ -341,6 +341,10 @@ generate_from_part dialect from_spec = case from_spec of
dialect.wrap_identifier name ++ alias dialect as_name
From_Spec.Query raw_sql as_name ->
Builder.code raw_sql . paren ++ alias dialect as_name
From_Spec.Literal_Values vecs column_names as_name ->
Runtime.assert (vecs.length == column_names.length) "Vectors and column names must have the same length"
values = Builder.join ", " (vecs.transpose.map (vec-> Builder.join ", " (vec.map Builder.interpolation) . paren))
Builder.code "(VALUES " ++ values ++ ")" ++ alias dialect as_name
From_Spec.Join kind left_spec right_spec on ->
left = generate_from_part dialect left_spec
right = generate_from_part dialect right_spec
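The `Literal_Values` rendering above (transpose the column vectors into rows, then emit one parenthesized tuple per row) can be sketched in Python; the `?` placeholders stand in for `Builder.interpolation`, and the function name is hypothetical:

```python
def literal_values_sql(column_vectors, alias):
    """Render a literal table as a SQL VALUES clause.

    Each column vector becomes one column; zip(*...) transposes
    columns into rows, matching the vecs.transpose step above.
    """
    lengths = {len(v) for v in column_vectors}
    assert len(lengths) <= 1, "all columns must have the same length"
    rows = zip(*column_vectors)
    rendered = ", ".join(
        "(" + ", ".join("?" for _ in row) + ")" for row in rows)
    return f'(VALUES {rendered}) AS "{alias}"'

print(literal_values_sql([[1, 2, 3], [10, 20, 30]], "enso-literal-1"))
# → (VALUES (?, ?), (?, ?), (?, ?)) AS "enso-literal-1"
```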
@@ -38,6 +38,17 @@ type From_Spec
the same table.
Query (raw_sql : Text) (alias : Text)

## PRIVATE

A query source consisting of a literal VALUES clause.

Arguments:
- column_vectors: the contents of the literal table's columns.
- column_names: the names of the literal table's columns.
- alias: the name by which the table can be referred to in other parts of
the query.
Literal_Values (column_vectors : Vector (Vector Any)) (column_names : Vector Text) (alias : Text)

## PRIVATE

A query source that performs a join operation on two sources.