wengh commented on code in PR #49961:
URL: https://github.com/apache/spark/pull/49961#discussion_r1972843589


##########
python/pyspark/sql/datasource.py:
##########
@@ -234,6 +249,35 @@ def streamReader(self, schema: StructType) -> "DataSourceStreamReader":
         )
 
 
+ColumnPath = Tuple[str, ...]
+
+
+@dataclass(frozen=True)
+class Filter(ABC):
+    """
+    The base class for filters used for filter pushdown.
+
+    .. versionadded:: 4.1.0
+
+    Notes
+    -----
+    Column references are represented as a tuple of strings. For example, the
+    column `col1` is represented as `("col1",)`, and the nested column `a.b.c`
+    is represented as `("a", "b", "c")`.
+
+    Literal values are represented as Python objects of types such as
+    `int`, `float`, `str`, `bool`, `datetime`, etc.
+    See `Data Types <https://spark.apache.org/docs/latest/sql-ref-datatypes.html>`_
+    for more information about how values are represented in Python.
+    """
+
+
+@dataclass(frozen=True)
+class EqualTo(Filter):
+    lhsColumnPath: ColumnPath
+    rhsValue: Any

Review Comment:
   Yeah, we can rename the fields to `attribute` and `value`, but still use
   `Tuple[str, ...]` for the attribute rather than `str`.
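   For context, a minimal standalone sketch of what that would look like,
   assuming the renamed `attribute`/`value` fields from this comment (the
   example instances below are illustrative, not part of the diff):

```python
from abc import ABC
from dataclasses import dataclass
from typing import Any, Tuple

# Column references are tuples of name parts, e.g. ("a", "b", "c") for a.b.c.
ColumnPath = Tuple[str, ...]


@dataclass(frozen=True)
class Filter(ABC):
    """Base class for filters used for filter pushdown (sketch)."""


@dataclass(frozen=True)
class EqualTo(Filter):
    # Renamed per the review: `attribute` stays Tuple[str, ...], not str.
    attribute: ColumnPath
    value: Any


# Top-level column `col1` compared to 42.
f1 = EqualTo(("col1",), 42)
# Nested column `a.b.c` compared to "x".
f2 = EqualTo(("a", "b", "c"), "x")

assert f1.attribute == ("col1",)
assert f2.value == "x"
```

   Frozen dataclasses give value-based equality and hashability for free,
   which keeps filter instances usable as plain immutable data.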



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

