korbit-ai[bot] commented on code in PR #33285:
URL: https://github.com/apache/superset/pull/33285#discussion_r2107609918
##########
superset/migrations/shared/migrate_viz/base.py:
##########
@@ -136,14 +137,29 @@
# because a source viz can be mapped to different target viz types
slc.viz_type = clz.target_viz_type
- # only backup params
- slc.params = json.dumps(
- {**clz.data, FORM_DATA_BAK_FIELD_NAME: form_data_bak}
- )
+ backup = {FORM_DATA_BAK_FIELD_NAME: form_data_bak}
+
+ query_context = try_load_json(slc.query_context)
+
+ if query_context:
+ if "form_data" in query_context:
+ query_context["form_data"] = clz.data
+
+ queries_bak = copy.deepcopy(query_context["queries"])
+
+ result = clz._build_query()
+ queries = result["queries"]
+ query_context["queries"] = queries
Review Comment:
### Missing Implementation of Query Builder <sub></sub>
<details>
<summary>Tell me more</summary>
###### What is the issue?
The _build_query() method is not implemented but is being called during
visualization migration. The method only has a docstring and no implementation.
###### Why this matters
This will raise NotImplementedError or return None, causing the migration to
fail when trying to access the 'queries' key from the result.
###### Suggested change ∙ *Feature Preview*
Either implement the _build_query() method or handle the case when it's not
implemented:
```python
def _build_query(self) -> dict[str, Any]:
    # Basic implementation
    return {"queries": self.data.get("queries", [])}
```
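A caller-side guard is another option when some subclasses legitimately lack a query builder; the sketch below is a self-contained illustration of that pattern (the class and helper names here are hypothetical, not Superset's actual API):

```python
from typing import Any


class VizMigrator:
    """Toy stand-in for a migration class whose builder is unimplemented."""

    def _build_query(self) -> dict[str, Any]:
        raise NotImplementedError("no query builder for this viz type")


def rebuild_queries(
    migrator: VizMigrator, query_context: dict[str, Any]
) -> dict[str, Any]:
    try:
        result = migrator._build_query()
    except NotImplementedError:
        # Keep the existing queries when no builder is available,
        # so the migration can proceed instead of failing.
        result = {"queries": query_context.get("queries", [])}
    query_context["queries"] = result["queries"]
    return query_context
```

With this guard, `query_context["queries"]` is left unchanged for viz types that never override `_build_query()`.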
</details>
<sub>
💬 Looking for more details? Reply to this comment to chat with Korbit.
</sub>
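Taken together, the base.py hunk above parses the slice's query context, swaps in the migrated form data, and deep-copies the old queries before writing the rebuilt ones. A toy walk-through of that flow (the built queries here are stand-ins for what `_build_query()` would produce, and what the hunk does with the backup afterwards is outside the quoted context):

```python
import copy
import json
from typing import Any, Optional


def try_load_json(raw: Optional[str]) -> Optional[dict[str, Any]]:
    # Best-effort JSON parse, mirroring the helper used in the hunk.
    try:
        return json.loads(raw) if raw else None
    except (TypeError, json.JSONDecodeError):
        return None


def migrate_query_context(
    raw_query_context: Optional[str],
    new_form_data: dict[str, Any],
    built_queries: list[dict[str, Any]],
) -> tuple[Optional[dict[str, Any]], Optional[list[dict[str, Any]]]]:
    query_context = try_load_json(raw_query_context)
    if not query_context:
        return None, None
    if "form_data" in query_context:
        query_context["form_data"] = new_form_data
    # Deep-copy before overwriting so the backup cannot be mutated
    # through references shared with the new query list.
    queries_bak = copy.deepcopy(query_context["queries"])
    query_context["queries"] = built_queries
    return query_context, queries_bak
```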
##########
superset/migrations/shared/migrate_viz/processors.py:
##########
@@ -155,6 +307,63 @@
if x_ticks_layout := self.data.get("x_ticks_layout"):
self.data["x_ticks_layout"] = 45 if x_ticks_layout == "45°" else 0
+ def _build_query(self) -> dict[str, Any]:
+ groupby = self.data.get("groupby")
+
+ def query_builder(base_query_object: dict[str, Any]) -> list[dict[str, Any]]:
+ """
+ The `pivot_operator_in_runtime` determines how to pivot the
dataframe
+ returned from the raw query.
+ 1. If it's a time compared query, there will return a pivoted
+ dataframe that append time compared metrics.
+ """
+ extra_metrics = extract_extra_metrics(self.data)
+
+ pivot_operator_in_runtime = (
+ time_compare_pivot_operator(self.data, base_query_object)
+ if is_time_comparison(self.data, base_query_object)
+ else pivot_operator(self.data, base_query_object)
+ )
+
+ columns = (
+ ensure_is_array(get_x_axis_column(self.data))
+ if is_x_axis_set(self.data)
+ else []
+ ) + ensure_is_array(groupby)
+
+ time_offsets = (
+ self.data.get("time_compare")
+ if is_time_comparison(self.data, base_query_object)
+ else []
+ )
+
+ result = {
+ **base_query_object,
+ "metrics": (base_query_object.get("metrics") or []) +
extra_metrics,
+ "columns": columns,
+ "series_columns": groupby,
+ **({"is_timeseries": True} if not is_x_axis_set(self.data)
else {}),
+ # todo: move `normalize_order_by to extract_query_fields`
+ "orderby":
normalize_order_by(base_query_object).get("orderby"),
Review Comment:
### Unclear TODO Comment <sub></sub>
<details>
<summary>Tell me more</summary>
###### What is the issue?
The TODO comment uses inconsistent backtick formatting and lacks clarity
about why the change is needed.
###### Why this matters
Unclear TODO comments can lead to the suggested improvements being
overlooked or misimplemented.
###### Suggested change ∙ *Feature Preview*
# TODO: Move normalize_order_by() call to extract_query_fields() for consistent query field handling
"orderby": normalize_order_by(base_query_object).get("orderby"),
</details>
##########
superset/migrations/shared/migrate_viz/query_functions.py:
##########
@@ -0,0 +1,1507 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+import json
+import math
+from enum import Enum
+from typing import Any, Dict, List, Optional, Union
+
+
+class RollingType(Enum):
+ Mean = "mean"
+ Sum = "sum"
+ Std = "std"
+ Cumsum = "cumsum"
+
+
+class ComparisonType(Enum):
+ Values = "values"
+ Difference = "difference"
+ Percentage = "percentage"
+ Ratio = "ratio"
+
+
+class DatasourceType(Enum):
+ Table = "table"
+ Query = "query"
+ Dataset = "dataset"
+ SlTable = "sl_table"
+ SavedQuery = "saved_query"
+
+
+UNARY_OPERATORS = ["IS NOT NULL", "IS NULL"]
+BINARY_OPERATORS = [
+ "==",
+ "!=",
+ ">",
+ "<",
+ ">=",
+ "<=",
+ "ILIKE",
+ "LIKE",
+ "NOT LIKE",
+ "REGEX",
+ "TEMPORAL_RANGE",
+]
+SET_OPERATORS = ["IN", "NOT IN"]
+
+unary_operator_set = set(UNARY_OPERATORS)
+binary_operator_set = set(BINARY_OPERATORS)
+set_operator_set = set(SET_OPERATORS)
+
+
+class DatasourceKey:
+ def __init__(self, key: str):
+ id_str, type_str = key.split("__", 1)
+ self.id = int(id_str)
+ # Default to Table; if type_str is 'query', then use Query.
+ self.type = DatasourceType.Table
+ if type_str == "query":
+ self.type = DatasourceType.Query
+
+ def __str__(self) -> str:
+ return f"{self.id}__{self.type.value}"
+
+ def to_object(self) -> dict[str, Any]:
+ return {
+ "id": self.id,
+ "type": self.type.value,
+ }
+
+
+TIME_COMPARISON_SEPARATOR = "__"
+DTTM_ALIAS = "__timestamp"
+NO_TIME_RANGE = "No filter"
+
+EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS = [
+ "relative_start",
+ "relative_end",
+ "time_grain_sqla",
+]
+
+EXTRA_FORM_DATA_APPEND_KEYS = [
+ "adhoc_filters",
+ "filters",
+ "interactive_groupby",
+ "interactive_highlight",
+ "interactive_drilldown",
+ "custom_form_data",
+]
+
+EXTRA_FORM_DATA_OVERRIDE_REGULAR_MAPPINGS = {
+ "granularity": "granularity",
+ "granularity_sqla": "granularity",
+ "time_column": "time_column",
+ "time_grain": "time_grain",
+ "time_range": "time_range",
+}
+
+EXTRA_FORM_DATA_OVERRIDE_REGULAR_KEYS = list(
+ EXTRA_FORM_DATA_OVERRIDE_REGULAR_MAPPINGS.keys()
+)
+
+EXTRA_FORM_DATA_OVERRIDE_KEYS = (
+ EXTRA_FORM_DATA_OVERRIDE_REGULAR_KEYS + EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS
+)
+
+
+def ensure_is_array(value: Optional[Union[List[Any], Any]] = None) -> List[Any]:
+ """
+ Ensure a nullable value input is a list. Useful when consolidating
+ input format from a select control.
+ """
+ if value is None:
+ return []
+ return value if isinstance(value, list) else [value]
+
+
+def is_empty(value: Any) -> bool:
+ """
+ A simple implementation similar to lodash's isEmpty.
+ Returns True if value is None or an empty collection.
+ """
+ if value is None:
+ return True
+ if isinstance(value, (list, dict, str, tuple, set)):
+ return len(value) == 0
+ return False
+
+
+def is_saved_metric(metric: Any) -> bool:
+ """Return True if metric is a saved metric (str)."""
+ return isinstance(metric, str)
+
+
+def is_adhoc_metric_simple(metric: Any) -> bool:
+ """Return True if metric dict is a simple adhoc metric."""
+ return (
+ not isinstance(metric, str)
+ and isinstance(metric, dict)
+ and metric.get("expressionType") == "SIMPLE"
+ )
+
+
+def is_adhoc_metric_sql(metric: Any) -> bool:
+ """Return True if metric dict is an SQL adhoc metric."""
+ return (
+ not isinstance(metric, str)
+ and isinstance(metric, dict)
+ and metric.get("expressionType") == "SQL"
+ )
+
+
+def is_query_form_metric(metric: Any) -> bool:
+ """Return True if metric is of any query form type."""
+ return (
+ is_saved_metric(metric)
+ or is_adhoc_metric_simple(metric)
+ or is_adhoc_metric_sql(metric)
+ )
+
+
+def get_metric_label(metric: Any | dict[str, Any]) -> Any | dict[str, Any]:
+ """
+ Get the label for a given metric.
+
+ Args:
+ metric (dict): The metric object.
+
+ Returns:
+ str: The label of the metric.
+ """
+ if is_saved_metric(metric):
+ return metric
+ if "label" in metric and metric["label"]:
+ return metric["label"]
+ if is_adhoc_metric_simple(metric):
+ column_name = metric["column"].get("columnName") or metric["column"].get("column_name")
+ return f"{metric['aggregate']}({column_name})"
+ return metric["sqlExpression"]
+
+
+def extract_extra_metrics(form_data: Dict[str, Any]) -> List[Any]:
+ """
+ Extract extra metrics from the form data.
+
+ Args:
+ form_data (Dict[str, Any]): The query form data.
+
+ Returns:
+ List[Any]: A list of extra metrics.
+ """
+ groupby = form_data.get("groupby", [])
+ timeseries_limit_metric = form_data.get("timeseries_limit_metric")
+ x_axis_sort = form_data.get("x_axis_sort")
+ metrics = form_data.get("metrics", [])
+
+ extra_metrics = []
+ limit_metric = (
+ ensure_is_array(timeseries_limit_metric)[0] if timeseries_limit_metric else None
+ )
+
+ if (
+ not groupby
+ and limit_metric
+ and get_metric_label(limit_metric) == x_axis_sort
+ and not any(get_metric_label(metric) == x_axis_sort for metric in metrics)
+ ):
+ extra_metrics.append(limit_metric)
+
+ return extra_metrics
+
+
+def get_metric_offsets_map(
+ form_data: dict[str, List[str]], query_object: dict[str, List[str]]
+) -> dict[str, Any]:
+ """
+ Return a dictionary mapping metric offset-labels to metric-labels.
+
+ Args:
+ form_data (Dict[str, List[str]]): The form data containing time comparisons.
+ query_object (Dict[str, List[str]]): The query object containing metrics.
+
+ Returns:
+ Dict[str, str]: A dictionary with offset-labels as keys and metric-labels as values.
+ """
+ query_metrics = ensure_is_array(query_object.get("metrics", []))
+ time_offsets = ensure_is_array(form_data.get("time_compare", []))
+
+ metric_labels = [get_metric_label(metric) for metric in query_metrics]
+ metric_offset_map = {}
+
+ for metric in metric_labels:
+ for offset in time_offsets:
+ key = f"{metric}{TIME_COMPARISON_SEPARATOR}{offset}"
+ metric_offset_map[key] = metric
+
+ return metric_offset_map
+
+
+def is_time_comparison(form_data: dict[str, Any], query_object: dict[str, Any]) -> bool:
+ """
+ Determine if the query involves a time comparison.
+
+ Args:
+ form_data (dict): The form data containing query parameters.
+ query_object (dict): The query object.
+
+ Returns:
+ bool: True if it is a time comparison, False otherwise.
+ """
+ comparison_type = form_data.get("comparison_type")
+ metric_offset_map = get_metric_offsets_map(form_data, query_object)
+
+ return (
+ comparison_type in [ct.value for ct in ComparisonType]
+ and len(metric_offset_map) > 0
+ )
+
+
+def ensure_is_int(value: Any, default_value: Any = None) -> Any | float:
+ """
+ Convert the given value to an integer.
+ If conversion fails, returns default_value if provided,
+ otherwise returns NaN (as float('nan')).
+ """
+ try:
+ val = int(str(value))
+ except (ValueError, TypeError):
+ return default_value if default_value is not None else float("nan")
+ return val
+
+
+def is_physical_column(column: Any = None) -> bool:
+ """Return True if column is a physical column (string)."""
+ return isinstance(column, str)
+
+
+def is_adhoc_column(column: Any = None) -> bool:
+ """Return True if column is an adhoc column (object with SQL
expression)."""
+ if not isinstance(column, dict):
+ return False
+ return (
+ column.get("sqlExpression") is not None
+ and column.get("label") is not None
+ and ("expressionType" not in column or column["expressionType"] == "SQL")
+ )
+
+
+def is_query_form_column(column: Any) -> bool:
+ """Return True if column is either physical or adhoc."""
+ return is_physical_column(column) or is_adhoc_column(column)
+
+
+def is_x_axis_set(form_data: dict[str, Any]) -> bool:
+ """Return True if the x_axis is specified in form_data."""
+ return is_query_form_column(form_data.get("x_axis"))
+
+
+def get_x_axis_column(form_data: dict[str, Any]) -> Optional[Any]:
+ """Return x_axis column."""
+ if not (form_data.get("granularity_sqla") or form_data.get("x_axis")):
+ return None
+
+ if is_x_axis_set(form_data):
+ return form_data.get("x_axis")
+
+ return DTTM_ALIAS
+
+
+def get_column_label(column: Any) -> Optional[str]:
+ """Return the string label for a column."""
+ if is_physical_column(column):
+ return column
+ if column and column.get("label"):
+ return column.get("label")
+ return column.get("sqlExpression", None)
+
+
+def get_x_axis_label(form_data: dict[str, Any]) -> Optional[str]:
+ """Return the x_axis label from form_data."""
+ if col := get_x_axis_column(form_data):
+ return get_column_label(col)
+ return None
+
+
+def time_compare_pivot_operator(
+ form_data: dict[str, Any], query_object: dict[str, Any]
+) -> Optional[dict[str, Any]]:
+ """
+ A post-processing factory function for pivot operations.
+
+ Args:
+ form_data: The form data containing configuration
+ query_object: The query object with series and columns information
+
+ Returns:
+ Dictionary with pivot operation configuration or None
+ """
+ metric_offset_map = get_metric_offsets_map(form_data, query_object)
+ x_axis_label = get_x_axis_label(form_data)
+ columns = (
+ query_object.get("series_columns")
+ if query_object.get("series_columns") is not None
+ else query_object.get("columns")
+ )
+
+ if is_time_comparison(form_data, query_object) and x_axis_label:
+ # Create aggregates dictionary from metric offset map
+ metrics = list(metric_offset_map.values()) + list(metric_offset_map.keys())
+ aggregates = {
+ metric: {"operator": "mean"}  # use 'mean' aggregates to avoid dropping NaN
+ for metric in metrics
+ }
+
+ return {
+ "operation": "pivot",
+ "options": {
+ "index": [x_axis_label],
+ "columns": [get_column_label(col) for col in ensure_is_array(columns)],
+ "drop_missing_columns": not form_data.get("show_empty_columns"),
+ "aggregates": aggregates,
+ },
+ }
+
+ return None
+
+
+def pivot_operator(
+ form_data: dict[str, Any], query_object: dict[str, Any]
+) -> Optional[dict[str, Any]]:
+ """
+ Construct a pivot operator configuration for post-processing.
+
+ This function extracts metric labels (including extra metrics) from the query object
+ and form data, and retrieves the x-axis label. If both an x-axis label and at
+ least one metric label are present, it builds a pivot configuration that sets
+ the index as the x-axis label, transforms the columns via get_column_label,
+ and creates dummy 'mean' aggregates for each metric.
+
+ Args:
+ form_data (dict): The form data containing query parameters.
+ query_object (dict): The base query object containing metrics
+ and column information.
+
+ Returns:
+ dict or None: A dict with the pivot operator configuration
+ if the conditions are met,
+ otherwise None.
+ """
+ metric_labels = [
+ *ensure_is_array(query_object.get("metrics", [])),
+ *extract_extra_metrics(form_data),
+ ]
+ metric_labels = [get_metric_label(metric) for metric in metric_labels]
+ x_axis_label = get_x_axis_label(form_data)
+ columns = (
+ query_object.get("series_columns")
+ if query_object.get("series_columns") is not None
+ else query_object.get("columns")
+ )
+
+ if x_axis_label and metric_labels:
+ cols_list = [get_column_label(col) for col in ensure_is_array(columns)]
+ return {
+ "operation": "pivot",
+ "options": {
+ "index": [x_axis_label],
+ "columns": cols_list,
+ # Create 'dummy' mean aggregates to assign cell values in pivot table
+ # using the 'mean' aggregates to avoid dropping NaN values
+ "aggregates": {
+ metric: {"operator": "mean"} for metric in metric_labels
+ },
+ "drop_missing_columns": not form_data.get("show_empty_columns"),
+ },
+ }
+
+ return None
+
+
+def normalize_order_by(query_object: dict[str, Any]) -> dict[str, Any]:
+ """
+ Normalize the orderby clause in the query object.
+
+ If the "orderby" key already contains a valid clause (a list whose first
element
+ is a list of two elements, where the first element is truthy and the
second a bool),
+ the original query_object is returned. Otherwise, the function creates a
copy of
+ query_object, removes invalid orderby-related keys, and sets an orderby
clause based
+ on available keys: "series_limit_metric", "legacy_order_by", or the first
metric in
+ the "metrics" list. The sorting order is determined by the negation of
"order_desc".
+
+ Args:
+ query_object (dict): The query object containing orderby and related keys.
+
+ Returns:
+ dict: A modified query object with a normalized "orderby" clause.
+ """
+ if (
+ isinstance(query_object.get("orderby"), list)
+ and len(query_object.get("orderby", [])) > 0
+ ):
+ # ensure a valid orderby clause
+ orderby_clause = query_object["orderby"][0]
+ if (
+ isinstance(orderby_clause, list)
+ and len(orderby_clause) == 2
+ and orderby_clause[0]
+ and isinstance(orderby_clause[1], bool)
+ ):
+ return query_object
+
+ # remove invalid orderby keys from a copy
+ clone_query_object = query_object.copy()
+ clone_query_object.pop("series_limit_metric", None)
+ clone_query_object.pop("legacy_order_by", None)
+ clone_query_object.pop("order_desc", None)
+ clone_query_object.pop("orderby", None)
+
+ is_asc = not query_object.get("order_desc", False)
+
+ if query_object.get("series_limit_metric") is not None and
query_object.get(
+ "series_limit_metric"
+ ):
+ return {
+ **clone_query_object,
+ "orderby": [[query_object["series_limit_metric"], is_asc]],
+ }
+
+ # TODO: Remove `legacy_order_by` after refactoring
+ if query_object.get("legacy_order_by"):
+ return {
+ **clone_query_object,
+ "orderby": [[query_object["legacy_order_by"], is_asc]],
+ }
+
+ if (
+ isinstance(query_object.get("metrics"), list)
+ and len(query_object.get("metrics", [])) > 0
+ ):
+ return {**clone_query_object, "orderby": [[query_object["metrics"][0], is_asc]]}
+
+ return clone_query_object
+
+
+def remove_duplicates(items: Any, hash_func: Any = None) -> list[Any]:
+ """
+ Remove duplicate items from a list.
+
+ Args:
+ items: List of items to deduplicate
+ hash_func: Optional function to generate a hash for comparison
+
+ Returns:
+ List with duplicates removed
+ """
+ if hash_func:
+ seen = set()
+ result = []
+ for x in items:
+ item_hash = hash_func(x)
+ if item_hash not in seen:
+ seen.add(item_hash)
+ result.append(x)
+ return result
+ else:
+ # Using Python's built-in uniqueness for lists
+ return list(dict.fromkeys(items)) # Preserves order in Python 3.7+
+
+
+def extract_fields_from_form_data(
+ rest_form_data: dict[str, Any],
+ query_field_aliases: dict[str, Any],
+ query_mode: Optional[str],
+) -> tuple[list[Any], list[Any], list[Any]]:
+ """
+ Extract fields from form data based on aliases and query mode.
+
+ Args:
+ rest_form_data (dict): The residual form data.
+ query_field_aliases (dict): A mapping of key aliases.
+ query_mode (str): The query mode, e.g. 'aggregate' or 'raw'.
+
+ Returns:
+ tuple: A tuple of three lists: (columns, metrics, orderby)
+ """
+ columns = []
+ metrics = []
+ orderby = []
+
+ for key, value in rest_form_data.items():
+ if value is None:
+ continue
+
+ normalized_key = query_field_aliases.get(key, key)
+
+ if query_mode == "aggregate" and normalized_key == "columns":
+ continue
+ if query_mode == "raw" and normalized_key in ["groupby", "metrics"]:
+ continue
+
+ if normalized_key == "groupby":
+ normalized_key = "columns"
+
+ if normalized_key == "metrics":
+ metrics.extend(value if isinstance(value, list) else [value])
+ elif normalized_key == "columns":
+ columns.extend(value if isinstance(value, list) else [value])
+ elif normalized_key == "orderby":
+ orderby.extend(value if isinstance(value, list) else [value])
+
+ return columns, metrics, orderby
+
+
+def extract_query_fields(
+ form_data: dict[Any, Any], aliases: Any = None
+) -> dict[str, Any]:
+ """
+ Extract query fields from form data.
+
+ Args:
+ form_data: Form data residual
+ aliases: Query field aliases
+
+ Returns:
+ Dictionary with columns, metrics, and orderby fields
+ """
+ query_field_aliases = {
+ "metric": "metrics",
+ "metric_2": "metrics",
+ "secondary_metric": "metrics",
+ "x": "metrics",
+ "y": "metrics",
+ "size": "metrics",
+ "all_columns": "columns",
+ "series": "groupby",
+ "order_by_cols": "orderby",
+ }
+
+ if aliases:
+ query_field_aliases.update(aliases)
+ query_mode = form_data.pop("query_mode", None)
+ rest_form_data = form_data
+
+ columns, metrics, orderby = extract_fields_from_form_data(
+ rest_form_data, query_field_aliases, query_mode
+ )
+
+ result: dict[str, Any] = {
+ "columns": remove_duplicates(
+ [col for col in columns if col != ""], get_column_label
+ ),
+ "orderby": None,
+ }
+ if query_mode != "raw":
+ result["metrics"] = remove_duplicates(metrics, get_metric_label)
+ else:
+ result["metrics"] = None
+ if orderby:
+ result["orderby"] = []
+ for item in orderby:
+ if isinstance(item, str):
+ try:
+ result["orderby"].append(json.loads(item))
+ except Exception as err:
+ raise ValueError("Found invalid orderby options") from err
+ else:
+ result["orderby"].append(item)
+
+ return result
+
+
+def extract_extras(form_data: dict[str, Any]) -> dict[str, Any]:
+ """
+ Extract extras from the form_data analogous to the TS version.
+ """
+ applied_time_extras: dict[str, Any] = {}
+ filters: list[Any] = []
+ extras: dict[str, Any] = {}
+ extract: dict[str, Any] = {
+ "filters": filters,
+ "extras": extras,
+ "applied_time_extras": applied_time_extras,
+ }
+
+ # Mapping reserved columns to query field names
+ reserved_columns_to_query_field = {
+ "__time_range": "time_range",
+ "__time_col": "granularity_sqla",
+ "__time_grain": "time_grain_sqla",
+ "__granularity": "granularity",
+ }
+
+ extra_filters = form_data.get("extra_filters", [])
+ for filter_item in extra_filters:
+ col = filter_item.get("col")
+ # Check if filter col is reserved
+ if col in reserved_columns_to_query_field:
+ query_field = reserved_columns_to_query_field[col]
+ # Assign the filter value to the extract dict
+ extract[query_field] = filter_item.get("val")
+ applied_time_extras[col] = filter_item.get("val")
+ else:
+ filters.append(filter_item)
+
+ # SQL: set extra properties based on TS logic
+ if "time_grain_sqla" in form_data.keys() or "time_grain_sqla" in
extract.keys():
+ # If time_grain_sqla is set in form_data, use it
+ # Otherwise, use the value from extract
+ value = form_data.get("time_grain_sqla") or
form_data.get("time_grain_sqla")
+ extras["time_grain_sqla"] = value
+
+ extract["granularity"] = (
+ extract.get("granularity_sqla")
+ or form_data.get("granularity")
+ or form_data.get("granularity_sqla")
+ )
+ # Remove temporary keys
+ extract.pop("granularity_sqla", None)
+ extract.pop("time_grain_sqla", None)
+ if extract["granularity"] is None:
+ extract.pop("granularity", None)
+
+ return extract
+
+
+def is_defined(x: Any) -> bool:
+ """
+ Returns True if x is not None.
+ This is equivalent to checking that x is neither null nor undefined in TypeScript.
+ """
+ return x is not None
+
+
+def sanitize_clause(clause: str) -> str:
+ """
+ Sanitize a SQL clause. If the clause contains '--', append a newline.
+ Then wrap the clause in parentheses.
+ """
+ if clause is None:
+ return ""
+ sanitized_clause = clause
+ if "--" in clause:
+ sanitized_clause = clause + "\n"
Review Comment:
### Inadequate SQL Comment Sanitization <sub></sub>
<details>
<summary>Tell me more</summary>
###### What is the issue?
The SQL sanitization function doesn't properly escape SQL comments, making
it vulnerable to SQL injection attacks. Simply appending a newline after '--'
is insufficient protection.
###### Why this matters
An attacker could still inject malicious SQL code by using alternate comment
styles or other SQL injection techniques. This could lead to unauthorized data
access or manipulation.
###### Suggested change ∙ *Feature Preview*
Replace with proper SQL parameterization or a robust SQL escaping library.
Example:
```python
from sqlalchemy import text
def sanitize_clause(clause: str) -> str:
    if clause is None:
        return ""
    return text(clause).text
```
</details>
##########
superset/migrations/shared/migrate_viz/query_functions.py:
##########
@@ -0,0 +1,1507 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+import json
+import math
+from enum import Enum
+from typing import Any, Dict, List, Optional, Union
+
+
+class RollingType(Enum):
+ Mean = "mean"
+ Sum = "sum"
+ Std = "std"
+ Cumsum = "cumsum"
+
+
+class ComparisonType(Enum):
+ Values = "values"
+ Difference = "difference"
+ Percentage = "percentage"
+ Ratio = "ratio"
+
+
+class DatasourceType(Enum):
+ Table = "table"
+ Query = "query"
+ Dataset = "dataset"
+ SlTable = "sl_table"
+ SavedQuery = "saved_query"
+
+
+UNARY_OPERATORS = ["IS NOT NULL", "IS NULL"]
+BINARY_OPERATORS = [
+ "==",
+ "!=",
+ ">",
+ "<",
+ ">=",
+ "<=",
+ "ILIKE",
+ "LIKE",
+ "NOT LIKE",
+ "REGEX",
+ "TEMPORAL_RANGE",
+]
+SET_OPERATORS = ["IN", "NOT IN"]
+
+unary_operator_set = set(UNARY_OPERATORS)
+binary_operator_set = set(BINARY_OPERATORS)
+set_operator_set = set(SET_OPERATORS)
+
+
+class DatasourceKey:
+ def __init__(self, key: str):
+ id_str, type_str = key.split("__", 1)
+ self.id = int(id_str)
+ # Default to Table; if type_str is 'query', then use Query.
+ self.type = DatasourceType.Table
+ if type_str == "query":
+ self.type = DatasourceType.Query
+
+ def __str__(self) -> str:
+ return f"{self.id}__{self.type.value}"
+
+ def to_object(self) -> dict[str, Any]:
+ return {
+ "id": self.id,
+ "type": self.type.value,
+ }
+
+
+TIME_COMPARISON_SEPARATOR = "__"
+DTTM_ALIAS = "__timestamp"
+NO_TIME_RANGE = "No filter"
+
+EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS = [
+ "relative_start",
+ "relative_end",
+ "time_grain_sqla",
+]
+
+EXTRA_FORM_DATA_APPEND_KEYS = [
+ "adhoc_filters",
+ "filters",
+ "interactive_groupby",
+ "interactive_highlight",
+ "interactive_drilldown",
+ "custom_form_data",
+]
+
+EXTRA_FORM_DATA_OVERRIDE_REGULAR_MAPPINGS = {
+ "granularity": "granularity",
+ "granularity_sqla": "granularity",
+ "time_column": "time_column",
+ "time_grain": "time_grain",
+ "time_range": "time_range",
+}
+
+EXTRA_FORM_DATA_OVERRIDE_REGULAR_KEYS = list(
+ EXTRA_FORM_DATA_OVERRIDE_REGULAR_MAPPINGS.keys()
+)
+
+EXTRA_FORM_DATA_OVERRIDE_KEYS = (
+ EXTRA_FORM_DATA_OVERRIDE_REGULAR_KEYS + EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS
+)
+
+
+def ensure_is_array(value: Optional[Union[List[Any], Any]] = None) ->
List[Any]:
+ """
+ Ensure a nullable value input is a list. Useful when consolidating
+ input format from a select control.
+ """
+ if value is None:
+ return []
+ return value if isinstance(value, list) else [value]
+
+
+def is_empty(value: Any) -> bool:
+ """
+ A simple implementation similar to lodash's isEmpty.
+ Returns True if value is None or an empty collection.
+ """
+ if value is None:
+ return True
+ if isinstance(value, (list, dict, str, tuple, set)):
+ return len(value) == 0
+ return False
+
+
+def is_saved_metric(metric: Any) -> bool:
+ """Return True if metric is a saved metric (str)."""
+ return isinstance(metric, str)
+
+
+def is_adhoc_metric_simple(metric: Any) -> bool:
+ """Return True if metric dict is a simple adhoc metric."""
+ return (
+ not isinstance(metric, str)
+ and isinstance(metric, dict)
+ and metric.get("expressionType") == "SIMPLE"
+ )
+
+
+def is_adhoc_metric_sql(metric: Any) -> bool:
+ """Return True if metric dict is an SQL adhoc metric."""
+ return (
+ not isinstance(metric, str)
+ and isinstance(metric, dict)
+ and metric.get("expressionType") == "SQL"
+ )
+
+
+def is_query_form_metric(metric: Any) -> bool:
+ """Return True if metric is of any query form type."""
+ return (
+ is_saved_metric(metric)
+ or is_adhoc_metric_simple(metric)
+ or is_adhoc_metric_sql(metric)
+ )
+
+
+def get_metric_label(metric: Any | dict[str, Any]) -> Any | dict[str, Any]:
+ """
+ Get the label for a given metric.
+
+ Args:
+ metric (dict): The metric object.
+
+ Returns:
+ dict: The label of the metric.
+ """
+ if is_saved_metric(metric):
+ return metric
+ if "label" in metric and metric["label"]:
+ return metric["label"]
+ if is_adhoc_metric_simple(metric):
+ column_name = metric["column"].get("columnName") or
metric["column"].get(
+ "column_name"
+ )
+ return f"{metric['aggregate']}({column_name})"
+ return metric["sqlExpression"]
+
+
+def extract_extra_metrics(form_data: Dict[str, Any]) -> List[Any]:
+ """
+ Extract extra metrics from the form data.
+
+ Args:
+ form_data (Dict[str, Any]): The query form data.
+
+ Returns:
+ List[Any]: A list of extra metrics.
+ """
+ groupby = form_data.get("groupby", [])
+ timeseries_limit_metric = form_data.get("timeseries_limit_metric")
+ x_axis_sort = form_data.get("x_axis_sort")
+ metrics = form_data.get("metrics", [])
+
+ extra_metrics = []
+ limit_metric = (
+        ensure_is_array(timeseries_limit_metric)[0] if timeseries_limit_metric else None
+ )
+
+ if (
+ not groupby
+ and limit_metric
+ and get_metric_label(limit_metric) == x_axis_sort
+        and not any(get_metric_label(metric) == x_axis_sort for metric in metrics)
+ ):
+ extra_metrics.append(limit_metric)
+
+ return extra_metrics
+
+
+def get_metric_offsets_map(
+ form_data: dict[str, List[str]], query_object: dict[str, List[str]]
+) -> dict[str, Any]:
+ """
+ Return a dictionary mapping metric offset-labels to metric-labels.
+
+ Args:
+        form_data (Dict[str, List[str]]): The form data containing time comparisons.
+        query_object (Dict[str, List[str]]): The query object containing metrics.
+
+    Returns:
+        Dict[str, str]: A dictionary with offset-labels as keys and metric-labels
+ as values.
+ """
+ query_metrics = ensure_is_array(query_object.get("metrics", []))
+ time_offsets = ensure_is_array(form_data.get("time_compare", []))
+
+ metric_labels = [get_metric_label(metric) for metric in query_metrics]
+ metric_offset_map = {}
+
+ for metric in metric_labels:
+ for offset in time_offsets:
+ key = f"{metric}{TIME_COMPARISON_SEPARATOR}{offset}"
+ metric_offset_map[key] = metric
+
+ return metric_offset_map
+
+
+def is_time_comparison(form_data: dict[str, Any], query_object: dict[str, Any]) -> bool:
+ """
+ Determine if the query involves a time comparison.
+
+ Args:
+ form_data (dict): The form data containing query parameters.
+ query_object (dict): The query object.
+
+ Returns:
+ bool: True if it is a time comparison, False otherwise.
+ """
+ comparison_type = form_data.get("comparison_type")
+ metric_offset_map = get_metric_offsets_map(form_data, query_object)
+
+ return (
+ comparison_type in [ct.value for ct in ComparisonType]
+ and len(metric_offset_map) > 0
+ )
+
+
+def ensure_is_int(value: Any, default_value: Any = None) -> Any | float:
+ """
+ Convert the given value to an integer.
+ If conversion fails, returns default_value if provided,
+ otherwise returns NaN (as float('nan')).
+ """
+ try:
+ val = int(str(value))
+ except (ValueError, TypeError):
+ return default_value if default_value is not None else float("nan")
+ return val
+
+
+def is_physical_column(column: Any = None) -> bool:
+ """Return True if column is a physical column (string)."""
+ return isinstance(column, str)
+
+
+def is_adhoc_column(column: Any = None) -> bool:
+    """Return True if column is an adhoc column (object with SQL expression)."""
+    if not isinstance(column, dict):
+        return False
+    return (
+        column.get("sqlExpression") is not None
+        and column.get("label") is not None
+        and ("expressionType" not in column or column["expressionType"] == "SQL")
+    )
+
+
+def is_query_form_column(column: Any) -> bool:
+ """Return True if column is either physical or adhoc."""
+ return is_physical_column(column) or is_adhoc_column(column)
+
+
+def is_x_axis_set(form_data: dict[str, Any]) -> bool:
+ """Return True if the x_axis is specified in form_data."""
+ return is_query_form_column(form_data.get("x_axis"))
+
+
+def get_x_axis_column(form_data: dict[str, Any]) -> Optional[Any]:
+ """Return x_axis column."""
+ if not (form_data.get("granularity_sqla") or form_data.get("x_axis")):
+ return None
+
+ if is_x_axis_set(form_data):
+ return form_data.get("x_axis")
+
+ return DTTM_ALIAS
+
+
+def get_column_label(column: Any) -> Optional[str]:
+ """Return the string label for a column."""
+ if is_physical_column(column):
+ return column
+ if column and column.get("label"):
+ return column.get("label")
+ return column.get("sqlExpression", None)
+
+
+def get_x_axis_label(form_data: dict[str, Any]) -> Optional[str]:
+ """Return the x_axis label from form_data."""
+ if col := get_x_axis_column(form_data):
+ return get_column_label(col)
+ return None
+
+
+def time_compare_pivot_operator(
+ form_data: dict[str, Any], query_object: dict[str, Any]
+) -> Optional[dict[str, Any]]:
+ """
+ A post-processing factory function for pivot operations.
+
+ Args:
+ form_data: The form data containing configuration
+ query_object: The query object with series and columns information
+
+ Returns:
+ Dictionary with pivot operation configuration or None
+ """
+ metric_offset_map = get_metric_offsets_map(form_data, query_object)
+ x_axis_label = get_x_axis_label(form_data)
+ columns = (
+ query_object.get("series_columns")
+ if query_object.get("series_columns") is not None
+ else query_object.get("columns")
+ )
+
+ if is_time_comparison(form_data, query_object) and x_axis_label:
+ # Create aggregates dictionary from metric offset map
+        metrics = list(metric_offset_map.values()) + list(metric_offset_map.keys())
+ aggregates = {
+            metric: {"operator": "mean"}  # use 'mean' aggregates to avoid dropping NaN
+ for metric in metrics
+ }
+
+ return {
+ "operation": "pivot",
+ "options": {
+ "index": [x_axis_label],
+ "columns": [get_column_label(col) for col in
ensure_is_array(columns)],
+ "drop_missing_columns": not
form_data.get("show_empty_columns"),
+ "aggregates": aggregates,
+ },
+ }
+
+ return None
+
+
+def pivot_operator(
+ form_data: dict[str, Any], query_object: dict[str, Any]
+) -> Optional[dict[str, Any]]:
+ """
+ Construct a pivot operator configuration for post-processing.
+
+    This function extracts metric labels (including extra metrics) from the query object
+    and form data, and retrieves the x-axis label. If both an x-axis label and at
+    least one metric label are present, it builds a pivot configuration that sets
+ the index as the x-axis label, transforms the columns via get_column_label,
+ and creates dummy 'mean' aggregates for each metric.
+
+ Args:
+ form_data (dict): The form data containing query parameters.
+ query_object (dict): The base query object containing metrics
+ and column information.
+
+ Returns:
+ dict or None: A dict with the pivot operator configuration
+ if the conditions are met,
+ otherwise None.
+ """
+ metric_labels = [
+ *ensure_is_array(query_object.get("metrics", [])),
+ *extract_extra_metrics(form_data),
+ ]
+ metric_labels = [get_metric_label(metric) for metric in metric_labels]
Review Comment:
### Inefficient Metric Label Processing <sub></sub>
<details>
<summary>Tell me more</summary>
###### What is the issue?
Multiple list concatenations and comprehensions are performed sequentially
when processing metric labels.
###### Why this matters
Creating intermediate lists and then processing them again creates
unnecessary memory allocations and iterations over the data.
###### Suggested change ∙ *Feature Preview*
Combine operations into a single comprehension:
```python
metric_labels = [get_metric_label(metric) for metric in itertools.chain(
ensure_is_array(query_object.get("metrics", [])),
extract_extra_metrics(form_data)
)]
```
</details>
<sub>
💬 Looking for more details? Reply to this comment to chat with Korbit.
</sub>
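For what it's worth, a quick self-contained sanity check that the suggested single-pass `itertools.chain` version yields the same labels as the two-step original. The helpers here are simplified stand-ins for the real `get_metric_label`/`ensure_is_array`, and `extra_metrics` stands in for the result of `extract_extra_metrics(form_data)`:

```python
import itertools


def get_metric_label(metric):
    # simplified stand-in: saved metrics are plain strings,
    # adhoc metrics carry an explicit "label"
    return metric if isinstance(metric, str) else metric["label"]


def ensure_is_array(value=None):
    if value is None:
        return []
    return value if isinstance(value, list) else [value]


query_object = {"metrics": ["count", {"label": "SUM(num)"}]}
extra_metrics = [{"label": "MAX(num)"}]  # stand-in for extract_extra_metrics(form_data)

# original two-step version: build an intermediate list, then map over it
two_step = [*ensure_is_array(query_object.get("metrics", [])), *extra_metrics]
two_step = [get_metric_label(m) for m in two_step]

# suggested version: one comprehension over a lazy chain, no intermediate list
single_pass = [
    get_metric_label(m)
    for m in itertools.chain(
        ensure_is_array(query_object.get("metrics", [])), extra_metrics
    )
]

assert two_step == single_pass == ["count", "SUM(num)", "MAX(num)"]
```

The gain is mostly readability and one fewer intermediate list; for the handful of metrics a chart carries, the performance difference is negligible.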
##########
superset/migrations/shared/migrate_viz/query_functions.py:
##########
@@ -0,0 +1,1507 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+import json
+import math
+from enum import Enum
+from typing import Any, Dict, List, Optional, Union
+
+
+class RollingType(Enum):
+ Mean = "mean"
+ Sum = "sum"
+ Std = "std"
+ Cumsum = "cumsum"
+
+
+class ComparisonType(Enum):
+ Values = "values"
+ Difference = "difference"
+ Percentage = "percentage"
+ Ratio = "ratio"
+
+
+class DatasourceType(Enum):
+ Table = "table"
+ Query = "query"
+ Dataset = "dataset"
+ SlTable = "sl_table"
+ SavedQuery = "saved_query"
+
+
+UNARY_OPERATORS = ["IS NOT NULL", "IS NULL"]
+BINARY_OPERATORS = [
+ "==",
+ "!=",
+ ">",
+ "<",
+ ">=",
+ "<=",
+ "ILIKE",
+ "LIKE",
+ "NOT LIKE",
+ "REGEX",
+ "TEMPORAL_RANGE",
+]
+SET_OPERATORS = ["IN", "NOT IN"]
+
+unary_operator_set = set(UNARY_OPERATORS)
+binary_operator_set = set(BINARY_OPERATORS)
+set_operator_set = set(SET_OPERATORS)
+
+
+class DatasourceKey:
+ def __init__(self, key: str):
+ id_str, type_str = key.split("__", 1)
+ self.id = int(id_str)
+ # Default to Table; if type_str is 'query', then use Query.
+ self.type = DatasourceType.Table
+ if type_str == "query":
+ self.type = DatasourceType.Query
+
+ def __str__(self) -> str:
+ return f"{self.id}__{self.type.value}"
+
+ def to_object(self) -> dict[str, Any]:
+ return {
+ "id": self.id,
+ "type": self.type.value,
+ }
+
+
+TIME_COMPARISON_SEPARATOR = "__"
+DTTM_ALIAS = "__timestamp"
+NO_TIME_RANGE = "No filter"
+
+EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS = [
+ "relative_start",
+ "relative_end",
+ "time_grain_sqla",
+]
+
+EXTRA_FORM_DATA_APPEND_KEYS = [
+ "adhoc_filters",
+ "filters",
+ "interactive_groupby",
+ "interactive_highlight",
+ "interactive_drilldown",
+ "custom_form_data",
+]
+
+EXTRA_FORM_DATA_OVERRIDE_REGULAR_MAPPINGS = {
+ "granularity": "granularity",
+ "granularity_sqla": "granularity",
+ "time_column": "time_column",
+ "time_grain": "time_grain",
+ "time_range": "time_range",
+}
+
+EXTRA_FORM_DATA_OVERRIDE_REGULAR_KEYS = list(
+ EXTRA_FORM_DATA_OVERRIDE_REGULAR_MAPPINGS.keys()
+)
+
+EXTRA_FORM_DATA_OVERRIDE_KEYS = (
+ EXTRA_FORM_DATA_OVERRIDE_REGULAR_KEYS + EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS
+)
+
+
+def ensure_is_array(value: Optional[Union[List[Any], Any]] = None) -> List[Any]:
+ """
+ Ensure a nullable value input is a list. Useful when consolidating
+ input format from a select control.
+ """
+ if value is None:
+ return []
+ return value if isinstance(value, list) else [value]
+
+
+def is_empty(value: Any) -> bool:
+ """
+ A simple implementation similar to lodash's isEmpty.
+ Returns True if value is None or an empty collection.
+ """
+ if value is None:
+ return True
+ if isinstance(value, (list, dict, str, tuple, set)):
+ return len(value) == 0
+ return False
+
+
+def is_saved_metric(metric: Any) -> bool:
+ """Return True if metric is a saved metric (str)."""
+ return isinstance(metric, str)
+
+
+def is_adhoc_metric_simple(metric: Any) -> bool:
+ """Return True if metric dict is a simple adhoc metric."""
+ return (
+ not isinstance(metric, str)
+ and isinstance(metric, dict)
+ and metric.get("expressionType") == "SIMPLE"
+ )
+
+
+def is_adhoc_metric_sql(metric: Any) -> bool:
+ """Return True if metric dict is an SQL adhoc metric."""
+ return (
+ not isinstance(metric, str)
+ and isinstance(metric, dict)
+ and metric.get("expressionType") == "SQL"
+ )
+
+
+def is_query_form_metric(metric: Any) -> bool:
+ """Return True if metric is of any query form type."""
+ return (
+ is_saved_metric(metric)
+ or is_adhoc_metric_simple(metric)
+ or is_adhoc_metric_sql(metric)
+ )
+
+
+def get_metric_label(metric: str | dict[str, Any]) -> str:
+    """
+    Get the label for a given metric.
+
+    Args:
+        metric (str | dict): The metric object.
+
+    Returns:
+        str: The label of the metric.
+    """
+ if is_saved_metric(metric):
+ return metric
+ if "label" in metric and metric["label"]:
+ return metric["label"]
+ if is_adhoc_metric_simple(metric):
+        column_name = metric["column"].get("columnName") or metric["column"].get(
+ "column_name"
+ )
+ return f"{metric['aggregate']}({column_name})"
+ return metric["sqlExpression"]
+
+
+def extract_extra_metrics(form_data: Dict[str, Any]) -> List[Any]:
+ """
+ Extract extra metrics from the form data.
+
+ Args:
+ form_data (Dict[str, Any]): The query form data.
+
+ Returns:
+ List[Any]: A list of extra metrics.
+ """
+ groupby = form_data.get("groupby", [])
+ timeseries_limit_metric = form_data.get("timeseries_limit_metric")
+ x_axis_sort = form_data.get("x_axis_sort")
+ metrics = form_data.get("metrics", [])
+
+ extra_metrics = []
+ limit_metric = (
+        ensure_is_array(timeseries_limit_metric)[0] if timeseries_limit_metric else None
+ )
+
+ if (
+ not groupby
+ and limit_metric
+ and get_metric_label(limit_metric) == x_axis_sort
+        and not any(get_metric_label(metric) == x_axis_sort for metric in metrics)
+ ):
+ extra_metrics.append(limit_metric)
+
+ return extra_metrics
+
+
+def get_metric_offsets_map(
+ form_data: dict[str, List[str]], query_object: dict[str, List[str]]
+) -> dict[str, Any]:
+ """
+ Return a dictionary mapping metric offset-labels to metric-labels.
+
+ Args:
+        form_data (Dict[str, List[str]]): The form data containing time comparisons.
+        query_object (Dict[str, List[str]]): The query object containing metrics.
+
+    Returns:
+        Dict[str, str]: A dictionary with offset-labels as keys and metric-labels
+ as values.
+ """
+ query_metrics = ensure_is_array(query_object.get("metrics", []))
+ time_offsets = ensure_is_array(form_data.get("time_compare", []))
+
+ metric_labels = [get_metric_label(metric) for metric in query_metrics]
+ metric_offset_map = {}
+
+ for metric in metric_labels:
+ for offset in time_offsets:
+ key = f"{metric}{TIME_COMPARISON_SEPARATOR}{offset}"
+ metric_offset_map[key] = metric
+
+ return metric_offset_map
+
+
+def is_time_comparison(form_data: dict[str, Any], query_object: dict[str, Any]) -> bool:
+ """
+ Determine if the query involves a time comparison.
+
+ Args:
+ form_data (dict): The form data containing query parameters.
+ query_object (dict): The query object.
+
+ Returns:
+ bool: True if it is a time comparison, False otherwise.
+ """
+ comparison_type = form_data.get("comparison_type")
+ metric_offset_map = get_metric_offsets_map(form_data, query_object)
+
+ return (
+ comparison_type in [ct.value for ct in ComparisonType]
+ and len(metric_offset_map) > 0
+ )
+
+
+def ensure_is_int(value: Any, default_value: Any = None) -> Any | float:
+ """
+ Convert the given value to an integer.
+ If conversion fails, returns default_value if provided,
+ otherwise returns NaN (as float('nan')).
+ """
+ try:
+ val = int(str(value))
+ except (ValueError, TypeError):
+ return default_value if default_value is not None else float("nan")
+ return val
+
+
+def is_physical_column(column: Any = None) -> bool:
+ """Return True if column is a physical column (string)."""
+ return isinstance(column, str)
+
+
+def is_adhoc_column(column: Any = None) -> bool:
+    """Return True if column is an adhoc column (object with SQL expression)."""
+    if not isinstance(column, dict):
+        return False
+    return (
+        column.get("sqlExpression") is not None
+        and column.get("label") is not None
+        and ("expressionType" not in column or column["expressionType"] == "SQL")
+    )
+
+
+def is_query_form_column(column: Any) -> bool:
+ """Return True if column is either physical or adhoc."""
+ return is_physical_column(column) or is_adhoc_column(column)
+
+
+def is_x_axis_set(form_data: dict[str, Any]) -> bool:
+ """Return True if the x_axis is specified in form_data."""
+ return is_query_form_column(form_data.get("x_axis"))
+
+
+def get_x_axis_column(form_data: dict[str, Any]) -> Optional[Any]:
+ """Return x_axis column."""
+ if not (form_data.get("granularity_sqla") or form_data.get("x_axis")):
+ return None
+
+ if is_x_axis_set(form_data):
+ return form_data.get("x_axis")
+
+ return DTTM_ALIAS
+
+
+def get_column_label(column: Any) -> Optional[str]:
+ """Return the string label for a column."""
+ if is_physical_column(column):
+ return column
+ if column and column.get("label"):
+ return column.get("label")
+ return column.get("sqlExpression", None)
+
+
+def get_x_axis_label(form_data: dict[str, Any]) -> Optional[str]:
+ """Return the x_axis label from form_data."""
+ if col := get_x_axis_column(form_data):
+ return get_column_label(col)
+ return None
+
+
+def time_compare_pivot_operator(
+ form_data: dict[str, Any], query_object: dict[str, Any]
+) -> Optional[dict[str, Any]]:
+ """
+ A post-processing factory function for pivot operations.
+
+ Args:
+ form_data: The form data containing configuration
+ query_object: The query object with series and columns information
+
+ Returns:
+ Dictionary with pivot operation configuration or None
+ """
+ metric_offset_map = get_metric_offsets_map(form_data, query_object)
+ x_axis_label = get_x_axis_label(form_data)
+ columns = (
+ query_object.get("series_columns")
+ if query_object.get("series_columns") is not None
+ else query_object.get("columns")
+ )
+
+ if is_time_comparison(form_data, query_object) and x_axis_label:
+ # Create aggregates dictionary from metric offset map
+        metrics = list(metric_offset_map.values()) + list(metric_offset_map.keys())
+ aggregates = {
+            metric: {"operator": "mean"}  # use 'mean' aggregates to avoid dropping NaN
+ for metric in metrics
+ }
+
+ return {
+ "operation": "pivot",
+ "options": {
+ "index": [x_axis_label],
+ "columns": [get_column_label(col) for col in
ensure_is_array(columns)],
+ "drop_missing_columns": not
form_data.get("show_empty_columns"),
+ "aggregates": aggregates,
+ },
+ }
+
+ return None
+
+
+def pivot_operator(
+ form_data: dict[str, Any], query_object: dict[str, Any]
+) -> Optional[dict[str, Any]]:
+ """
+ Construct a pivot operator configuration for post-processing.
+
+    This function extracts metric labels (including extra metrics) from the query object
+    and form data, and retrieves the x-axis label. If both an x-axis label and at
+    least one metric label are present, it builds a pivot configuration that sets
+ the index as the x-axis label, transforms the columns via get_column_label,
+ and creates dummy 'mean' aggregates for each metric.
+
+ Args:
+ form_data (dict): The form data containing query parameters.
+ query_object (dict): The base query object containing metrics
+ and column information.
+
+ Returns:
+ dict or None: A dict with the pivot operator configuration
+ if the conditions are met,
+ otherwise None.
+ """
+ metric_labels = [
+ *ensure_is_array(query_object.get("metrics", [])),
+ *extract_extra_metrics(form_data),
+ ]
+ metric_labels = [get_metric_label(metric) for metric in metric_labels]
+ x_axis_label = get_x_axis_label(form_data)
+ columns = (
+ query_object.get("series_columns")
+ if query_object.get("series_columns") is not None
+ else query_object.get("columns")
+ )
+
+ if x_axis_label and metric_labels:
+ cols_list = [get_column_label(col) for col in ensure_is_array(columns)]
+ return {
+ "operation": "pivot",
+ "options": {
+ "index": [x_axis_label],
+ "columns": cols_list,
+                # Create 'dummy' mean aggregates to assign cell values in pivot table
+ # using the 'mean' aggregates to avoid dropping NaN values
+ "aggregates": {
+ metric: {"operator": "mean"} for metric in metric_labels
+ },
+ "drop_missing_columns": not
form_data.get("show_empty_columns"),
+ },
+ }
+
+ return None
+
+
+def normalize_order_by(query_object: dict[str, Any]) -> dict[str, Any]:
+ """
+ Normalize the orderby clause in the query object.
+
+ If the "orderby" key already contains a valid clause (a list whose first
element
+ is a list of two elements, where the first element is truthy and the
second a bool),
+ the original query_object is returned. Otherwise, the function creates a
copy of
+ query_object, removes invalid orderby-related keys, and sets an orderby
clause based
+ on available keys: "series_limit_metric", "legacy_order_by", or the first
metric in
+ the "metrics" list. The sorting order is determined by the negation of
"order_desc".
+
+ Args:
+        query_object (dict): The query object containing orderby and related keys.
+
+ Returns:
+ dict: A modified query object with a normalized "orderby" clause.
+ """
+ if (
+ isinstance(query_object.get("orderby"), list)
+ and len(query_object.get("orderby", [])) > 0
+ ):
+ # ensure a valid orderby clause
+ orderby_clause = query_object["orderby"][0]
+ if (
+ isinstance(orderby_clause, list)
+ and len(orderby_clause) == 2
+ and orderby_clause[0]
+ and isinstance(orderby_clause[1], bool)
+ ):
+ return query_object
+
+ # remove invalid orderby keys from a copy
+ clone_query_object = query_object.copy()
+ clone_query_object.pop("series_limit_metric", None)
+ clone_query_object.pop("legacy_order_by", None)
+ clone_query_object.pop("order_desc", None)
+ clone_query_object.pop("orderby", None)
+
+ is_asc = not query_object.get("order_desc", False)
+
+ if query_object.get("series_limit_metric") is not None and
query_object.get(
+ "series_limit_metric"
+ ):
+ return {
+ **clone_query_object,
+ "orderby": [[query_object["series_limit_metric"], is_asc]],
+ }
+
+ # todo: Removed `legacy_order_by` after refactoring
+ if query_object.get("legacy_order_by") is not None and query_object.get(
+ "legacy_order_by"
+ ):
+ return {
+ **clone_query_object,
+ "orderby": [[query_object["legacy_order_by"], is_asc]],
+ }
+
+ if (
+ isinstance(query_object.get("metrics"), list)
+ and len(query_object.get("metrics", [])) > 0
+ ):
+        return {**clone_query_object, "orderby": [[query_object["metrics"][0], is_asc]]}
+
+ return clone_query_object
+
+
+def remove_duplicates(items: Any, hash_func: Any = None) -> list[Any]:
+ """
+ Remove duplicate items from a list.
+
+ Args:
+ items: List of items to deduplicate
+ hash_func: Optional function to generate a hash for comparison
+
+ Returns:
+ List with duplicates removed
+ """
+ if hash_func:
+ seen = set()
+ result = []
+ for x in items:
+ item_hash = hash_func(x)
+ if item_hash not in seen:
+ seen.add(item_hash)
+ result.append(x)
+ return result
+ else:
+ # Using Python's built-in uniqueness for lists
+ return list(dict.fromkeys(items)) # Preserves order in Python 3.7+
+
+
+def extract_fields_from_form_data(
+ rest_form_data: dict[str, Any],
+ query_field_aliases: dict[str, Any],
+ query_mode: Any | str,
+) -> tuple[list[Any], list[Any], list[Any]]:
+ """
+ Extract fields from form data based on aliases and query mode.
+
+ Args:
+ rest_form_data (dict): The residual form data.
+ query_field_aliases (dict): A mapping of key aliases.
+ query_mode (str): The query mode, e.g. 'aggregate' or 'raw'.
+
+ Returns:
+ tuple: A tuple of three lists: (columns, metrics, orderby)
+ """
+ columns = []
+ metrics = []
+ orderby = []
+
+ for key, value in rest_form_data.items():
+ if value is None:
+ continue
+
+ normalized_key = query_field_aliases.get(key, key)
+
+ if query_mode == "aggregate" and normalized_key == "columns":
+ continue
+ if query_mode == "raw" and normalized_key in ["groupby", "metrics"]:
+ continue
+
+ if normalized_key == "groupby":
+ normalized_key = "columns"
+
+ if normalized_key == "metrics":
+ metrics.extend(value if isinstance(value, list) else [value])
+ elif normalized_key == "columns":
+ columns.extend(value if isinstance(value, list) else [value])
+ elif normalized_key == "orderby":
+ orderby.extend(value if isinstance(value, list) else [value])
+
+ return columns, metrics, orderby
+
+
+def extract_query_fields(
+ form_data: dict[Any, Any], aliases: Any = None
+) -> dict[str, Any]:
+ """
+ Extract query fields from form data.
+
+ Args:
+ form_data: Form data residual
+ aliases: Query field aliases
+
+ Returns:
+ Dictionary with columns, metrics, and orderby fields
+ """
+ query_field_aliases = {
+ "metric": "metrics",
+ "metric_2": "metrics",
+ "secondary_metric": "metrics",
+ "x": "metrics",
+ "y": "metrics",
+ "size": "metrics",
+ "all_columns": "columns",
+ "series": "groupby",
+ "order_by_cols": "orderby",
+ }
+
+ if aliases:
+ query_field_aliases.update(aliases)
+ query_mode = form_data.pop("query_mode", None)
+ rest_form_data = form_data
+
+ columns, metrics, orderby = extract_fields_from_form_data(
+ rest_form_data, query_field_aliases, query_mode
+ )
+
+ result: dict[str, Any] = {
+ "columns": remove_duplicates(
+ [col for col in columns if col != ""], get_column_label
+ ),
+ "orderby": None,
+ }
+ if query_mode != "raw":
+ result["metrics"] = remove_duplicates(metrics, get_metric_label)
+ else:
+ result["metrics"] = None
+ if orderby:
+ result["orderby"] = []
+ for item in orderby:
+ if isinstance(item, str):
+ try:
+ result["orderby"].append(json.loads(item))
+ except Exception as err:
+ raise ValueError("Found invalid orderby options") from err
+ else:
+ result["orderby"].append(item)
+
+ return result
+
+
+def extract_extras(form_data: dict[str, Any]) -> dict[str, Any]:
+ """
+ Extract extras from the form_data analogous to the TS version.
+ """
+ applied_time_extras: dict[str, Any] = {}
+ filters: list[Any] = []
+ extras: dict[str, Any] = {}
+ extract: dict[str, Any] = {
+ "filters": filters,
+ "extras": extras,
+ "applied_time_extras": applied_time_extras,
+ }
+
+ # Mapping reserved columns to query field names
+ reserved_columns_to_query_field = {
+ "__time_range": "time_range",
+ "__time_col": "granularity_sqla",
+ "__time_grain": "time_grain_sqla",
+ "__granularity": "granularity",
+ }
+
+ extra_filters = form_data.get("extra_filters", [])
+ for filter_item in extra_filters:
+ col = filter_item.get("col")
+ # Check if filter col is reserved
+ if col in reserved_columns_to_query_field:
+ query_field = reserved_columns_to_query_field[col]
+ # Assign the filter value to the extract dict
+ extract[query_field] = filter_item.get("val")
+ applied_time_extras[col] = filter_item.get("val")
+ else:
+ filters.append(filter_item)
+
+ # SQL: set extra properties based on TS logic
+ if "time_grain_sqla" in form_data.keys() or "time_grain_sqla" in
extract.keys():
+ # If time_grain_sqla is set in form_data, use it
+ # Otherwise, use the value from extract
+ value = form_data.get("time_grain_sqla") or
form_data.get("time_grain_sqla")
+ extras["time_grain_sqla"] = value
+
+ extract["granularity"] = (
+ extract.get("granularity_sqla")
+ or form_data.get("granularity")
+ or form_data.get("granularity_sqla")
+ )
+ # Remove temporary keys
+ extract.pop("granularity_sqla", None)
+ extract.pop("time_grain_sqla", None)
+ if extract["granularity"] is None:
+ extract.pop("granularity", None)
+
+ return extract
+
+
+def is_defined(x: Any) -> bool:
+ """
+ Returns True if x is not None.
+    This is equivalent to checking that x is neither null nor undefined in TypeScript.
+ """
+ return x is not None
+
+
+def sanitize_clause(clause: str) -> str:
+ """
+ Sanitize a SQL clause. If the clause contains '--', append a newline.
+ Then wrap the clause in parentheses.
+ """
+ if clause is None:
+ return ""
+ sanitized_clause = clause
+ if "--" in clause:
+ sanitized_clause = clause + "\n"
+ return f"({sanitized_clause})"
+
+
+def is_unary_operator(operator: Any | str) -> bool:
+ """Return True if operator is unary."""
+ return operator in unary_operator_set
+
+
+def is_binary_operator(operator: Any | str) -> bool:
+ """Return True if operator is binary."""
+ return operator in binary_operator_set
+
+
+def is_set_operator(operator: Any | str) -> bool:
+ """Return True if operator is a set operator."""
+ return operator in set_operator_set
+
+
+def is_unary_adhoc_filter(filter_item: dict[str, Any]) -> bool:
+ """Return True if the filter's operator is unary."""
+ return is_unary_operator(filter_item.get("operator"))
+
+
+def is_binary_adhoc_filter(filter_item: dict[str, Any]) -> bool:
+ """Return True if the filter's operator is binary."""
+ return is_binary_operator(filter_item.get("operator"))
+
+
+def convert_filter(filter_item: dict[str, Any]) -> dict[str, Any]:
+ """Convert an adhoc filter to a query clause dict."""
+ subject = filter_item.get("subject")
+ if is_unary_adhoc_filter(filter_item):
+ operator = filter_item.get("operator")
+ return {"col": subject, "op": operator}
+ if is_binary_adhoc_filter(filter_item):
+ operator = filter_item.get("operator")
+ val = filter_item.get("comparator")
+ result = {"col": subject, "op": operator}
+ if val is not None:
+ result["val"] = val
+ return result
+ operator = filter_item.get("operator")
+ val = filter_item.get("comparator")
+ result = {"col": subject, "op": operator}
+ if val is not None:
+ result["val"] = val
+ return result
+
+
+def is_simple_adhoc_filter(filter_item: dict[str, Any]) -> bool:
+ """Return True if the filter is a simple adhoc filter."""
+ return filter_item.get("expressionType") == "SIMPLE"
+
+
+def process_filters(form_data: dict[str, Any]) -> dict[str, Any]:
+ """
+ Process filters from form_data:
+ - Split adhoc_filters according to clause and expression type.
+ - Build simple filter and freeform SQL clauses for WHERE/HAVING.
+ - Place freeform clauses into extras.
+ """
+ adhoc_filters = form_data.get("adhoc_filters", [])
+ extras = form_data.get("extras", {})
+ filters_list = form_data.get("filters", [])
+
+ # Copy filters_list into simple_where
+ simple_where = filters_list[:]
+ freeform_where = []
+ freeform_having = []
+
+ if where := form_data.get("where"):
+ freeform_where.append(where)
+
+ for filter_item in adhoc_filters:
+ clause = filter_item.get("clause")
+ if is_simple_adhoc_filter(filter_item):
+ filter_clause = convert_filter(filter_item)
+ if clause == "WHERE":
+ simple_where.append(filter_clause)
+ else:
+ sql_expression = filter_item.get("sqlExpression")
+ if clause == "WHERE":
+ freeform_where.append(sql_expression)
+ else:
+ freeform_having.append(sql_expression)
+
+ extras["having"] = " AND ".join([sanitize_clause(s) for s in
freeform_having])
+ extras["where"] = " AND ".join([sanitize_clause(s) for s in
freeform_where])
+
+ return {
+ "filters": simple_where,
+ "extras": extras,
+ }
+
+
+def override_extra_form_data(
+ query_object: dict[str, Any], override_form_data: dict[str, Any]
+) -> dict[str, Any]:
+ """
+ Override parts of the query_object with values from override_form_data.
+
+ Mimics the behavior of the TypeScript function:
+ - For keys in EXTRA_FORM_DATA_OVERRIDE_REGULAR_MAPPINGS,
+ if set in override_form_data, assign the value in query_object
+ under the mapped target key.
+ - For keys in EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS,
+ if present in override_form_data, add them to query_object['extras'].
+ """
+ # Create a copy of the query object
+ overridden_form_data = query_object.copy()
+    # Ensure extras is a mutable copy of what's in query_object (or an empty dict)
+ overridden_extras = overridden_form_data.get("extras", {}).copy()
+
+ # Process regular mappings
+ for key, target in EXTRA_FORM_DATA_OVERRIDE_REGULAR_MAPPINGS.items():
+ value = override_form_data.get(key)
+ if value is not None:
+ overridden_form_data[target] = value
+
+ # Process extra keys
+ for key in EXTRA_FORM_DATA_OVERRIDE_EXTRA_KEYS:
+ if key in override_form_data:
+ overridden_extras[key] = override_form_data[key]
+
+ if overridden_extras:
+ overridden_form_data["extras"] = overridden_extras
+
+ return overridden_form_data
+
+
+def build_query_object(
+ form_data: dict[str, Any], query_fields: Any = None
+) -> dict[str, Any]:
+ """
+ Build a query object from form data.
+
+ Args:
+ form_data: Dictionary containing form data
+ query_fields: Optional query field aliases
+
+ Returns:
+ Dictionary representing the query object
+ """
+ # Extract fields from form_data with defaults
+ annotation_layers = form_data.get("annotation_layers", [])
+ extra_form_data = form_data.get("extra_form_data", {})
+ time_range = form_data.get("time_range")
+ since = form_data.get("since")
+ until = form_data.get("until")
+ row_limit = form_data.get("row_limit")
+ row_offset = form_data.get("row_offset")
+ order_desc = form_data.get("order_desc")
+ limit = form_data.get("limit")
+ timeseries_limit_metric = form_data.get("timeseries_limit_metric")
+ granularity = form_data.get("granularity")
+ url_params = form_data.get("url_params", {})
+ custom_params = form_data.get("custom_params", {})
+ series_columns = form_data.get("series_columns")
+ series_limit = form_data.get("series_limit")
+ series_limit_metric = form_data.get("series_limit_metric")
+
+ # Create residual_form_data by removing extracted fields
+ residual_form_data = {
+ k: v
+ for k, v in form_data.items()
+ if k
+ not in [
+ "annotation_layers",
+ "extra_form_data",
+ "time_range",
+ "since",
+ "until",
+ "row_limit",
+ "row_offset",
+ "order_desc",
+ "limit",
+ "timeseries_limit_metric",
+ "granularity",
+ "url_params",
+ "custom_params",
+ "series_columns",
+ "series_limit",
+ "series_limit_metric",
+ ]
+ }
Review Comment:
### Inline Long Exclusion List <sub></sub>
<details>
<summary>Tell me more</summary>
###### What is the issue?
Long inline list of excluded keys makes the dictionary comprehension hard to
read and maintain.
###### Why this matters
Embedding a long list directly in a dictionary comprehension forces readers
to scroll horizontally and makes the logic harder to follow.
###### Suggested change ∙ *Feature Preview*
```python
EXCLUDED_FORM_DATA_KEYS = [
"annotation_layers",
"extra_form_data",
"time_range",
"since",
"until",
"row_limit",
"row_offset",
"order_desc",
"limit",
"timeseries_limit_metric",
"granularity",
"url_params",
"custom_params",
"series_columns",
"series_limit",
"series_limit_metric",
]
residual_form_data = {
    k: v for k, v in form_data.items() if k not in EXCLUDED_FORM_DATA_KEYS
}
```
</details>
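
A small variation on the suggested refactor: hoisting the keys into a module-level `frozenset` makes membership tests O(1) and signals that the collection is immutable. The sample `form_data` below is illustrative:

```python
# Hypothetical module-level constant; a frozenset gives O(1) membership tests.
EXCLUDED_FORM_DATA_KEYS = frozenset({
    "annotation_layers", "extra_form_data", "time_range", "since", "until",
    "row_limit", "row_offset", "order_desc", "limit",
    "timeseries_limit_metric", "granularity", "url_params", "custom_params",
    "series_columns", "series_limit", "series_limit_metric",
})

# Illustrative form_data: excluded keys are dropped, everything else survives.
form_data = {"viz_type": "table", "metrics": ["count"], "row_limit": 100}
residual_form_data = {
    k: v for k, v in form_data.items() if k not in EXCLUDED_FORM_DATA_KEYS
}
```

Here `row_limit` is filtered out while `viz_type` and `metrics` remain in `residual_form_data`.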
<sub>
💬 Looking for more details? Reply to this comment to chat with Korbit.
</sub>
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]