ik8 commented on PR #50536: URL: https://github.com/apache/spark/pull/50536#issuecomment-2798588230
> > This is already implemented [here](https://spark.apache.org/docs/latest/sql-ref-syntax-qry-select.html):
> >
> > ```
> > When spark.sql.parser.quotedRegexColumnNames is true,
> > quoted identifiers (using backticks) in SELECT statement are interpreted as
> > regular expressions and SELECT statement can take regex-based column specification.
> > For example, below SQL will only take column c:
> > SELECT `(a|b)?+.+` FROM (SELECT 1 as a, 2 as b, 3 as c)
> > ```
>
> @ik8 I don’t think this is the same. Imagine that you have a dataset with 200 columns and you want to select all but 20 of them; how would you do that? It would look really complex.
>
> I think `SELECT ... EXCEPT` is different from regex-based `SELECT`.

@Gschiavon either way you have to spell out the columns you don't want to select. I think the `EXCEPT` syntax in your PR is a good option to have; I just wanted to let you know that regex-based column exclusion is already supported, as sketched below.
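For illustration (not part of the original thread), here is roughly what excluding a few columns with the regex approach looks like. The table name `my_table` and the column names are made up; the pattern just extends the documented `(a|b)?+.+` example:

```sql
-- Enable regex interpretation of backtick-quoted identifiers in SELECT
SET spark.sql.parser.quotedRegexColumnNames = true;

-- Hypothetical table: returns every column except col_a and col_b.
-- The possessive group consumes an excluded name so `.+` cannot match it;
-- any other column name falls through the empty group and is kept.
SELECT `(col_a|col_b)?+.+` FROM my_table;
```

Note that every excluded column still has to appear inside the alternation, so for the 200-column / 20-exclusion case the pattern grows with the exclusion list, which is the complexity @Gschiavon is pointing at.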