> What would you actually do with it?

I am one of the users of these do-it-yourself functions, and I use them in my ETL pipelines heavily.
For me, data gets loaded into a staging table with all columns as text, and I run a whole bunch of validation queries on the data before it moves to the next stage in the pipeline, a strongly typed staging table, where more validations are performed. Currently I check each column with my custom can_convert_sometype(text) functions, and if a row has any column that cannot convert, a boolean is set so that the row is not moved to the strongly typed table (thus avoiding the cast for those rows).

For this ETL process I need to give users feedback about why specific rows failed to be processed, so each of those validations also logs an error message for every row that fails it. That is why it's a two-step process for me currently. I would love a better way to handle this kind of work, though, because my plpgsql functions using exception blocks are not exactly great for performance (I sketch the pattern at the end of this message).

>> Similar features are implemented in:
>> - SQL Server (as TRY_CONVERT)
>> - Oracle (as CONVERT([val] DEFAULT [expr] ON CONVERSION ERROR))

> Somehow, I don't think those have the semantics of what you suggest here.

Agreed that they aren't exactly the same feature, but I would very much love the ability both to ask "will this cast fail?" and to say "try the cast, but if it fails substitute this value instead of erroring". Both have uses IMO, and while built-in is_castable() functions would be great, I just want to express my desire for something like the SQL Server or Oracle feature above as well.
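To make the above concrete, here is a minimal sketch of the kind of do-it-yourself validator I mean. The name can_convert_date and the target type are just placeholders; the pattern is the same for every type I validate. The EXCEPTION block is the performance problem: a plpgsql block with an EXCEPTION clause has to set up a subtransaction on every call so the failed cast can be rolled back.

CREATE OR REPLACE FUNCTION can_convert_date(p_val text)
RETURNS boolean
LANGUAGE plpgsql
STABLE                        -- text->date casting depends on DateStyle
AS $$
BEGIN
    PERFORM p_val::date;      -- attempt the cast, discard the result
    RETURN true;
EXCEPTION WHEN others THEN    -- any conversion failure lands here
    RETURN false;
END;
$$;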
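And a rough illustration of the two-step flow built around it; all table and column names here are invented for the example:

-- Raw staging: everything text, plus a flag for rows that must not be cast.
CREATE TABLE staging_raw (
    id       bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    order_dt text,
    is_valid boolean NOT NULL DEFAULT true
);

-- Per-row error messages reported back to the user.
CREATE TABLE staging_errors (
    raw_id  bigint,
    message text
);

-- Strongly typed staging table that valid rows move into.
CREATE TABLE staging_typed (
    id       bigint PRIMARY KEY,
    order_dt date
);

-- Step 1: flag failing rows and log a message for each in one pass.
WITH bad AS (
    UPDATE staging_raw
    SET    is_valid = false
    WHERE  NOT can_convert_date(order_dt)
    RETURNING id, order_dt
)
INSERT INTO staging_errors (raw_id, message)
SELECT id, 'order_dt is not a valid date: ' || order_dt
FROM   bad;

-- Step 2: only rows that passed every check are cast and moved on.
INSERT INTO staging_typed (id, order_dt)
SELECT id, order_dt::date
FROM   staging_raw
WHERE  is_valid;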
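For the second behavior, this is roughly the helper I wish were built in: try the cast and substitute a caller-supplied default on failure. try_cast_date is an invented name, and this emulation pays the same subtransaction cost as the validator above, which is exactly why a native version would be so welcome.

CREATE OR REPLACE FUNCTION try_cast_date(p_val text, p_default date)
RETURNS date
LANGUAGE plpgsql
STABLE
AS $$
BEGIN
    RETURN p_val::date;
EXCEPTION WHEN others THEN
    RETURN p_default;         -- on conversion failure, never error
END;
$$;

-- Usage: bad values come back as the default instead of aborting the query.
SELECT try_cast_date(order_dt, NULL) AS order_dt
FROM   staging_raw;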