zxs1633079383 opened a new issue, #36978:
URL: https://github.com/apache/shardingsphere/issues/36978

   # PostgreSQL Custom Types Parsing Issue in ShardingSphere-Proxy
   
   ## Description
   
   I am using **ShardingSphere-Proxy version 5.5.2** with PostgreSQL.  
   My tables include data types that the PostgreSQL JDBC driver does not map to a standard SQL type, for example `varbit`.  
   
   When `TableMetaData` is loaded, these columns are reported as JDBC type `1111` (i.e., `Types.OTHER`), so the proxy falls back to the default JSON parser for them. This makes it impossible to process such columns correctly, because JSON parsing does not apply to types like `varbit`.  
   
   This raises a general question: how should other custom PostgreSQL business types be handled properly through ShardingSphere-Proxy?
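   
   For reference, the driver-level behavior can be reproduced outside the proxy with a small JDBC snippet (the table, column, and connection details below are placeholders, and the exact type mapping may vary by PostgreSQL JDBC driver version):
   
   ```java
   import java.sql.*;
   
   public class VarbitTypeCheck {
       public static void main(String[] args) throws SQLException {
           // Placeholder connection settings; adjust for your environment.
           try (Connection conn = DriverManager.getConnection(
                   "jdbc:postgresql://localhost:5432/demo", "postgres", "postgres");
                Statement stmt = conn.createStatement();
                // t_demo.flags is assumed to be declared as varbit.
                ResultSet rs = stmt.executeQuery("SELECT flags FROM t_demo LIMIT 1")) {
               ResultSetMetaData meta = rs.getMetaData();
               // For a varbit column the driver typically reports
               // java.sql.Types.OTHER (1111) with type name "varbit",
               // which matches the metadata loading behavior described above.
               System.out.println("jdbcType=" + meta.getColumnType(1)
                       + ", typeName=" + meta.getColumnTypeName(1));
           }
       }
   }
   ```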
   
   ## Observed Problem
   
   - Custom PostgreSQL types are reported as `Types.OTHER` (1111).  
   - The default JSON parser is then applied, which fails for `varbit` and other user-defined types.  
   - Each such type would currently need its own parser unless a single unified parser can be applied (a rough sketch of that idea follows this list).  
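   
   As an illustration of a unified parser, here is a rough sketch of mapping `udt_name` values to a shared text-based handler. This is not ShardingSphere's actual extension API; the class and method names are hypothetical, and it assumes values can be passed through in their PostgreSQL text representation:
   
   ```java
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.function.Function;
   
   // Hypothetical registry keyed by PostgreSQL udt_name.
   // Unknown types fall back to a pass-through text handler instead of JSON.
   public final class UdtParserRegistry {
   
       private static final Map<String, Function<String, Object>> PARSERS = new ConcurrentHashMap<>();
   
       // Default: keep the text form and let PostgreSQL coerce it server-side.
       private static final Function<String, Object> TEXT_PASS_THROUGH = value -> value;
   
       static {
           // varbit/bit values arrive as bit strings such as "1010"; pass them through as text.
           PARSERS.put("varbit", TEXT_PASS_THROUGH);
           PARSERS.put("bit", TEXT_PASS_THROUGH);
           // Other user-defined types could register dedicated handlers here.
       }
   
       public static Object parse(String udtName, String textValue) {
           return PARSERS.getOrDefault(udtName, TEXT_PASS_THROUGH).apply(textValue);
       }
   
       private UdtParserRegistry() {
       }
   }
   ```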
   
   ## Questions / Suggestions
   
   1. How can we register or configure a parser for PostgreSQL custom types in ShardingSphere-Proxy 5.5.2?  
   2. Is there a recommended way to map a PostgreSQL `udt_name` to a parser so that multiple custom types can share a single generic parser?  
   3. How can PostgreSQL UDTs flow correctly through ShardingSphere-Proxy without breaking inserts or selects? (A client-side sketch follows below.)
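   
   Regarding question 3, one workaround that works against plain PostgreSQL (whether it survives the proxy's metadata handling is exactly the open question here) is to let the server coerce a text value to the target type, e.g. via the pgjdbc `stringtype=unspecified` connection parameter. Table and connection details are placeholders:
   
   ```java
   import java.sql.*;
   
   public class VarbitInsertExample {
       public static void main(String[] args) throws SQLException {
           // Placeholder settings; point the URL at the proxy endpoint in practice.
           // stringtype=unspecified makes pgjdbc send setString() parameters as
           // untyped values, so the server can coerce them to the column type (varbit).
           try (Connection conn = DriverManager.getConnection(
                   "jdbc:postgresql://localhost:3307/sharding_db?stringtype=unspecified",
                   "root", "root")) {
               String sql = "INSERT INTO t_demo (id, flags) VALUES (?, ?)";
               try (PreparedStatement ps = conn.prepareStatement(sql)) {
                   ps.setLong(1, 1L);
                   // The bit string travels as text; PostgreSQL parses it as varbit.
                   ps.setString(2, "101101");
                   ps.executeUpdate();
               }
           }
       }
   }
   ```
   
   Whether ShardingSphere-Proxy forwards such untyped parameters unchanged is part of what this issue asks, so please treat the snippet as a question prompt rather than a confirmed solution.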
   
   ## Environment
   
   - ShardingSphere-Proxy version: 5.5.2
   - PostgreSQL
   - Custom type examples: `varbit`, `bit`, and other user-defined types
   

