Dear David! I tried it on different servers and different databases.
1.) Windows local PG, LC_COLLATE = 'Hungarian_Hungary.1250': ok.

2.) Linux remote PG, LC_CTYPE = 'en_US.UTF-8': also wrong! A UTF-8 problem?

3.) Forcing the C collation: ok.

4.) Replacing '/' with 'A': ok.

select replace('18/0113', '/', 'A') > replace('180/2010', '/', 'A'),
       replace('18/0113', '/', 'A') < replace('180/2010', '/', 'A'),
       replace('18/0212', '/', 'A') > replace('180/2010', '/', 'A'),
       replace('18/0212', '/', 'A') < replace('180/2010', '/', 'A')

5.) Replacing '/' with a space: wrong.

6.) Replacing '/' with the empty string: wrong.

select replace('18/0113', '/', '') > replace('180/2010', '/', ''),
       replace('18/0113', '/', '') < replace('180/2010', '/', ''),
       replace('18/0212', '/', '') > replace('180/2010', '/', ''),
       replace('18/0212', '/', '') < replace('180/2010', '/', '')

I tried to simulate this in LibreOffice:

Normal      Removed     As Space    As 0
18/0113     180113      18 0113     1800113
18/0212     1802010     18 0212     18002010
180/2010    1802010     180 2010    18002010
180/2010    180212      180 2010    1800212

This could happen if the '/' sign is not interpreted (i.e. treated as removed), or is replaced by '0' in the background.

The main problem is that the programme has many searches that rely on correct ordering, and we have several sites with different servers (and different collations). Hmmm...

Thanks,
dd

2018-01-10 16:25 GMT+01:00 David G. Johnston <david.g.johns...@gmail.com>:

> On Wednesday, January 10, 2018, Durumdara <durumd...@gmail.com> wrote:
>
>>
>> The PG is 9.4 on Linux, the DataBase encoding is:
>> ENCODING = 'UTF8'
>> LC_COLLATE = 'hu_HU.UTF-8'
>> LC_CTYPE = 'hu_HU.UTF-8'
>>
>>
> The collation rules for hu_HU.UTF-8 probably pretend symbols don't exist;
> this is not uncommon. You probably need to use the 'C' collation.
>
> David J.
>
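The effect you are seeing can be sketched outside PostgreSQL. This is a hypothetical simulation (it does not use the real glibc/ICU tables): many Unicode-aware collations give punctuation such as '/' no primary weight, so at the first comparison level '18/0113' sorts as if it were '180113', while a byte-wise "C" comparison lets '/' participate. The two key functions below are illustrative names, not a real library API:

```python
def c_collate_key(s: str) -> str:
    # "C"-like collation: compare code points as-is; '/' takes part
    # in the comparison ('/' is 0x2F, which sorts before '0' = 0x30).
    return s

def punctuation_ignoring_key(s: str) -> str:
    # Simplified model of a locale whose primary weights skip '/'.
    return s.replace("/", "")

a, b, c = "18/0113", "180/2010", "18/0212"

# Under both models '18/0113' comes before '180/2010':
#   C-like:    '18/0113' < '180/2010'   because '/' < '0'
#   ignoring:  '180113'  < '1802010'    because '1' < '2' at position 3
print(c_collate_key(a) < c_collate_key(b))                        # True
print(punctuation_ignoring_key(a) < punctuation_ignoring_key(b))  # True

# But the order of '18/0212' and '180/2010' flips between the models:
#   C-like:    '18/0212' < '180/2010'   because '/' < '0'
#   ignoring:  '180212'  > '1802010'    because '2' > '0' at position 4
print(c_collate_key(c) < c_collate_key(b))                        # True
print(punctuation_ignoring_key(c) < punctuation_ignoring_key(b))  # False
```

This reproduces your table: removing '/' (cases 5 and 6, and apparently the hu_HU/en_US collations) reorders the values relative to the C collation, while replacing '/' with 'A' (case 4) keeps a separator character in play and preserves the order.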