On Sun, Oct 02, 2011 at 05:45:48PM +0200, Reuven M. Lerner wrote:
> quite grateful for that. (I really hadn't ever needed to deal with
> such issues in the past, having worked mostly with English and
> Hebrew, which don't have such accent marks.)
That isn't quite true about English. We have words borrowed from other
languages that keep their accent marks, such as "naïve" and "résumé".
At 01:25 02/10/2011, Reuven M. Lerner wrote:

Hi, everyone. I'm working on a project on PostgreSQL 9.0 (soon to be
upgraded to 9.1, given that we haven't yet launched). The project will
involve numerous text fields containing English, Spanish, and Portuguese.
Some of those text fields will be searchable by the user.
Hi, Oleg. You wrote:

> I don't see the problem - you can have a dictionary, which does all work
> on recognizing bare letters and output several versions. Have you seen
> the unaccent dictionary?

This seems to be the direction that everyone is suggesting, and I'm quite
grateful for that. (I really hadn't ever needed to deal with such issues
in the past, having worked mostly with English and Hebrew, which don't
have such accent marks.)
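For readers of the archive, a minimal sketch of what the unaccent route
looks like in practice, following the PostgreSQL contrib documentation. The
es_unaccent configuration name is made up for the example, and it assumes
the contrib module is installed on the server:

```sql
-- unaccent ships as a contrib module; on 9.1+ it can be installed with
-- CREATE EXTENSION (on 9.0, run the contrib SQL script instead).
CREATE EXTENSION IF NOT EXISTS unaccent;

SELECT unaccent('Hôtel São Müller');   -- 'Hotel Sao Muller'

-- For full-text search, chain unaccent in front of a stemmer. The
-- configuration name es_unaccent is arbitrary; spanish_stem is one of
-- the built-in snowball dictionaries.
CREATE TEXT SEARCH CONFIGURATION es_unaccent ( COPY = spanish );
ALTER TEXT SEARCH CONFIGURATION es_unaccent
  ALTER MAPPING FOR hword, hword_part, word
  WITH unaccent, spanish_stem;

-- Accented and unaccented spellings now match the same lexemes:
SELECT to_tsvector('es_unaccent', 'Árboles verdes')
       @@ to_tsquery('es_unaccent', 'arboles');
```

The queries need a live PostgreSQL server with the contrib module
available, so they are shown here as an untested fragment.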
I don't see the problem - you can have a dictionary, which does all work on
recognizing bare letters and output several versions. Have you seen the
unaccent dictionary?

Oleg
On Sun, 2 Oct 2011, Uwe Schroeder wrote:
Hi, everyone. Uwe wrote:

> What kind of "client" are the users using? I assume you will have some
> kind of user interface. For me this is a typical job for a user
> interface. The number of letters with "equivalents" in different
> languages is extremely limited, so a simple matching routine in the
One approach would be to "normalize" all the text and search against that.
That is, basically convert all non-ASCII characters to their ASCII
equivalents. I've had to do this in Solr for searching, for the exact
reasons you've outlined: treat "ñ" as "n". Ditto for "ü" -> "u",
"é" -> "e", etc.
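A sketch of that same normalization done inside PostgreSQL itself rather
than in Solr, using lower() plus translate(). The character map is
deliberately partial, and the articles table and title column are
hypothetical names for the example:

```sql
-- Normalize both the stored text and the search term the same way, so
-- 'Señor' and 'senor' compare equal. The map below is intentionally
-- incomplete; extend it for the characters your data actually contains.
SELECT translate(lower('Señor Müller'), 'áéíóúâêôãõàçñü', 'aeiouaeoaoacnu');
-- 'senor muller'

-- Applied to a hypothetical table, with substring matching:
SELECT title
FROM   articles
WHERE  translate(lower(title), 'áéíóúâêôãõàçñü', 'aeiouaeoaoacnu')
       LIKE '%' || translate(lower('Señor'), 'áéíóúâêôãõàçñü', 'aeiouaeoaoacnu') || '%';
```

The obvious trade-off versus the unaccent dictionary is that the mapping
table must be maintained by hand, but it needs no contrib module.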
> Hi, everyone. I'm working on a project on PostgreSQL 9.0 (soon to be
> upgraded to 9.1, given that we haven't yet launched). The project will
> involve numerous text fields containing English, Spanish, and Portuguese.
> Some of those text fields will be searchable by the user. That's easy
>