Thanks for your response. I am using a dynamic schema, and I want to copy all
_txt fields to _s fields. I know that adding a copyField directive to the
managed-schema file would work, but I don't want to make that change manually.
Is there a way I can use a curl command to add a copy rule for all _txt fields
to the managed schema?
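Something like the following should work via the Schema API, so managed-schema never has to be edited by hand. This is a sketch, not tested against your setup: "mycollection" and localhost:8983 are placeholders, and it assumes the `*_s` dynamic field exists (it does in the default schemaless configset):

```shell
# Add a glob copyField rule: every *_txt field gets copied to the matching *_s field.
# "mycollection" and localhost:8983 are placeholders for your collection and host.
curl -X POST -H 'Content-type:application/json' \
  'http://localhost:8983/solr/mycollection/schema' \
  --data-binary '{
    "add-copy-field": { "source": "*_txt", "dest": "*_s" }
  }'
```

Note that copyField is applied at index time, so existing documents would need to be reindexed before the _s values show up.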

Thanks
Subhasis Patra
240-755-2601
subhasis.pa...@e2open.com

-----Original Message-----
From: ufuk yılmaz <uyil...@vivaldi.net.INVALID> 
Sent: Monday, May 15, 2023 6:39 AM
To: users@solr.apache.org
Subject: Re: Streaming of Documents with text columns (_txt)

PHISH ALERT! CHECK VALIDITY IF CLICKING, SHARING, RESPONDING


There may be an auto-generated field already in your schema then. Its name 
should be something like yourfield_str

—

> On 12 May 2023, at 15:12, Subhasis Patra <subhasis.pa...@e2open.com.invalid> 
> wrote:
>
> I am using Solr in cloud mode with schemaless mode. I don’t want to 
> update/touch the managed schema. Is there any way I can send the copy of those 
> fields using SolrJ? So the text value will be copied to the string value.
>
>
> Thanks
> Subhasis Patra
> 240-755-2601
> subhasis.pa...@e2open.com
>
> -----Original Message-----
> From: ufuk yılmaz <uyil...@vivaldi.net.INVALID>
> Sent: Thursday, May 11, 2023 5:58 AM
> To: users@solr.apache.org
> Subject: Re: Streaming of Documents with text columns (_txt)
>
> My solution to this kind of situation is to have a docValues-enabled 
> copyField for each text field in the schema, so I can export all of 
> the fields when necessary
>
> -ufuk yilmaz
>
> —
>
>> On 11 May 2023, at 05:08, Subhasis Patra <subhasis.pa...@e2open.com> wrote:
>>
>> Hi All,
>>
>> I am using CloudSolrStream to stream documents from Solr. I use /export 
>> when documents have columns of type STRING, DATE, DOUBLE, or LONG. /export 
>> is not allowed when documents have a _txt column (docValues=false), so I 
>> use the code below instead. I use _txt to support case-insensitive search.
>>
>> StreamFactory factory = new StreamFactory().withCollectionZkHost(collection, zkHost);
>> StreamExpression streamExpression = StreamExpressionParser.parse(
>>     "search(" + collection + ", q=\"" + filter + "\", fl=\"" + fieldsCommaSeparated
>>     + "\", rows=\"" + count + "\", sort=\"id asc\")");
>>
>> This works, but it does not support memory management the way /export does, 
>> and limiting rows with the start parameter slows the process down.
>> Can anyone help me achieve this?
>>
>>
>> Thanks
>> Subhasis Patra
>> 240-755-2601
>> subhasis.pa...@e2open.com<mailto:subhasis.pa...@e2open.com>
>>
>
