control how many replicas need to confirm to the leader before the response is
supplied to the client, as you can with, say, MongoDB replicas.
On Friday, October 21, 2016 1:18 AM, Garth Grimm
wrote:
No matter where you send the update to initially, it will get sent to the
leader o
) always to the leader of the Solr instances, does it automatically load
balance between the replicas?
Or do I have to hit each instance in a round-robin way and have the load
balanced through the code?
Please advise the best way to do so..
Thank you very much again..
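If you do balance in code rather than behind a load balancer (or a cloud-aware client like SolrJ's CloudSolrServer), the simplest approach is a round-robin rotation over the replica URLs. A minimal sketch, with hypothetical node URLs:

```python
from itertools import cycle

# Hypothetical replica URLs; in practice these would come from your
# cluster configuration (or from ZooKeeper via a cloud-aware client).
REPLICAS = [
    "http://solr1:8983/solr/collection1",
    "http://solr2:8983/solr/collection1",
]

_rotation = cycle(REPLICAS)

def next_replica():
    """Return the next replica URL in round-robin order."""
    return next(_rotation)
```

Each call hands back the next URL in turn, wrapping around indefinitely.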
On Fri, Oct 21, 201
hat will handle the document update.
In general, ZooKeeper really only provides the cloud configuration information
once (at most) during all the updates; the actual document updates only get
sent to Solr nodes. There's definitely no need to distribute load between
ZooKeeper nodes for this situation.
Have you evaluated whether the "mm" parameter might help?
https://cwiki.apache.org/confluence/display/solr/The+DisMax+Query+Parser#TheDisMaxQueryParser-Themm(MinimumShouldMatch)Parameter
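For reference, mm accepts plain counts, negative counts, and percentages (plus conditional combinations not covered here). A rough sketch of how the common forms resolve to a required clause count — a simplified illustration, not Solr's actual parser:

```python
def min_should_match(mm: str, num_clauses: int) -> int:
    """Resolve a simple mm spec to how many optional clauses must match.
    Covers only the basic forms: "3", "-2", "75%", "-25%"
    (not conditional specs like "2<75%")."""
    mm = mm.strip()
    if mm.endswith("%"):
        pct = int(mm[:-1])
        # The percentage calculation truncates toward zero.
        part = abs(pct) * num_clauses // 100
        # A negative percentage means that fraction may be missing.
        return num_clauses - part if pct < 0 else part
    val = int(mm)
    # A negative integer means "all but that many must match".
    return num_clauses + val if val < 0 else val
```

So with 10 optional clauses, "75%" requires 7 matches, while "-25%" requires 8.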
-Original Message-
From: preeti kumari [mailto:preeti.bg...@gmail.com]
Sent: Friday, September 23, 20
Both.
One shard will have roughly half the documents, and the indices built from
them; the other shard will have the other half of the documents, and the
indices built from those.
There won't be one location that contains all the documents, nor all the
indices.
-Original Message-
From
I thought that if you start with 3 Zk nodes in the ensemble, and only lose 1,
it will have no effect on indexing at all, since you still have a quorum.
If you lose 2 (which takes you below quorum), then the cloud loses "confidence"
in which solr core is the leader of each shard and stops indexin
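The quorum arithmetic behind that is just a strict majority of the ensemble, which can be sketched in one line:

```python
def has_quorum(ensemble_size: int, live_nodes: int) -> bool:
    """ZooKeeper needs a strict majority of the ensemble to be up."""
    return live_nodes > ensemble_size // 2
```

Hence a 3-node ensemble tolerates losing 1 node but not 2, and a 5-node ensemble tolerates losing 2.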
Yes.
-Original Message-
From: Yago Riveiro [mailto:yago.rive...@gmail.com]
Sent: Tuesday, December 22, 2015 5:51 AM
To: solr-user@lucene.apache.org
Subject: Indexing using a collection alias
Hi,
Is it possible to index documents using the alias and not the collection name, if
the alias onl
" Is there really a good reason to consolidate down to a single segment?"
Archiving (as one example). Come July 1, the collection for log
entries/transactions in June will never be changed, so optimizing is
actually a good thing to do.
Kind of getting away from OP's question on this, but I don't
Check the firewall settings on the Linux machine.
By default, mine block port 8983, so the request never even gets to Jetty/Solr.
-Original Message-
From: Paden [mailto:rumsey...@gmail.com]
Sent: Monday, June 22, 2015 2:48 PM
To: solr-user@lucene.apache.org
Subject: Connecting to a Solr
Framework way?
Maybe try delving into the log4j framework and modifying the log4j.properties
file. You can generate different log files based upon what class generated the
message. Here's an example that I experimented with previously; it generates
an update log, and 2 different query logs with
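As a sketch (the appender name and file paths here are illustrative; LogUpdateProcessorFactory is the class that logs update requests in stock Solr), something along these lines in log4j.properties routes one class's messages to its own file:

```properties
# Hypothetical appender writing update messages to their own file
log4j.appender.updatelog=org.apache.log4j.RollingFileAppender
log4j.appender.updatelog.File=logs/solr_update.log
log4j.appender.updatelog.MaxFileSize=10MB
log4j.appender.updatelog.layout=org.apache.log4j.PatternLayout
log4j.appender.updatelog.layout.ConversionPattern=%d %-5p %c: %m%n

# Route the update-logging class to that appender only; additivity=false
# keeps the messages out of the root logger's file
log4j.logger.org.apache.solr.update.processor.LogUpdateProcessorFactory=INFO, updatelog
log4j.additivity.org.apache.solr.update.processor.LogUpdateProcessorFactory=false
```

The same pattern repeated with different logger class names gives you the separate query logs.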
Yes, it does support POST. As to format, I believe that's handled by the
container. So if you're url-encoding the parameter values, you'll probably
need to set Content-Type: application/x-www-form-urlencoded for the HTTP POST
header.
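As a sketch of that (the URL and collection name are illustrative), building such a POST with Python's standard library — url-encoding the parameters and setting the header explicitly:

```python
import urllib.parse
import urllib.request

# Hypothetical Solr endpoint
url = "http://localhost:8983/solr/collection1/select"

# URL-encode the query parameters into the POST body
body = urllib.parse.urlencode({"q": "*:*", "rows": "10"}).encode("ascii")

req = urllib.request.Request(
    url,
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    method="POST",
)
# urllib.request.urlopen(req) would send it to a running Solr.
```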
-Original Message-
From: Steven White [mailto:swhit
Shawn's explanation fits better with why WebSphere and Jetty might behave
differently. But something else that might be happening is the DHCP
negotiation causing the IP address to change from one network to another and
back.
-Original Message-
From: Steven White [mailto:swhite4
For updates, the document will always get routed to the leader of the
appropriate shard, no matter what server first receives the request.
-Original Message-
From: Martin de Vries [mailto:mar...@downnotifier.com]
Sent: Thursday, March 05, 2015 4:14 PM
To: solr-user@lucene.apache.org
Subj
You can't just add a new core to an existing collection. You can add the new
node to the cloud, but it won't be part of any collection. You're not going to
be able to just slide it in as a 4th shard to an established collection of 3
shards.
The root of that comes from routing (I'll assume you
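The routing issue can be sketched like this: the target shard is a pure function of the uniqueKey's hash. (Solr actually uses MurmurHash3 matched against fixed per-shard hash ranges; this sketch substitutes MD5 modulo shard count just to illustrate the idea.)

```python
import hashlib

def shard_for(doc_id: str, num_shards: int) -> int:
    """Illustrative stand-in for Solr's compositeId routing. The key
    point is the same as in real Solr: the shard is a deterministic
    function of the document id and the shard layout."""
    h = int(hashlib.md5(doc_id.encode("utf-8")).hexdigest(), 16)
    return h % num_shards
```

Because the mapping depends on the shard layout, bolting a 4th shard onto a 3-shard collection would leave most existing documents on the "wrong" shard, which is why you can't simply slide a new core in.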
Well, if you're going to reindex on a newer version, just start out with the
number of shards you feel is appropriate, and reindex.
But yes, if you had 3 shards, wanted to split some of them, you'd really
have to split all of them (making 6), if you wanted the shards to be about
the same size.
As
https://issues.apache.org/jira/browse/SOLR-6744 created.
And hopefully correctly, since that’s my first.
On Nov 15, 2014, at 9:12 AM, Garth Grimm
mailto:garthgr...@averyranchconsulting.com>>
wrote:
I see the same issue on 4.10.1.
I’ll open a JIRA if I don’t see one.
I guess th
I see the same issue on 4.10.1.
I’ll open a JIRA if I don’t see one.
I guess the best immediate workaround is to copy the unique field, and use
that field for renaming?
> On Nov 15, 2014, at 3:18 AM, Suchi Amalapurapu wrote:
>
> Solr version:4.6.1
>
> On Sat, Nov 15, 2014 at 12:24 PM, Jeon W
So it sounds like you’re OK with using the docURL as the unique key for routing
in SolrCloud, but you don’t want to use it as a lookup mechanism.
If you don’t want to do a hash of it and use that unique value in a second
unique field at feed time,
and you can’t seem to find any other field that
at can cause issue
> with Solr lookup.
>
> I guess I should rephrase my question to: how to auto-generate the unique
> keys in the id field when using SolrCloud?
> On Nov 12, 2014 7:28 PM, "Garth Grimm"
> wrote:
>
>> You mention you already have a unique Key
>> Well, because I just tried the following setting without the uniqueKey for
>> id and it's only generating blank ids for me.
>>
>> *schema.xml*
>>
>>>required="true" multiValued="false" />
>>
>> *solrco
4-x/
Though I’ve not actually tried that process before.
On Nov 11, 2014, at 7:39 PM, Garth Grimm
mailto:garthgr...@averyranchconsulting.com>>
wrote:
“uuid” isn’t an out of the box field type that I’m familiar with.
Generally, I’d stick with the out of the box advice of the schema.xml file,
which includes things like….
and…
<uniqueKey>id</uniqueKey>
If you’re creating some key/value pair with uuid as the key as you feed
documents in, and you know
What field(s) auto suggest uses is configurable. So you could create special
fields (and associated ‘copyField’ configs) to populate specific fields for
auto suggest.
For example, you could have 2 fields for “hidden_desc” and “visible_desc”.
Copy field both of them to a field named “descripti
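For example (all field names hypothetical, and assuming the stock text_general field type), the schema.xml side of that could look like:

```xml
<!-- Two source fields, copied into one field that the suggester reads -->
<field name="visible_desc" type="text_general" indexed="true" stored="true"/>
<field name="hidden_desc"  type="text_general" indexed="true" stored="false"/>
<field name="description"  type="text_general" indexed="true" stored="false"
       multiValued="true"/>
<copyField source="visible_desc" dest="description"/>
<copyField source="hidden_desc"  dest="description"/>
```

You would then point the suggester component at the "description" field.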
"_version_":1481861578588422158},
{
"zip":"76131",
"inhabitants":296033,
"city":"Karlsruhe",
"importance":1,
"latitude":49.0079486,
"latlong":"49
Spaces should work just fine. Can you show us exactly what is happening with
the score that leads you to the conclusion that it isn’t working?
Some testing from an example collection I have…
No boost:
http://localhost:8983/solr/collection1/select?q=text%3Abook&fl=id%2Cprice%2Cyearpub%2Cscore&wt
Well, the current release is only supported on Linux. A Windows compatible
release is planned for later this year.
-Original Message-
From: Anurag Sharma [mailto:anura...@gmail.com]
Sent: Sunday, October 05, 2014 12:23 PM
To: solr-user@lucene.apache.org
Subject: Re: [ANN] Lucidworks F
As a follow-up question on this
One would want to use some kind of load balancing 'above' the SolrCloud
installation for search queries, correct? To ensure that the initial requests
would get distributed evenly to all nodes?
If you don't have that, and send all requests to M2S2 (IRT OP), i
her replica in the shard
can be tried. We use the load balancing solrj client for these internal
requests. CloudSolrServer handles failover for the user (or non-internal)
requests. Or you can use your own external load balancer.
- Mark
>
> Cheers,
> Tim
>
>
> On Tue, No
Given a 4 node Solr Cloud (i.e. 2 shards, 2 replicas per shard).
Let's say one node becomes 'nonresponsive'. Meaning sockets get created, but
transactions to them don't get handled (i.e. they time out). We'll also assume
that means the solr instance can't send information out to zookeeper or o
)?
Thanks,
Garth Grimm
Go to the admin screen for Cloud/Tree, and then click the node for
aliases.json. To the lower right, you should see something like:
{"collection":{"AdWorksQuery":"AdWorks"}}
Or access the Zookeeper instance, and do a 'get /aliases.json'.
-Original Message-
From: Christopher Gross [mail
But if you're working with multiple configs in zookeeper, be aware that 4.5
currently has an issue creating multiple collections in a cloud that has
multiple configs. It's targeted to be fixed whenever 4.5.1 comes out.
https://issues.apache.org/jira/browse/SOLR-5306
-Original Message---
ew,
I'd run:
http://index1:8080/solr/admin/cores?action=CREATEALIAS&name=core1&collections=core1new&shard=shard1
Correct?
-- Chris
On Wed, Oct 16, 2013 at 9:02 AM, Garth Grimm <
garthgr...@averyranchconsulting.com> wrote:
> The alias applies to the entire clou
The alias applies to the entire cloud, not a single core.
So you'd have your indexing application point to a "collection alias" named
'index'. And that alias would point to core1.
You'd have your query applications point to a "collection alias" named 'query',
and that would point to core1, as w