Re: [PR] SOLR-17447 : Support to early terminate a search based on maxHits per collector. [solr]

2025-02-17 Thread via GitHub


sijuv commented on code in PR #2960:
URL: https://github.com/apache/solr/pull/2960#discussion_r1958612194


##
solr/solr-ref-guide/modules/query-guide/pages/common-query-parameters.adoc:
##
@@ -399,6 +399,23 @@ For example, setting `cpuAllowed=500` gives a limit of at most 500 ms of CPU time.
 
 All other considerations regarding partial results listed for the 
`timeAllowed` parameter apply here, too.
 
+
+== maxHits Parameter
+
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
+
+This parameter specifies the maximum number of hits a searcher will iterate through before terminating the search early.
+The count is per shard, and is accumulated across all threads involved in a multi-threaded search. This parameter works
+in conjunction with other parameters that can terminate a search early, for example _timeAllowed_. If the search
+was terminated early because it exceeded _maxHits_, a _terminatedEarly_ header is set in the response along with
+_partialResults_ to indicate this. Note that the _partialResults_ flag can also be set in the absence of the _maxHits_
+parameter, due to other limits such as _timeAllowed_ or _cpuAllowed_.
+Note: the number of hits counted need not be exactly equal to the _maxHits_ value provided, but it will be close to it.
+
+

Review Comment:
   updated.
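
   As a usage illustration, a minimal SolrJ sketch (not code from this PR; only the `maxHits` parameter name and the `terminatedEarly`/`partialResults` response-header keys come from the change, while the host, collection, and value are placeholders):

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.Http2SolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class MaxHitsExample {
  public static void main(String[] args) throws Exception {
    try (Http2SolrClient client =
        new Http2SolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
      SolrQuery q = new SolrQuery("*:*");
      q.set("maxHits", 10000); // per-shard cap on hits iterated before early termination
      QueryResponse rsp = client.query(q);
      // When the cap is hit, the response header carries the flags described in the docs.
      System.out.println("terminatedEarly=" + rsp.getResponseHeader().get("terminatedEarly"));
      System.out.println("partialResults=" + rsp.getResponseHeader().get("partialResults"));
    }
  }
}
```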






[jira] [Resolved] (SOLR-17671) Replication and Backup should use an unwrapped Directory when copying files.

2025-02-17 Thread Bruno Roustant (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Roustant resolved SOLR-17671.
---
Fix Version/s: 9.9
   Resolution: Fixed

> Replication and Backup should use an unwrapped Directory when copying files.
> 
>
> Key: SOLR-17671
> URL: https://issues.apache.org/jira/browse/SOLR-17671
> Project: Solr
>  Issue Type: Improvement
>Reporter: Bruno Roustant
>Priority: Major
>  Labels: pull-request-available
> Fix For: 9.9
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Currently Replication and Backup copy files using a Directory created with 
> the configured DirectoryFactory. This Directory may be a FilterDirectory that 
> adds additional logic (e.g. encryption) that should not run when copying 
> files (e.g. do not decrypt).
> The proposal is to add a new REPLICATE value in the 
> DirectoryFactory.DirContext that would be used by replication and a new 
> BACKUP for backup to get the Directory to use. The DirectoryFactory would 
> unwrap the Directory in this case.
> One could expect that only one REPLICATE could be enough, but backup requires 
> more inner checksum verifications that may need to differentiate the logic 
> between the two (this is the case for encryption).
> Example:
> In the solr-sandbox encryption module, we would need a way to unwrap the 
> Directory used to copy files during index fetching. Otherwise the files are 
> decrypted by the EncryptionDirectory seamlessly during the file copy, ending 
> up with follower replicas having a cleartext index.






Re: [PR] SOLR-17447 : Support to early terminate a search based on maxHits per collector. [solr]

2025-02-17 Thread via GitHub


sijuv commented on code in PR #2960:
URL: https://github.com/apache/solr/pull/2960#discussion_r1958613844


##
solr/core/src/java/org/apache/solr/search/EarlyTerminatingCollector.java:
##
@@ -29,11 +30,15 @@
  */
 public class EarlyTerminatingCollector extends FilterCollector {
 
+  private final int chunkSize = 100; // Check across threads only at a chunk size

Review Comment:
   100 was chosen arbitrarily. I don't think this needs an env prop.






[jira] [Commented] (SOLR-17671) Replication and Backup should use an unwrapped Directory when copying files.

2025-02-17 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927833#comment-17927833
 ] 

ASF subversion and git services commented on SOLR-17671:


Commit 3adb2be366aa8886811ef4f743ff6ef77beee08e in solr's branch 
refs/heads/branch_9x from Bruno Roustant
[ https://gitbox.apache.org/repos/asf?p=solr.git;h=3adb2be366a ]

SOLR-17671: Replication and Backup use an unwrapped Directory to copy files. 
(#3185)

New extensible method CachingDirectoryFactory.filterDirectory.

> Replication and Backup should use an unwrapped Directory when copying files.
> 
>
> Key: SOLR-17671
> URL: https://issues.apache.org/jira/browse/SOLR-17671
> Project: Solr
>  Issue Type: Improvement
>Reporter: Bruno Roustant
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Currently Replication and Backup copy files using a Directory created with 
> the configured DirectoryFactory. This Directory may be a FilterDirectory that 
> adds additional logic (e.g. encryption) that should not run when copying 
> files (e.g. do not decrypt).
> The proposal is to add a new REPLICATE value in the 
> DirectoryFactory.DirContext that would be used by replication and a new 
> BACKUP for backup to get the Directory to use. The DirectoryFactory would 
> unwrap the Directory in this case.
> One could expect that only one REPLICATE could be enough, but backup requires 
> more inner checksum verifications that may need to differentiate the logic 
> between the two (this is the case for encryption).
> Example:
> In the solr-sandbox encryption module, we would need a way to unwrap the 
> Directory used to copy files during index fetching. Otherwise the files are 
> decrypted by the EncryptionDirectory seamlessly during the file copy, ending 
> up with follower replicas having a cleartext index.






[jira] [Resolved] (SOLR-9761) Solr security related changes for Hadoop 3

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-9761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-9761.
-
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> Solr security related changes for Hadoop 3
> --
>
> Key: SOLR-9761
> URL: https://issues.apache.org/jira/browse/SOLR-9761
> Project: Solr
>  Issue Type: Task
>  Components: Hadoop Integration, hdfs
>Reporter: Hrishikesh Gadre
>Priority: Major
>
> SOLR-9515 tracks the work required to update Solr codebase to work with 
> Hadoop 3. This jira is to track the updates required in the Solr security 
> framework w.r.t Hadoop 3.






Re: [PR] SOLR-17671 Replication and Backup use an unwrapped Directory. [solr]

2025-02-17 Thread via GitHub


dsmiley commented on code in PR #3185:
URL: https://github.com/apache/solr/pull/3185#discussion_r1958467972


##
solr/core/src/java/org/apache/solr/core/CachingDirectoryFactory.java:
##
@@ -391,25 +393,22 @@ public boolean exists(String path) throws IOException {
   public final Directory get(String path, DirContext dirContext, String 
rawLockType)
   throws IOException {
 String fullPath = normalize(path);
+Directory directory;
+CacheValue cacheValue;
 synchronized (this) {
   if (closed) {
 throw new AlreadyClosedException("Already closed");
   }
 
-  final CacheValue cacheValue = byPathCache.get(fullPath);
-  Directory directory = null;
-  if (cacheValue != null) {
-directory = cacheValue.directory;
-  }
-
-  if (directory == null) {
+  cacheValue = byPathCache.get(fullPath);
+  if (cacheValue == null) {
 directory = create(fullPath, createLockFactory(rawLockType), 
dirContext);

Review Comment:
   In a separate issue/PR, the `create` method should not take the dirContext; leave a TODO for now. Or, if you wish, make that change here; it's rather internal.






Re: [PR] SOLR-17671 Replication and Backup use an unwrapped Directory. [solr]

2025-02-17 Thread via GitHub


dsmiley commented on code in PR #3185:
URL: https://github.com/apache/solr/pull/3185#discussion_r1958484987


##
solr/core/src/java/org/apache/solr/handler/admin/api/ReplicationAPIBase.java:
##
@@ -377,7 +380,7 @@ public void write(OutputStream out) throws IOException {
   try {
 initWrite();
 
-Directory dir = solrCore.withSearcher(searcher -> 
searcher.getIndexReader().directory());
+Directory dir = getDirectory();

Review Comment:
   I can't tell from the PR but this code will need to ensure we release the 
directory.
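
   For illustration, the usual Solr pattern for this (a sketch only; `DirContext.REPLICATE` is the value proposed in this issue, and the method and argument names are placeholders):

```java
import java.io.IOException;
import java.io.OutputStream;
import org.apache.lucene.store.Directory;
import org.apache.solr.core.DirectoryFactory;
import org.apache.solr.core.DirectoryFactory.DirContext;

class DirectoryReleaseSketch {
  // Obtain the Directory via the factory and release it in a finally block,
  // so the factory's reference count drops even if streaming fails.
  void writeIndexFile(DirectoryFactory factory, String indexDirPath,
                      String lockType, OutputStream out) throws IOException {
    Directory dir = factory.get(indexDirPath, DirContext.REPLICATE, lockType);
    try {
      // ... copy the requested file bytes from dir to out ...
    } finally {
      factory.release(dir);
    }
  }
}
```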






[I] Setup SOLR using Basic Authentication doesn't work. [solr-operator]

2025-02-17 Thread via GitHub


irwan-verint opened a new issue, #757:
URL: https://github.com/apache/solr-operator/issues/757

   I'm trying to set up Solr on my local machine (localhost) using the standalone version. I followed the setup from the [documentation](https://solr.apache.org/guide/8_5/basic-authentication-plugin.html#basic-authentication-plugin), but it doesn't seem to work. I created a _security.json_ file, then placed it into _server/solr/security.json_, the same directory as _solr.xml_, and used the same values as in the documentation. When I try to start it, Solr stops working.
   
   
![Image](https://github.com/user-attachments/assets/9826d176-719e-4c31-ab39-20879c323210)
   
   
![Image](https://github.com/user-attachments/assets/c7eeca45-5f09-40a1-8013-3b5b74da5f46)
   
   But when I delete the security.json file, Solr works again. This is the detailed error log that I got.
   
   
![Image](https://github.com/user-attachments/assets/834ac7d5-658c-43c3-ac53-ac59d884799e)
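
   For comparison, a minimal _security.json_ sketch following the structure of the example in the referenced guide (the credentials value below is a placeholder in the documented `base64(sha256(sha256(salt+password))) base64(salt)` format, not a working hash):

```json
{
  "authentication": {
    "blockUnknown": true,
    "class": "solr.BasicAuthPlugin",
    "credentials": { "solr": "<base64 password hash> <base64 salt>" }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [ { "name": "security-edit", "role": "admin" } ],
    "user-role": { "solr": "admin" }
  }
}
```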
   





[jira] [Resolved] (SOLR-10115) Corruption in read-side of SOLR-HDFS stack

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-10115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-10115.
--
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> Corruption in read-side of SOLR-HDFS stack
> --
>
> Key: SOLR-10115
> URL: https://issues.apache.org/jira/browse/SOLR-10115
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration, hdfs
>Affects Versions: 4.4
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Major
> Attachments: YCS_HdfsTest.java
>
>
> I've been trying to track down some random AIOOB exceptions in Lucene for a 
> customer, and I've managed to reproduce the issue with a unit test of 
> sufficient size in conjunction with highly concurrent read requests.
> A typical stack trace looks like:
> {code}
> org.apache.solr.common.SolrException; 
> java.lang.ArrayIndexOutOfBoundsException: 172033655
> at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:149)
> at 
> org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsEnum.nextDoc(Lucene41PostingsReader.java:455)
> at 
> org.apache.lucene.search.MultiTermQueryWrapperFilter.getDocIdSet(MultiTermQueryWrapperFilter.java:111)
> at 
> org.apache.lucene.search.ConstantScoreQuery$ConstantWeight.scorer(ConstantScoreQuery.java:157)
> {code}
> The number of unique stack traces is relatively high, most AIOOB exceptions, 
> but some EOF.  Most exceptions occur in the term index, however I believe 
> this may be just an artifact of where highly concurrent access is most likely 
> to occur.  The queries that triggered this had many wildcards and other 
> multi-term queries.






Re: [PR] SOLR-17447 : Support to early terminate a search based on maxHits per collector. [solr]

2025-02-17 Thread via GitHub


sijuv commented on code in PR #2960:
URL: https://github.com/apache/solr/pull/2960#discussion_r1958612329


##
solr/core/src/java/org/apache/solr/search/EarlyTerminatingCollector.java:
##
@@ -43,11 +48,14 @@ public class EarlyTerminatingCollector extends 
FilterCollector {
* @param maxDocsToCollect - the maximum number of documents to Collect
*/
   public EarlyTerminatingCollector(Collector delegate, int maxDocsToCollect) {
-super(delegate);
-assert 0 < maxDocsToCollect;

Review Comment:
   added back






[jira] [Commented] (SOLR-13587) Close BackupRepository after every usage

2025-02-17 Thread Mikhail Khludnev (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927771#comment-17927771
 ] 

Mikhail Khludnev commented on SOLR-13587:
-

I believe every repo deserves closing.

> Close BackupRepository after every usage
> 
>
> Key: SOLR-13587
> URL: https://issues.apache.org/jira/browse/SOLR-13587
> Project: Solr
>  Issue Type: Bug
>  Components: Backup/Restore
>Affects Versions: 8.1
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-13587.patch
>
>
> Turns out BackupRepository is created for every operation, but never closed. I 
> suppose this leads to the necessity of having {{BadHdfsThreadsFilter}} in 
> {{TestHdfsCloudBackupRestore}}. Also, the test needs to repeat the 
> backup/restore operation to make sure that closing the hdfs filesystem doesn't 
> break it; see SOLR-9961 for the case.
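
A sketch of the closing discipline being argued for here (illustrative types, not the actual Solr API):

{code:java}
import java.io.Closeable;
import java.io.IOException;

// Create the repository per operation and always close it when the operation
// completes, so no filesystem handles or threads leak behind it.
interface RepositoryLike extends Closeable {
  void copyIndexFiles() throws IOException;
}

class BackupOperation {
  void run(RepositoryLike repo) throws IOException {
    try (repo) { // try-with-resources guarantees close() even on failure
      repo.copyIndexFiles();
    }
  }
}
{code}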






Re: [PR] SOLR-17671 Replication and Backup use an unwrapped Directory. [solr]

2025-02-17 Thread via GitHub


bruno-roustant commented on PR #3185:
URL: https://github.com/apache/solr/pull/3185#issuecomment-2663047519

   The new commit implements option B, which works, and the code remains 
concise.





Re: [PR] SOLR-17671 Replication and Backup use an unwrapped Directory. [solr]

2025-02-17 Thread via GitHub


dsmiley commented on code in PR #3185:
URL: https://github.com/apache/solr/pull/3185#discussion_r1958519813


##
solr/core/src/java/org/apache/solr/core/CachingDirectoryFactory.java:
##
@@ -391,25 +393,22 @@ public boolean exists(String path) throws IOException {
   public final Directory get(String path, DirContext dirContext, String 
rawLockType)
   throws IOException {
 String fullPath = normalize(path);
+Directory directory;
+CacheValue cacheValue;
 synchronized (this) {
   if (closed) {
 throw new AlreadyClosedException("Already closed");
   }
 
-  final CacheValue cacheValue = byPathCache.get(fullPath);
-  Directory directory = null;
-  if (cacheValue != null) {
-directory = cacheValue.directory;
-  }
-
-  if (directory == null) {
+  cacheValue = byPathCache.get(fullPath);
+  if (cacheValue == null) {
 directory = create(fullPath, createLockFactory(rawLockType), 
dirContext);

Review Comment:
   Indeed, it's valid for a DirectoryFactory to customize the Directory for use. But it's wrong to do that *here*: it's basically a bug in which the first DirContext "wins" and gets cached, ignoring the needs of later DirContexts. This is a bug for HdfsDirectory, but the fix belongs in the design of CachingDirectoryFactory. *Instead*, the filter method you added should be overridden so that the DirectoryFactory can tweak the response for the DirContext. For Hdfs, that would mean unwrapping the BlockCache for anything but DEFAULT. Note that HdfsDirectory was removed yesterday.
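
   A sketch of that suggested shape (the `filterDirectory` name comes from this PR; the signature and the unwrap helper are assumptions, shown as a fragment of a DirectoryFactory subclass):

```java
// Let the factory adapt the cached Directory to the caller's DirContext,
// instead of letting the first caller's DirContext decide what gets cached.
@Override
protected Directory filterDirectory(Directory dir, DirContext dirContext) {
  if (dirContext != DirContext.DEFAULT) {
    return unwrapBlockCache(dir); // hypothetical helper: strip the block-cache wrapper
  }
  return dir;
}
```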






Re: [PR] SOLR-17671 Replication and Backup use an unwrapped Directory. [solr]

2025-02-17 Thread via GitHub


bruno-roustant commented on code in PR #3185:
URL: https://github.com/apache/solr/pull/3185#discussion_r1958516767


##
solr/core/src/java/org/apache/solr/handler/admin/api/ReplicationAPIBase.java:
##
@@ -377,7 +380,7 @@ public void write(OutputStream out) throws IOException {
   try {
 initWrite();
 
-Directory dir = solrCore.withSearcher(searcher -> 
searcher.getIndexReader().directory());
+Directory dir = getDirectory();

Review Comment:
   Yes, this is the case in the latest commit of the PR.






Re: [PR] SOLR-17671 Replication and Backup use an unwrapped Directory. [solr]

2025-02-17 Thread via GitHub


bruno-roustant commented on code in PR #3185:
URL: https://github.com/apache/solr/pull/3185#discussion_r1958509839


##
solr/core/src/java/org/apache/solr/core/CachingDirectoryFactory.java:
##
@@ -391,25 +393,22 @@ public boolean exists(String path) throws IOException {
   public final Directory get(String path, DirContext dirContext, String 
rawLockType)
   throws IOException {
 String fullPath = normalize(path);
+Directory directory;
+CacheValue cacheValue;
 synchronized (this) {
   if (closed) {
 throw new AlreadyClosedException("Already closed");
   }
 
-  final CacheValue cacheValue = byPathCache.get(fullPath);
-  Directory directory = null;
-  if (cacheValue != null) {
-directory = cacheValue.directory;
-  }
-
-  if (directory == null) {
+  cacheValue = byPathCache.get(fullPath);
+  if (cacheValue == null) {
 directory = create(fullPath, createLockFactory(rawLockType), 
dirContext);

Review Comment:
   I prefer to leave that for another PR (I have another proposal to make CachingDirectoryFactory support a DelegatingDirectoryFactory).
   For the TODO, I'm unsure, as I see some usage of the DirContext in HdfsDirectoryFactory that seems valid (creating either a BlockDirectory or an HdfsDirectory).






Re: [PR] SOLR-17667: Simplify zombie logic in LBSolrClient [solr]

2025-02-17 Thread via GitHub


HoustonPutman commented on code in PR #3176:
URL: https://github.com/apache/solr/pull/3176#discussion_r1958579710


##
solr/solrj/src/java/org/apache/solr/client/solrj/impl/LBHttpSolrClient.java:
##
@@ -192,7 +192,7 @@ protected SolrClient getClient(Endpoint endpoint) {
*/
   @Deprecated
   @Override
-  public String removeSolrServer(String server) {
+  public synchronized String removeSolrServer(String server) {

Review Comment:
   Yeah, I'll make sure to make the necessary changes on `main`. Thanks for the 
reminder!






Re: [PR] SOLR-17447 : Support to early terminate a search based on maxHits per collector. [solr]

2025-02-17 Thread via GitHub


sijuv commented on code in PR #2960:
URL: https://github.com/apache/solr/pull/2960#discussion_r1958581315


##
solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java:
##
@@ -295,7 +295,7 @@ private Collector buildAndRunCollectorChain(
 
 final boolean terminateEarly = cmd.getTerminateEarly();
 if (terminateEarly) {
-  collector = new EarlyTerminatingCollector(collector, cmd.getLen());
+  collector = new EarlyTerminatingCollector(collector, 
cmd.getMaxHitsTerminateEarly());

Review Comment:
   I don't think _EarlyTerminatingCollector_ has been used in Solr yet. The code prior to my change instantiated this collector when the _TERMINATE_EARLY_ flag was set, but I don't see that flag being set anywhere in the code, since there are no usage references to _setTerminateEarly_.
   






Re: [PR] SOLR-17447 : Support to early terminate a search based on maxHits per collector. [solr]

2025-02-17 Thread via GitHub


sijuv commented on code in PR #2960:
URL: https://github.com/apache/solr/pull/2960#discussion_r1958582130


##
solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java:
##
@@ -329,12 +329,12 @@ private Collector buildAndRunCollectorChain(
   if (collector instanceof DelegatingCollector) {
 ((DelegatingCollector) collector).complete();
   }
-  throw etce;
+  qr.setPartialResults(true);

Review Comment:
   I don't think _EarlyTerminatingCollector_ has been used yet.






Re: [PR] SOLR-17671 Replication and Backup use an unwrapped Directory. [solr]

2025-02-17 Thread via GitHub


bruno-roustant merged PR #3185:
URL: https://github.com/apache/solr/pull/3185





[jira] [Commented] (SOLR-17671) Replication and Backup should use an unwrapped Directory when copying files.

2025-02-17 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927824#comment-17927824
 ] 

ASF subversion and git services commented on SOLR-17671:


Commit fa099a0e7505e6de0fa487ce326f3031d8654cd4 in solr's branch 
refs/heads/main from Bruno Roustant
[ https://gitbox.apache.org/repos/asf?p=solr.git;h=fa099a0e750 ]

SOLR-17671: Replication and Backup use an unwrapped Directory to copy files. 
(#3185)

New extensible method CachingDirectoryFactory.filterDirectory.

> Replication and Backup should use an unwrapped Directory when copying files.
> 
>
> Key: SOLR-17671
> URL: https://issues.apache.org/jira/browse/SOLR-17671
> Project: Solr
>  Issue Type: Improvement
>Reporter: Bruno Roustant
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Currently Replication and Backup copy files using a Directory created with 
> the configured DirectoryFactory. This Directory may be a FilterDirectory that 
> adds additional logic (e.g. encryption) that should not run when copying 
> files (e.g. do not decrypt).
> The proposal is to add a new REPLICATE value in the 
> DirectoryFactory.DirContext that would be used by replication and a new 
> BACKUP for backup to get the Directory to use. The DirectoryFactory would 
> unwrap the Directory in this case.
> One could expect that only one REPLICATE could be enough, but backup requires 
> more inner checksum verifications that may need to differentiate the logic 
> between the two (this is the case for encryption).
> Example:
> In the solr-sandbox encryption module, we would need a way to unwrap the 
> Directory used to copy files during index fetching. Otherwise the files are 
> decrypted by the EncryptionDirectory seamlessly during the file copy, ending 
> up with follower replicas having a cleartext index.






Re: [PR] SOLR-17447 : Support to early terminate a search based on maxHits per collector. [solr]

2025-02-17 Thread via GitHub


sijuv commented on code in PR #2960:
URL: https://github.com/apache/solr/pull/2960#discussion_r1958587050


##
solr/core/src/java/org/apache/solr/search/QueryCommand.java:
##
@@ -194,7 +195,8 @@ public QueryCommand setNeedDocSet(boolean needDocSet) {
   }
 
   public boolean getTerminateEarly() {
-return (flags & SolrIndexSearcher.TERMINATE_EARLY) != 0;
+return (flags & SolrIndexSearcher.TERMINATE_EARLY) != 0

Review Comment:
   _setTerminateEarly_ is unused; _setSegmentTerminateEarly_ is used in the case of sorted segments.






[jira] [Updated] (SOLR-17670) Fix unnecessary memory allocation caused by a large reRankDocs param

2025-02-17 Thread JiaBaoGao (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JiaBaoGao updated SOLR-17670:
-
Security: (was: Public)

> Fix unnecessary memory allocation caused by a large reRankDocs param
> 
>
> Key: SOLR-17670
> URL: https://issues.apache.org/jira/browse/SOLR-17670
> Project: Solr
>  Issue Type: Bug
>Reporter: JiaBaoGao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The reRank function has a reRankDocs parameter that specifies the number of 
> documents to re-rank. I've observed that increasing this parameter to test 
> its performance impact causes queries to become progressively slower. Even 
> when the parameter value exceeds the total number of documents in the index, 
> further increases continue to slow down the query, which is counterintuitive.
>  
> Therefore, I investigated the code:
>  
> For a query containing re-ranking, such as:
> {code:java}
> {
> "start": "0",
> "rows": 10,
> "fl": "ID,score",
> "q": "*:*",
> "rq": "{!rerank reRankQuery='{!func} 100' reRankDocs=10 
> reRankWeight=2}"
> } {code}
>  
> The current execution logic is as follows:
> 1. Perform normal retrieval using the q parameter.
> 2. Re-score all documents retrieved in the q phase using the rq parameter.
>  
> During the retrieval in phase 1 (using q), a TopScoreDocCollector is created. 
> Underneath, this creates a PriorityQueue which contains an Object[]. The 
> length of this Object[] continuously increases with reRankDocs without any 
> limit. 
>  
> On my local test cluster with limited JVM memory, this can even trigger an 
> OOM, causing the Solr node to crash. I can also reproduce the OOM situation 
> using the SolrCloudTestCase unit test. 
>  
> I think limiting the length of the Object[] array using 
> searcher.getIndexReader().maxDoc() at ReRankCollector would resolve this 
> issue. This way, when reRankDocs exceeds maxDoc, memory allocation will not 
> continue to increase indefinitely. 
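
A sketch of the proposed cap (not the committed patch; the collector construction is illustrative of Lucene's API):

{code:java}
// Cap the collector's internal priority queue at the index size, so an
// oversized reRankDocs no longer drives the Object[] allocation.
int capped = Math.min(reRankDocs, searcher.getIndexReader().maxDoc());
TopScoreDocCollector collector = TopScoreDocCollector.create(capped, Integer.MAX_VALUE);
{code}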






Re: [PR] SOLR-17671 Replication and Backup use an unwrapped Directory. [solr]

2025-02-17 Thread via GitHub


bruno-roustant commented on PR #3185:
URL: https://github.com/apache/solr/pull/3185#issuecomment-2663595482

   @dsmiley I added CHANGES.txt for review.





Re: [PR] SOLR-17671 Replication and Backup use an unwrapped Directory. [solr]

2025-02-17 Thread via GitHub


bruno-roustant commented on code in PR #3185:
URL: https://github.com/apache/solr/pull/3185#discussion_r1958539013


##
solr/core/src/java/org/apache/solr/core/CachingDirectoryFactory.java:
##
@@ -391,25 +393,22 @@ public boolean exists(String path) throws IOException {
   public final Directory get(String path, DirContext dirContext, String 
rawLockType)
   throws IOException {
 String fullPath = normalize(path);
+Directory directory;
+CacheValue cacheValue;
 synchronized (this) {
   if (closed) {
 throw new AlreadyClosedException("Already closed");
   }
 
-  final CacheValue cacheValue = byPathCache.get(fullPath);
-  Directory directory = null;
-  if (cacheValue != null) {
-directory = cacheValue.directory;
-  }
-
-  if (directory == null) {
+  cacheValue = byPathCache.get(fullPath);
+  if (cacheValue == null) {
 directory = create(fullPath, createLockFactory(rawLockType), 
dirContext);

Review Comment:
   Thanks! I'll add the TODO.






[jira] [Commented] (SOLR-17667) Simplify and cleanup zombie server logic in LBSolrClient

2025-02-17 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927865#comment-17927865
 ] 

ASF subversion and git services commented on SOLR-17667:


Commit 3d6458b6a516defc84746a7368b76864a1a1c280 in solr's branch 
refs/heads/main from Houston Putman
[ https://gitbox.apache.org/repos/asf?p=solr.git;h=3d6458b6a51 ]

SOLR-17667: Simplify zombie logic in LBSolrClient (#3176)

(cherry picked from commit b46bc0cc1c937c3e822382c74410b5e83fd9b920)


> Simplify and cleanup zombie server logic in LBSolrClient
> 
>
> Key: SOLR-17667
> URL: https://issues.apache.org/jira/browse/SOLR-17667
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Houston Putman
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently the zombie server logic is quite complex: there is a list of alive 
> servers and a list of zombie servers. When moving servers between these lists, 
> things can get lost. Additionally, the logic is different when using a request 
> that contains a list of URLs, so in some cases zombies can be dropped entirely 
> and never added back to the alive list.
> It would be easier to have a list of allServers for a client and a map of 
> zombieServers. If a server in allServers is not in the zombieServers map, it 
> can be considered alive.
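
A minimal sketch of that bookkeeping (illustrative names, not the actual LBSolrClient fields):

{code:java}
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Servers never move between collections; aliveness is derived by checking
// the stable server list against the zombie map.
class ServerBookkeeping {
  final List<String> allServers = List.of("http://s1/solr", "http://s2/solr");
  final Map<String, Long> zombieServers = new ConcurrentHashMap<>(); // url -> time marked dead

  boolean isAlive(String url) {
    return !zombieServers.containsKey(url);
  }

  List<String> aliveServers() {
    return allServers.stream().filter(this::isAlive).toList();
  }
}
{code}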






Re: [PR] SOLR-17609: Remove HDFS module [solr]

2025-02-17 Thread via GitHub


HoustonPutman commented on PR #2923:
URL: https://github.com/apache/solr/pull/2923#issuecomment-2664036764

   I think you left around a lock file for the module, which was probably 
forgotten when merging in the recent changes.





[jira] [Commented] (SOLR-17351) Cosmetic changes to v2 filestore "get file" API

2025-02-17 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927886#comment-17927886
 ] 

ASF subversion and git services commented on SOLR-17351:


Commit c5ce8dd3437210c6866bc6b197afb998047cc3b2 in solr's branch 
refs/heads/branch_9x from Jason Gerlowski
[ https://gitbox.apache.org/repos/asf?p=solr.git;h=c5ce8dd3437 ]

SOLR-17351: Decompose filestore "get file" API

This commit splits up the "get file" endpoint into a number of different
APIs. Specifically:

  - metadata-fetching has been moved out to the endpoint,
GET /api/cluster/filestore/metadata/some/path.txt
  - Filestore commands such as pushing/pulling files are now
available at: POST /api/cluster/filestore/commands

These divisions allow us to generate SolrRequest/SolrResponse classes
representing these APIs, meaning that SolrJ users no longer need to use
GenericSolrRequest/GenericSolrResponse.

(As a 9.x backport this commit retains the original form of these APIs
to retain backwards compatibility, but this support should be removed in
10.0)
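
For illustration, fetching metadata through the new endpoint with only the JDK
HTTP client (host, port, and file path are placeholders):

{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FilestoreMetadataExample {
  public static void main(String[] args) throws Exception {
    HttpClient http = HttpClient.newHttpClient();
    HttpRequest req = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8983/api/cluster/filestore/metadata/some/path.txt"))
        .GET()
        .build();
    HttpResponse<String> rsp = http.send(req, HttpResponse.BodyHandlers.ofString());
    System.out.println(rsp.body()); // JSON metadata for the filestore entry
  }
}
{code}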


> Cosmetic changes to v2 filestore "get file" API
> ---
>
> Key: SOLR-17351
> URL: https://issues.apache.org/jira/browse/SOLR-17351
> Project: Solr
>  Issue Type: Sub-task
>  Components: Package Manager, v2 API
>Affects Versions: 9.6.1
>Reporter: Jason Gerlowski
>Priority: Minor
>  Labels: pull-request-available
> Attachments: SOLR-17351.test-failures.tgz
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Solr's filestore APIs fit well with the REST-ful design we're targeting with 
> our v2 APIs, with one large exception: the "get file" API current available 
> at {{GET /api/node/files/somePath.txt}}.  This API stands out for a few 
> reasons:
> 1. It uses a different path-prefix than all other filestore APIs.  (i.e. 
> {{/api/node/files}} instead of {{/api/cluster/files}})
> 2. It exposes 4 or 5 conceptually distinct operations. Obviously in the 
> "default" case it allows callers to retrieve filestore contents, but based on 
> query params it can instead:
>   - return filestore entry metadata (when {{meta=true}} is specified)
>   - instruct the receiving Solr node to pull a file from another node's 
> filestore and cache it locally (when {{getFrom=someOtherNode}} is specified)
>   - instruct the receiving Solr node to push its cached copy of a file out to 
> all other Solr nodes (when {{sync=true}} is specified)
> 3. Even in the default case of returning "raw" filestore contents, the API 
> can provide two different styles of response:
>   - if {{wt=json}} is specified Solr will take the filestore entry bytes, 
> attempt to stringify them, and then return a JSON object that uses this 
> string as the value for a "response" key.  It's unclear how this would work 
> for binary content 
>   - for all other values of "wt", the API will return the raw file content.
> We should reconsider this endpoint and see if it can't be massaged into being 
> more in line with our other v2 APIs.  Some cosmetic tweaks will go a long 
> way, but the biggest benefit is likely to come from breaking the endpoint up 
> into multiple distinct APIs.  In its current form, the API returns such a 
> variety of responses that it's hard to write client code for.  (I suspect 
> this is the main reason these "filestore" APIs were never made available in 
> SolrJ.)






Re: [PR] SOLR-17609: Remove trailing file missed from the original SOLR-17609 [solr]

2025-02-17 Thread via GitHub


epugh merged PR #3193:
URL: https://github.com/apache/solr/pull/3193





[jira] [Commented] (SOLR-17609) Remove hdfs module

2025-02-17 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927888#comment-17927888
 ] 

ASF subversion and git services commented on SOLR-17609:


Commit ae5731aabd75b5e38ded0ddc689d0956c1e07496 in solr's branch 
refs/heads/main from Eric Pugh
[ https://gitbox.apache.org/repos/asf?p=solr.git;h=ae5731aabd7 ]

Remove trailing file missed from SOLR-17609 (#3193)



> Remove hdfs module
> --
>
> Key: SOLR-17609
> URL: https://issues.apache.org/jira/browse/SOLR-17609
> Project: Solr
>  Issue Type: Task
>  Components: hdfs
>Affects Versions: main (10.0)
>Reporter: Eric Pugh
>Assignee: Eric Pugh
>Priority: Major
>  Labels: pull-request-available
> Fix For: main (10.0)
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> One of the outcomes of the 2024 Community Survey is that we learned (from our 
> admittedly fairly unscientific responses) that the hdfs module is not used.
> This PR is to understand the impact of removing hdfs in Solr 10.
> See [https://lists.apache.org/thread/hp6bov79rgrg0gb2ozzbzxxn30k2js0h] for 
> discussion on Dev.
>  
> I won't merge this PR till we have more consensus.
> This builds on work started in 
> https://issues.apache.org/jira/browse/SOLR-14660 and 
> https://issues.apache.org/jira/browse/SOLR-14021






[jira] [Resolved] (SOLR-17667) Simplify and cleanup zombie server logic in LBSolrClient

2025-02-17 Thread Houston Putman (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Houston Putman resolved SOLR-17667.
---
Fix Version/s: 9.9
 Assignee: Houston Putman
   Resolution: Fixed

> Simplify and cleanup zombie server logic in LBSolrClient
> 
>
> Key: SOLR-17667
> URL: https://issues.apache.org/jira/browse/SOLR-17667
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Houston Putman
>Assignee: Houston Putman
>Priority: Major
>  Labels: pull-request-available
> Fix For: 9.9
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently the Zombie server logic is quite complex, a list of alive servers 
> and a list of zombie servers. When moving servers between these lists, things 
> can get lost. Additionally, the logic is different when using a request that 
> contains a list of URLS. So zombies can be dropped always in some case, not 
> being added back to the alive list.
> It would be easier to have a list of allServers for a client, and a map of 
> zombieServers. If the server in allServers is not in the zombieServers map, 
> it can be considered alive.






Re: [PR] SOLR-17609: Remove HDFS module [solr]

2025-02-17 Thread via GitHub


epugh commented on PR #2923:
URL: https://github.com/apache/solr/pull/2923#issuecomment-2664130955

   > I think you left around a lock file for the module, which was probably 
forgotten when merging in the recent changes.
   
   Can you elaborate a bit more?  I don't think I know which file you mean!





[jira] [Commented] (SOLR-13212) when TestInjection.nonGracefullClose causes a TestShutdownFailError, test is guaranteed to fail due to leaked objects (causes failures in (Hdfs)RestartWhileUpdatingTest)

2025-02-17 Thread Chris M. Hostetter (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927868#comment-17927868
 ] 

Chris M. Hostetter commented on SOLR-13212:
---

{quote}Should this ticket be marked resolved?   ...  I see a commit...
{quote}
1) The problem is not specific to HDFS, and the class in question still 
exists...
{quote}While investigating suite level test failures in 
{{RestartWhileUpdatingTest}} (and its subclass 
{{HdfsRestartWhileUpdatingTest}}) ...
{quote}
2) The commit(s) did not fix anything; they just disabled the test injection 
randomization that was guaranteed to trigger the test failure.

3) AFAICT no one who understands this test has stepped up to answer the 
fundamental questions as to whether or not this is a "real" bug in Solr, or 
just a badly written test...
{quote}It's not clear to me if the root problem here is that the 
CoreContainer/Jetty isn't handling the ungraceful close well enough, and 
ensuring that the SolrCore (and stuff hanging off of it) is freed up for GC, or 
if the test should be doing something to account for this possibility and 
amending/disabling the ObjectTracker?
{quote}

> when TestInjection.nonGracefullClose causes a TestShutdownFailError, test is 
> guaranteed to fail due to leaked objects (causes failures in 
> (Hdfs)RestartWhileUpdatingTest) 
> 
>
> Key: SOLR-13212
> URL: https://issues.apache.org/jira/browse/SOLR-13212
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Chris M. Hostetter
>Priority: Major
> Attachments: SOLR-13212.patch
>
>
>  While investigating suite level test failures in 
> {{RestartWhileUpdatingTest}} (and its subclass 
> {{HdfsRestartWhileUpdatingTest}}) due to leaked objects, I realized that this 
> happens anytime {{TestInjection.injectNonGracefullClose}} causes a 
> {{TestShutdownFailError}} to be thrown.
> The test will still be able to restart the node, and the test (method) will 
> succeed, but the suite will fail due to the leaked objects.
> NOTE: These are currently the only tests using 
> {{TestInjection.nonGracefullClose}}.  






[PR] SOLR-17609: Remove trailing file missed from the original SOLR-17609 [solr]

2025-02-17 Thread via GitHub


epugh opened a new pull request, #3193:
URL: https://github.com/apache/solr/pull/3193

   https://issues.apache.org/jira/browse/SOLR-17609
   
   This is a follow-up to the original PR https://github.com/apache/solr/pull/2923, which missed this file.





Re: [PR] SOLR-17667: Simplify zombie logic in LBSolrClient [solr]

2025-02-17 Thread via GitHub


HoustonPutman merged PR #3176:
URL: https://github.com/apache/solr/pull/3176





[jira] [Commented] (SOLR-17667) Simplify and cleanup zombie server logic in LBSolrClient

2025-02-17 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927855#comment-17927855
 ] 

ASF subversion and git services commented on SOLR-17667:


Commit b46bc0cc1c937c3e822382c74410b5e83fd9b920 in solr's branch 
refs/heads/branch_9x from Houston Putman
[ https://gitbox.apache.org/repos/asf?p=solr.git;h=b46bc0cc1c9 ]

SOLR-17667: Simplify zombie logic in LBSolrClient (#3176)



> Simplify and cleanup zombie server logic in LBSolrClient
> 
>
> Key: SOLR-17667
> URL: https://issues.apache.org/jira/browse/SOLR-17667
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Houston Putman
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently the zombie server logic is quite complex: there is a list of alive 
> servers and a list of zombie servers. When moving servers between these lists, 
> things can get lost. Additionally, the logic is different when using a request 
> that contains a list of URLs, so in some cases zombies can be dropped entirely 
> and never added back to the alive list.
> It would be easier to have a list of allServers for a client and a map of 
> zombieServers. If a server in allServers is not in the zombieServers map, it 
> can be considered alive.






Re: [PR] SOLR-17447 : Support to early terminate a search based on maxHits per collector. [solr]

2025-02-17 Thread via GitHub


sijuv commented on code in PR #2960:
URL: https://github.com/apache/solr/pull/2960#discussion_r1958586189


##
solr/core/src/java/org/apache/solr/search/EarlyTerminatingCollector.java:
##
@@ -61,11 +69,25 @@ public LeafCollector getLeafCollector(LeafReaderContext 
context) throws IOExcept
   public void collect(int doc) throws IOException {
 super.collect(doc);
 numCollected++;
-if (maxDocsToCollect <= numCollected) {
+terminatedEarly = maxDocsToCollect <= numCollected;
+if (numCollected % chunkSize == 0) {
+  pendingDocsToCollect.add(chunkSize);
+  final long overallCollectedDocCount = 
pendingDocsToCollect.intValue();
+  terminatedEarly = overallCollectedDocCount >= maxDocsToCollect;

Review Comment:
   The boolean is updated only every 100th collect, to reduce the overhead of updating the adder shared across threads.
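
   Condensed, the scheme looks like this (a sketch with the field names from the diff; everything else is illustrative):

```java
import java.util.concurrent.atomic.LongAdder;

class ChunkedHitCounting {
  private static final int CHUNK_SIZE = 100;  // flush granularity, chosen arbitrarily
  private final LongAdder pendingDocsToCollect = new LongAdder(); // shared across threads
  private final int maxDocsToCollect;
  private int numCollected;                   // this collector's local count
  private boolean terminatedEarly;

  ChunkedHitCounting(int maxDocsToCollect) {
    this.maxDocsToCollect = maxDocsToCollect;
  }

  void onCollect() {
    numCollected++;
    terminatedEarly = numCollected >= maxDocsToCollect;
    if (numCollected % CHUNK_SIZE == 0) {
      pendingDocsToCollect.add(CHUNK_SIZE);
      // The shared adder is read only once per chunk, keeping cross-thread
      // contention low at the cost of overshooting by up to CHUNK_SIZE hits.
      terminatedEarly |= pendingDocsToCollect.intValue() >= maxDocsToCollect;
    }
  }
}
```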






[jira] [Resolved] (SOLR-13924) MoveReplica failures when using HDFS (NullPointerException)

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-13924.
--
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> MoveReplica failures when using HDFS (NullPointerException)
> ---
>
> Key: SOLR-13924
> URL: https://issues.apache.org/jira/browse/SOLR-13924
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 8.3
>Reporter: Chris M. Hostetter
>Assignee: Shalin Shekhar Mangar
>Priority: Major
>
> Based on recent jenkins test failures, it appears that attempting to use the 
> "MoveReplica" command on HDFS has a high chance of failure due to an 
> underlying NPE.
> I'm not sure if this bug *only* affects HDFS, or if it's just more likly to 
> occur when using HDFS due to some timing quirks.
> It's also possible that the bug impacts non-HDFS users just as much as HDFS 
> users, but only manifests in our tests due to some quirk of our 
> {{cloud-hdfs}} test configs.
> The problem appears to be new in 8.3 as a result of changes made in SOLR-13843






[jira] [Resolved] (SOLR-7288) org.apache.hadoop.hdfs.PeerCache is not stopped in many of our tests after server and client shutdown.

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-7288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-7288.
-
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> org.apache.hadoop.hdfs.PeerCache is not stopped in many of our tests after 
> server and client shutdown.
> --
>
> Key: SOLR-7288
> URL: https://issues.apache.org/jira/browse/SOLR-7288
> Project: Solr
>  Issue Type: Test
>  Components: Hadoop Integration, hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>







[jira] [Resolved] (SOLR-10222) Remove per-core blockcache

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-10222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-10222.
--
Resolution: Won't Fix

I *believe* this is HDFS specific, not Solr specific, so I'm going to close it 
since HDFS has been removed in Solr 10. We did keep the blockcache code by 
moving it into solr/core, so if we need to keep this ticket for the existing 
code, then please reopen it.

> Remove per-core blockcache
> --
>
> Key: SOLR-10222
> URL: https://issues.apache.org/jira/browse/SOLR-10222
> Project: Solr
>  Issue Type: Improvement
>  Components: Hadoop Integration, hdfs
>Reporter: Mike Drob
>Priority: Major
>
> We should clean up some of the details around the use of the block cache.
> Can we deprecate the per-core blockcache usage in Solr 6.x and remove it from 
> 7? Or does that need to happen in 7 and 8?
> Maybe it makes sense to move the configuration to solr.xml at the same time






[jira] [Resolved] (SOLR-10092) HDFS: AutoAddReplica fails

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-10092.
--
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> HDFS: AutoAddReplica fails
> --
>
> Key: SOLR-10092
> URL: https://issues.apache.org/jira/browse/SOLR-10092
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.3
>Reporter: Hendrik Haddorp
>Assignee: Mark Miller
>Priority: Major
> Attachments: SOLR-10092.patch, SOLR-10092.patch
>
>
> OverseerAutoReplicaFailoverThread fails to create replacement core with this 
> exception:
> o.a.s.c.OverseerAutoReplicaFailoverThread Exception trying to create new 
> replica on 
> http://...:9000/solr:org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
>  Error from server at http://...:9000/solr: Error CREATEing SolrCore 
> 'test2.collection-09_shard1_replica1': Unable to create core 
> [test2.collection-09_shard1_replica1] Caused by: No shard id for 
> CoreDescriptor[name=test2.collection-09_shard1_replica1;instanceDir=/var/opt/solr/test2.collection-09_shard1_replica1]
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:593)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
> at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.createSolrCore(OverseerAutoReplicaFailoverThread.java:456)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.lambda$addReplica$0(OverseerAutoReplicaFailoverThread.java:251)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745) 
> also see this mail thread about the issue: 
> https://lists.apache.org/thread.html/%3CCAA70BoWyzbvQuJTyzaG4Kx1tj0Djgcm+MV=x_hoac1e6cse...@mail.gmail.com%3E






[jira] [Resolved] (SOLR-7393) HDFS poor indexing performance

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-7393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-7393.
-
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> HDFS poor indexing performance
> --
>
> Key: SOLR-7393
> URL: https://issues.apache.org/jira/browse/SOLR-7393
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration, hdfs, SolrCloud
>Affects Versions: 4.7.2, 4.10.3
> Environment: HDP 2.2 / HDP Search + LucidWorks Hive SerDe
>Reporter: Hari Sekhon
>Priority: Critical
>
> When switching SolrCloud from local dataDir to HDFS directory factory 
> indexing performance falls through the floor.
> I've also observed very high latency on both QTime and code timer on HDFS 
> writes compared to local dataDir writes (using check_solr_write.pl from 
> https://github.com/harisekhon/nagios-plugins). Single test document write 
> latency jumps from a few dozen milliseconds to 700-1700 millisecs, over 2000 
> on some runs.
> A previous bulk online indexing job from Hive to SolrCloud that took 2 hours 
> for 620M rows ended up taking a projected 20+ hours and never completing, 
> usually breaking around the 16-17 hour timeframe when left overnight.
> It's worth noting that I had to disable the HDFS write cache, which was 
> causing index corruption (SOLR-7255), on the advice of Mark Miller, who tells 
> me this doesn't make much performance difference anyway.
> This is probably also related to SolrCloud not respecting HDFS replication 
> factor, effectively making 4 copies of data instead of 2 (SOLR-6528), but 
> that alone doesn't account for the massive performance drop going from 
> vanilla SolrCloud to SolrCloud on HDFS HA + Kerberos.
> Hari Sekhon
> http://www.linkedin.com/in/harisekhon






[jira] [Updated] (SOLR-13630) Check if HdfsTestUtil.teardownClass() may shutdown HDFS fully

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh updated SOLR-13630:
-
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

HDFS has been removed in Solr 10.

> Check if HdfsTestUtil.teardownClass() may shutdown HDFS fully
> -
>
> Key: SOLR-13630
> URL: https://issues.apache.org/jira/browse/SOLR-13630
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-13630.patch
>
>
> I want to check if it's feasible to stop all hdfs threads instead of ignoring 
> them as lingering threads. 
> Spoiler: -no sense-.






[jira] [Updated] (SOLR-14507) Option to allow location override if solr.hdfs.home isn't set in backup repo

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh updated SOLR-14507:
-
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

HDFS has been removed in Solr 10.

> Option to allow location override if solr.hdfs.home isn't set in backup repo
> 
>
> Key: SOLR-14507
> URL: https://issues.apache.org/jira/browse/SOLR-14507
> Project: Solr
>  Issue Type: Improvement
>  Components: Backup/Restore
>Reporter: Haley Reeve
>Priority: Major
> Attachments: SOLR-14507-2.patch, SOLR-14507.patch
>
>
> The Solr backup/restore API has an optional parameter for specifying the 
> directory to backup to. However, the HdfsBackupRepository class doesn't use 
> this location when creating the HDFS Filesystem object. Instead it uses the 
> solr.hdfs.home setting configured in solr.xml. This functionally means that 
> the backup location, which can be passed to the API call dynamically, is 
> limited by the static home directory defined in solr.xml. This requirement 
> means that if the solr.hdfs.home path and backup location don't share the 
> same URI scheme and hostname, the backup will fail, even if the backup could 
> otherwise have been written to the specified location successfully.
> This request is to allow the option of using the location setting to 
> initialize the filesystem object.
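
For illustration, a minimal sketch of the requested behavior, with hypothetical 
names ({{LocationAwareBackupFs}}, {{forLocation}}) rather than the actual 
HdfsBackupRepository code: the filesystem is resolved from the location URI 
itself instead of from {{solr.hdfs.home}}.

{code}
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public final class LocationAwareBackupFs {
  private LocationAwareBackupFs() {}

  // Hypothetical helper: FileSystem.get(URI, Configuration) takes the scheme
  // and authority from the backup location, so hdfs://, file:// etc. all
  // resolve correctly regardless of what solr.hdfs.home points at.
  public static FileSystem forLocation(String backupLocation, Configuration conf)
      throws IOException {
    return FileSystem.get(URI.create(backupLocation), conf);
  }
}
{code}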






[jira] [Resolved] (SOLR-7287) org.apache.hadoop.hdfs.LeaseRenewer is not stopped after HDFS is shutdown if it has been started by FS#recoverLease.

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-7287.
-
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> org.apache.hadoop.hdfs.LeaseRenewer is not stopped after HDFS is shutdown if 
> it has been started by FS#recoverLease.
> 
>
> Key: SOLR-7287
> URL: https://issues.apache.org/jira/browse/SOLR-7287
> Project: Solr
>  Issue Type: Test
>  Components: Hadoop Integration, hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>







[jira] [Resolved] (SOLR-7395) Major numDocs inconsistency between leader and follower replicas in SolrCloud on HDFS

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-7395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-7395.
-
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> Major numDocs inconsistency between leader and follower replicas in SolrCloud 
> on HDFS
> -
>
> Key: SOLR-7395
> URL: https://issues.apache.org/jira/browse/SOLR-7395
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration, hdfs, SolrCloud
>Affects Versions: 4.10.3
> Environment: HDP 2.2 / HDP Search
>Reporter: Hari Sekhon
>Priority: Major
> Attachments: 145_core.png, 146_core.png, 147_core.png, 149_core.png, 
> Cloud UI.png
>
>
> I've observed major numDocs inconsistencies between leader and follower in 
> SolrCloud running on HDFS during bulk indexing jobs from Hive.
> See attached screenshots which show the leader/follower relationships and 
> screenshots of the core UI showing the huge numDocs discrepancies of 20k vs 
> 193k docs.
> This initially seemed related to SOLR-4260, except that was supposed to be 
> fixed several versions ago and this is running on HDFS which may be the 
> difference.
> Hari Sekhon
> http://www.linkedin.com/in/harisekhon






[jira] [Resolved] (SOLR-7489) Don't wait as long to try and recover hdfs leases on transaction log files.

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-7489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-7489.
-
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> Don't wait as long to try and recover hdfs leases on transaction log files.
> ---
>
> Key: SOLR-7489
> URL: https://issues.apache.org/jira/browse/SOLR-7489
> Project: Solr
>  Issue Type: Improvement
>  Components: Hadoop Integration, hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>
> We initially just took most of this code from hbase, which will wait for up to 
> 15 minutes. This doesn't seem ideal - we should give up sooner and treat the 
> file as not recoverable.
> We also need to fix the possible data loss message. This is really the same 
> as if a transaction log on local disk were to become corrupt, and if you have 
> a replica to recover from, things will be fine.
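
A hedged sketch of the "give up sooner" idea; the helper name, the timeout 
handling, and the one-second retry interval are assumptions, not the actual 
FSHDFSUtils code.

{code}
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public final class LeaseRecovery {
  private LeaseRecovery() {}

  // Assumed helper: retry lease recovery only for a short, configurable window
  // and report failure so the caller can treat the log as unrecoverable.
  public static boolean recoverLeaseQuickly(
      DistributedFileSystem dfs, Path p, long timeoutMs)
      throws IOException, InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (dfs.recoverLease(p)) {
        return true; // lease released; the file can be read safely
      }
      Thread.sleep(1000L); // brief pause between recovery attempts
    }
    return false; // caller decides: fail fast instead of waiting 15 minutes
  }
}
{code}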






[jira] [Resolved] (SOLR-13856) 8.x HdfsWriteToMultipleCollectionsTest jenkins failures due to TImeoutException

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-13856.
--
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> 8.x HdfsWriteToMultipleCollectionsTest jenkins failures due to 
> TImeoutException
> ---
>
> Key: SOLR-13856
> URL: https://issues.apache.org/jira/browse/SOLR-13856
> Project: Solr
>  Issue Type: Test
>Reporter: Chris M. Hostetter
>Priority: Major
> Attachments: 8.3.fail1.log.txt, 8.3.fail2.log.txt, 8.3.fail3.log.txt, 
> 8.x.fail1.log.txt, 8.x.fail2.log.txt, 8.x.fail3.log.txt, 
> HdfsWriteToMultipleCollectionsTest.fails.txt, 
> apache_Lucene-Solr-NightlyTests-8.3_25.log.txt, 
> apache_Lucene-Solr-repro_3681.log.txt
>
>
> I've noticed a trend in jenkins failures where 
> HdfsWriteToMultipleCollectionsTest...
> * does _NOT_ ever seem to fail on master even w/heavy beasting
> * fails on 8.x (28c1049a258bbd060a80803c72e1c6cadc784dab) and 8.3 
> (25968e3b75e5e9a4f2a64de10500aae10a257bdd) easily
> ** failing seeds frequently reproduce, but not 100%
> ** seeds reproduce even when tested using newer (ie: java11) JVMs
> ** doesn't fail when commenting out HDFS aspects of test
> *** suggests failure cause is somehow specific to HDFS, not differences in 
> the 8x/master HTTP/solr indexing stack...
> *However:* There are currently zero differences between the *.hdfs.* packaged 
> solr code (src or test) on branch_8x vs master; likewise 8x and master also 
> use the exact same hadoop jars.
> So what the hell is different?






[jira] [Commented] (SOLR-13212) when TestInjection.nonGracefullClose causes a TestShutdownFailError, test is garunteed tofail due to leaked objects (causes failures in (Hdfs)RestartWhileUpdatingTest)

2025-02-17 Thread Eric Pugh (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927675#comment-17927675
 ] 

Eric Pugh commented on SOLR-13212:
--

Should this ticket be marked resolved? Going through and closing old 
HDFS-related tickets, I stumbled across this one. I see a commit

> when TestInjection.nonGracefullClose causes a TestShutdownFailError, test is 
> garunteed tofail due to leaked objects (causes failures in 
> (Hdfs)RestartWhileUpdatingTest) 
> 
>
> Key: SOLR-13212
> URL: https://issues.apache.org/jira/browse/SOLR-13212
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Chris M. Hostetter
>Priority: Major
> Attachments: SOLR-13212.patch
>
>
>  While investigating suite-level test failures in 
> {{RestartWhileUpdatingTest}} (and its subclass 
> {{HdfsRestartWhileUpdatingTest}}) due to leaked objects, I realized that this
> happens anytime {{TestInjection.injectNonGracefullClose}} causes a 
> {{TestShutdownFailError}} to be thrown.
> The test will still be able to restart the node, and the test (method) will 
> succeed, but the suite will fail due to the leaked objects.
> NOTE: These are currently the only tests using 
> {{TestInjection.nonGracefullClose}}.  






[jira] [Resolved] (SOLR-11707) allow to configure the HDFS block size

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-11707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-11707.
--
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> allow to configure the HDFS block size
> --
>
> Key: SOLR-11707
> URL: https://issues.apache.org/jira/browse/SOLR-11707
> Project: Solr
>  Issue Type: Improvement
>  Components: Hadoop Integration, hdfs
>Reporter: Hendrik Haddorp
>Priority: Minor
>
> Currently index files are created in HDFS with the block size that is defined 
> on the namenode. For that the HdfsFileWriter reads out the config from the 
> server and then specifies the size (and replication factor) in the 
> FileSystem.create call.
> For the write.lock files things work slightly different. These are being 
> created by the HdfsLockFactory without specifying a block size (or 
> replication factor). This results in a default being picked by the HDFS 
> client, which is 128MB.
> So currently files are being created with different block sizes if the 
> namenode is configured to something other than 128MB. It would be good if Solr 
> allowed configuring the block size to be used. This is especially useful 
> if the Solr admin is not the HDFS admin and if you have different 
> applications using HDFS that have different block-size requirements.
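
A sketch of what configurable creation could look like; the property names 
{{solr.hdfs.blocksize}} and {{solr.hdfs.replication}} and the helper class are 
assumptions for illustration, not existing Solr settings.

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class BlockSizeAwareWriter {
  private BlockSizeAwareWriter() {}

  // Hypothetical sketch: pass an explicit block size (and replication factor)
  // instead of letting the HDFS client fall back to its 128MB default.
  public static FSDataOutputStream create(
      FileSystem fs, Path path, Configuration conf) throws IOException {
    long blockSize = conf.getLong("solr.hdfs.blocksize", 128L * 1024 * 1024);
    short replication = (short) conf.getInt("solr.hdfs.replication", 3);
    // FileSystem.create(path, overwrite, bufferSize, replication, blockSize)
    return fs.create(path, true, 4096, replication, blockSize);
  }
}
{code}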






[jira] [Resolved] (SOLR-7555) Display total space and available space in Admin

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-7555.
-
Resolution: Fixed

Going with Jan's suggestion.

> Display total space and available space in Admin
> 
>
> Key: SOLR-7555
> URL: https://issues.apache.org/jira/browse/SOLR-7555
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.1
>Reporter: Eric Pugh
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: 6.0
>
> Attachments: DiskSpaceAwareDirectory.java, 
> SOLR-7555-display_disk_space.patch, SOLR-7555-display_disk_space_v2.patch, 
> SOLR-7555-display_disk_space_v3.patch, SOLR-7555-display_disk_space_v4.patch, 
> SOLR-7555-display_disk_space_v5.patch, SOLR-7555.patch, SOLR-7555.patch, 
> SOLR-7555.patch
>
>
> Frequently I have access to the Solr Admin console, but not the underlying 
> server, and I'm curious how much space remains available. This little patch 
> exposes the total volume size as well as the usable space remaining:
> !https://monosnap.com/file/VqlReekCFwpK6utI3lP18fbPqrGI4b.png!
> I'm not sure if this is the best place to put this, as every shard will share 
> the same data, so maybe it should be on the top level Dashboard?  Also not 
> sure what to call the fields! 
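
The numbers in question are available from plain {{java.nio.file}}; a minimal 
standalone sketch (the data-directory argument is a placeholder, not anything 
the patch defines):

{code}
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DiskSpaceProbe {
  public static void main(String[] args) throws Exception {
    // Point this at a core's data directory; "." is just a placeholder.
    Path dataDir = Paths.get(args.length > 0 ? args[0] : ".");
    FileStore store = Files.getFileStore(dataDir);
    System.out.println("total bytes:  " + store.getTotalSpace());
    System.out.println("usable bytes: " + store.getUsableSpace());
  }
}
{code}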






[jira] [Resolved] (SOLR-13908) Possible bugs when using HdfsDirectoryFactory w/ softCommit=true + openSearcher=true

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-13908.
--
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> Possible bugs when using HdfsDirectoryFactory w/ softCommit=true + 
> openSearcher=true
> 
>
> Key: SOLR-13908
> URL: https://issues.apache.org/jira/browse/SOLR-13908
> Project: Solr
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Chris M. Hostetter
>Priority: Major
>
> While working on SOLR-13872 something caught my eye that seems fishy
> *Background:*
> SOLR-4916 introduced the API 
> {{DirectoryFactory.searchersReserveCommitPoints()}} -- a method that 
> {{SolrIndexSearcher}} uses to decide if it needs to explicitly save/release 
> the {{IndexCommit}} point of it's {{DirectoryReader}} with the 
> {{IndexDeletionPolicytWrapper}}, for use on Filesystems that don't in some 
> way "protect" open files...
> {code:title=SolrIndexSearcher}
> if (directoryFactory.searchersReserveCommitPoints()) {
>   // reserve commit point for life of searcher
>   
> core.getDeletionPolicy().saveCommitPoint(reader.getIndexCommit().getGeneration());
> }
> {code}
> {code:title=DirectoryFactory}
>   /**
>* If your implementation can count on delete-on-last-close semantics
>* or throws an exception when trying to remove a file in use, return
>* false (eg NFS). Otherwise, return true. Defaults to returning false.
>* 
>* @return true if factory impl requires that Searcher's explicitly
>* reserve commit points.
>*/
>   public boolean searchersReserveCommitPoints() {
> return false;
>   }
> {code}
> {{HdfsDirectoryFactory}} is (still) the only {{DirectoryFactory}} Impl that 
> returns {{true}}.
> 
> *Concern:*
> As noted in LUCENE-9040  The behavior of {{DirectoryReader.getIndexCommit()}} 
> is a little weird / underspecified when dealing with an "NRT" {{IndexReader}} 
> (opened directly off of an {{IndexWriter}} using "un-committed" changes) ... 
> which is exactly what {{SolrIndexSearcher}} is using in solr setups that use 
> {{softCommit=true&openSearcher=false}}.
> In particular the {{IndexCommit.getGeneration()}} value that will be used 
> when {{SolrIndexSearcher}} executes 
> {{core.getDeletionPolicy().saveCommitPoint(reader.getIndexCommit().getGeneration());}}
>  will be (as of the current code) the {{generation}} of the last _hard_ 
> commit -- meaning that new segment/data files since the last "hard commit" 
> will not be protected from deletion if additional commits/merges happen on 
> the index during the life of the {{SolrIndexSearcher}} -- either via 
> concurrent rapid commits, or via 
> {{commit=true&softCommit=false&openSearcher=false}}.
> I have not investigated this in depth, but I believe there is risk here of 
> unpredictable bugs when using HDFS in conjunction with 
> {{softCommit=true&openSearcher=true}}.
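
A hedged fragment illustrating the concern; {{indexWriter}} and 
{{deletionPolicy}} are assumed setup, and the behavior noted is the one 
described above, not something verified here.

{code}
// An NRT reader opened straight off the writer reports the generation of the
// last *hard* commit, not of its own in-memory view, so saving that commit
// point protects nothing newer than the last hard commit.
DirectoryReader nrtReader = DirectoryReader.open(indexWriter); // softCommit-style
long gen = nrtReader.getIndexCommit().getGeneration();         // last hard commit
deletionPolicy.saveCommitPoint(gen); // newer segment files remain deletable
{code}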






[jira] [Resolved] (SOLR-9169) External file fields do not work with HDFS

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-9169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-9169.
-
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> External file fields do not work with HDFS
> --
>
> Key: SOLR-9169
> URL: https://issues.apache.org/jira/browse/SOLR-9169
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.0
>Reporter: David Johnson
>Priority: Major
>
> The external file fields do not currently have HDFS support. They attempt to 
> read using the VersionedFile class, which only uses the basic Java IO 
> classes, resulting in an "unable to open file / file not found" error.






[jira] [Resolved] (SOLR-16334) CVE-2021-37404 | CVSS 9 | org.apache.hadoop_hadoop-common

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-16334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-16334.
--
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> CVE-2021-37404 | CVSS 9 | org.apache.hadoop_hadoop-common
> -
>
> Key: SOLR-16334
> URL: https://issues.apache.org/jira/browse/SOLR-16334
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 8.11.2
>Reporter: Chris Sabelstrom
>Priority: Major
> Attachments: image-2022-08-09-10-52-28-417.png
>
>
> Our security scanner detected the following vulnerability. Please upgrade to 
> the version noted in the Status column. Please fix this for 8.11.x.
> !image-2022-08-09-10-52-28-417.png!






[jira] [Resolved] (SOLR-14373) HDFS block cache allows overallocation

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-14373.
--
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> HDFS block cache allows overallocation
> --
>
> Key: SOLR-14373
> URL: https://issues.apache.org/jira/browse/SOLR-14373
> Project: Solr
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 4.10
>Reporter: Istvan Farkas
>Priority: Minor
>
> For the HDFS block cache, when we allocate more slabs than the available 
> direct memory allows, the error message seems to be hidden.
> In such cases the HdfsDirectoryFactory throws an OutOfMemoryError, which 
> is caught in the HdfsDirectoryFactory itself and rethrown as a 
> RuntimeException: 
> {code}
>  try {
>   blockCache = new BlockCache(metrics, directAllocation, totalMemory, 
> slabSize, blockSize);
> } catch (OutOfMemoryError e) {
>   throw new RuntimeException(
>   "The max direct memory is likely too low.  Either increase it (by 
> adding -XX:MaxDirectMemorySize=g -XX:+UseLargePages to your containers 
> startup args)"
>   + " or disable direct allocation using 
> solr.hdfs.blockcache.direct.memory.allocation=false in solrconfig.xml. If you 
> are putting the block cache on the heap,"
>   + " your java heap size might not be large enough."
>   + " Failed allocating ~" + totalMemory / 100.0 + " MB.",
>   e);
> }
> {code}
> This then manifests as a NullPointerException during core load.
> {code}
> 2020-02-24 06:50:23,492 ERROR (coreLoadExecutor-5-thread-8)-c: 
> collection1-s:shard2-r:core_node2-x: 
> collection1_shard2_replica1-o.a.s.c.SolrCore: Error while closing
> java.lang.NullPointerException
> at org.apache.solr.core.SolrCore.close(SolrCore.java:1352)
> at org.apache.solr.core.SolrCore.(SolrCore.java:967)
> {code}
> When directAllocation is true, the directoryFactory has an approximation of 
> the memory to be allocated.
> {code}
> 2020-02-24 06:49:53,153 INFO 
> (coreLoadExecutor-5-thread-8)-c:collection1-s:shard2-r:core_node2-x:collection1_shard2_replica1-o.a.s.c.HdfsDirectoryFactory:
>  Number of slabs of block cache [16384] with direct memory allocation set to 
> [true]
> 2020-02-24 06:49:53,153 INFO 
> (coreLoadExecutor-5-thread-8)-c:collection1-s:shard2-r:core_node2-x:collection1_shard2_replica1-o.a.s.c.HdfsDirectoryFactory:
>  Block cache target memory usage, slab size of [134217728] will allocate 
> [16384] slabs and use ~[219902322] bytes
> {code}
> This was detected on Solr 4.10, but it seems to also affect current 
> versions; I will double-check.
> Plan to resolve:
> - correct the logging and throwable instance checking so it does not manifest 
> as a NullPointerException during core load
> - add a check for whether the memory to be allocated is higher than the 
> available direct memory; if so, fall back to a smaller slab count and 
> log a warning message (a sketch follows below)
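
A hedged fragment of what that second bullet might look like; 
{{numberOfSlabs}}, {{slabSize}}, {{maxDirectMemoryBytes}} and {{log}} are 
assumed surrounding fields in HdfsDirectoryFactory, not existing names.

{code}
// Fragment sketch: cap the slab count so the cache never asks for more
// direct memory than the JVM will grant, instead of failing at allocation.
long requested = (long) numberOfSlabs * slabSize;
if (requested > maxDirectMemoryBytes) {
  int fallback = (int) Math.max(1L, maxDirectMemoryBytes / slabSize);
  log.warn("Block cache needs ~{} bytes but only {} bytes of direct memory "
      + "are available; falling back to {} slabs",
      requested, maxDirectMemoryBytes, fallback);
  numberOfSlabs = fallback;
}
{code}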






[jira] [Resolved] (SOLR-11211) Too many documents, composite IndexReaders cannot exceed 2147483519

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-11211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-11211.
--
Resolution: Won't Fix

This may be a valid non-HDFS bug too; if so, please reopen. Closing since 
HDFS has been removed in Solr 10.

> Too many documents, composite IndexReaders cannot exceed 2147483519
> ---
>
> Key: SOLR-11211
> URL: https://issues.apache.org/jira/browse/SOLR-11211
> Project: Solr
>  Issue Type: Task
> Environment: Hadoop Centos6
>Reporter: Wael
>Priority: Major
>
> I am running a single-node Hadoop Solr machine with 64 GB of RAM.
> The issue is that I was using the machine successfully until yesterday, when 
> I made a restart and one of the indexes I am working on wouldn't start, giving 
> the error: "Too many documents, composite IndexReaders cannot exceed 
> 2147483519". 
> I wonder how Solr allowed me to add more documents than a single shard 
> can take. I need a solution to start up the index, and I don't want to lose 
> all the data, as I only have a two-week-old backup. 






[jira] [Resolved] (SOLR-7360) Enable HDFS HA NameNode setup and fail-over testing added in SOLR-7311.

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-7360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-7360.
-
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> Enable HDFS HA NameNode setup and fail-over testing added in SOLR-7311.
> ---
>
> Key: SOLR-7360
> URL: https://issues.apache.org/jira/browse/SOLR-7360
> Project: Solr
>  Issue Type: Test
>  Components: Hadoop Integration, hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>







[jira] [Resolved] (SOLR-7378) Be more conservative about loading a core when hdfs transaction log could not be recovered

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-7378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-7378.
-
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> Be more conservative about loading a core when hdfs transaction log could not 
> be recovered
> --
>
> Key: SOLR-7378
> URL: https://issues.apache.org/jira/browse/SOLR-7378
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration, hdfs, SolrCloud
>Affects Versions: 5.0
>Reporter: Gregory Chanan
>Priority: Major
>
> Today, if an HdfsTransactionLog cannot recover its lease, you get the 
> following warning in the log:
> {code}
>   log.warn("Cannot recoverLease after trying for " +
> conf.getInt("solr.hdfs.lease.recovery.timeout", 90) +
> "ms (solr.hdfs.lease.recovery.timeout); continuing, but may be 
> DATALOSS!!!; " +
> getLogMessageDetail(nbAttempt, p, startWaiting));
> {code}
> from: 
> https://github.com/apache/lucene-solr/blob/a8c24b7f02d4e4c172926d04654bcc007f6c29d2/solr/core/src/java/org/apache/solr/util/FSHDFSUtils.java#L145-L148
> But some deployments may not actually want to continue if there is potential 
> data loss; they may want to investigate what the underlying issue is with 
> HDFS first. And there's no way outside of looking at the logs to figure out 
> what is going on.
> There's a range of possibilities here, but here are a couple of ideas:
> 1) a config parameter controlling whether to continue with potential data 
> loss or not (sketched below)
> 2) load, but require a special flag to read potentially incorrect data 
> (similar to shards.tolerant, data.tolerant or something?)
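
A minimal sketch of option (1); the property name 
{{solr.hdfs.tlog.tolerateDataLoss}} is an assumption, and {{conf}}, 
{{recovered}} and {{p}} stand in for the surrounding FSHDFSUtils-style code.

{code}
// Hypothetical guard: refuse to load the core rather than continue past a
// failed lease recovery with potential data loss.
boolean tolerateDataLoss =
    conf.getBoolean("solr.hdfs.tlog.tolerateDataLoss", false);
if (!recovered && !tolerateDataLoss) {
  throw new java.io.IOException("Cannot recover lease on " + p
      + " and potential data loss is not tolerated; refusing to load the core");
}
{code}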






[jira] [Resolved] (SOLR-8112) HdfsTransactionLog#rollback should throw a not supported exception.

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-8112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-8112.
-
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> HdfsTransactionLog#rollback should throw a not supported exception.
> ---
>
> Key: SOLR-8112
> URL: https://issues.apache.org/jira/browse/SOLR-8112
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
>
> This method currently won't work until we get hdfs truncate support in 2.8.
> Nothing currently calls this; we work around the lack of support in 
> HdfsUpdateLog, but we should clean this up anyway.






[jira] [Resolved] (SOLR-7322) Have a solr-specific hadoop Configuration

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-7322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-7322.
-
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> Have a solr-specific hadoop Configuration
> -
>
> Key: SOLR-7322
> URL: https://issues.apache.org/jira/browse/SOLR-7322
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Priority: Major
>
> There are a few places in the code that set up hadoop Configurations, e.g. 
> HdfsUpdateLog, HdfsDirectoryFactory.  In apache sentry, we also set up a 
> Configuration with the same properties as the above two (i.e. reading the 
> hdfs.conf.dir resources).  It would help to unify these usages.
> Similar to say, the HBaseConfiguration: 
> https://github.com/apache/hbase/blob/0a500e5d305b0c75a6a357a5ff7a9210a615a007/hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java
> Maybe we would want to make Configuration itself a forbidden API and force 
> everyone to use our version as well; a sketch of such a factory follows.
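
A sketch modeled on {{HBaseConfiguration.create()}}; the class name and the 
exact resource list are assumptions for illustration.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

// Hypothetical factory: one place that layers Solr's HDFS config resources
// onto a fresh Hadoop Configuration, instead of each caller doing it itself.
public final class SolrHdfsConfiguration {
  private SolrHdfsConfiguration() {}

  public static Configuration create(String hdfsConfDir) {
    Configuration conf = new Configuration();
    if (hdfsConfDir != null) {
      // The same kind of resources HdfsDirectoryFactory/HdfsUpdateLog read.
      conf.addResource(new Path(hdfsConfDir, "core-site.xml"));
      conf.addResource(new Path(hdfsConfDir, "hdfs-site.xml"));
    }
    return conf;
  }
}
{code}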






[jira] [Resolved] (SOLR-7438) Look into using new HDFS truncate feature in HdfsTransactionLog.

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-7438.
-
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> Look into using new HDFS truncate feature in HdfsTransactionLog.
> 
>
> Key: SOLR-7438
> URL: https://issues.apache.org/jira/browse/SOLR-7438
> Project: Solr
>  Issue Type: Improvement
>  Components: Hadoop Integration, hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>
> Looks like truncate is added in 2.7.
> See HdfsTransactionLog:
> {code}
>   // HACK
>   // while waiting for HDFS-3107, instead of quickly
>   // dropping, we slowly apply
>   // This is somewhat brittle, but current usage
>   // allows for it
>   @Override
>   public boolean dropBufferedUpdates() {
> Future future = applyBufferedUpdates();
> if (future != null) {
>   try {
> future.get();
>   } catch (InterruptedException | ExecutionException e) {
> throw new RuntimeException(e);
>   }
> }
> return true;
>   }
> {code}
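
For reference, a hedged sketch of a truncate-based drop; {{fileSystem}}, 
{{tlogPath}} and {{lastDurableOffset}} are assumed names, while 
{{FileSystem.truncate}} itself exists as of Hadoop 2.7.

{code}
// Hypothetical replacement for the slow-apply workaround above: cut the log
// back to the last durable offset. truncate() returns false when the new
// length is not block-aligned; the file stays busy until block recovery ends.
boolean done = fileSystem.truncate(tlogPath, lastDurableOffset);
if (!done) {
  // wait or retry until HDFS finishes recovering the last block
}
{code}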






[jira] [Resolved] (SOLR-13127) Solr doesn't make difference by request methods

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-13127.
--
Resolution: Won't Fix

I believe that since the hadoop-auth module was removed in Solr 10, this is no 
longer a valid issue. Please reopen if this issue is independent of / does not 
rely on the hadoop-auth module.

> Solr doesn't make difference by request methods
> ---
>
> Key: SOLR-13127
> URL: https://issues.apache.org/jira/browse/SOLR-13127
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 7.4
> Environment: Ubuntu 16.04
> Solr 7.4
> Kerberos
> Java 8
>Reporter: Geza Nagy
>Priority: Major
>
> I tested SolrCloud with Kerberos auth and found an interesting scenario.
> +*Symptom:*+
> I tried to call the Solr admin API to add a collection and got back a 
> response of 400 because the collection already exists.
> +*What I used:*+
> HTTPUrlConnection + hadoop security's Kerberos Authenticator.
> [https://docs.oracle.com/javase/8/docs/api/java/net/HttpURLConnection.html]
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java]
>  
> +*Root cause:*+
> The Kerberos Authenticator uses OPTIONS as the request method when it checks 
> whether the client is already authenticated; if it is, the OPTIONS request 
> reaches the Solr endpoint and runs the action included in the URI (since I 
> provide the full URL to the authenticator).
> So the action is performed during authentication, and when my original 
> request hits the endpoint the collection has already been created.
> This can happen because Solr has no functionality to handle 
> the different request methods properly.
>  
> In my opinion it's not proper behavior if I can call any endpoint with 
> any request method and accidentally perform an action when I just want to 
> check whether I'm authenticated or not.
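
An illustrative guard, not Solr's actual dispatch code, that would answer the 
authentication probe without ever executing the action encoded in the URI:

{code}
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public final class OptionsProbeGuard {
  private OptionsProbeGuard() {}

  // Hypothetical sketch: short-circuit OPTIONS so a Kerberos pre-flight can
  // never trigger the collection action named in the request URI.
  public static boolean handled(HttpServletRequest req, HttpServletResponse resp) {
    if ("OPTIONS".equalsIgnoreCase(req.getMethod())) {
      resp.setHeader("Allow", "GET, POST, OPTIONS");
      resp.setStatus(HttpServletResponse.SC_OK); // probe answered
      return true; // caller must not fall through to the requested action
    }
    return false;
  }
}
{code}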






[jira] [Commented] (SOLR-17609) Remove hdfs module

2025-02-17 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927701#comment-17927701
 ] 

ASF subversion and git services commented on SOLR-17609:


Commit 7e9ce82ace42b3273e4a8827a7d6dd15fc479f7e in solr's branch 
refs/heads/branch_9x from Eric Pugh
[ https://gitbox.apache.org/repos/asf?p=solr.git;h=7e9ce82ace4 ]

SOLR-17609:  mark deprecation of HDFS module in 9x and removal in 10 (#3041)

* Deprecate HDFS Module with warning about removal in Solr 10

* Update solr/solr-ref-guide/modules/deployment-guide/pages/solr-on-hdfs.adoc

-

Co-authored-by: Houston Putman 

> Remove hdfs module
> --
>
> Key: SOLR-17609
> URL: https://issues.apache.org/jira/browse/SOLR-17609
> Project: Solr
>  Issue Type: Task
>  Components: hdfs
>Affects Versions: main (10.0)
>Reporter: Eric Pugh
>Assignee: Eric Pugh
>Priority: Major
>  Labels: pull-request-available
> Fix For: main (10.0)
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> One of the outcomes of the 2024 Community Survey is that we learned (from 
> our admittedly fairly unscientific responses) that the hdfs module is not used.
> This PR is to understand the impact of removing hdfs in Solr 10.
> See [https://lists.apache.org/thread/hp6bov79rgrg0gb2ozzbzxxn30k2js0h] for 
> discussion on Dev.
>  
> I won't merge this PR till we have more consensus.
> This builds on work started in 
> https://issues.apache.org/jira/browse/SOLR-14660 and 
> https://issues.apache.org/jira/browse/SOLR-14021






[jira] [Resolved] (SOLR-8335) HdfsLockFactory does not allow core to come up after a node was killed

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-8335.
-
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> HdfsLockFactory does not allow core to come up after a node was killed
> --
>
> Key: SOLR-8335
> URL: https://issues.apache.org/jira/browse/SOLR-8335
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration, hdfs
>Affects Versions: 5.0, 5.1, 5.2, 5.2.1, 5.3, 5.3.1
>Reporter: Varun Thacker
>Assignee: Mark Miller
>Priority: Major
> Attachments: SOLR-8335.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When using HdfsLockFactory, if a node gets killed instead of being gracefully 
> shut down, the write.lock file remains in HDFS. The next time you start the 
> node, the core doesn't load because of a LockObtainFailedException.
> I was able to reproduce this in all 5.x versions of Solr. The problem wasn't 
> there when I tested 4.10.4.
> Steps to reproduce this on 5.x
> 1. Create directory in HDFS : {{bin/hdfs dfs -mkdir /solr}}
> 2. Start Solr: {{bin/solr start -Dsolr.directoryFactory=HdfsDirectoryFactory 
> -Dsolr.lock.type=hdfs -Dsolr.data.dir=hdfs://localhost:9000/solr 
> -Dsolr.updatelog=hdfs://localhost:9000/solr}}
> 3. Create core: {{./bin/solr create -c test -n data_driven}}
> 4. Kill solr
> 5. The lock file is there in HDFS and is called {{write.lock}}
> 6. Start Solr again and you get a stack trace like this:
> {code}
> 2015-11-23 13:28:04.287 ERROR (coreLoadExecutor-6-thread-1) [   x:test] 
> o.a.s.c.CoreContainer Error creating core [test]: Index locked for write for 
> core 'test'. Solr now longer supports forceful unlocking via 
> 'unlockOnStartup'. Please verify locks manually!
> org.apache.solr.common.SolrException: Index locked for write for core 'test'. 
> Solr now longer supports forceful unlocking via 'unlockOnStartup'. Please 
> verify locks manually!
> at org.apache.solr.core.SolrCore.(SolrCore.java:820)
> at org.apache.solr.core.SolrCore.(SolrCore.java:659)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:723)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:443)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:434)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:210)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.lucene.store.LockObtainFailedException: Index locked 
> for write for core 'test'. Solr now longer supports forceful unlocking via 
> 'unlockOnStartup'. Please verify locks manually!
> at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:528)
> at org.apache.solr.core.SolrCore.(SolrCore.java:761)
> ... 9 more
> 2015-11-23 13:28:04.289 ERROR (coreContainerWorkExecutor-2-thread-1) [   ] 
> o.a.s.c.CoreContainer Error waiting for SolrCore to be created
> java.util.concurrent.ExecutionException: 
> org.apache.solr.common.SolrException: Unable to create core [test]
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at org.apache.solr.core.CoreContainer$2.run(CoreContainer.java:472)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:210)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.solr.common.SolrException: Unable to create core [test]
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:737)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:443)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:434)
> ... 5 more
> Caused by: org.apache.solr.common.SolrException: Index locked for write for 
> core 'test'. Solr now longer supports forceful unlocking via 
> 'unlockOnStartup'. Please verify locks manually!
> at org.apache.solr.core.SolrCore.(SolrCore.java:820)
> at org.apache.solr.core.SolrCore.(SolrCore.java:659)
> at org.apache.solr.core.CoreContainer

[jira] [Resolved] (SOLR-10161) HdfsChaosMonkeySafeLeaderTest needs to be hardened.

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-10161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-10161.
--
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> HdfsChaosMonkeySafeLeaderTest needs to be hardened.
> ---
>
> Key: SOLR-10161
> URL: https://issues.apache.org/jira/browse/SOLR-10161
> Project: Solr
>  Issue Type: Test
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Attachments: logs.tar.gz
>
>







Re: [PR] SOLR-17609: mark deprecation of HDFS module in 9x and removal in 10 [solr]

2025-02-17 Thread via GitHub


epugh merged PR #3041:
URL: https://github.com/apache/solr/pull/3041





[jira] [Resolved] (SOLR-12080) Frequent failures of MoveReplicaHDFSTest.testFailedMove

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-12080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-12080.
--
Resolution: Won't Fix

HDFS has been removed in Solr 10.

> Frequent failures of MoveReplicaHDFSTest.testFailedMove
> ---
>
> Key: SOLR-12080
> URL: https://issues.apache.org/jira/browse/SOLR-12080
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration, Tests
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
> Attachments: jenkins.log.txt.gz
>
>
> This test frequently fails. This is one of the failing seeds:
> {code}
>[junit4]   2> 129275 INFO  (qtp1647120030-248) [n:127.0.0.1:55469_solr 
> c:MoveReplicaHDFSTest_failed_coll_true s:shard2 r:core_node7 
> x:MoveReplicaHDFSTest_failed_coll_true_shard2_replica_n4] o.a.s.c.S.Request 
> [MoveReplicaHDFSTest_failed_coll_true_shard2_replica_n4]  webapp=/solr 
> path=/select 
> params={q=*:*&_stateVer_=MoveReplicaHDFSTest_failed_coll_true:9&wt=javabin&version=2}
>  status=503 QTime=0
>[junit4]   2> 129278 ERROR (qtp148844424-682) [n:127.0.0.1:54855_solr 
> c:MoveReplicaHDFSTest_failed_coll_true s:shard2 r:core_node8 
> x:MoveReplicaHDFSTest_failed_coll_true_shard2_replica_n6] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: no servers 
> hosting shard: shard1
>[junit4]   2>  at 
> org.apache.solr.handler.component.HttpShardHandler.prepDistributed(HttpShardHandler.java:436)
>[junit4]   2>  at 
> org.apache.solr.handler.component.SearchHandler.getAndPrepShardHandler(SearchHandler.java:226)
>[junit4]   2>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:264)
>[junit4]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)
>[junit4]   2>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:527)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>[junit4]   2>  at 
> org.eclipse.jetty.server.Server.handle(Server.java:530)
>[junit4]   2>  at 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:347)
>[junit4]   2>  at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:256)
>[junit4]   2>  at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)
>[junit4]   2>  at 
> org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)
>[junit4]   2>  at 
> org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124)
>[junit4]   2>  at 
> org.eclipse.jetty.util.thread.strategy.EatWh

[jira] [Resolved] (SOLR-7204) Improve error handling in create collection API

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-7204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-7204.
-
Resolution: Won't Fix

I *think* that while there is probably room for improving error handling, 
since this ticket focuses on HDFS it can be closed, as HDFS has been 
removed in Solr 10.

> Improve error handling in create collection API
> ---
>
> Key: SOLR-7204
> URL: https://issues.apache.org/jira/browse/SOLR-7204
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.10.3
>Reporter: Hrishikesh Gadre
>Priority: Minor
>
> I was trying to create a collection on a SolrCloud deployed alongside a 
> kerberized Hadoop cluster. I kept getting the following error:
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error 
> CREATEing SolrCore 'orders_shard1_replica2': Unable to create core 
> [orders_shard1_replica2] Caused by: Lock obtain timed out: 
> org.apache.solr.store.hdfs.HdfsLockFactory$HdfsLock@451997e1
> On careful analysis of the logs, I realized it was due to Solr not being able 
> to talk to HDFS properly because of the following error:
> javax.net.ssl.SSLHandshakeException: 
> sun.security.validator.ValidatorException: PKIX path building failed: 
> sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target
> We should improve the error handling such that we return the root-cause of 
> the error (in this case SSLHandshakeException instead of lock timeout 
> exception).
>  
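
A minimal root-cause helper of the kind this asks for; it simply walks the 
cause chain so the user sees the SSLHandshakeException rather than the 
lock-timeout wrapper:

{code}
public static Throwable rootCause(Throwable t) {
  Throwable cur = t;
  // Stop at the deepest cause, guarding against self-referential chains.
  while (cur.getCause() != null && cur.getCause() != cur) {
    cur = cur.getCause();
  }
  return cur;
}
{code}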






[jira] [Resolved] (SOLR-16805) org.apache.solr.common.SolrException: Error loading class 'solr.KerberosPlugin'

2025-02-17 Thread Eric Pugh (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-16805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pugh resolved SOLR-16805.
--
Resolution: Won't Fix

Hadoop Auth was removed in Solr 10.

> org.apache.solr.common.SolrException:  Error loading class 
> 'solr.KerberosPlugin'
> 
>
> Key: SOLR-16805
> URL: https://issues.apache.org/jira/browse/SOLR-16805
> Project: Solr
>  Issue Type: Bug
>  Components: Authentication
>Affects Versions: 9.2.1
> Environment: OS: Debian 11
> Solr Version : 9.2.1
> Java Version: 11
>Reporter: Senthil Kumar
>Assignee: Houston Putman
>Priority: Major
>  Labels: kerberos, solrcloud
>
> I am facing the same "Error loading class 'solr.KerberosPlugin'" even in Solr 
> 9.2.1
>  
> I have copied the hadoop-auth classes to WEB-INF/lib and also tried setting 
> SOLR_MODULES=hadoop-auth,hdfs. But I am still seeing this issue in both 
> 9.2.0 and 9.2.1.
>  
> It would be helpful if you could add more insight into resolving this issue.
>  
> Below is the Error stack trace:
> {{
> 2023-05-17 11:02:38.998 INFO  (zkCallback-13-thread-1) [] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
> 2023-05-17 11:02:39.037 INFO  (main) [] o.a.s.c.CoreContainer Initializing 
> authentication plugin: org.apache.solr.security.KerberosPlugin
> 2023-05-17 11:02:39.044 ERROR (main) [] o.a.s.s.CoreContainerProvider Could 
> not start Solr. Check solr/home property and the logs
> 2023-05-17 11:02:39.064 ERROR (main) [] o.a.s.c.SolrCore null => 
> *org.apache.solr.common.SolrException:  Error loading class 
> 'org.apache.solr.security.KerberosPlugin'*
>         at 
> org.apache.solr.core.{*}SolrResourceLoader.findClass(SolrResourceLoader.java:550){*}
> org.apache.solr.common.SolrException:  *Error loading class 
> 'org.apache.solr.security.KerberosPlugin'*
>         at 
> org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:550)
>  ~[solr-core-9.2.1.jar:9.2.1 a4c64ab6a2a270ca69c28c706dabb2927ed8a7c2 - 
> jsweeney - 2023-04-24 11:35:31]
>         at 
> org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:612)
>  ~[solr-core-9.2.1.jar:9.2.1 a4c64ab6a2a270ca69c28c706dabb2927ed8a7c2 - 
> jsweeney - 2023-04-24 11:35:31]
>         at 
> org.apache.solr.core.{*}CoreContainer.initializeAuthenticationPlugin(CoreContainer.java:558){*}
>  ~[solr-core-9.2.1.jar:9.2.1 a4c64ab6a2a270ca69c28c706dabb2927ed8a7c2 - 
> jsweeney - 2023-04-24 11:35:31]
>         at 
> org.apache.solr.core.CoreContainer.reloadSecurityProperties(CoreContainer.java:1159)
>  ~[solr-core-9.2.1.jar:9.2.1 a4c64ab6a2a270ca69c28c706dabb2927ed8a7c2 - 
> jsweeney - 2023-04-24 11:35:31]
>         at 
> org.apache.solr.core.{*}CoreContainer.load(CoreContainer.java:823){*} 
> ~[solr-core-9.2.1.jar:9.2.1 a4c64ab6a2a270ca69c28c706dabb2927ed8a7c2 - 
> jsweeney - 2023-04-24 11:35:31]
>         at 
> org.apache.solr.servlet.CoreContainerProvider.createCoreContainer(CoreContainerProvider.java:412)
>  ~[solr-core-9.2.1.jar:9.2.1 a4c64ab6a2a270ca69c28c706dabb2927ed8a7c2 - 
> jsweeney - 2023-04-24 11:35:31]
>         at 
> org.apache.solr.servlet.CoreContainerProvider.init(CoreContainerProvider.java:230)
>  ~[solr-core-9.2.1.jar:9.2.1 a4c64ab6a2a270ca69c28c706dabb2927ed8a7c2 - 
> jsweeney - 2023-04-24 11:35:31]
>         at 
> org.apache.solr.servlet.CoreContainerProvider.contextInitialized(CoreContainerProvider.java:114)
>  ~[solr-core-9.2.1.jar:9.2.1 a4c64ab6a2a270ca69c28c706dabb2927ed8a7c2 - 
> jsweeney - 2023-04-24 11:35:31]
>         at 
> org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:1048)
>  ~[jetty-server-10.0.13.jar:10.0.13]
>         at 
> org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:624)
>  ~[jetty-servlet-10.0.13.jar:10.0.13]
>         at 
> org.eclipse.jetty.server.handler.ContextHandler.contextInitialized(ContextHandler.java:983)
>  ~[jetty-server-10.0.13.jar:10.0.13]
>         at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:740) 
> ~[jetty-servlet-10.0.13.jar:10.0.13]
>         at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:392)
>  ~[jetty-servlet-10.0.13.jar:10.0.13]
>         at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1304) 
> ~[jetty-webapp-10.0.13.jar:10.0.13]
>         at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:900)
>  ~[jetty-server-10.0.13.jar:10.0.13]
>         at 
> org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:306)
>  ~[jetty-servlet-10.0.13.jar:10.0.13]
>         at 
> org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:532) 
> ~[jetty-w

Re: [PR] SOLR-17309: Enhance certificate based authentication plugin with flexible cert principal resolution [solr]

2025-02-17 Thread via GitHub


epugh commented on PR #3029:
URL: https://github.com/apache/solr/pull/3029#issuecomment-2662631556

   @laminelam if you don't mind updating this PR I'd love to get this in! I 
normally prefer the ref guide commit to go in along with the source code 
commit, so we don't forget to add it, but happy to skip that.





Re: [PR] fix the solr zk invocation [solr-operator]

2025-02-17 Thread via GitHub


elangelo commented on PR #756:
URL: https://github.com/apache/solr-operator/pull/756#issuecomment-2663001130

   @gerlowskija you are right of course... I was using solr-operator 0.9. I 
initially tried this with 0.8, but that wasn't working either. I then 
upgraded to 0.9 and found it broken there too.
   I did bring up my own custom container and confirmed it immediately worked 
if I added --zk-host to the CLI.
   I understand we might want a longer-term fix than this, and obviously I'm 
good with that, but short term this means that initializing authentication on 
Solr 9.8 and earlier is broken? If we need to fix the Solr CLI tool, we would 
have to backport it?

