[GitHub] [solr-operator] alittlec commented on issue #53: Allow Solr to be run across Availability Zones
alittlec commented on issue #53: URL: https://github.com/apache/solr-operator/issues/53#issuecomment-882472454 hi - is there any update on this? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[jira] [Commented] (SOLR-15538) Update Lucene Preview Release dependency
[ https://issues.apache.org/jira/browse/SOLR-15538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383356#comment-17383356 ] Mark Robert Miller commented on SOLR-15538: --- The breaks:
* Analyzer version compatibility methods are gone.
* Span queries have changed package.
* Boosted span query is gone.
* getTermVectors is now final (can't override) and deprecated; you have to use getTermVectorsReader.
> Update Lucene Preview Release dependency > > > Key: SOLR-15538 > URL: https://issues.apache.org/jira/browse/SOLR-15538 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Robert Miller >Priority: Minor > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[jira] [Commented] (SOLR-15335) templated (header + body) approach for building Dockerfile.local + Dockerfile.official w/common guts
[ https://issues.apache.org/jira/browse/SOLR-15335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383407#comment-17383407 ] ASF subversion and git services commented on SOLR-15335: Commit e54a6db6b11822837f3425c7984a5939aefcdb6d in solr's branch refs/heads/main from Houston Putman [ https://gitbox.apache.org/repos/asf?p=solr.git;h=e54a6db ] SOLR-15335: Don't use cached Dockerfiles when the body changes. > templated (header + body) approach for building Dockerfile.local + > Dockerfile.official w/common guts > > > Key: SOLR-15335 > URL: https://issues.apache.org/jira/browse/SOLR-15335 > Project: Solr > Issue Type: Sub-task > Components: Docker >Reporter: Chris M. Hostetter >Assignee: Chris M. Hostetter >Priority: Major > Fix For: main (9.0) > > Attachments: SOLR-15335-body-template-cache.patch, > SOLR-15335-no-custom-network.patch, SOLR-15335.patch, SOLR-15335.patch, > SOLR-15335.patch, SOLR-15335.patch, SOLR-15335.patch, SOLR-15335.patch, > SOLR-15335.patch, SOLR-15335.patch, SOLR-15335.patch > > > Goals: > * "generate" a Dockerfile.official at release time that will satisfy the > process/tooling of docker-library for 'official' docker images > ** use a templated approach to fill in things like version, sha512, and GPG > fingerprint > * ensure that the generated Dockerfile.official and the Dockerfile.local > included in solr.tgz are identical in terms of the "operational" aspects of a > Solr docker image (ie: what the disk layout looks like, and how it runs) > ** they should only differ in how they get the contents of a solr.tgz into > the docker image (and how much they trust it before unpacking it) > * minimize the amount of overhead needed to make changes that exist in > both dockerfiles
[jira] [Commented] (SOLR-15549) Old SolrJ implementations (8.x) are incompatible with 9.0 Clouds
[ https://issues.apache.org/jira/browse/SOLR-15549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383417#comment-17383417 ] Houston Putman commented on SOLR-15549: --- I should note the difference between the PR for {{branch_8x}} and {{main}}. h2. {{branch_8x}} This changes the check in {{ZkStateReader}} from just checking for {{/clusterstate.json}} to looking for either {{/clusterstate.json}} or {{/collections}}. This allows users to use the 8.10 SolrJ APIs with 9.x Clouds. h2. {{main}} This adds a better error message in {{ZkStateReader}} that lets the user know why the ZK node is not hosting a SolrCloud, i.e. which ZNode is missing (usually {{/aliases.json}}). The implementation here had already been improved to not check for individual paths, but to capture {{KeeperException.NoNodeException}}s and turn them into more user-friendly {{SolrExceptions}}. I am also using this PR to forward-port the ChangeLog entry. > Old SolrJ implementations (8.x) are incompatible with 9.0 Clouds > > > Key: SOLR-15549 > URL: https://issues.apache.org/jira/browse/SOLR-15549 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 8.0 >Reporter: Houston Putman >Assignee: Houston Putman >Priority: Major > Time Spent: 20m > Remaining Estimate: 0h > > The {{ZkStateReader}} in 8.x (and previous versions) checks that a > {{/clusterstate.json}} node exists in the ZK ChRoot, to ensure that the > ChRoot hosts a Solr Cloud. However, starting in 9.0, {{/clusterstate.json}} > has been removed, and it is auto-deleted if a user tries to create one. 
> That means that the ZkStateReader from SolrJ 8.x will error when trying to > connect with a Solr 9 cloud, with the message: > {quote}Cannot connect to cluster at localhost:2181/: cluster not found/not > ready > {quote} > The solution is to have the ZK State Reader check both > {{/clusterstate.json}} and {{/collections}}, and only error if both are > missing. {{/clusterstate.json}} has long been deprecated in 8.x anyway, so adding > this additional check is good practice in general. > While it would be nice for every user to use the same SolrJ version as the > Solr version they are running, it can be difficult in practice, > especially when upgrading major Solr versions. It would be preferable to > support at least version + 1 clouds in SolrJ, for the purpose of upgrades.
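The compatibility check described above can be sketched as pure decision logic. This is not the actual Solr patch (which lives in {{ZkStateReader}} and talks to ZooKeeper); the class and method names here are illustrative, and only the either-node-suffices rule is captured:

```java
// Sketch of the check described above: a ZK chroot is considered to host a
// SolrCloud if either the legacy /clusterstate.json node (8.x) or the
// /collections node (9.x) exists. Names are illustrative, not Solr's.
public class CloudCheck {
    static boolean looksLikeSolrCloud(boolean hasClusterstateJson, boolean hasCollections) {
        // 8.x clusters have /clusterstate.json; 9.x clusters removed it but have /collections.
        return hasClusterstateJson || hasCollections;
    }

    public static void main(String[] args) {
        System.out.println(looksLikeSolrCloud(false, true));  // 9.x cloud: prints true
        System.out.println(looksLikeSolrCloud(false, false)); // not a cloud: prints false
    }
}
```

With only the old single-node check, the first case above would be treated as "cluster not found", which is exactly the 8.x-client-against-9.0-cloud failure the issue describes.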
[jira] [Commented] (SOLR-15538) Update Lucene Preview Release dependency
[ https://issues.apache.org/jira/browse/SOLR-15538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383439#comment-17383439 ] Michael Gibney commented on SOLR-15538: --- Hi [~markrmiller], I'm curious whether you're planning to work on this issue, or just documenting it? This might be relatively straightforward if proceeding "head first" by generating a new pre-release build and referring to it in a Solr PR; but in case you (or someone else) tackles this in a more "provisional" way, I'd be curious to know your thoughts on intersections with SOLR-15455 (wrt development workflow).
[GitHub] [solr] markrmiller opened a new pull request #225: SOLR-15538 Update Lucene Preview Release dependency
markrmiller opened a new pull request #225: URL: https://github.com/apache/solr/pull/225 https://issues.apache.org/jira/browse/SOLR-15538 Step 1, hack to success.
[GitHub] [solr] markrmiller commented on a change in pull request #214: SOLR-15428: Integrate the OpenJDK JMH micro benchmark framework for m…
markrmiller commented on a change in pull request #214: URL: https://github.com/apache/solr/pull/214#discussion_r672482454 ## File path: solr/test-framework/src/java/org/apache/solr/util/RandomizeSSL.java ## @@ -104,10 +105,10 @@ public SSLRandomizer(double ssl, double clientAuth, String debug) { public SSLTestConfig createSSLTestConfig() { // even if we know SSL is disabled, always consume the same amount of randomness // that way all other test behavior should be consistent even if a user adds/removes @SuppressSSL - - final boolean useSSL = TestUtil.nextInt(LuceneTestCase.random(), 0, 999) < + Random random = new Random(); Review comment: Because we use randoms differently in the benchmark stuff and don't want to be stuck with the randomized-testing randoms. This change was not intended to stick here though; it is left over from early workarounds. I don't really want the random enforcement / support from the test framework for a couple of reasons, but this is simply something not yet removed. The problem is that if you used the mini cluster and Jetty with jetty.testMode=true and did not launch things via the Carrot randomized runner, it will *sometimes* detect that you don't have a randomized test context for a thread and fail you - but we are not using the randomized runner or JUnit. Currently, I work around needing this workaround by not using the jetty.testMode sys prop path and adding another sys prop hack for where starting the mini cluster is a bit too tied into the test framework and Carrot random requirements. Java 7 and up has essentially made Random obsolete, so there needs to be some separation regardless, because we don't use the Carrot JUnit runners for benchmarks; but it is also much preferable (faster, improved) to avoid Random entirely and use ThreadLocalRandom and SplittableRandom instead, so I try to use them in the benchmarks.
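The SplittableRandom approach favored in the comment above can be sketched as follows. This is a minimal illustration (not Solr code, and the class name is made up): one seeded root generator, split() once per worker, giving independent yet fully reproducible streams without any test-framework randomized context.

```java
import java.util.Arrays;
import java.util.SplittableRandom;

// Sketch: one seeded root SplittableRandom, split() per worker, so benchmark
// randomness is reproducible without the randomized-testing framework.
public class SplitRandomDemo {
    static long[] firstValues(long seed, int workers) {
        SplittableRandom root = new SplittableRandom(seed);
        long[] out = new long[workers];
        for (int i = 0; i < workers; i++) {
            out[i] = root.split().nextLong(); // independent stream per worker
        }
        return out;
    }

    public static void main(String[] args) {
        // Same seed => same per-worker streams, run after run.
        long[] a = firstValues(42L, 4);
        long[] b = firstValues(42L, 4);
        System.out.println("reproducible: " + Arrays.equals(a, b)); // prints "reproducible: true"
    }
}
```

SplittableRandom is also cheaper than the lock-based java.util.Random in multi-threaded benchmarks, which is the "faster, improved" point made above.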
[GitHub] [solr] markrmiller commented on a change in pull request #214: SOLR-15428: Integrate the OpenJDK JMH micro benchmark framework for m…
markrmiller commented on a change in pull request #214: URL: https://github.com/apache/solr/pull/214#discussion_r672484107 ## File path: solr/test-framework/src/java/org/apache/solr/cloud/SolrCloudTestCase.java ## @@ -124,8 +124,18 @@ public Builder(int nodeCount, Path baseDir) { // By default the MiniSolrCloudCluster being built will randomly (seed based) decide which collection API strategy // to use (distributed or Overseer based) and which cluster update strategy to use (distributed if collection API // is distributed, but Overseer based or distributed randomly chosen if Collection API is Overseer based) - this.useDistributedCollectionConfigSetExecution = LuceneTestCase.random().nextInt(2) == 0; - this.useDistributedClusterStateUpdate = useDistributedCollectionConfigSetExecution || LuceneTestCase.random().nextInt(2) == 0; + + Boolean skipDistRandomSetup = Boolean.getBoolean("solr.tests.skipDistributedConfigAndClusterStateRandomSetup"); Review comment: This is essentially a get-it-working hack at the moment. Ideally, it should be simple to use MiniSolrCloudCluster without having to worry about this SolrCloudTestCase / Carrot random tie-in. You really want a consistent experience when using the class itself; it's running things via tests that should enable the randomization.
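The opt-out pattern in the diff above relies on Boolean.getBoolean, which reads a JVM system property and returns true only if it is set to "true". A standalone sketch of that pattern (the property name is taken from the diff; the method and surrounding class are illustrative, not the real builder):

```java
import java.util.Random;

// Sketch: honor an opt-out system property before falling back to a
// seed-based random choice. The property name comes from the quoted diff;
// the rest of this class is illustrative only.
public class RandomSetupToggle {
    static final String SKIP_PROP = "solr.tests.skipDistributedConfigAndClusterStateRandomSetup";

    static boolean useDistributedUpdates(Random random) {
        if (Boolean.getBoolean(SKIP_PROP)) {
            return false; // deterministic default, e.g. for benchmarks
        }
        return random.nextInt(2) == 0; // seed-based coin flip, as tests do
    }

    public static void main(String[] args) {
        System.setProperty(SKIP_PROP, "true");
        System.out.println(useDistributedUpdates(new Random())); // prints false
    }
}
```

This keeps the test path's randomization intact while letting a non-test caller (like a benchmark harness) get deterministic behavior with a single -D flag.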
[GitHub] [solr] markrmiller commented on a change in pull request #214: SOLR-15428: Integrate the OpenJDK JMH micro benchmark framework for m…
markrmiller commented on a change in pull request #214: URL: https://github.com/apache/solr/pull/214#discussion_r672485138 ## File path: solr/test-framework/build.gradle ## @@ -19,10 +19,131 @@ apply plugin: 'java-library' description = 'Solr Test Framework' +sourceSets { + // Note that just declaring this sourceset creates two configurations. + jmh { +java.srcDirs = ['src/jmh'] + } +} + +compileJmhJava { + doFirst { +options.compilerArgs.remove("-Werror") +options.compilerArgs.remove("-proc:none") + } +} + +forbiddenApisJmh { + bundledSignatures += [ + 'jdk-unsafe', + 'jdk-deprecated', + 'jdk-non-portable', + ] + + suppressAnnotations += [ + "**.SuppressForbidden" + ] +} + + +task jmh(type: JavaExec) { + dependsOn("jmhClasses") + group = "benchmark" + main = "org.openjdk.jmh.Main" + classpath = sourceSets.jmh.compileClasspath + sourceSets.jmh.runtimeClasspath + + standardOutput(System.out) + errorOutput(System.err) + + def include = project.properties.get('include'); + def exclude = project.properties.get('exclude'); + def format = project.properties.get('format', 'json'); + def profilers = project.properties.get('profilers'); + def jvmArgs = project.properties.get('jvmArgs') + def verify = project.properties.get('verify'); + + def resultFile = file("build/reports/jmh/result.${format}") + + if (include) { +args include + } + if (exclude) { +args '-e', exclude + } + if (verify != null) { +// execute benchmarks with the minimum amount of execution (only to check if they are working) +println ">> Running in verify mode" +args '-f', 1 +args '-wi', 1 +args '-i', 1 + } + args '-foe', 'true' //fail-on-error + args '-v', 'NORMAL' //verbosity [SILENT, NORMAL, EXTRA] + if (profilers) { +profilers.split(',').each { + args '-prof', it +} + } + + args '-jvmArgsPrepend', '-Xms4g' + args '-jvmArgsPrepend', '-Djmh.separateClassLoader=true' + args '-jvmArgsPrepend', '-Dlog4j2.is.webapp=false' + args '-jvmArgsPrepend', '-Dlog4j2.garbagefreeThreadContextMap=true' + args 
'-jvmArgsPrepend', '-Dlog4j2.enableDirectEncoders=true' + args '-jvmArgsPrepend', '-Dlog4j2.enable.threadlocals=true' +// args '-jvmArgsPrepend', '-XX:ConcGCThreads=2' +// args '-jvmArgsPrepend', '-XX:ParallelGCThreads=3' +// args '-jvmArgsPrepend', '-XX:+UseG1GC' + args '-jvmArgsPrepend', '-Djetty.insecurerandom=1' + args '-jvmArgsPrepend', '-Djava.security.egd=file:/dev/./urandom' + args '-jvmArgsPrepend', '-XX:-UseBiasedLocking' + args '-jvmArgsPrepend', '-XX:+PerfDisableSharedMem' + args '-jvmArgsPrepend', '-XX:+ParallelRefProcEnabled' +// args '-jvmArgsPrepend', '-XX:MaxGCPauseMillis=250' + args '-jvmArgsPrepend', '-Dsolr.log.dir=' + + if (jvmArgs) { +for (jvmArg in jvmArgs.split(' ')) { + args '-jvmArgsPrepend', jvmArg +} + } + + args '-rf', format + args '-rff', resultFile + + doFirst { +// println "\nClasspath:" + jmh.classpath.toList() +println "\nExecuting JMH with: $args \n" + +args '-jvmArgsPrepend', '-Djava.class.path='+ toPath(getClasspath().files) +resultFile.parentFile.mkdirs() + } + + doLast { +// jvmArgs "java.class.path", toPath(jmh.classpath) + } + +} + + +private String toPath(Set classpathUnderTest) { Review comment: I still need to work out if this is still needed. It was needed because when running via gradle and using jmh's fork option, the classpath was not propagated. I have since simplified the integration (realizing I was jumping through hoops because our build was putting in -proc:none for all java compile tasks) and I have to double-check to make sure this is still necessary.
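The toPath helper discussed above joins the resolved classpath files into a single string that is handed to forked JMH JVMs via -Djava.class.path. Its body isn't shown in the diff, so the following is an assumed standalone sketch of that join, using the platform path separator a forked JVM expects:

```java
import java.io.File;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch (assumed behavior of the toPath helper above): join classpath
// entries with File.pathSeparator so a forked JVM can consume the result.
public class ClasspathJoin {
    static String toPath(Set<File> classpath) {
        return classpath.stream()
                .map(File::getAbsolutePath)
                .collect(Collectors.joining(File.pathSeparator));
    }

    public static void main(String[] args) {
        Set<File> cp = new LinkedHashSet<>();
        cp.add(new File("a.jar"));
        cp.add(new File("b.jar"));
        // ":" on Linux/macOS, ";" on Windows.
        System.out.println(ClasspathJoin.toPath(cp).endsWith("b.jar")); // prints true
    }
}
```

Propagating the classpath explicitly like this is one way around JMH forks not inheriting a Gradle-built classpath, which is the problem the comment describes.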
[GitHub] [solr] markrmiller commented on a change in pull request #214: SOLR-15428: Integrate the OpenJDK JMH micro benchmark framework for m…
markrmiller commented on a change in pull request #214: URL: https://github.com/apache/solr/pull/214#discussion_r672486250 ## File path: solr/core/src/java/org/apache/solr/client/solrj/embedded/JettySolrRunner.java ## @@ -313,22 +313,23 @@ private void init(int port) { if (config.onlyHttp1) { connector = new ServerConnector(server, new HttpConnectionFactory(configuration)); } else { - connector = new ServerConnector(server, new HttpConnectionFactory(configuration), - new HTTP2CServerConnectionFactory(configuration)); + connector = new ServerConnector(server, new HttpConnectionFactory(configuration), new HTTP2CServerConnectionFactory(configuration)); } } connector.setReuseAddress(true); connector.setPort(port); connector.setHost("127.0.0.1"); connector.setIdleTimeout(THREAD_POOL_MAX_IDLE_TIME_MS); - connector.setStopTimeout(0); + server.setConnectors(new Connector[] {connector}); server.setSessionIdManager(new DefaultSessionIdManager(server, new Random())); } else { HttpConfiguration configuration = new HttpConfiguration(); - ServerConnector connector = new ServerConnector(server, new HttpConnectionFactory(configuration)); + ServerConnector connector = new ServerConnector(server, new HttpConnectionFactory(configuration), new HTTP2CServerConnectionFactory(configuration)); Review comment: Because it currently does not work with http2, though I have spun these fixes into: SOLR-15547
[GitHub] [solr] markrmiller commented on a change in pull request #214: SOLR-15428: Integrate the OpenJDK JMH micro benchmark framework for m…
markrmiller commented on a change in pull request #214: URL: https://github.com/apache/solr/pull/214#discussion_r672487028 ## File path: solr/test-framework/src/jmh/org/apache/solr/bench/DocMakerRamGen.java ## @@ -0,0 +1,269 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.solr.bench; + +import org.apache.commons.lang3.RandomStringUtils; +import org.apache.commons.lang3.Validate; +import org.apache.lucene.util.TestUtil; +import org.apache.solr.common.SolrInputDocument; + +import java.util.HashMap; +import java.util.Iterator; +import java.util.Map; +import java.util.Objects; +import java.util.Queue; +import java.util.Random; +import java.util.SplittableRandom; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentLinkedQueue; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; +import java.util.concurrent.ThreadLocalRandom; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicInteger; + +public class DocMakerRamGen { + + private final static Map> CACHE = new ConcurrentHashMap<>(); + + private Queue docs = new ConcurrentLinkedQueue<>(); + + private final Map fields = new HashMap<>(); + + private static final AtomicInteger ID = new AtomicInteger(); + private final boolean cacheResults; + + private ExecutorService executorService; + + private SplittableRandom threadRandom; + + public DocMakerRamGen() { + this(true); + } + + public DocMakerRamGen(boolean cacheResults) { +this.cacheResults = cacheResults; + +Long seed = Long.getLong("threadLocalRandomSeed"); +if (seed == null) { + System.setProperty("threadLocalRandomSeed", Long.toString(new Random().nextLong())); +} + +threadRandom = new SplittableRandom(Long.getLong("threadLocalRandomSeed")); + } + + public void preGenerateDocs(int numDocs) throws InterruptedException { +executorService = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() + 1); +if (cacheResults) { + docs = CACHE.compute(Integer.toString(hashCode()), (key, value) -> { +if (value == null) { + for (int i = 0; i < numDocs; i++) { Review comment: This is likely a fair bit to do on cleaning up / finalizing this docmaker - still pulling a bit from elsewhere to it and then I'll do some 
cleanup - next update.
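The preGenerateDocs pattern quoted above (fan doc creation out to a thread pool, collect into a concurrent queue, then await termination) can be sketched in isolation. The doc type is simplified to String here, since SolrInputDocument isn't needed to show the shape:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of the pre-generation pattern: submit N generation tasks,
// shut down, and block until every doc is queued (or time out).
public class PreGenerate {
    static Queue<String> preGenerate(int numDocs) throws InterruptedException {
        Queue<String> docs = new ConcurrentLinkedQueue<>();
        ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() + 1);
        for (int i = 0; i < numDocs; i++) {
            final int id = i;
            pool.submit(() -> docs.add("doc-" + id)); // stand-in for getDocument()
        }
        pool.shutdown();
        if (!pool.awaitTermination(1, TimeUnit.MINUTES)) {
            throw new RuntimeException("Timeout waiting for doc generation to finish");
        }
        return docs;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(preGenerate(100).size()); // prints 100
    }
}
```

Blocking on awaitTermination before the benchmark starts keeps document generation out of the measured section, which is the point of pre-generating into RAM.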
[GitHub] [solr] markrmiller commented on a change in pull request #214: SOLR-15428: Integrate the OpenJDK JMH micro benchmark framework for m…
markrmiller commented on a change in pull request #214: URL: https://github.com/apache/solr/pull/214#discussion_r672487747 ## File path: solr/test-framework/src/jmh/org/apache/solr/bench/DocMakerRamGen.java ## @@ -0,0 +1,269 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+package org.apache.solr.bench;
+
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.commons.lang3.Validate;
+import org.apache.lucene.util.TestUtil;
+import org.apache.solr.common.SolrInputDocument;
+
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Queue;
+import java.util.Random;
+import java.util.SplittableRandom;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+
+public class DocMakerRamGen {
+
+  private final static Map<String, Queue<SolrInputDocument>> CACHE = new ConcurrentHashMap<>();
+
+  private Queue<SolrInputDocument> docs = new ConcurrentLinkedQueue<>();
+
+  private final Map<String, FieldDef> fields = new HashMap<>();
+
+  private static final AtomicInteger ID = new AtomicInteger();
+  private final boolean cacheResults;
+
+  private ExecutorService executorService;
+
+  private SplittableRandom threadRandom;
+
+  public DocMakerRamGen() {
+    this(true);
+  }
+
+  public DocMakerRamGen(boolean cacheResults) {
+    this.cacheResults = cacheResults;
+
+    Long seed = Long.getLong("threadLocalRandomSeed");
+    if (seed == null) {
+      System.setProperty("threadLocalRandomSeed", Long.toString(new Random().nextLong()));
+    }
+
+    threadRandom = new SplittableRandom(Long.getLong("threadLocalRandomSeed"));
+  }
+
+  public void preGenerateDocs(int numDocs) throws InterruptedException {
+    executorService = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() + 1);
+    if (cacheResults) {
+      docs = CACHE.compute(Integer.toString(hashCode()), (key, value) -> {
+        if (value == null) {
+          for (int i = 0; i < numDocs; i++) {
+            executorService.submit(() -> {
+              SolrInputDocument doc = getDocument();
+              docs.add(doc);
+            });
+          }
+          return docs;
+        }
+        for (int i = value.size(); i < numDocs; i++) {
+          executorService.submit(() -> {
+            SolrInputDocument doc = getDocument();
+            value.add(doc);
+          });
+        }
+        return value;
+      });
+    } else {
+      for (int i = 0; i < numDocs; i++) {
+        executorService.submit(() -> {
+          SolrInputDocument doc = getDocument();
+          docs.add(doc);
+        });
+      }
+    }
+
+    executorService.shutdown();
+    boolean result = executorService.awaitTermination(10, TimeUnit.MINUTES);
+    if (!result) {
+      throw new RuntimeException("Timeout waiting for doc adds to finish");
+    }
+  }
+
+  public Iterator<SolrInputDocument> getGeneratedDocsIterator() {
+    return docs.iterator();
+  }
+
+  public SolrInputDocument getDocument() {
+    SolrInputDocument doc = new SolrInputDocument();
+
+    for (Map.Entry<String, FieldDef> entry : fields.entrySet()) {
+      doc.addField(entry.getKey(), getValue(entry.getValue()));
+    }
+
+    return doc;
+  }
+
+  public void addField(String name, FieldDef.FieldDefBuilder builder) {
+    fields.put(name, builder.build());
+  }
+
+  private Object getValue(FieldDef value) {
+    switch (value.getContent()) {
+      case UNIQUE_INT:
+        return ID.incrementAndGet();
+      case INTEGER:
+        if (value.getMaxCardinality() > 0) {
+          long start = value.getCardinalityStart();
+          long seed = nextLong(start, start + value.getMaxCardinality(), threadRandom);
+          SplittableRandom random = new SplittableRandom(seed);
+          return nextInt(0, Integer.MAX_VALUE, random);
+        }
+
+        return ThreadLocalRandom.current().nextInt(Integer.MAX_VALUE);
+      case ALPHEBETIC:
+        if (value.getNumTokens() > 1) {
+          StringBuilder sb = new StringBuilder(value.getNumTokens() * (Math.max(value.getL
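The pre-generation pattern in the snippet above (memoize a queue per generator in a shared cache, top it up to the requested size on a bounded thread pool, then await termination) can be reduced to a small stdlib sketch. The class name, key, and the integer "documents" below are stand-ins, not the real benchmark code:

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of DocMakerRamGen's pre-generation: a shared cache of
// queues, topped up in parallel, so repeated runs reuse prior results.
public class PreGenCache {
    static final Map<String, Queue<Integer>> CACHE = new ConcurrentHashMap<>();

    static Queue<Integer> preGenerate(String key, int numDocs) throws InterruptedException {
        Queue<Integer> out = CACHE.computeIfAbsent(key, k -> new ConcurrentLinkedQueue<>());
        ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() + 1);
        for (int i = out.size(); i < numDocs; i++) { // top up only the missing entries
            final int v = i;
            pool.submit(() -> out.add(v * v)); // stand-in for building a SolrInputDocument
        }
        pool.shutdown();
        if (!pool.awaitTermination(1, TimeUnit.MINUTES)) {
            throw new RuntimeException("Timeout waiting for doc generation to finish");
        }
        return out;
    }
}
```

A second call with the same key and size finds the queue already full and generates nothing, which is the regen-avoidance the cached path in the real class is after.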
[GitHub] [solr] markrmiller commented on a change in pull request #214: SOLR-15428: Integrate the OpenJDK JMH micro benchmark framework for m…
markrmiller commented on a change in pull request #214: URL: https://github.com/apache/solr/pull/214#discussion_r672488213 ## File path: solr/test-framework/src/jmh/org/apache/solr/bench/FieldDef.java ## @@ -0,0 +1,128 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.solr.bench; + +import java.util.Objects; +import java.util.concurrent.ThreadLocalRandom; + +public class FieldDef { Review comment: Everything still needs javadocs and package level overview file. Don't want to keep updating them though, so will come when I feel the rest won't get change pushback. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[GitHub] [solr] markrmiller commented on a change in pull request #214: SOLR-15428: Integrate the OpenJDK JMH micro benchmark framework for m…
markrmiller commented on a change in pull request #214: URL: https://github.com/apache/solr/pull/214#discussion_r672488795 ## File path: solr/test-framework/src/jmh/org/apache/solr/bench/FieldDef.java ## @@ -0,0 +1,128 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.solr.bench; + +import java.util.Objects; +import java.util.concurrent.ThreadLocalRandom; + +public class FieldDef { + private DocMakerRamGen.Content content; Review comment: It probably should be. Was not sure I might do something a little more solid than hashing for preventing regen per iteration on a run. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[GitHub] [solr] markrmiller commented on pull request #214: SOLR-15428: Integrate the OpenJDK JMH micro benchmark framework for m…
markrmiller commented on pull request #214: URL: https://github.com/apache/solr/pull/214#issuecomment-882730941 bq. Otherwise, it seems a new module would be more appropriate. You don't back up that statement with any support, other than: bq. test-framework is about publishing utilities for other modules Which I'm not sure I agree with. Initially, I will say it's here for: - initial ease - overlap in code/use/idea - often a benchmark will share much with how its unit test is created, using some of the same supporting classes. The unit tests help test correctness; the JMH benchmarks help test performance. The test-framework module has support / base classes for 'testing' (correctness, performance, etc.). I like to stack up reasons for going one way or another, but I see the test-framework module as the home of the testing framework, not "about publishing utilities for other modules", so that framing doesn't really add to my calculus. I also thought there would be more shared initially, though. And I've since realized this could use more separation from the randomized testing framework layers and requirements; the different needs do make reuse of items between benchmark and JUnit tests a bit less simple, with more of a need to make sure mini clusters and Jetty runners are happy outside of the randomized-runner stuff. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[GitHub] [solr-operator] HoustonPutman merged pull request #287: Upgrade default Solr version to 8.9
HoustonPutman merged pull request #287: URL: https://github.com/apache/solr-operator/pull/287 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[GitHub] [solr-operator] HoustonPutman closed issue #285: Upgrade the default version of Solr
HoustonPutman closed issue #285: URL: https://github.com/apache/solr-operator/issues/285 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[GitHub] [solr] markrmiller opened a new pull request #226: SOLR-15547: Configure JettySolrRunner to work correctly without syste…
markrmiller opened a new pull request #226: URL: https://github.com/apache/solr/pull/226 Just a quick stab. Adds the cleartext HTTP/2 connector, sets the host, and allows the host to be configured via the builder. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[jira] [Created] (SOLR-15550) Record submitterStackTrace on ObjectReleaseTracker in tests
Mike Drob created SOLR-15550: Summary: Record submitterStackTrace on ObjectReleaseTracker in tests Key: SOLR-15550 URL: https://issues.apache.org/jira/browse/SOLR-15550 Project: Solr Issue Type: Test Security Level: Public (Default Security Level. Issues are Public) Components: Tests Reporter: Mike Drob We currently collect the submitterStackTrace in MDCAwareThreadPool.execute and log it in case of exception. This is very useful, but doesn't propagate to objects gathered by ObjectReleaseTracker, so in case of failure we end up with logging like:
{noformat}
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.core.SolrCore
    at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1090)
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:928)
    at org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1433)
    at org.apache.solr.core.CoreContainer.lambda$load$10(CoreContainer.java:872)
    at com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:202)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:224)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
{noformat}
which doesn't help me very much in debugging why I have an unreleased SolrCore. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[jira] [Commented] (SOLR-15550) Record submitterStackTrace on ObjectReleaseTracker in tests
[ https://issues.apache.org/jira/browse/SOLR-15550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383500#comment-17383500 ] Mike Drob commented on SOLR-15550: -- With the changes in my PR, the new output is much more verbose, and probably more helpful.
{noformat}
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.core.SolrCore
    at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1090)
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:928)
    at org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1433)
    at org.apache.solr.core.CoreContainer.lambda$load$10(CoreContainer.java:872)
    at com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:202)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:235)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.Exception: Submitter stack trace
    at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:204)
    at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:140)
    at com.codahale.metrics.InstrumentedExecutorService.submit(InstrumentedExecutorService.java:90)
    at org.apache.solr.core.CoreContainer.load(CoreContainer.java:865)
    at org.apache.solr.util.TestHarness.<init>(TestHarness.java:183)
    at org.apache.solr.util.TestHarness.<init>(TestHarness.java:155)
    at org.apache.solr.util.TestHarness.<init>(TestHarness.java:161)
    at org.apache.solr.util.TestHarness.<init>(TestHarness.java:111)
    at org.apache.solr.SolrTestCaseJ4.createCore(SolrTestCaseJ4.java:858)
    at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:848)
    at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:709)
    at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:698)
    at org.apache.solr.search.TestRangeQuery.beforeClass(TestRangeQuery.java:59)
{noformat}
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
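A minimal stdlib sketch of the mechanism behind that "Caused by: Submitter stack trace" output: capture an exception at track() time and chain it as the cause of the later "unreleased" error. All names here are illustrative, not Solr's actual ObjectReleaseTracker API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch (hypothetical names): record the creation-time stack
// when an object is registered so a leak report can show who created it.
public class MiniReleaseTracker {
    private static final Map<Object, Exception> TRACKED = new ConcurrentHashMap<>();

    public static void track(Object o) {
        // Constructing the exception here captures the submitter/creator stack.
        TRACKED.put(o, new Exception("Submitter stack trace"));
    }

    public static void release(Object o) {
        TRACKED.remove(o);
    }

    /** One error per leaked object, with the creation stack chained as cause. */
    public static List<RuntimeException> leaks() {
        List<RuntimeException> out = new ArrayList<>();
        TRACKED.forEach((obj, origin) ->
                out.add(new RuntimeException("Unreleased: " + obj, origin)));
        return out;
    }
}
```

Printing one of those errors yields the two-part trace above: the tracking site first, then the submitter stack as the cause.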
[GitHub] [solr] madrob opened a new pull request #227: SOLR-15550 Save submitter trace in ObjectReleaseTracker
madrob opened a new pull request #227: URL: https://github.com/apache/solr/pull/227 https://issues.apache.org/jira/browse/SOLR-15550 Example logging available on the JIRA issue. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[jira] [Updated] (SOLR-15550) Record submitterStackTrace on ObjectReleaseTracker in tests
[ https://issues.apache.org/jira/browse/SOLR-15550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Robert Miller updated SOLR-15550: -- Attachment: SOLR-15550.patch -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[jira] [Commented] (SOLR-15550) Record submitterStackTrace on ObjectReleaseTracker in tests
[ https://issues.apache.org/jira/browse/SOLR-15550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383508#comment-17383508 ] Mark Robert Miller commented on SOLR-15550: --- I would look at something along the lines of this (SOLR-15550.patch) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[jira] [Commented] (SOLR-15550) Record submitterStackTrace on ObjectReleaseTracker in tests
[ https://issues.apache.org/jira/browse/SOLR-15550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383511#comment-17383511 ] Mark Robert Miller commented on SOLR-15550: --- lol, got a refresh with your pr on post. Yeah, same idea. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[GitHub] [solr-operator] HoustonPutman closed issue #286: Remove deprecations for the v0.4.0 release
HoustonPutman closed issue #286: URL: https://github.com/apache/solr-operator/issues/286 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[GitHub] [solr-operator] HoustonPutman merged pull request #288: Remove deprecations for v0.4.0 release.
HoustonPutman merged pull request #288: URL: https://github.com/apache/solr-operator/pull/288 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[GitHub] [solr] madrob commented on pull request #221: SOLR-15258: ConfigSetService operations ought to throw IOException
madrob commented on pull request #221: URL: https://github.com/apache/solr/pull/221#issuecomment-882802608 > if getCurrentSchemaModificationVersion throws an exception, then its caller doesn't have to catch the exception; Is this an _is_ or _ought_ statement? In other words, are you saying that this is a problem currently? What would you expect a caller to do if there is an exception thrown? How is it different from the current case of catching SolrException? > As this change is trivial, it can be part of SOLR-15258 (ConfigSetService refactoring) Yes, that's fine. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[GitHub] [solr] madrob commented on pull request #216: SOLR-15535: remove rawtypes warnings in 'grouping' code
madrob commented on pull request #216: URL: https://github.com/apache/solr/pull/216#issuecomment-882803411 I tried to mess around with this a bit by creating a wrapping class, but couldn't get it to work very cleanly so let's stick with this approach and maybe can revisit in the future. Thanks for picking it up! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[jira] [Created] (SOLR-15551) Do not eagerly fill stack traces with ObjectReleaseTracker
Mark Robert Miller created SOLR-15551: - Summary: Do not eagerly fill stack traces with ObjectReleaseTracker Key: SOLR-15551 URL: https://issues.apache.org/jira/browse/SOLR-15551 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: Tests Reporter: Mark Robert Miller Filling stack traces is really slow, so it should not be done eagerly on every ObjectReleaseTracker#track. A much faster way to get a decent stack for perf purposes is the stack-trace walker API, and not necessarily walking the whole trace. That is only fastest when the fill must almost always happen, and happen inline in a thread, though. If a different thread is filling the trace than the one capturing it, or if you are only filling after a failure in exceptional cases, the fastest thing to do is simply store a thrown exception and only trigger the fill when/if needed at that point. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
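The cheaper capture the issue alludes to can be sketched with the JDK 9+ StackWalker API; only StackWalker itself is real here, the wrapper class and method are hypothetical:

```java
import java.util.List;
import java.util.stream.Collectors;

// Hedged sketch: walk only the top few frames with java.lang.StackWalker
// instead of filling a full stack trace via new Exception() on every call.
public class CheapTrace {
    /** Return at most {@code maxFrames} frames of the caller's stack. */
    public static List<String> topFrames(int maxFrames) {
        return StackWalker.getInstance().walk(frames ->
                frames.limit(maxFrames)
                      .map(f -> f.getClassName() + "." + f.getMethodName()
                              + ":" + f.getLineNumber())
                      .collect(Collectors.toList()));
    }
}
```

StackWalker lazily materializes frames as the stream consumes them, so limiting the walk bounds the cost, which is exactly why it beats an eager full fill in the hot path.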
[jira] [Commented] (SOLR-15428) Integrate the OpenJDK JMH micro benchmark framework for micro benchmarks and performance comparisons and investigation.
[ https://issues.apache.org/jira/browse/SOLR-15428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383678#comment-17383678 ] Mark Robert Miller commented on SOLR-15428: --- Updated with a single change - move jmh code/files to a new solr/benchmark module instead of sharing test-framework. > Integrate the OpenJDK JMH micro benchmark framework for micro benchmarks and > performance comparisons and investigation. > --- > > Key: SOLR-15428 > URL: https://issues.apache.org/jira/browse/SOLR-15428 > Project: Solr > Issue Type: New Feature >Reporter: Mark Robert Miller >Priority: Major > Attachments: bench.patch > > Time Spent: 2.5h > Remaining Estimate: 0h > > I’ve spent a fair amount of time over the years on work around integrating > Lucene’s benchmark framework into Solr and while I’ve used this with > additional local work off and on, JMH has become somewhat of a standard for > micro benchmarks on the JVM. I have some work that provides an initial > integration, allowing for more targeted micro benchmarks as well as more > integration type benchmarking using JettySolrRunner. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[GitHub] [solr] sonatype-lift[bot] commented on a change in pull request #214: SOLR-15428: Integrate the OpenJDK JMH micro benchmark framework for m…
sonatype-lift[bot] commented on a change in pull request #214: URL: https://github.com/apache/solr/pull/214#discussion_r672741219 ## File path: solr/benchmark/build.gradle ## @@ -0,0 +1,140 @@ +/* Review comment: *Moderate OSS Vulnerability:* ### pkg:maven/com.google.guava/guava@25.1-jre 0 Critical, 0 Severe, 1 Moderate and 0 Unknown vulnerabilities have been found in a direct dependency MODERATE Vulnerabilities (1) *** > [CVE-2020-8908] A temp directory creation vulnerability exists in all versions of Guava, allowin... > A temp directory creation vulnerability exists in all versions of Guava, allowing an attacker with access to the machine to potentially access data in a temporary directory created by the Guava API com.google.common.io.Files.createTempDir(). By default, on unix-like systems, the created directory is world-readable (readable by an attacker with access to the system). The method in question has been marked @Deprecated in versions 30.0 and later and should not be used. For Android developers, we recommend choosing a temporary directory API provided by Android, such as context.getCacheDir(). For other Java developers, we recommend migrating to the Java 7 API java.nio.file.Files.createTempDirectory() which explicitly configures permissions of 700, or configuring the Java runtime's java.io.tmpdir system property to point to a location whose permissions are appropriately configured. > > **CVSS Score:** 3.3 > > **CVSS Vector:** CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:N *** (at-me [in a reply](https://help.sonatype.com/lift) with `help` or `ignore`) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
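The migration the advisory recommends is a one-line switch to the JDK API; a minimal sketch (the class and method names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Replacement for Guava's deprecated Files.createTempDir():
// java.nio.file.Files.createTempDirectory creates the directory with
// owner-only (700) permissions on POSIX file systems by default.
public class TempDirs {
    public static Path newWorkDir() throws IOException {
        return Files.createTempDirectory("solr-bench-");
    }
}
```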
[jira] [Commented] (SOLR-15512) Add support for passing IndexSearcher an executor for multi-threaded multi-segment and possible future multi-threaded per segment search.
[ https://issues.apache.org/jira/browse/SOLR-15512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383686#comment-17383686 ] Mark Robert Miller commented on SOLR-15512: --- Yeah, I use a visitor for that - unless somehow what I pushed is stale. > Add support for passing IndexSearcher an executor for multi-threaded > multi-segment and possible future multi-threaded per segment search. > - > > Key: SOLR-15512 > URL: https://issues.apache.org/jira/browse/SOLR-15512 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Robert Miller >Priority: Minor > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[jira] [Commented] (SOLR-15512) Add support for passing IndexSearcher an executor for multi-threaded multi-segment and possible future multi-threaded per segment search.
[ https://issues.apache.org/jira/browse/SOLR-15512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383687#comment-17383687 ] Mark Robert Miller commented on SOLR-15512: --- I attempted to address all the review comments you previously received in that PR. Another update coming shortly though. > Add support for passing IndexSearcher an executor for multi-threaded > multi-segment and possible future multi-threaded per segment search. > - > > Key: SOLR-15512 > URL: https://issues.apache.org/jira/browse/SOLR-15512 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Robert Miller >Priority: Minor > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
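The issue title above is about handing IndexSearcher an executor for multi-threaded, multi-segment search. Lucene's `IndexSearcher(IndexReader, Executor)` constructor is the hook for that; when an executor is supplied, searches fan out across segment slices on the pool. The wiring below is only a sketch (Solr's actual integration is what this issue and its PR are about):

```java
import java.util.concurrent.ExecutorService;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.Directory;

public class ConcurrentSearcher {
    // Open a searcher whose per-segment work is scheduled on the given pool.
    // (Helper name and wiring are illustrative, not the PR's code.)
    static IndexSearcher open(Directory dir, ExecutorService pool) throws Exception {
        DirectoryReader reader = DirectoryReader.open(dir);
        return new IndexSearcher(reader, pool);
    }
}
```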
[jira] [Commented] (SOLR-14446) Upload configset should use ZkClient.multi()
[ https://issues.apache.org/jira/browse/SOLR-14446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383692#comment-17383692 ] Mark Robert Miller commented on SOLR-14446: --- This is also just very slow as is, and multi would be a drastic improvement there as well. The async API should also always be considered whenever considering the use of multi. There are pluses and minuses to either approach, but the performance is essentially identical (a multi, or pipelining a series of async calls and then waiting). With multi, you do get the atomicity - everything does exactly what you asked for, or it fails and you try again. This is often overkill except for very specific cases though. If you went with async here instead of multi, as an example: let's say you are doing a config update but all the files are new except one. Instead of having to reupload everything in another attempt (say you just tried to create everything on attempt 1), you can see all the new files succeed, see that the one file existed, and just update that one file in step 2. Not recommending an approach here, just FYI. > Upload configset should use ZkClient.multi() > > > Key: SOLR-14446 > URL: https://issues.apache.org/jira/browse/SOLR-14446 > Project: Solr > Issue Type: Improvement >Reporter: Ishan Chattopadhyaya >Priority: Major > > Based on a private discussion with [~dsmiley] and [~dragonsinth] for > SOLR-14425, it occurred to me that our configset upload is a loop over all > files in a configset and individual writes. > [https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/admin/ConfigSetsHandler.java#L184] > > It might make sense to use ZkClient.multi() here so that collection creation > doesn't need to guess whether all files of the configset made it into the ZK > or not (they will either all be there, or none at all). 
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
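The atomic alternative discussed above can be sketched with ZooKeeper's `multi()` API: one `Op.create` per configset file, committed as a single batch. This is only an illustration of the technique, not Solr's actual ConfigSetsHandler code; the paths and the pre-connected `ZooKeeper` handle are assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.Op;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class AtomicConfigSetUpload {
    // Upload every file of a configset in one atomic multi(): either all znodes
    // are created, or none are.
    static void upload(ZooKeeper zk, String configName, Map<String, byte[]> files)
            throws KeeperException, InterruptedException {
        List<Op> ops = new ArrayList<>();
        String base = "/configs/" + configName;
        ops.add(Op.create(base, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT));
        for (Map.Entry<String, byte[]> e : files.entrySet()) {
            ops.add(Op.create(base + "/" + e.getKey(), e.getValue(),
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT));
        }
        // multi() is atomic: on any failure (e.g. a node already exists) the whole
        // batch is rolled back and a KeeperException identifies the failing op.
        zk.multi(ops);
    }
}
```

The async alternative described in the comment would instead issue one `zk.create(...)` per file with a completion callback, then wait on all callbacks and retry only the files that failed.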
[jira] [Comment Edited] (SOLR-14446) Upload configset should use ZkClient.multi()
[ https://issues.apache.org/jira/browse/SOLR-14446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383692#comment-17383692 ] Mark Robert Miller edited comment on SOLR-14446 at 7/20/21, 1:46 AM: - This is also just very slow as is, and multi would be a drastic improvement there as well. The async API should also always be considered whenever considering the use of multi. There are pluses and minuses to either approach, but the performance is essentially identical (a multi, or pipelining a series of async calls and then waiting). With multi, you do get the atomicity - everything does exactly what you asked for, or it fails and you try again. This is often overkill except for very specific cases though. If you went with async here instead of multi, as an example: let's say you are doing a config update but all the files are new except one. Instead of having to reupload everything in another attempt (say you just tried to create everything on attempt 1), you can see all the new files succeed, see that the one file existed, and just update that one file in step 2. Not recommending an approach here, just FYI. was (Author: markrmiller): This is also just very slow as is, which multi also acts as a drastic improvement on. The async API should also always be considered whenever considering the use of multi. The are pluses and minuses to either approach, but the performance is essentially identical (a multi or pipelining a series of async calls and then waiting). With multi, you do get the atomicity - everything does exactly what you asked for or fails and you try again. This is often overkill exception for very specific cases though. If you went with async here instead of multi as an example:,lets say you are doing a config update but all the files are new except one. 
Instead of having to reupload everything in another attempt (say you just tried to create everything on attempt 1), you can see all the new files succeed, that the one file existed, and just update that one file in step 2. Not recommending an approach here, just FYI. > Upload configset should use ZkClient.multi() > > > Key: SOLR-14446 > URL: https://issues.apache.org/jira/browse/SOLR-14446 > Project: Solr > Issue Type: Improvement >Reporter: Ishan Chattopadhyaya >Priority: Major > > Based on a private discussion with [~dsmiley] and [~dragonsinth] for > SOLR-14425, it occurred to me that our configset upload is a loop over all > files in a configset and individual writes. > [https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/admin/ConfigSetsHandler.java#L184] > > It might make sense to use ZkClient.multi() here so that collection creation > doesn't need to guess whether all files of the configset made it into the ZK > or not (they will either all be there, or none at all). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[jira] [Commented] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset
[ https://issues.apache.org/jira/browse/SOLR-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383703#comment-17383703 ] Mark Robert Miller commented on SOLR-12386: --- Two comments: * in combination with a retry mechanism on creation of ZkSolrResourceLoader that ensures the ZK-based configSet dir exists. Retries and attempts from everyone to create core zk nodes (and retrying, often per path part) is wasteful, poorly scalable behavior. You can save a lot of craziness by simply having a single node on first startup create all the fundamental and expected zk nodes. * That retry mechanism would probably do a Zk.sync() in-between. Careful about overuse of that sync - it's called a "slow read" and it's easy to overuse without need. It's also not likely needed in these cases, where you should likely be doing optimistic actions that will fail if another client is racing you, regardless of whether you sync all over. > Test fails for "Can't find resource" for files in the _default configset > > > Key: SOLR-12386 > URL: https://issues.apache.org/jira/browse/SOLR-12386 > Project: Solr > Issue Type: Test > Components: SolrCloud >Reporter: David Smiley >Priority: Minor > Attachments: cant find resource, stacktrace.txt > > > Some tests, especially ConcurrentCreateRoutedAliasTest, have > sporadically failed with the message "Can't find resource" pertaining to a > file that is in the default ConfigSet yet mysteriously can't be found. This > happens when a collection is being created that ultimately fails for this > reason. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[jira] [Commented] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset
[ https://issues.apache.org/jira/browse/SOLR-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383710#comment-17383710 ] Mark Robert Miller commented on SOLR-12386: --- {quote}Retries and attempts from everyone to create core zk nodes {quote} I should illustrate that a bit. Let's say you start up 100 solr servers for a nice new cluster. Or an existing cluster. Generally, what is going to happen is that every Solr instance is going to hit ZK and do something like, ensure /configs exists. And makePath on /path1/path2/path3. Which may be 3 calls, just in case path1 and path2 don't yet exist. So what essentially happens is that, all the time, we have 100 solr servers trying / retrying to make the same paths, the same existing or about-to-exist path parts, etc. So we do something like boot up a new cluster, and the zk base layout could maybe be created with, let's say, 15 zk calls. And maybe we make 900. And 100's more on a restart for nodes that are created on day 1, instant 1. Maybe we recreate nodes someone just tried to delete in this process. Since independent things in lots of random places are trying to ensure nodes exist (that should be one and done on first startup, or collection create, etc.), maybe you end up with all kinds of zk calls from all these servers even at random times outside startup, restart, collection create. When you could simply have a one-time instance of: create these dozen paths, one server says it, it's done, case closed forever more. 
> Test fails for "Can't find resource" for files in the _default configset > > > Key: SOLR-12386 > URL: https://issues.apache.org/jira/browse/SOLR-12386 > Project: Solr > Issue Type: Test > Components: SolrCloud >Reporter: David Smiley >Priority: Minor > Attachments: cant find resource, stacktrace.txt > > > Some tests, especially ConcurrentCreateRoutedAliasTest, have failed > sporadically failed with the message "Can't find resource" pertaining to a > file that is in the default ConfigSet yet mysteriously can't be found. This > happens when a collection is being created that ultimately fails for this > reason. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[jira] [Comment Edited] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset
[ https://issues.apache.org/jira/browse/SOLR-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383710#comment-17383710 ] Mark Robert Miller edited comment on SOLR-12386 at 7/20/21, 2:48 AM: - {quote}Retries and attempts from everyone to create core zk nodes {quote} I should illustrate that a bit. Let's say you start up 100 solr servers for a nice new cluster. Or an existing cluster. Generally, what is going to happen is that every Solr instance is going to hit ZK and do something like, ensure /configs exists. And makePath on /path1/path2/path3. Which may be 3 calls, just in case path1 and path2 don't yet exist. So what essentially happens, is that all the time, we have 100 solr servers try / retrying to make the same paths, the same existing or about to exist path parts, etc. Racing each other to get those same path parts in for a path. So we do something like boot up a new cluster, and the zk base layout could maybe be created with, let's say, 15 zk calls. And maybe we make 900. And 100's more on a restart for nodes that are created on day 1, instant 1. Maybe we recreate nodes someone / some process just tried to delete in this process. Since independent things in lots of random places are trying to ensure nodes exist (that should be one and done on first startup, or collection create, etc), maybe you end up with all kinds of zk calls from all these servers even at random times outside startup, restart, collection create. When you could simply have a one-time instance of: create these dozen paths, one client says it, it's done, case closed forever more. was (Author: markrmiller): {quote}Retries and attempts from everyone to create core zk nodes {quote} I should illustrate that a bit. Lets say you start up 100 solr servers for a nice new cluster. Or an existing cluster. Generally, what is going to happen is that every Solr instance is going to hit ZK and do something like, ensure /configs exists. 
And makePath on /path1/path2/path3. Which may be 3 calls, just in case path1 and path2 don't yet exist. So what essentially happens, is that all the time, we have 100 solr servers try / retrying to make same paths, the same exisiting or about to exist path parts, etc. So we do something like boot up a new cluster, and the zk base layout could maybe created with, let's say, 15 zk calls. And maybe we make 900. And 100's more on a restart for nodes that are created on day 1, instant 1. Maybe we recreate nodes someone just tried to delete in this process. Since independent things in lots of random places are trying to ensure nodes exist (that should be one and done on first startup, or collection create, etc, maybe you end up will all kinds of zk calls from all these servers even at random times outside startup, restart, collection create. When you could simply have one time instance of, create these dozen paths, one server says it, it's done, case closed forever more. > Test fails for "Can't find resource" for files in the _default configset > > > Key: SOLR-12386 > URL: https://issues.apache.org/jira/browse/SOLR-12386 > Project: Solr > Issue Type: Test > Components: SolrCloud >Reporter: David Smiley >Priority: Minor > Attachments: cant find resource, stacktrace.txt > > > Some tests, especially ConcurrentCreateRoutedAliasTest, have failed > sporadically failed with the message "Can't find resource" pertaining to a > file that is in the default ConfigSet yet mysteriously can't be found. This > happens when a collection is being created that ultimately fails for this > reason. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[jira] [Comment Edited] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset
[ https://issues.apache.org/jira/browse/SOLR-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383710#comment-17383710 ] Mark Robert Miller edited comment on SOLR-12386 at 7/20/21, 2:49 AM: - {quote}Retries and attempts from everyone to create core zk nodes {quote} I should illustrate that a bit. Let's say you start up 100 solr servers for a nice new cluster. Or an existing cluster. Generally, what is going to happen is that every Solr instance is going to hit ZK and do something like, ensure /configs exists. And makePath on /path1/path2/path3. Which may be 3 calls, just in case path1 and path2 don't yet exist. So what essentially happens, is that all the time, we have 100 solr servers try / retrying to make the same paths, the same existing or about to exist path parts, etc. Racing each other to get those same path parts in for a path. So we do something like boot up a new cluster, and the zk base layout could maybe be created with, let's say, 15 zk calls. And maybe we make 900 (*generously* conservative). And 100's more on a restart for nodes that are created on day 1, instant 1. Maybe we recreate nodes someone / some process just tried to delete in this process. Since independent things in lots of random places are trying to ensure nodes exist (that should be one and done on first startup, or collection create, etc), maybe you end up with all kinds of zk calls from all these servers even at random times outside startup, restart, collection create. When you could simply have a one-time instance of: create these dozen paths, one client says it, it's done, case closed forever more. was (Author: markrmiller): {quote}Retries and attempts from everyone to create core zk nodes {quote} I should illustrate that a bit. Lets say you start up 100 solr servers for a nice new cluster. Or an existing cluster. 
Generally, what is going to happen is that every Solr instance is going to hit ZK and do something like, ensure /configs exists. And makePath on /path1/path2/path3. Which may be 3 calls, just in case path1 and path2 don't yet exist. So what essentially happens, is that all the time, we have 100 solr servers try / retrying to make the same paths, the same existing or about to exist path parts, etc. Racing each other to get those same path parts in for a path. So we do something like boot up a new cluster, and the zk base layout could maybe be created with, let's say, 15 zk calls. And maybe we make 900. And 100's more on a restart for nodes that are created on day 1, instant 1. Maybe we recreate nodes someone / some process just tried to delete in this process. Since independent things in lots of random places are trying to ensure nodes exist (that should be one and done on first startup, or collection create, etc), maybe you end up will all kinds of zk calls from all these servers even at random times outside startup, restart, collection create. When you could simply have a one time instance of, create these dozen paths, one client says it, it's done, case closed forever more. > Test fails for "Can't find resource" for files in the _default configset > > > Key: SOLR-12386 > URL: https://issues.apache.org/jira/browse/SOLR-12386 > Project: Solr > Issue Type: Test > Components: SolrCloud >Reporter: David Smiley >Priority: Minor > Attachments: cant find resource, stacktrace.txt > > > Some tests, especially ConcurrentCreateRoutedAliasTest, have failed > sporadically failed with the message "Can't find resource" pertaining to a > file that is in the default ConfigSet yet mysteriously can't be found. This > happens when a collection is being created that ultimately fails for this > reason. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[GitHub] [solr] dsmiley commented on pull request #221: SOLR-15258: ConfigSetService operations ought to throw IOException
dsmiley commented on pull request #221: URL: https://github.com/apache/solr/pull/221#issuecomment-883013519 > Is this an is or ought statement? "ought". It's a minor nuisance really. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[GitHub] [solr] dsmiley commented on a change in pull request #214: SOLR-15428: Integrate the OpenJDK JMH micro benchmark framework for m…
dsmiley commented on a change in pull request #214: URL: https://github.com/apache/solr/pull/214#discussion_r672775627 ## File path: solr/test-framework/src/java/org/apache/solr/util/RandomizeSSL.java ## @@ -104,10 +105,10 @@ public SSLRandomizer(double ssl, double clientAuth, String debug) { public SSLTestConfig createSSLTestConfig() { // even if we know SSL is disabled, always consume the same amount of randomness // that way all other test behavior should be consistent even if a user adds/removes @SuppressSSL - - final boolean useSSL = TestUtil.nextInt(LuceneTestCase.random(), 0, 999) < + Random random = new Random(); Review comment: @sonatype-lift ignore -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[GitHub] [solr] dsmiley commented on a change in pull request #214: SOLR-15428: Integrate the OpenJDK JMH micro benchmark framework for m…
dsmiley commented on a change in pull request #214: URL: https://github.com/apache/solr/pull/214#discussion_r672775917 ## File path: solr/benchmark/build.gradle ## @@ -0,0 +1,140 @@ +/* Review comment: @sonatype-lift ignore -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[GitHub] [solr] sonatype-lift[bot] commented on a change in pull request #214: SOLR-15428: Integrate the OpenJDK JMH micro benchmark framework for m…
sonatype-lift[bot] commented on a change in pull request #214: URL: https://github.com/apache/solr/pull/214#discussion_r672775940 ## File path: solr/benchmark/build.gradle ## @@ -0,0 +1,140 @@ +/* Review comment: I've recorded this as ignored for this pull request. If you change your mind, just comment `@sonatype-lift unignore`. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[jira] [Commented] (SOLR-14425) Fix ZK sync usage to be synchronous (blocking)
[ https://issues.apache.org/jira/browse/SOLR-14425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383717#comment-17383717 ] Mark Robert Miller commented on SOLR-14425: --- It's unlikely Solr tests would ever pick up any issue around using sync() or not given that we run tests with a single ZK instance, and sync is for the case when client1 writes something to zk, communicates to client2 that it should now find that write in ZK, and client2 does a read on a different ZK instance that does not yet see the update on its read. It's a workaround for zk not providing linearizable reads (as it does writes), which is done for much better read performance. But it's not likely to matter one way or another with a single zk instance as your zk cluster. > Fix ZK sync usage to be synchronous (blocking) > -- > > Key: SOLR-14425 > URL: https://issues.apache.org/jira/browse/SOLR-14425 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Reporter: David Smiley >Priority: Minor > Time Spent: 0.5h > Remaining Estimate: 0h > > As of this writing, we only use one call to ZK's "sync" method. It's related > to collection aliases -- I added this. I discovered I misunderstood the > semantics of the API; it syncs in the background and thus returns > immediately. Looking at ZK's sync CLI command and Curator both made me > realize my folly. I'm considering this only a "minor" issue because I'm not > sure I've seen a bug from this; or maybe I did in spooky test failures over a > year ago -- I'm not sure. And we don't use this pervasively (yet). > It occurred to me that if Solr embraced the Curator framework abstraction > over ZooKeeper, I would not have fallen into that trap. I'll file a separate > issue for that. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
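The bug the issue describes - `ZooKeeper.sync()` schedules the sync and returns immediately - is conventionally worked around by blocking on the completion callback. A sketch of such a wrapper (helper name and timeout handling are illustrative, not Solr's fix):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public class BlockingSync {
    // sync() only enqueues the request; the VoidCallback fires once this client's
    // view of the path has caught up with the ZK leader. Block until then.
    static void syncAndWait(ZooKeeper zk, String path, long timeoutMs)
            throws InterruptedException, KeeperException {
        CountDownLatch latch = new CountDownLatch(1);
        int[] resultCode = new int[1];
        zk.sync(path, (rc, p, ctx) -> {
            resultCode[0] = rc;
            latch.countDown();
        }, null);
        if (!latch.await(timeoutMs, TimeUnit.MILLISECONDS)) {
            throw new KeeperException.OperationTimeoutException();
        }
        if (resultCode[0] != KeeperException.Code.OK.intValue()) {
            throw KeeperException.create(KeeperException.Code.get(resultCode[0]), path);
        }
    }
}
```

This is the shape of what Curator's wrappers do internally, which is why the reporter notes Curator would have avoided the trap.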
[jira] [Comment Edited] (SOLR-14425) Fix ZK sync usage to be synchronous (blocking)
[ https://issues.apache.org/jira/browse/SOLR-14425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383717#comment-17383717 ] Mark Robert Miller edited comment on SOLR-14425 at 7/20/21, 3:20 AM: - It's unlikely Solr tests would ever pick up any issue around using sync() or not given that we run tests with a single ZK instance, and sync is for the case when client1 writes something to zk, communicates to client2 that it should now find that write in ZK, and client2 does a read on a different ZK instance that does not yet see the update on its read. It's a workaround for zk not providing linearizable reads (as it does writes), which is done for much better read performance. But it's not likely to matter one way or another with a single zk instance as your zk cluster. was (Author: markrmiller): It's unlikely Solr tests would ever pick up any issue around using sync() or not given that we run tests with a single ZK instance and sync is for the case when client1 writes something to zk, communicates to client2 it should not find that in ZK, and client2 does a read on a different ZK instance that does not yet see the update on it's read. It's a workaround for zk not providing linearizable reads (as it does writes), which is done for much better read performance. But it's not likely to matter one way or another with a single zk instance as your zk cluster. > Fix ZK sync usage to be synchronous (blocking) > -- > > Key: SOLR-14425 > URL: https://issues.apache.org/jira/browse/SOLR-14425 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Reporter: David Smiley >Priority: Minor > Time Spent: 0.5h > Remaining Estimate: 0h > > As of this writing, we only use one call to ZK's "sync" method. It's related > to collection aliases -- I added this. I discovered I misunderstood the > semantics of the API; it syncs in the background and thus returns > immediately. Looking at ZK's sync CLI command and Curator both made me > realize my folly. 
I'm considering this only a "minor" issue because I'm not > sure I've seen a bug from this; or maybe I did in spooky test failures over a > year ago -- I'm not sure. And we don't use this pervasively (yet). > It occurred to me that if Solr embraced the Curator framework abstraction > over ZooKeeper, I would not have fallen into that trap. I'll file a separate > issue for that. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[GitHub] [solr] dsmiley commented on a change in pull request #214: SOLR-15428: Integrate the OpenJDK JMH micro benchmark framework for m…
dsmiley commented on a change in pull request #214: URL: https://github.com/apache/solr/pull/214#discussion_r672779442 ## File path: solr/test-framework/src/java/org/apache/solr/util/RandomizeSSL.java ## @@ -104,10 +105,10 @@ public SSLRandomizer(double ssl, double clientAuth, String debug) { public SSLTestConfig createSSLTestConfig() { // even if we know SSL is disabled, always consume the same amount of randomness // that way all other test behavior should be consistent even if a user adds/removes @SuppressSSL - - final boolean useSSL = TestUtil.nextInt(LuceneTestCase.random(), 0, 999) < + Random random = new Random(); Review comment: @dweiss any thoughts on the above? In general, it'd be nice to have code that uses a consistent random source and is compatible with RandomizedTesting yet does not actually depend on RandomizedRunner (e.g. because the code _sometimes_ runs without RR -- e.g. benchmark). -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
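One possible shape for what the review comment asks (reproducible randomness under RandomizedTesting, graceful fallback without it) is to resolve a seed from the runner's system property when present. This is only a sketch; the property name and hex format assumed here follow RandomizedTesting's conventions, and none of this is the PR's actual code:

```java
import java.util.Random;

public class SeedResolver {
    // Honor the RandomizedTesting "tests.seed" property when present so runs are
    // reproducible under the test runner, but fall back to a time-based seed when
    // running standalone (e.g. in a benchmark), so this class never depends on
    // RandomizedRunner itself.
    static long resolveSeed() {
        String prop = System.getProperty("tests.seed");
        if (prop != null) {
            try {
                // Seeds look like hex, possibly bracketed/chained; keep the hex digits.
                return Long.parseUnsignedLong(prop.replaceAll("[^0-9A-Fa-f]", ""), 16);
            } catch (NumberFormatException ignored) {
                // malformed property: fall through to the default seed
            }
        }
        return System.nanoTime();
    }

    public static void main(String[] args) {
        // The same seed always yields the same sequence - the consistency property
        // the review comment is after.
        long seed = resolveSeed();
        Random a = new Random(seed);
        Random b = new Random(seed);
        System.out.println("reproducible: " + (a.nextInt(1000) == b.nextInt(1000)));
    }
}
```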
[GitHub] [solr] dsmiley commented on a change in pull request #227: SOLR-15550 Save submitter trace in ObjectReleaseTracker
dsmiley commented on a change in pull request #227: URL: https://github.com/apache/solr/pull/227#discussion_r672780636 ## File path: solr/solrj/src/java/org/apache/solr/common/util/ExecutorUtil.java ## @@ -196,7 +198,13 @@ public void execute(final Runnable command) { String ctxStr = contextString.toString().replace("/", "//"); final String submitterContextStr = ctxStr.length() <= MAX_THREAD_NAME_LEN ? ctxStr : ctxStr.substring(0, MAX_THREAD_NAME_LEN); - final Exception submitterStackTrace = enableSubmitterStackTrace ? new Exception("Submitter stack trace") : null; + final Exception submitterStackTrace; + if (enableSubmitterStackTrace) { +Exception grandParentSubmitter = submitter.get(); +submitterStackTrace = new Exception("Submitter stack trace", grandParentSubmitter); Review comment: Hmm; isn't this one of those cases we should call fillInStackTrace() on the exception because we don't throw it? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[jira] [Commented] (SOLR-15550) Record submitterStackTrace on ObjectReleaseTracker in tests
[ https://issues.apache.org/jira/browse/SOLR-15550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383719#comment-17383719 ] David Smiley commented on SOLR-15550: - This will really help! > Record submitterStackTrace on ObjectReleaseTracker in tests > --- > > Key: SOLR-15550 > URL: https://issues.apache.org/jira/browse/SOLR-15550 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mike Drob >Priority: Major > Attachments: SOLR-15550.patch > > Time Spent: 20m > Remaining Estimate: 0h > > We currently collect the submitterStackTrace in MDCAwareThreadPool.execute > and log it in case of exception. This is very useful, but doesn't propagate > to objects gathered by ObjectReleaseTracker, so in case of failure we end up > with logging like: > {noformat} > org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: > org.apache.solr.core.SolrCore > at > org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42) > at org.apache.solr.core.SolrCore.(SolrCore.java:1090) > at org.apache.solr.core.SolrCore.(SolrCore.java:928) > at > org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1433) > at > org.apache.solr.core.CoreContainer.lambda$load$10(CoreContainer.java:872) > at > com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:202) > at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) > at > org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:224) > at > java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) > at > java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) > at java.base/java.lang.Thread.run(Thread.java:834) > {noformat} > which doesn't help me very much in debugging why I have an unreleased 
> SolrCore. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[GitHub] [solr] dsmiley commented on pull request #225: SOLR-15538 Update Lucene Preview Release dependency
dsmiley commented on pull request #225: URL: https://github.com/apache/solr/pull/225#issuecomment-883026028 Thanks for this -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org For additional commands, e-mail: issues-h...@solr.apache.org
[GitHub] [solr] madrob commented on a change in pull request #227: SOLR-15550 Save submitter trace in ObjectReleaseTracker
madrob commented on a change in pull request #227: URL: https://github.com/apache/solr/pull/227#discussion_r672783290

File path: solr/solrj/src/java/org/apache/solr/common/util/ExecutorUtil.java

@@ -196,7 +198,13 @@ public void execute(final Runnable command) {
     String ctxStr = contextString.toString().replace("/", "//");
     final String submitterContextStr = ctxStr.length() <= MAX_THREAD_NAME_LEN ? ctxStr : ctxStr.substring(0, MAX_THREAD_NAME_LEN);
-    final Exception submitterStackTrace = enableSubmitterStackTrace ? new Exception("Submitter stack trace") : null;
+    final Exception submitterStackTrace;
+    if (enableSubmitterStackTrace) {
+      Exception grandParentSubmitter = submitter.get();
+      submitterStackTrace = new Exception("Submitter stack trace", grandParentSubmitter);

Review comment: I don't understand the suggestion - Throwable's constructor already calls it for us.
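The pattern in the diff under review — capturing a "submitter stack trace" when a task is handed off, and chaining the previous hop's trace as the cause — can be demonstrated in isolation. This is a hedged sketch with made-up names (`SubmitterTraceDemo`, `wrap`), not the actual ExecutorUtil code; it also shows the point madrob makes in the review: `Throwable`'s constructor calls `fillInStackTrace()` itself, so constructing the `Exception` is enough to record the current stack.

```java
// Sketch of submitter-stack-trace chaining across task hand-offs.
// Each wrap() records where the task was submitted; if the submitting
// thread was itself running a wrapped task, that earlier trace becomes
// the cause, so a failure log shows the whole chain of hand-offs.
public class SubmitterTraceDemo {
    static final ThreadLocal<Exception> SUBMITTER = new ThreadLocal<>();

    // new Exception(...) captures the current stack automatically because
    // Throwable's constructor invokes fillInStackTrace() — no explicit call.
    static Exception captureSubmitterTrace() {
        Exception grandParentSubmitter = SUBMITTER.get();
        return new Exception("Submitter stack trace", grandParentSubmitter);
    }

    static Runnable wrap(Runnable task) {
        Exception trace = captureSubmitterTrace();
        return () -> {
            SUBMITTER.set(trace);           // visible to nested submissions
            try {
                task.run();
            } finally {
                SUBMITTER.remove();
            }
        };
    }
}
```

A task wrapped while another wrapped task is running picks up that outer trace as its cause, which is exactly the `grandParentSubmitter` chaining in the diff.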
[jira] [Commented] (SOLR-15550) Record submitterStackTrace on ObjectReleaseTracker in tests
[ https://issues.apache.org/jira/browse/SOLR-15550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383724#comment-17383724 ] Mike Drob commented on SOLR-15550:

That's a good patch, Mark. I've got some of your changes incorporated locally, and had to iterate a few more times with other edge cases as I continued testing.

> Record submitterStackTrace on ObjectReleaseTracker in tests
> Key: SOLR-15550
> URL: https://issues.apache.org/jira/browse/SOLR-15550
> Project: Solr
> Issue Type: Test
> Security Level: Public (Default Security Level. Issues are Public)
> Components: Tests
> Reporter: Mike Drob
> Priority: Major
> Attachments: SOLR-15550.patch
> Time Spent: 0.5h
> Remaining Estimate: 0h
[jira] [Commented] (SOLR-15486) consider SolrCoreState.inflightUpdatesCounter logic in ZK-unware Solr
[ https://issues.apache.org/jira/browse/SOLR-15486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383728#comment-17383728 ] David Smiley commented on SOLR-15486:

{quote}
* Alternatively if waiting for inflight updates to complete (at that point in the shutdown sequence) is generally beneficial then one could say that the {{pauseUpdatesAndAwaitInflightRequests}} logic should be added for the ZK-unware code path in {{CoreContainer.shutdown}} also.
{quote}
Makes sense; it removes needless SolrCloud specificity. I would word it that way, not as the presence/absence of ZK, which is admittedly the typical implementation detail of how we detect SolrCloud.

> consider SolrCoreState.inflightUpdatesCounter logic in ZK-unware Solr
> Key: SOLR-15486
> URL: https://issues.apache.org/jira/browse/SOLR-15486
> Project: Solr
> Issue Type: Task
> Reporter: Christine Poerschke
> Assignee: Christine Poerschke
> Priority: Minor
>
> SOLR-14942 added the {{inflightUpdatesCounter}} logic to reduce leader election time on node shutdown.
> From my understanding of the code so far:
> * Since the earlier triggering of an election is specific to ZK-aware Solr then one could say that {{ContentStreamHandlerBase.handleRequestBody}} doing inflight update registers and deregisters is unnecessary.
> * Alternatively if waiting for inflight updates to complete (at that point in the shutdown sequence) is generally beneficial then one could say that the {{pauseUpdatesAndAwaitInflightRequests}} logic should be added for the ZK-unware code path in {{CoreContainer.shutdown}} also.
> Illustrative draft pull request with both options: https://github.com/apache/solr/pull/180
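The pause-and-await pattern being discussed — stop accepting new update requests, then block shutdown until in-flight ones finish — can be sketched with a `java.util.concurrent.Phaser` as the in-flight counter. This is an illustrative sketch with hypothetical names, not Solr's actual SolrCoreState implementation.

```java
import java.util.concurrent.Phaser;

// Hedged sketch of pauseUpdatesAndAwaitInflightRequests-style shutdown
// logic: each update registers on entry and deregisters on exit; the
// shutdown path flips a flag to reject new updates, then waits for the
// in-flight ones to drain. Names are illustrative, not Solr's.
public class InflightUpdatesSketch {
    // The constructor argument registers one party for the shutdown path
    // itself, keeping the phaser open while no updates are in flight.
    private final Phaser inflight = new Phaser(1);
    private volatile boolean paused = false;

    // Called at the start of each update request; false means rejected.
    public boolean register() {
        if (paused) {
            return false;
        }
        inflight.register();
        return true;
    }

    // Called when an update request completes.
    public void deregister() {
        inflight.arriveAndDeregister();
    }

    // Shutdown path: stop accepting updates, wait for in-flight ones.
    public void pauseUpdatesAndAwaitInflightRequests() {
        paused = true;
        // Arrive with our own party; unblocks once every registered
        // update has arrived (i.e. deregistered).
        inflight.arriveAndAwaitAdvance();
    }
}
```

The appeal of making this path run in both ZK-aware and ZK-unaware `CoreContainer.shutdown` is visible in the sketch: nothing in it depends on SolrCloud at all.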
[GitHub] [solr] dsmiley commented on a change in pull request #227: SOLR-15550 Save submitter trace in ObjectReleaseTracker
dsmiley commented on a change in pull request #227: URL: https://github.com/apache/solr/pull/227#discussion_r672789135

File path: solr/solrj/src/java/org/apache/solr/common/util/ExecutorUtil.java

@@ -196,7 +198,13 @@ public void execute(final Runnable command) {
     String ctxStr = contextString.toString().replace("/", "//");
     final String submitterContextStr = ctxStr.length() <= MAX_THREAD_NAME_LEN ? ctxStr : ctxStr.substring(0, MAX_THREAD_NAME_LEN);
-    final Exception submitterStackTrace = enableSubmitterStackTrace ? new Exception("Submitter stack trace") : null;
+    final Exception submitterStackTrace;
+    if (enableSubmitterStackTrace) {
+      Exception grandParentSubmitter = submitter.get();
+      submitterStackTrace = new Exception("Submitter stack trace", grandParentSubmitter);

Review comment: Ah nevermind 👍
[GitHub] [solr] dsmiley commented on a change in pull request #227: SOLR-15550 Save submitter trace in ObjectReleaseTracker
dsmiley commented on a change in pull request #227: URL: https://github.com/apache/solr/pull/227#discussion_r672789251

File path: solr/solrj/src/java/org/apache/solr/common/util/ExecutorUtil.java

@@ -196,7 +198,13 @@ public void execute(final Runnable command) {
     String ctxStr = contextString.toString().replace("/", "//");
     final String submitterContextStr = ctxStr.length() <= MAX_THREAD_NAME_LEN ? ctxStr : ctxStr.substring(0, MAX_THREAD_NAME_LEN);
-    final Exception submitterStackTrace = enableSubmitterStackTrace ? new Exception("Submitter stack trace") : null;
+    final Exception submitterStackTrace;

Review comment: Why create Exception specifically -- why not do Throwable?
[jira] [Commented] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset
[ https://issues.apache.org/jira/browse/SOLR-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383733#comment-17383733 ] David Smiley commented on SOLR-12386:

If one node is to initialize ZK, then how do you suggest that the nodes coordinate waiting for the initialization to complete? Anyway, it seems like optimizing something that doesn't need to be optimized. Who cares how many ZK races occur on a cluster startup?

> Test fails for "Can't find resource" for files in the _default configset
> Key: SOLR-12386
> URL: https://issues.apache.org/jira/browse/SOLR-12386
> Project: Solr
> Issue Type: Test
> Components: SolrCloud
> Reporter: David Smiley
> Priority: Minor
> Attachments: cant find resource, stacktrace.txt
>
> Some tests, especially ConcurrentCreateRoutedAliasTest, have sporadically failed
> with the message "Can't find resource" pertaining to a file that is in the default
> ConfigSet yet mysteriously can't be found. This happens when a collection is being
> created that ultimately fails for this reason.
[jira] [Assigned] (SOLR-15552) Publish some simple perf comparisons between facet and drill expressions.
[ https://issues.apache.org/jira/browse/SOLR-15552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Robert Miller reassigned SOLR-15552:

Assignee: Mark Robert Miller

> Publish some simple perf comparisons between facet and drill expressions.
> Key: SOLR-15552
> URL: https://issues.apache.org/jira/browse/SOLR-15552
> Project: Solr
> Issue Type: Task
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Mark Robert Miller
> Assignee: Mark Robert Miller
> Priority: Major
[jira] [Created] (SOLR-15552) Publish some simple perf comparisons between facet and drill expressions.
Mark Robert Miller created SOLR-15552:

Summary: Publish some simple perf comparisons between facet and drill expressions.
Key: SOLR-15552
URL: https://issues.apache.org/jira/browse/SOLR-15552
Project: Solr
Issue Type: Task
Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mark Robert Miller