Thank you for the reply.  There was a typo in my email; we have a 6-node 
cluster. So no matter how many nodes are in a cluster, the collection limit 
for a SolrCloud is effectively fixed?

Are there plans to fix this problem, since multiple SolrClouds would be 
needed to support a large number of Solr collections?

Regards,
Rajeswari 


On 10/11/22, 12:53 PM, "Shawn Heisey" <elyog...@elyograg.org> wrote:

    On 10/11/22 12:30, Natarajan, Rajeswari wrote:
    > We have a six node SolrCloud cluster with about 780 collections, each 
    > having one shard and 3 replicas. We have a situation where collection 
    > create/delete now times out, and when we try the async option the job 
    > gets submitted and remains in that state for hours.  We tried 
    > restarting the Solr nodes; nothing changed. CPU usage (~0%) and heap 
    > (<70%) are good.
    >
    > In the Solr admin UI, the Cloud -> Tree section takes a long time to 
    > load; it looks like it takes time to connect to ZK. We are thinking of 
    > deleting collections manually on disk and in ZK.  Is there any other 
    > solution to get around this issue?  We don't see any errors in the logs.

    That many collections will lead to problems.  SolrCloud has a 
    scalability problem when the number of collections gets beyond a few 
    hundred.  I did some investigation into this a while back.

    
    https://issues.apache.org/jira/browse/SOLR-7191

    This issue was marked as resolved, though no code was committed in 
    connection with the issue.  Later tests that I did suggest that the 
    problem has gotten worse, not better, since version 6.x.  It wasn't a 
    rigorous re-test, so I have no hard numbers.

    Thanks,
    Shawn
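
For reference, the async flow mentioned in the quoted message can be driven 
directly against the Collections API and polled with REQUESTSTATUS.  Below is 
a minimal Python sketch; the base URL, collection name, async id, and the 
exact set of in-flight state strings are illustrative assumptions rather than 
details taken from this thread.

import time
import requests

SOLR = "http://localhost:8983/solr"   # placeholder base URL, not from this thread
COLLECTION = "example_collection"     # placeholder collection name
ASYNC_ID = "delete-example-1"         # arbitrary async request id

# Submit the DELETE asynchronously; Solr returns as soon as the task is
# queued, not when the collection has actually been removed.
requests.get(f"{SOLR}/admin/collections", params={
    "action": "DELETE",
    "name": COLLECTION,
    "async": ASYNC_ID,
}, timeout=30).raise_for_status()

# Poll REQUESTSTATUS until the task leaves the submitted/running states.
while True:
    resp = requests.get(f"{SOLR}/admin/collections", params={
        "action": "REQUESTSTATUS",
        "requestid": ASYNC_ID,
    }, timeout=30)
    resp.raise_for_status()
    state = resp.json()["status"]["state"]
    print("async task state:", state)
    if state not in ("submitted", "running"):
        break
    time.sleep(10)

In the situation described in this thread, the state would presumably stay 
at submitted or running indefinitely, which is consistent with the async job 
appearing stuck for hours.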

