> lose the added value of redundancy during
> the write cycle.
>
> Does anyone have any insight or ideas on whether my assumptions are correct? Does
> inter-node communication really add all this network overhead?
>
> Thanks,
> Katriel
>
>
--
John Pyeatt
Singlewire Software, LLC
www.singlewire.com
--
608.661.1184
john.pye...@singlewire.com
box, my Number of rows (estimated) looks right
on one box but the other one is several hundred thousand short.
Does anyone see anything wrong with my recovery steps?
--
John Pyeatt
Singlewire Software, LLC
www.singlewire.com
--
608.661.1184
john.pye...@singlewire.com
and 3.1.3 (that will be upgraded to 3.2.4).
>
> Thank you.
>
> --
> Daniel Curry
>
> Sr. Linux System Administrator, Network Operations
>
> PGP : AD5A 96DC 7556 A020 B8E7 0E4D 5D5E 9BA5 C83E 8C92
>
> Arrayent, Inc.
> 2317 Broadway Street, Suite 20
> Redwoo
reduce_cache_sizes_at: 0.85
reduce_cache_capacity_to: 0.6
in_memory_compaction_limit_in_mb: 64
Does anyone have any ideas why we are seeing this so selectively on one box?
Any cures???
--
John Pyeatt
Singlewire Software, LLC
www.singlewire.com
--
608.661.1184
john.pye...@singlewire.com
48, and based on some of the comments there seems to be
a lack of confidence around this problem.
Has anyone else seen this problem?
--
John Pyeatt
Singlewire Software, LLC
www.singlewire.com
--
608.661.1184
john.pye...@singlewire.com
Is there any way of resetting the nodetool info Exceptions value
manually?
Is there a JMX call I can make?
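A minimal sketch of reading that counter over standard JMX from Java. The
MBean name and attribute used here (org.apache.cassandra.db:type=StorageService,
ExceptionCount) are an assumption for this Cassandra generation; confirm them
with jconsole against your node. I'm not aware of a JMX operation that resets
the counter short of restarting the node, so this only reads it.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ReadExceptionCount {
    public static void main(String[] args) throws Exception {
        // Cassandra's default JMX port is 7199; adjust host/port for your node.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Assumed MBean/attribute; verify with jconsole for your version.
            ObjectName name = new ObjectName("org.apache.cassandra.db:type=StorageService");
            Object count = mbs.getAttribute(name, "ExceptionCount");
            System.out.println("Exceptions = " + count);
        } finally {
            connector.close();
        }
    }
}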
--
John Pyeatt
Singlewire Software, LLC
www.singlewire.com
--
608.661.1184
john.pye...@singlewire.com
2013 at 6:19 AM, John Pyeatt wrote:
>
>> Then my issue must be the 0.01% because
>>
>> 1) I'm running the repair as root.
>>
>
> Huh? Repair doesn't care what user your shell is running as. It is a process built
> into Cassandra and has the permissions of the Cassandra process itself. It doesn't really
> matter what user is running nodetool. Those directories should be
> writable by the user who is running the actual cassandra process.
>
> Hannu
>
>
> 2013/12/3 John Pyeatt
>
>> Then my issue must be the 0.01% because
>>
>>
directories that
were created during the repair that caused no exceptions.
The biggest issue with this is that it shuts down gossip.
On Mon, Dec 2, 2013 at 5:56 PM, Robert Coli wrote:
> On Mon, Dec 2, 2013 at 2:59 PM, John Pyeatt wrote:
>
>> Caused by: java.io.IOException: Unable to
at
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:78)
Caused by: java.io.IOException: Unable to create directory
/data-1/cassandra/data/SinglewireSupport/Binaries/backups
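For what it's worth, a minimal sketch (not from the thread) that reproduces just
the failing step: run it as the same OS user that owns the Cassandra process to
see whether that user can actually create the backups directory in the exception
above.

import java.io.File;

public class BackupDirCheck {
    public static void main(String[] args) {
        // Path taken from the exception quoted above.
        File dir = new File("/data-1/cassandra/data/SinglewireSupport/Binaries/backups");
        if (dir.isDirectory()) {
            System.out.println("already exists, writable by this user: " + dir.canWrite());
        } else {
            // mkdirs() returns false when a parent is not writable by this user,
            // which is the same condition the repair is hitting.
            System.out.println("created: " + dir.mkdirs());
        }
    }
}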
--
John Pyeatt
Singlewire Software, LLC
www.singlewire.com
--
608.661.1184
john.pye...@singlewire.com
of the running time as keyspaces, or load,
> increases.
>
> -- C
>
>
> On Wed, Nov 20, 2013 at 6:53 AM, John Pyeatt
> wrote:
>
>> We have an application that has been designed to use potentially 100s of
>> keyspaces (one for each company).
>>
>>
the cluster to
limit keyspaces) to increase the performance of nodetool repairs?
My obvious concern is that as this application grows and we get more
companies using it, we will eventually have too many keyspaces to
perform repairs on the cluster.
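One way to keep each run small as the keyspace count grows is to drive repair
one keyspace at a time from an external scheduler. A minimal sketch under that
assumption; the keyspace names are placeholders, and nodetool repair -pr limits
each run to this node's primary range.

import java.util.Arrays;
import java.util.List;

public class PerKeyspaceRepair {
    public static void main(String[] args) throws Exception {
        // Placeholder list; in practice the per-company keyspaces would be discovered.
        List<String> keyspaces = Arrays.asList("company_a", "company_b");
        for (String ks : keyspaces) {
            // -pr repairs only the primary range of this node for the given keyspace.
            Process p = new ProcessBuilder("nodetool", "repair", "-pr", ks)
                    .inheritIO()
                    .start();
            int exit = p.waitFor();
            if (exit != 0) {
                System.err.println("repair failed for " + ks + " (exit " + exit + ")");
            }
        }
    }
}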
--
John Pyeatt
Singlewire Software, LLC
pool thus causing things to get
marked down from FailureDetector.java.
--
John Pyeatt
Singlewire Software, LLC
www.singlewire.com
--
608.661.1184
john.pye...@singlewire.com
question is *why does gossip shut down for these nodes that we aren't
decommissioning in the first place*?
--
John Pyeatt
Singlewire Software, LLC
www.singlewire.com
--
608.661.1184
john.pye...@singlewire.com
>>>
>>> My yaml configuration files have these settings modified:
>>>
>>> first node yaml
>>> ---
>>> initial_token: -9223372036854775808 # generat
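As an aside, that first initial_token is -2^63, the bottom of the
Murmur3Partitioner range, which is what an even split works out to for node 0.
A minimal sketch of that arithmetic, assuming Murmur3 and a hypothetical
two-node cluster:

import java.math.BigInteger;

public class TokenGen {
    public static void main(String[] args) {
        int nodes = 2; // assumed cluster size; only the first node's yaml is quoted above
        BigInteger ringSize = BigInteger.valueOf(2).pow(64);
        BigInteger start = BigInteger.valueOf(2).pow(63).negate(); // -2^63
        for (int i = 0; i < nodes; i++) {
            // Even spacing: token_i = -2^63 + i * 2^64 / nodes
            BigInteger token = start.add(
                    ringSize.multiply(BigInteger.valueOf(i)).divide(BigInteger.valueOf(nodes)));
            System.out.println("node " + i + " initial_token: " + token);
        }
    }
}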