@Siddharth, is there any specific use case where this is required? What
happens if the mainHost goes down?

On Fri, May 6, 2016 at 12:50 PM, Siddharth Verma <
verma.siddha...@snapdeal.com> wrote:

> Hi,
> Whitelist worked perfectly.
> Thanks for the help.
>
> In case someone wants to do the same, the below code snippet might help
> them:
>
>
> private final Cluster mainCluster;
> private final Session mainSession;
> . . . . . . . . . . .
> String mainHost = "IP_of_machine";
> . . . . . . . . . . .
> mainCluster = Cluster.builder()
>         .addContactPoint(mainHost)
>         .withQueryOptions(new QueryOptions().setFetchSize(fetchSize))
>         .withCredentials(username, password)
>         .withLoadBalancingPolicy(new WhiteListPolicy(new RoundRobinPolicy(),
>                 Arrays.asList(new InetSocketAddress(mainHost, 9042))))
>         .build();
> mainSession = mainCluster.connect();
>
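> A hypothetical usage of the session built above (my_keyspace.my_table is
> only a placeholder):
>
> // Every request on mainSession is routed to mainHost only, because the
> // whitelist contains that single address.
> Row row = mainSession.execute(
>         "SELECT * FROM my_keyspace.my_table WHERE id = 1").one();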
>
> Regards,
> Siddharth Verma
>
>
> On Thu, May 5, 2016 at 8:59 PM, Jeff Jirsa <jeff.ji...@crowdstrike.com>
> wrote:
>
>> This doesn’t actually guarantee the behavior you think it does. There’s
>> no actual way to guarantee this behavior in Cassandra, as far as I can
>> tell. A long time ago there was a ticket for a “coordinator only”
>> consistency level, which is nearly trivial to implement, but the use case
>> is so narrow that it’s unlikely to ever be done.
>>
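>> From the driver side, you can verify which node actually coordinated a
>> request by enabling tracing on the statement. A minimal sketch, assuming
>> the DataStax Java driver and an already-built Session named session:
>>
>> Statement stmt = new SimpleStatement(
>>         "SELECT name FROM system_auth.users WHERE name = 'jjirsa'");
>> stmt.enableTracing();
>> ResultSet rs = session.execute(stmt);
>> // Host that acted as coordinator for this request:
>> System.out.println(rs.getExecutionInfo().getQueriedHost());
>> // Server-side trace events, same data as TRACING ON in cqlsh:
>> QueryTrace trace = rs.getExecutionInfo().getQueryTrace();
>>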
>> Here's an example trace on system_auth, where all nodes are replicas
>> (RF=N) and the data is fully repaired (the data exists on the local node).
>> The coordinator STILL chooses a replica other than itself (you're far more
>> likely to see this behavior on a keyspace with a high replication factor;
>> this particular cluster's N is in the hundreds):
>>
>> cqlsh> tracing on;
>> Tracing is already enabled. Use TRACING OFF to disable.
>> cqlsh> CONSISTENCY local_one;
>> Consistency level set to LOCAL_ONE.
>> cqlsh> select name from system_auth.users where name='jjirsa' limit 1;
>>
>>  name
>> --------
>>  jjirsa
>>
>> (1 rows)
>>
>> Tracing session: 5ffdee70-12d5-11e6-ad58-317180027532
>>
>>  activity | timestamp | source | source_elapsed
>> ----------+-----------+--------+----------------
>>  Execute CQL3 query | 2016-05-05 15:23:52.919000 | x.y.z.150 | 0
>>  Parsing select * from system_auth.users where name='jjirsa' limit 1; [SharedPool-Worker-7] | 2016-05-05 15:23:52.919000 | x.y.z.150 | 100
>>  Preparing statement [SharedPool-Worker-7] | 2016-05-05 15:23:52.920000 | x.y.z.150 | 194
>>  reading data from /x.y.z.151 [SharedPool-Worker-7] | 2016-05-05 15:23:52.920000 | x.y.z.150 | 965
>>  Sending READ message to /x.y.z.151 [MessagingService-Outgoing-/x.y.z.151] | 2016-05-05 15:23:52.921000 | x.y.z.150 | 1072
>>  REQUEST_RESPONSE message received from /x.y.z.151 [MessagingService-Incoming-/x.y.z.151] | 2016-05-05 15:23:52.924000 | x.y.z.150 | 5433
>>  Processing response from /x.y.z.151 [SharedPool-Worker-4] | 2016-05-05 15:23:52.924000 | x.y.z.150 | 5595
>>  READ message received from /x.y.z.150 [MessagingService-Incoming-/x.y.z.150] | 2016-05-05 15:23:52.927000 | x.y.z.151 | 104
>>  Executing single-partition query on users [SharedPool-Worker-6] | 2016-05-05 15:23:52.928000 | x.y.z.151 | 2251
>>  Acquiring sstable references [SharedPool-Worker-6] | 2016-05-05 15:23:52.929000 | x.y.z.151 | 2353
>>  Merging memtable tombstones [SharedPool-Worker-6] | 2016-05-05 15:23:52.929000 | x.y.z.151 | 2414
>>  Partition index with 0 entries found for sstable 384 [SharedPool-Worker-6] | 2016-05-05 15:23:52.929000 | x.y.z.151 | 2829
>>  Seeking to partition beginning in data file [SharedPool-Worker-6] | 2016-05-05 15:23:52.930000 | x.y.z.151 | 2913
>>  Skipped 0/1 non-slice-intersecting sstables, included 0 due to tombstones [SharedPool-Worker-6] | 2016-05-05 15:23:52.930000 | x.y.z.151 | 3263
>>  Merging data from memtables and 1 sstables [SharedPool-Worker-6] | 2016-05-05 15:23:52.931000 | x.y.z.151 | 3289
>>  Read 1 live and 0 tombstone cells [SharedPool-Worker-6] | 2016-05-05 15:23:52.931000 | x.y.z.151 | 3323
>>  Enqueuing response to /x.y.z.150 [SharedPool-Worker-6] | 2016-05-05 15:23:52.931000 | x.y.z.151 | 3411
>>  Sending REQUEST_RESPONSE message to /x.y.z.150 [MessagingService-Outgoing-/x.y.z.150] | 2016-05-05 15:23:52.932000 | x.y.z.151 | 3577
>>  Request complete | 2016-05-05 15:23:52.924649 | x.y.z.150 | 5649
>>
>> From: Varun Barala
>> Reply-To: "user@cassandra.apache.org"
>> Date: Thursday, May 5, 2016 at 2:40 AM
>> To: "user@cassandra.apache.org"
>> Subject: Re: Read data from specific node in cassandra
>>
>> Hi Siddharth Verma,
>>
>> You can use the consistency level LOCAL_ONE, and you can apply the
>> consistency level when you create the statement, like this:
>>
>> statement.setConsistencyLevel(ConsistencyLevel.LOCAL_ONE);
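>>
>> A fuller sketch of the same idea, assuming the DataStax Java driver (the
>> contact point, keyspace, and table names here are only placeholders):
>>
>> Cluster cluster = Cluster.builder().addContactPoint("10.0.0.1").build();
>> Session session = cluster.connect();
>>
>> // LOCAL_ONE makes the coordinator wait for a single replica in the local
>> // DC; it does not pin the read to any particular node.
>> Statement stmt = new SimpleStatement(
>>         "SELECT name FROM my_keyspace.users WHERE name = 'jjirsa'");
>> stmt.setConsistencyLevel(ConsistencyLevel.LOCAL_ONE);
>> ResultSet rs = session.execute(stmt);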
>>
>> On Thu, May 5, 2016 at 3:55 PM, Siddharth Verma <
>> verma.siddha...@snapdeal.com> wrote:
>>
>>> Hi,
>>> We have a 3 node cluster in DC1, where replication factor of keyspace is
>>> 3.
>>> How can I read data only from one particular node using the Java driver?
>>>
>>> Thanks,
>>> Siddharth Verma
>>>
>>
>>
>
