private boolean queriesMulticellType()
{
    for (ColumnDefinition column : columnFilter().fetchedColumns())
    {
        if (column.type.isMultiCell() || column.type.isCounter())
            return true;
    }
    return false;
}
So, if the table has even one column of a complex (multi-cell) type, it
would need to iterate over all sstables containing the requested rows,
even though, it seems, it would not have to.
Could anyone explain why?
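For illustration, here is a minimal standalone sketch (plain Java, not
Cassandra code; the class and data are made up) of the usual explanation:
the cells of a multi-cell column can be spread across several sstables, so
no single source, however recent, is guaranteed to hold the complete value.

import java.util.*;

public class MultiCellMergeSketch
{
    public static void main(String[] args)
    {
        // Each map stands in for the cells of one list column in one
        // sstable (index -> element), newest sstable first.
        List<Map<Integer, String>> sstables = List.of(
            Map.of(2, "c"),            // newest: one appended element
            Map.of(0, "a", 1, "b"));   // older: the first two elements

        // Rebuilding the list must visit every sstable; no single sstable
        // holds the complete value, and no timestamp check can prove it does.
        SortedMap<Integer, String> merged = new TreeMap<>();
        for (Map<Integer, String> cells : sstables)
            cells.forEach(merged::putIfAbsent); // newer cell wins per index

        System.out.println(merged.values()); // prints [a, b, c]
    }
}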
On Fri, Jan 25, 2019 at 7:41 PM, Jinhua Luo wrote:
>
> Hi,
Hi,
I found that ColumnFilter.isFetchAll is always true, even when I select
only a subset of columns.
In the code:
private ColumnFilter gatherQueriedColumns()
{
    if (selection.isWildcard())
        return ColumnFilter.all(cfm);

    // the builder starts from *all* columns (fetch-all); the selected
    // columns are then added to it as the queried subset
    ColumnFilter.Builder builder = ColumnFilter.allColumnsBuilder(cfm);
    ...
}
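If it helps, here is a toy model (my own sketch, not Cassandra's actual
ColumnFilter) of the distinction this code encodes: all columns are fetched
internally, so row liveness and column timestamps are evaluated correctly,
but only the queried subset is returned to the client. isFetchAll() == true
therefore says nothing about what the client selected.

import java.util.*;

public class ColumnFilterSketch
{
    private final Set<String> fetched; // what the read path reads internally
    private final Set<String> queried; // what the client actually selected

    ColumnFilterSketch(Set<String> allColumns, Set<String> selected)
    {
        this.fetched = allColumns; // like allColumnsBuilder: fetch everything
        this.queried = selected;
    }

    boolean isFetchAll()
    {
        return true; // by construction, fetched covers all columns
    }

    // only the queried subset survives into the result sent to the client
    Map<String, String> postFilter(Map<String, String> row)
    {
        Map<String, String> out = new LinkedHashMap<>();
        row.forEach((col, val) -> { if (queried.contains(col)) out.put(col, val); });
        return out;
    }

    public static void main(String[] args)
    {
        ColumnFilterSketch filter =
            new ColumnFilterSketch(Set.of("a", "b", "c"), Set.of("a"));
        System.out.println(filter.postFilter(Map.of("a", "1", "b", "2"))); // {a=1}
    }
}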
> So, sstables keep compacting whenever they meet the criteria. Look for
> the compaction subproperties.
>
>
> Regards,
>
> Nitan
>
> Cell: 510 449 9629
>
>
> On Jan 13, 2019, at 8:18 AM, Jinhua Luo wrote:
>
Is there a largest tier where the sstables stop merging? If so, does it
mean some tombstones at that last tier would never be removed? If not,
it's hard to imagine compacting sstables of huge size, like 10 GB;
after all, the total size of the sstables will definitely grow, so a
huge sstable seems unavoidable.
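To make the size-tiering concrete, here is a rough sketch of STCS-style
bucketing, assuming the documented defaults bucket_low=0.5, bucket_high=1.5,
min_threshold=4 (my own simplification, not the actual
SizeTieredCompactionStrategy code). There is no fixed largest tier; big
sstables simply form new buckets, and the tombstone_threshold /
unchecked_tombstone_compaction subproperties let a single sstable be
compacted on its own to purge tombstones when it never finds enough
bucket-mates.

import java.util.*;

public class StcsBucketSketch
{
    // group sizes into buckets: a size joins a bucket when it lies within
    // [avg * low, avg * high] of the bucket's current average size
    static List<List<Long>> buckets(List<Long> sizes, double low, double high)
    {
        List<List<Long>> buckets = new ArrayList<>();
        List<Long> sorted = new ArrayList<>(sizes);
        Collections.sort(sorted);
        for (long size : sorted)
        {
            Optional<List<Long>> match = buckets.stream()
                .filter(b -> {
                    double avg = b.stream().mapToLong(Long::longValue).average().orElse(0);
                    return size >= avg * low && size <= avg * high;
                })
                .findFirst();
            if (match.isPresent())
                match.get().add(size);
            else
                buckets.add(new ArrayList<>(List.of(size)));
        }
        return buckets;
    }

    public static void main(String[] args)
    {
        // sizes in MB: the four ~50 MB sstables bucket together (and would
        // compact, since min_threshold = 4); the 10 GB one stays alone
        System.out.println(buckets(List.of(50L, 52L, 55L, 60L, 10240L), 0.5, 1.5));
    }
}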
--
Jeff Jirsa
>
>
> > On Jan 8, 2019, at 10:51 PM, Jinhua Luo wrote:
> >
> > Thanks. Let me clarify my questions further.
> >
> > 1) For the memtable: if the selected columns (assuming they are of simple
> > types) can be found in the memtable alone, why bother searching the sstables?
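Here is a much-simplified model of the optimization being asked about (my
own sketch, not the actual read path code): sources are consulted newest
first, memtable before sstables, and the remaining, older sources are
skipped once every selected simple column already has a value newer than
anything those sources could contain. This is exactly the shortcut that
multi-cell and counter columns disable.

import java.util.*;

public class ReadShortCircuitSketch
{
    record Cell(String value, long timestamp) {}

    // one data source (the memtable or one sstable): its cells per column,
    // plus the maximum timestamp of anything it contains
    record Source(Map<String, Cell> cells, long maxTimestamp) {}

    static Map<String, Cell> read(Set<String> selected, List<Source> newestFirst)
    {
        Map<String, Cell> result = new HashMap<>();
        for (Source source : newestFirst)
        {
            // skip the rest once every selected column already has a value
            // newer than anything this (older) source could contain
            boolean complete = selected.stream().allMatch(c ->
                result.containsKey(c) && result.get(c).timestamp() > source.maxTimestamp());
            if (complete)
                break;
            source.cells().forEach((col, cell) ->
                result.merge(col, cell, (a, b) -> a.timestamp() >= b.timestamp() ? a : b));
        }
        return result;
    }

    public static void main(String[] args)
    {
        Source memtable = new Source(Map.of("a", new Cell("a1", 10)), 10);
        Source sstable = new Source(Map.of("a", new Cell("a0", 5), "b", new Cell("b0", 5)), 5);
        // selecting only "a": the sstable is never touched
        System.out.println(read(Set.of("a"), List.of(memtable, sstable)));
    }
}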
> > On Jan 8, 2019, at 3:04 AM, Jinhua Luo wrote:
> >
> > Hi All,
> >
> > Compaction organizes the sstables; e.g. with LCS, the sstables are
> > categorized into levels, and the read path should read sstables level
> > by level until the read is fulfilled, correct?
Hi All,
Compaction organizes the sstables; e.g. with LCS, the sstables are
categorized into levels, and the read path should read sstables level
by level until the read is fulfilled, correct?
For STCS, would it search the sstables in buckets, from smallest to largest?
What about other compaction strategies?
Regards,
Jinhua Luo
> …bootstrap streams from the losing range.
>
> --
> Jeff Jirsa
>
>
>> On Apr 30, 2018, at 8:57 PM, Jinhua Luo wrote:
>>
>> Hi All,
>>
>> When a new node is added, due to the even distribution of the new tokens,
>> the current nodes of the ring should migrate data to the new node.
Hi All,
When a new node is added, due to the even distribution of the new tokens,
the current nodes of the ring should migrate data to the new node.
So, does it require all nodes to be present? If not, then if some nodes
are down, the data migration for those ranges will be missed; how and
when is that fixed?
Hi All,
For example, what if two nodes are joining and three nodes are being
removed at the same time? How does c* handle such a case? c* uses gossip
to determine membership, and it has no central leader to serialize the
membership changes.
0, 60, 80}
Then, when the DCs meet each other, they should merge the two rings into one, right?
Here are the questions:
a) Who does the merge?
b) Do the tokens change after the merge?
2018-04-27 1:51 GMT+08:00 Jeff Jirsa :
>
>
> On Thu, Apr 26, 2018 at 1:34 AM, Jinhua Luo wrote:
>>
> On Thu, Apr 26, 2018 at 1:04 AM, Jinhua Luo wrote:
>>
>> You're assuming each DC has the same total num_tokens, right?
>> If I add a new node into DC1, will it change the tokens owned by DC2 and
>> DC3?
>>
>> 2018-04-12 0:59 GMT+08:00 Jeff Jirsa :
tokens 10, 20, 30; 11, 21, 31; 17, 22, 36
>
> On Wed, Apr 11, 2018 at 9:36 AM, Jinhua Luo wrote:
>>
>> What if I add a new DC3?
>> Would the token ranges be reshuffled across DC1, DC2, and DC3?
>>
>> 2018-04-11 22:06 GMT+08:00 Jeff Jirsa :
>> >
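To illustrate the point, here is a toy NetworkTopologyStrategy-style
placement (my own sketch with made-up node names, not Cassandra's code):
all DCs' tokens live in one shared ring, but replicas for a given DC are
chosen by walking the ring and keeping only that DC's nodes, so adding
tokens in DC1 cannot change which DC2 nodes own a key.

import java.util.*;

public class PerDcReplicaSketch
{
    record Node(String name, String dc) {}

    static List<String> replicas(NavigableMap<Integer, Node> ring, int key, String dc, int rf)
    {
        // walk the ring clockwise from the key's position, wrapping around
        List<Node> walk = new ArrayList<>(ring.tailMap(key, true).values());
        walk.addAll(ring.headMap(key, false).values());
        List<String> result = new ArrayList<>();
        for (Node n : walk)
            if (n.dc().equals(dc) && !result.contains(n.name()) && result.size() < rf)
                result.add(n.name()); // keep only this DC's nodes
        return result;
    }

    public static void main(String[] args)
    {
        NavigableMap<Integer, Node> ring = new TreeMap<>(Map.of(
            10, new Node("dc1-a", "DC1"), 20, new Node("dc1-b", "DC1"), 30, new Node("dc1-c", "DC1"),
            11, new Node("dc2-a", "DC2"), 21, new Node("dc2-b", "DC2"), 31, new Node("dc2-c", "DC2")));
        // DC2's replicas for key 15 are unaffected by DC1's tokens
        System.out.println(replicas(ring, 15, "DC2", 2)); // [dc2-b, dc2-c]
    }
}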
…read-before-write on the cluster level is an anti-pattern.
>
> The read-before-write at local storage is already somewhat of an anti-pattern.
> That's why it's recommended to avoid using lists as much as possible.
>
> On Fri, Apr 20, 2018 at 1:01 PM, Jinhua Luo wrote:
> The operation on the list column is done locally on each replica, so
> replication factor does not really apply here.
>
> On Fri, Apr 20, 2018 at 7:37 AM, Jinhua Luo wrote:
>>
>> Hi All,
>>
>> Some list operations, like set-by-index, need to read the whole list
>> before the update.
Hi All,
Some list operations, like set-by-index, need to read the whole list
before the update.
So what's the read consistency level of that read? Does it use the same
CL as configured for the normal read?
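For reference, this is the kind of statement in question, shown with the
DataStax Java driver 3.x (keyspace, table, and contact point are invented
for the example). Per the replies above, the read needed to locate the list
cell at a given index happens locally on each replica while applying the
mutation; it is not a cluster-level read with its own consistency level.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class ListSetByIndexExample
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect())
        {
            session.execute("CREATE KEYSPACE IF NOT EXISTS ks WITH replication = "
                          + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
            session.execute("CREATE TABLE IF NOT EXISTS ks.t (id int PRIMARY KEY, l list<text>)");
            session.execute("INSERT INTO ks.t (id, l) VALUES (1, ['a', 'b', 'c'])");

            // Set-by-index: each replica must read the list locally to find
            // which cell (by its internal cell name) sits at index 1.
            session.execute("UPDATE ks.t SET l[1] = 'B' WHERE id = 1");
        }
    }
}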
How does c* store the schema?
>
>
>
> It uses an “everywhere” replication strategy, and it's recommended to do all
> ALTER / CREATE / DROP statements with consistency level ALL, meaning it
> wouldn't make the change to the schema unless the nodes are up.
>
>
> --
> Rahul Singh
2018-04-16 17:01 GMT+08:00 DuyHai Doan :
> There is a system_schema keyspace to store all the schema information
>
> https://docs.datastax.com/en/cql/3.3/cql/cql_using/useQuerySystem.html#useQuerySystem__table_bhg_1bw_4v
>
> On Mon, Apr 16, 2018 at 10:48 AM, Jinhua Luo wrote:
>>
Hi All,
Does c* use predefined keyspaces/tables to store the user-defined schema?
If so, what's the RWN of that meta schema? And what's the procedure
for updating it?
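One way to see this for yourself (a sketch using the DataStax Java driver
3.x; the contact point is made up): on Cassandra 3.0+ the schema lives in
ordinary tables under the system_schema keyspace and can be queried like
any other data.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class SchemaTablesExample
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect())
        {
            // list every table the cluster knows about, straight from the
            // schema tables themselves
            for (Row row : session.execute(
                    "SELECT keyspace_name, table_name FROM system_schema.tables"))
                System.out.println(row.getString("keyspace_name") + "."
                                 + row.getString("table_name"));
        }
    }
}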
Hi All,
In the doc:
https://docs.datastax.com/en/cassandra/3.0/cassandra/dml/dmlAboutDeletes.html
It said: "When an unresponsive node recovers, Cassandra uses hinted
handoff to replay the database mutations the node missed while it was
down. Cassandra does not replay a mutation for a tombstoned record…"
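A minimal sketch of the rule that sentence rests on (my own model, not
Cassandra code): reconciliation is last-write-wins by timestamp, so
replaying a hinted mutation that is older than an existing tombstone
leaves the delete in effect.

public class HintVsTombstoneSketch
{
    record Value(String data, long timestamp, boolean tombstone) {}

    static Value reconcile(Value current, Value replayed)
    {
        // last write wins: the replayed hint only matters if it is newer
        if (replayed.timestamp() > current.timestamp())
            return replayed;
        return current;
    }

    public static void main(String[] args)
    {
        Value tombstone = new Value(null, 200, true);   // delete at t=200
        Value hinted    = new Value("old", 100, false); // missed write at t=100
        // replaying the older hint does not resurrect the record
        System.out.println(reconcile(tombstone, hinted).tombstone()); // true
    }
}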
> …if you have RF > 1). If you add/remove a node in a DC, tokens will be
> rearranged between all nodes within that DC only; the other DCs won't be
> affected.
>
> --
> Jacques-Henri Berthemet
>
> -----Original Message-----
> From: Jinhua Luo
Hi All,
I know it seems a stupid question, but I am really confused by the
documents on the internet about this topic, especially since they seem
to give different answers for c* with and without vnodes.
Let's assume the token range is 1-100 for the whole cluster; how is it
distributed among the nodes?
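For the 1-100 example, here is a toy sketch of the ownership rule (my own
illustration, not Cassandra code): each node owns the range from the
previous token, exclusive, to its own token, inclusive, wrapping around the
ring. Vnodes don't change the math; each node just holds many tokens and
therefore owns many small slices instead of one big one.

import java.util.*;

public class TokenRangeSketch
{
    public static void main(String[] args)
    {
        // token -> owning node; without vnodes: one token per node
        NavigableMap<Integer, String> ring = new TreeMap<>(
            Map.of(25, "A", 50, "B", 75, "C", 100, "D"));

        for (Map.Entry<Integer, String> e : ring.entrySet())
        {
            Integer prev = ring.lowerKey(e.getKey());
            int start = (prev == null) ? ring.lastKey() : prev; // wrap around
            System.out.printf("node %s owns (%d, %d]%n", e.getValue(), start, e.getKey());
        }
    }
}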