Shenghua,
> The problem is the user might only want all the data via a "select *"
> like statement. It seems that 257 connections to query the rows are necessary.
> However, is there any way to prohibit 257 concurrent connections?
Your reasoning is correct.
The number of connections should be t
I did another experiment and verified that indeed 3*257 mappers were created
(1 of the 257 ranges is effectively null).
Thanks mck for the information!
On Wed, Jan 28, 2015 at 12:17 AM, mck wrote:
> Shenghua,
>
> > The problem is the user might only want all the data via a "select *"
> > like statement. I
Hi, everyone
I have a development machine running in UTC-02:00, but Cassandra on the same
machine does not seem to be running in the same timezone, because the logs are
2 hours ahead (UTC). For example, if my machine shows 08:49, the Cassandra log
entries show 10:49. And I'm trying to change the time
Hint: using the Java driver, you can set the fetchSize to tell the driver
how many CQL rows to fetch for each page.
Depending on the size (in bytes) of each CQL row, it would be useful to
tune this fetchSize value to avoid loading too much data into memory for
each page
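For what it's worth, here is a rough sketch of setting the fetch size with the
2.x Java driver (the contact point, keyspace, and table names below are made
up for illustration):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class FetchSizeExample {
    public static void main(String[] args) {
        // Hypothetical contact point and schema, just for illustration.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("my_keyspace");

        // Ask the driver to pull 1000 CQL rows per page instead of the default.
        Statement stmt = new SimpleStatement("SELECT * FROM my_table");
        stmt.setFetchSize(1000);

        ResultSet rs = session.execute(stmt);
        for (Row row : rs) {
            // The driver transparently fetches the next page when the
            // current one is exhausted.
            System.out.println(row);
        }
        cluster.close();
    }
}

If I remember correctly the driver's default is 5000 rows per page, so tuning
it down mainly matters when individual rows are large.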
On Wed, Jan 28, 2015 at 8
They have an experimental 2.0 that works (we're using it).
Thanks,
Daniel
On Tue, Jan 27, 2015 at 11:50 AM, Mikhail Strebkov
wrote:
> It is open sourced but works only with C* 1.x as far as I know.
>
> Mikhail
>
>
> On Tuesday, January 27, 2015, Mohammed Guller
> wrote:
>
>> I believe Aegisth
If you are using replication factor 1 and 3 Cassandra nodes, the 256 virtual
nodes should be evenly distributed across the 3 nodes, so there are 256
virtual nodes in total. But in your experiment, you saw 3*257 mappers. Is that
because of the setting cassandra.input.split.size=3? It has nothing to do with
the node number being 3.
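To illustrate (just a sketch, assuming the plain Hadoop job configuration
route; the job name is made up), the split size is set per job and is
independent of the node count:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SplitSizeExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Caps the number of CQL rows per input split; the mapper count
        // then follows from the token ranges, not from how many nodes
        // the cluster has.
        conf.set("cassandra.input.split.size", "3");
        Job job = Job.getInstance(conf, "cassandra-input-example");
        // ... set input format, mapper class, output, etc.
    }
}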
I have a 3-node Cassandra 2.1.0 cluster and I am using the DataStax 2.1.4
driver to create a keyspace, followed by creating a column family within that
keyspace, from my unit test.
But I do not see the keyspace getting created, and the code for creating the
column family fails because it cannot find the k
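In case it helps, a minimal sketch of how the schema setup can be done with
the 2.1 Java driver (keyspace and table names are made up); the table
statement fully qualifies the keyspace so it does not depend on connecting to
that keyspace first:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class SchemaSetupExample {
    public static void main(String[] args) {
        // Hypothetical contact point and names, just for illustration.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        // Create the keyspace first; execute() blocks until the statement
        // has completed on the coordinator.
        session.execute("CREATE KEYSPACE IF NOT EXISTS test_ks "
                + "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}");

        // Fully qualify the table name (or call session.execute("USE test_ks")).
        session.execute("CREATE TABLE IF NOT EXISTS test_ks.users ("
                + "id uuid PRIMARY KEY, name text)");

        cluster.close();
    }
}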
That's the C* default setting. My version is 2.0.11. Check your cassandra.yaml.
On Jan 28, 2015 4:53 PM, "Huiliang Zhang" wrote:
> If you are using replication factor 1 and 3 cassandra nodes, 256 virtual
> nodes should be evenly distributed on 3 nodes. So there are totally 256
> virtual nodes. But in
Hi All,
We need to upload 18 lakh (1.8 million) rows into a table that contains
columns with the data type "counter".
On uploading using the COPY command, we get the error below:
*Bad Request: INSERT statement are not allowed on counter tables, use
UPDATE instead*
We need the counter data type because after loading
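If COPY cannot be used here, one way to load counter rows is to issue UPDATE
increments from a small program. A rough sketch with the Java driver
(keyspace, table, and column names are hypothetical):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class CounterLoadExample {
    public static void main(String[] args) {
        // Hypothetical contact point and schema, just for illustration.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("my_ks");

        // Counters can only be modified with UPDATE; the increment below
        // behaves like an insert when the row does not exist yet.
        PreparedStatement ps = session.prepare(
                "UPDATE page_counts SET views = views + ? WHERE page_id = ?");

        session.execute(ps.bind(42L, "home"));
        cluster.close();
    }
}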
Hi,
a short question about the new incremental repairs again. I am running
2.1.2 (for testing). Marcus pointed out to me that 2.1.2 should do incremental
repairs automatically, so I rolled back all the steps I had taken. I expect
that routine repair times will decrease when I do not put much new data on
the c
Hi All,
I tried both insert and select queries (using QueryBuilder), as regular
statements and as PreparedStatements, in multithreaded code, running each
query say 10k to 50k times. But I don't see any visible improvement using
PreparedStatement. What could be the reason?
Note: I am using the same Sess
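One thing worth double-checking: a prepared statement only pays off if it is
prepared once and reused across all threads and iterations; preparing inside
the loop adds a round trip each time and cancels out the benefit. A rough
sketch (contact point and schema are made up):

import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;
import java.util.UUID;

public class PreparedReuseExample {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("my_ks");

        // Prepare ONCE, outside the hot loop, and share it.
        PreparedStatement insert = session.prepare(
                "INSERT INTO users (id, name) VALUES (?, ?)");

        for (int i = 0; i < 10_000; i++) {
            // Binding is cheap; only the execution goes to the cluster.
            BoundStatement bound = insert.bind(UUID.randomUUID(), "user-" + i);
            session.execute(bound);
        }
        cluster.close();
    }
}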
Hi
Unsure what you mean by automatically, but you should use "-par -inc" when
you repair
And you should wait until 2.1.3 (which will be out very soon) before doing
this; we have fixed many issues with incremental repairs.
/Marcus
On Thu, Jan 29, 2015 at 7:44 AM, Roland Etzenhammer <
r.etzenham.
Hi,
the "automatically" meant this reply earlier:
If you are on 2.1.2+ (or using STCS) you don't need those steps (we should
probably update the blog post).
Now we keep separate levelings for the repaired/unrepaired data and
move the sstables over after the first incremental repair
My understandin