Hi,
> Somebody said it's because the count returns a single-row result
He is right.
Best regards, Vladimir Yudovin,
Winguzone - Cloud Cassandra Hosting
On Wed, 21 Jun 2017 02:43:32 -0400, web master wrote:
According to
http://www.maigfrga.ntweb.co/counting-i
Is it possible for you to share tracing info for the query? You can enable
tracing at the cqlsh prompt with:
cqlsh> TRACING ON
cqlsh> <run your query>
The tracing session info should be printed on screen.
Tracing will let us know where most of the time is spent!
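If you are querying from the Java driver (3.x), here is a minimal sketch of
pulling the same trace programmatically; the contact point, keyspace, and
table name are hypothetical:

    import com.datastax.driver.core.*;

    public class TraceCount {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("ks")) {
                Statement stmt = new SimpleStatement("SELECT count(*) FROM t");
                stmt.enableTracing();  // ask the coordinator to record a trace session
                ResultSet rs = session.execute(stmt);
                QueryTrace trace = rs.getExecutionInfo().getQueryTrace();
                System.out.println("total duration (us): " + trace.getDurationMicros());
                for (QueryTrace.Event e : trace.getEvents()) {
                    System.out.println(e.getSourceElapsedMicros() + " us  " + e.getDescription());
                }
            }
        }
    }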
From: web master [mailto:sock
I guess I misspoke, sorry. It is true that count(), like any other query, is
still governed by the read timeout, and any count that has to process a lot
of data will take a long time and will require a high timeout setting to
avoid timing out (true of every aggregation query, as it happens).
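For reference, the client-side knob in the Java driver (3.x) is the socket
read timeout; a minimal sketch, with a hypothetical address and value:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.SocketOptions;

    // Raise the driver's read timeout so a long-running aggregation is not
    // cut off client-side; server-side timeouts in cassandra.yaml still apply.
    Cluster cluster = Cluster.builder()
            .addContactPoint("127.0.0.1")
            .withSocketOptions(new SocketOptions().setReadTimeoutMillis(120000))
            .build();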
I guess I responded
+1 I also encountered timeouts many, many times (using DS DevCenter).
Roughly, this occurred when count(*) > 1,000,000.
2017-02-20 14:42 GMT+01:00 Edward Capriolo:
> Seems worth it to file a bug since some here are under the impression it
> almost always works and others are under the impression it almost never
> works.
Seems worth it to file a bug since some here are under the impression it
almost always works and others are under the impression it almost never
works.
On Friday, February 17, 2017, kurt greaves wrote:
> Really... well that's good to know. It still almost never works though. I
> guess every time I've seen it, it must have timed out due to tombstones.
Really... well that's good to know. It still almost never works though. I
guess every time I've seen it, it must have timed out due to tombstones.
On 17 Feb. 2017 22:06, "Sylvain Lebresne" wrote:
On Fri, Feb 17, 2017 at 11:54 AM, kurt greaves wrote:
> if you want a reliable count, you should use spark.
+1 for using spark for counts.
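For reference, a distributed count via the spark-cassandra-connector might
look like the sketch below (assuming the connector is on the classpath and
spark.cassandra.connection.host is configured; keyspace and table names are
placeholders):

    import org.apache.spark.sql.SparkSession;

    public class CassandraCount {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder().appName("cassandra-count").getOrCreate();
            long total = spark.read()
                    .format("org.apache.spark.sql.cassandra")
                    .option("keyspace", "ks")  // placeholder keyspace
                    .option("table", "t")      // placeholder table
                    .load()
                    .count();                  // counted across executors, not one coordinator
            System.out.println("rows: " + total);
            spark.stop();
        }
    }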
On Feb 17, 2017 4:25 PM, "kurt greaves" wrote:
> if you want a reliable count, you should use spark. performing a count (*)
> will inevitably fail unless you make your server read timeouts and
> tombstone fail thresholds ridiculous
>
> On 17 Feb. 2017 04:34, "Jan" wrote:
Hi,
We faced this issue too.
You could try a reduced paging size, so that the tombstone threshold isn't
breached.
Try using "PAGING 500" in cqlsh
[ https://docs.datastax.com/en/cql/3.3/cql/cql_reference/cqlshPaging.html ]
Similarly, the paging size can be set in the Java driver as well; see the
sketch below.
This is a workaround.
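A minimal sketch of the driver-side equivalent (Java driver 3.x; the table
name is a placeholder):

    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.Statement;

    // Smaller pages mean each individual request scans fewer cells (and
    // tombstones), mirroring PAGING 500 in cqlsh.
    Statement stmt = new SimpleStatement("SELECT count(*) FROM ks.t");
    stmt.setFetchSize(500);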
On Fri, Feb 17, 2017 at 11:54 AM, kurt greaves wrote:
> if you want a reliable count, you should use spark. performing a count (*)
> will inevitably fail unless you make your server read timeouts and
> tombstone fail thresholds ridiculous
>
That's just not true. count(*) is paged internally, so while a count over a
lot of data will be slow, it won't inevitably fail.
if you want a reliable count, you should use spark. performing a count (*)
will inevitably fail unless you make your server read timeouts and
tombstone fail thresholds ridiculous
On 17 Feb. 2017 04:34, "Jan" wrote:
> Hi,
>
> could you post the output of nodetool cfstats for the table?
>
> Cheers
Hi,
could you post the output of nodetool cfstats for the table?
Cheers,
Jan
On 16.02.2017 at 17:00, Selvam Raman wrote:
> I am not getting the count as a result. Instead, I keep getting results
> like the one below.
>
> Read 100 live rows and 1423 tombstone cells for query SELECT * FROM
> keysace.table WHERE token(id) > token(test:ODP0144-0883E-022R-002/047-052)
> LIMIT 100 (see tombstone_warn_threshold)
I am not getting the count as a result. Instead, I keep getting results like
the one below.
Read 100 live rows and 1423 tombstone cells for query SELECT * FROM
keysace.table WHERE token(id) > token(test:ODP0144-0883E-022R-002/047-052)
LIMIT 100 (see tombstone_warn_threshold)
On Thu, Feb 16, 2017 at 12:3
With C* 3.10
cqlsh ip --request-timeout=60
Connected to x at 10.10.10.10:9042.
[cqlsh 5.0.1 | Cassandra 3.10 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh> USE ;
cqlsh:> SELECT count(*) from table;
 count
---------
 3572579
On 02/16/2017 12:27 PM, Selvam Raman wrote:
Hi,
did you finally get a result?
Those messages are simply warnings telling you that C* had to read many
tombstones while processing your query (rows that are deleted but not yet
garbage collected/compacted). This warning gives you some explanation why
things might be much slower than expected.
I am using Cassandra 3.9.
Primary Key:
id text;
On Thu, Feb 16, 2017 at 12:25 PM, Cogumelos Maravilha
<cogumelosmaravi...@sapo.pt> wrote:
> C* version please and partition key.
>
> On 02/16/2017 12:18 PM, Selvam Raman wrote:
>
> Hi,
>
> I want to know the total record count in the table.
>
> I fired the below query:
C* version please and partition key.
On 02/16/2017 12:18 PM, Selvam Raman wrote:
> Hi,
>
> I want to know the total record count in the table.
>
> I fired the below query:
> select count(*) from tablename;
>
> and I got the below output:
>
> Read 100 live rows and 1423 tombstone cells for query SELECT * FROM
> keysace.table WHERE token(id) > token(test:ODP0144-0883E-022R-002/047-052)
> LIMIT 100 (see tombstone_warn_threshold)
Hi,
Sorry for the previous incomplete message.
I am using a where clause as follows:
select count(*) from trends where data1='abc' ALLOW FILTERING;
How can I store this count output in another column?
Can you help with any workaround?
Thanks,
Poonam.
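One possible approach, sketched with the Java driver: read the count
client-side and write it to a separate table. The trend_counts table and the
contact point here are hypothetical:

    import com.datastax.driver.core.*;

    public class StoreCount {
        public static void main(String[] args) {
            // Hypothetical target schema:
            //   CREATE TABLE trend_counts (data1 text PRIMARY KEY, cnt bigint);
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("ks")) {
                Row row = session.execute(
                        "SELECT count(*) FROM trends WHERE data1='abc' ALLOW FILTERING").one();
                long cnt = row.getLong(0);
                session.execute(
                        "INSERT INTO trend_counts (data1, cnt) VALUES (?, ?)", "abc", cnt);
            }
        }
    }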
On Wed, Jan 21, 2015 at 7:46 PM, Poonam Ligad
> Some 'feature' for future implementation, maybe?
IMHO truncation working as a metadata operation is the correct approach. It's
generally used in testing and development. It deletes the data and removes the
SSTables, giving you a clean state.
A CF-level tombstone would mean that reads had to check it on every query.
Check, I understand. Thanks!
The cluster certainly was overloaded and I did not realize that truncate
does not tombstone or have a timestamp. Some 'feature' for future
implementation, maybe?
It seems odd if you expect the same behaviour as "delete from usertable"
(in SQL; not yet in CQL, I presume).
I don't know the YCSB code, but one theory would be…
1) The cluster is overloaded by the test.
2) A write at CL ALL fails because a node does not respond in time.
3) The coordinator stores the hint and returns failure to the client.
4) The client gets an UnavailableException and retries the operation.
On Thu, Dec 22, 2011 at 4:55 AM, Varnit Khanna wrote:
> Does CQL support returning count of columns for a given key? I
> couldn't find anything in the documentation.
No, it doesn't. That's due mostly to the fact that SQL doesn't
provide anything for this; it could be implemented, but would requi
It's probably helpful if you change the subject when posting about a
different topic.
Is your question about "counters" or the "count" function?
Counters are cool.
Count allows you to determine how many columns exist in a row.
-sd
On Mon, Jun 13, 2011 at 5:27 PM, Sijie YANG wrote:
> Hi, All
> I am
This has not changed.
On Sat, Jan 29, 2011 at 3:37 PM, Oleg Proudnikov wrote:
> Hi All,
>
> Does Cassandra 0.7.0 need to deserialize the complete row in order to count
> all columns? I know from this ML that Cassandra 0.6 did that.
>
> Thank you very much,
> Oleg
--
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
I found the reason (on the client side).
I used a unit test of my DAO class. The test class inserted a test row and
columns before running the test, then ran the test, and finally deleted the
inserted columns after the test.
The test succeeded at first. When I ran the test again, the test
code attempted to insert
Sorry, I'm not following your example.
Could you describe the request you sent, what you expected to get back, and
what you actually got back? Are you able to reproduce the fault in a clean
install, e.g. load this data, run these commands, and then it goes bang?
Aaron
On 18 Nov 2010, at 23:54
Just had a quick look at an 0.7b2 install and it appeared to be working as
expected. Here's what I got for a row with 50 super columns, that each have 50
columns. I ran the following get_slice calls: get_slice with no super column
specified, count=100, returned 50 super columns, each with 50 columns. g
It returned all columns within the range of start and end without regard
to the count. The CF is a super column family, and I sent a range of
super column names of type Long (the sub column name was UTF8).
I put 2000 super columns in a row, and tried to read the first 50
columns in some range of co
The CassandraServer is not doing the read; step through the code from the call
to readColumnFamily() in getSlice().
The read is passed to StorageProxy.readProtocol(), which looks at the CL and
determines if it's a weak or strong read, sends it out to all the replicas,
and manages everything. E
Well, it's a bad idea, except when it isn't. I think I'm okay with
our api evolving to handle more corner cases.
It's true that it runs the risk of encouraging bad design from new users though.
On Fri, Aug 13, 2010 at 1:07 PM, Gary Dusbabek wrote:
> Should we close https://issues.apache.org/jira/browse/CASSANDRA-653
Should we close https://issues.apache.org/jira/browse/CASSANDRA-653
then? Fetching a count of all rows is just a specific instance of
fetching the count of a range of rows.
I spoke to a programmer at the summit who was working on this ticket
mainly as a way of getting familiar with the codebase.
On 8/13/10 10:52 AM, Jonathan Ellis wrote:
because it would work amazingly poorly w/ billions of rows. it's an
antipattern.
On Fri, Aug 13, 2010 at 10:50 AM, Mark wrote:
On 8/13/10 10:44 AM, Jonathan Ellis wrote:
not without fetching all of them with get_range_slices
On Fri, Aug 1
because it would work amazingly poorly w/ billions of rows. it's an
antipattern.
On Fri, Aug 13, 2010 at 10:50 AM, Mark wrote:
> On 8/13/10 10:44 AM, Jonathan Ellis wrote:
>>
>> not without fetching all of them with get_range_slices
>>
>> On Fri, Aug 13, 2010 at 10:37 AM, Mark wrote:
>>
>>>
>>>
On 8/13/10 10:44 AM, Jonathan Ellis wrote:
not without fetching all of them with get_range_slices
On Fri, Aug 13, 2010 at 10:37 AM, Mark wrote:
Is there some way I can count the number of rows in a CF.. CLI, MBean?
Gracias
I'm guessing you would advise against this? Any reaso
not without fetching all of them with get_range_slices
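For illustration, that full fetch sketched with the (much later) Java driver,
whose automatic paging plays the role of get_range_slices; ks.t and the
contact point are placeholders:

    import com.datastax.driver.core.*;

    public class FullScanCount {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect()) {
                Statement stmt = new SimpleStatement("SELECT id FROM ks.t").setFetchSize(1000);
                long n = 0;
                // Iterating the ResultSet transparently fetches page after page.
                for (Row ignored : session.execute(stmt)) {
                    n++;
                }
                System.out.println("rows: " + n);
            }
        }
    }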
On Fri, Aug 13, 2010 at 10:37 AM, Mark wrote:
> Is there some way I can count the number of rows in a CF.. CLI, MBean?
>
> Gracias
>
--
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cass