This is not the right data model for Cassandra. Strong encouragement to
watch one of Patrick McFadin's data modeling videos on youtube.
You very much want to always query with a WHERE clause, which usually
means knowing a partition key (or set of partition keys) likely to contain
your data, and u
Hi Jeff - yes, I'm doing a select without where - specifically: select
uuid from table limit 1000;
Not inserting nulls, and nothing is TTL'd.
At this point with zero rows, the above select fails.
Sounds like my application needs a redesign, as it's doing 1 billion inserts
and 100 million deletes res
The tombstone threshold is "how many tombstones are encountered within a
single read command", and the default is something like 100,000 (
https://github.com/apache/cassandra/blob/trunk/conf/cassandra.yaml#L1293-L1294
)
Deletes are not forbidden, but you have to read in such a way that you
touch l
If the table has zero rows you could truncate it
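The failure mode described here can be sketched with a toy model (plain Python, not Cassandra code; the constant names mirror the cassandra.yaml settings, but the logic is purely illustrative):

```python
# Illustrative toy model of Cassandra's per-read tombstone thresholds.
# (Not Cassandra code; names mirror cassandra.yaml settings.)

TOMBSTONE_WARN_THRESHOLD = 1000       # tombstone_warn_threshold
TOMBSTONE_FAILURE_THRESHOLD = 100000  # tombstone_failure_threshold

class TombstoneOverwhelmingError(Exception):
    """Stands in for Cassandra's TombstoneOverwhelmingException."""

def read_command(cells, limit):
    """Scan (value, is_tombstone) cells for one read; abort past the threshold."""
    live, tombstones = [], 0
    for value, is_tombstone in cells:
        if is_tombstone:
            tombstones += 1
            if tombstones == TOMBSTONE_WARN_THRESHOLD:
                print(f"warning: read has scanned {tombstones} tombstones")
            if tombstones > TOMBSTONE_FAILURE_THRESHOLD:
                raise TombstoneOverwhelmingError(
                    f"scanned {tombstones} tombstones in a single read")
        else:
            live.append(value)
            if len(live) == limit:
                break  # found enough live data before hitting the threshold
    return live, tombstones
```

In this model, a `SELECT ... LIMIT 1000` against a table with zero live rows and ~100 million row tombstones never fills its limit, so it keeps scanning tombstones until it trips the failure threshold, which matches the timeouts described in the thread.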
On Mon, Oct 25, 2021 at 6:29 PM Joe Obernberger <
joseph.obernber...@gmail.com> wrote:
Update - after 10 days, I'm able to use the table again; prior to that
all selects timed out.
Are deletes basically forbidden with Cassandra? If you have a table
where you want to do lots of inserts and deletes, is there an option
that works in Cassandra? Even thought the table now has zero ro
I'm not sure if tombstones are the issue; are they? Grace is set to 10
days, and that time has not passed yet.
-Joe
On 10/14/2021 1:37 PM, James Brown wrote:
What is gc_grace_seconds set to on the table? Once that passes, you can do
`nodetool scrub` to more emphatically remove tombstones...
On Thu, Oct 14, 2021 at 8:49 AM Joe Obernberger <
joseph.obernber...@gmail.com> wrote:
> Hi all - I have a table where I've needed to delete a number of rows.
> I'
Btw, if you see that the number of tombstones is a multiple of the number of
scanned rows, like in your case - that's an explicit signal of either null
inserts, or non-frozen collections...
On Fri 21. Aug 2020 at 20:21, Attila Wind wrote:
right! silly me (regarding "can't have null for clustering column") :-)
OK code is modified, we stopped using NULL on that column. In a few days
we will see if this was the cause.
Thanks for the useful info everyone! Helped a lot!
Attila Wind
http://www.linkedin.com/in/attilaw
Mobile: +49 17
inserting null for any column will generate a tombstone (and you can't
have null for a clustering column, except the case when it's an empty
partition with a static column).
if you're really inserting the new data, not overwriting existing one - use
UNSET instead of null
On Fri, Aug 21, 2020 at 10:45 AM
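The NULL-vs-UNSET distinction above can be sketched with a toy write path (illustrative Python only, not the actual driver or storage engine; `UNSET` here is a stand-in sentinel analogous in spirit to the drivers' unset markers, e.g. `cassandra.query.UNSET_VALUE` in the DataStax Python driver):

```python
# Toy write path contrasting NULL vs UNSET bind values (illustrative only).

UNSET = object()  # sentinel: "don't write this column at all"

def apply_insert(stored_row, bound_values):
    """Merge one INSERT's bound values into a row; return tombstones written."""
    tombstones_written = 0
    for column, value in bound_values.items():
        if value is UNSET:
            continue                      # column skipped: no write, no tombstone
        if value is None:
            stored_row[column] = None     # a cell tombstone shadowing older data
            tombstones_written += 1
        else:
            stored_row[column] = value
    return tombstones_written

row = {"name": "a", "email": "a@example.com"}
apply_insert(row, {"name": "b", "email": None})    # null -> tombstone, email gone
assert row["email"] is None

row = {"name": "a", "email": "a@example.com"}
apply_insert(row, {"name": "b", "email": UNSET})   # unset -> email untouched
assert row["email"] == "a@example.com"
```

The practical upshot: a prepared statement that binds NULL for every optional field it doesn't have will litter the table with cell tombstones, while leaving those fields unset writes nothing for them at all.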
Thanks a lot! I will process every pointers you gave - appreciated!
1. we do have a collection column in that table but that is (we have only
1 column) a frozen Map - so I guess "Tombstones are also implicitly
created any time you insert or update a row which has an (unfrozen)
collection column"
On Fri, Aug 21, 2020 at 9:43 AM Tobias Eriksson
wrote:
> Isn’t it so that explicitly setting a column to NULL also result in a
> tombstone
>
True, thanks for pointing that out!
Then as mentioned the use of list,set,map can also result in tombstones
>
> See
> https://www.instaclustr.com/cassandr
Date: Friday, 21 August 2020 at 09:36
To: User , "attila.wind@swf.technology"
Subject: Re: tombstones - however there are no deletes
On Fri, Aug 21, 2020 at 7:57 AM Attila Wind wrote:
> Hi Cassandra Gurus,
>
> Recently I captured a very interesting warning in the logs saying
>
> 2020-08-19 08:08:32.492
> [cassandra-client-keytiles_data_webhits-nio-worker-2] WARN
> com.datastax.driver.core.RequestHandler - Query '[3 bound value
Tombstones can be generated not only by deletes; this happens when you:
- insert or fully update a non-frozen collection, such as replacing the
value of the column with another value like UPDATE table SET field =
new_value …; Cassandra inserts a tombstone marker to pre
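A toy sketch of the non-frozen collection case (illustrative Python, not storage-engine code): a full write can't know which old elements to discard, so it records one range tombstone deleting the prior contents, then the new elements. That is also why the tombstone count comes out as an exact multiple of the scanned rows:

```python
# Toy sketch (not Cassandra internals) of a full write to a non-frozen map.

def overwrite_collection(cells, row_key, new_map):
    """Append the cells produced by `UPDATE t SET field = new_map` for one row."""
    cells.append((row_key, "range_tombstone"))       # delete old contents first
    for k, v in new_map.items():
        cells.append((row_key, ("element", k, v)))   # then insert the new ones

cells = []
for row_key in range(1000):                          # 1000 rows fully rewritten
    overwrite_collection(cells, row_key, {"a": 1, "b": 2})

tombstones = sum(1 for _, cell in cells if cell == "range_tombstone")
assert tombstones == 1000   # exactly one tombstone per rewritten row
```

So a scan over such rows reports tombstones as a multiple of the row count, even though nothing was ever explicitly DELETEd.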
Thank you for the information !
On Thu, Jun 20, 2019 at 9:50 AM Alexander Dejanovski
wrote:
Léo,
if a major compaction isn't a viable option, you can give a go at
Instaclustr SSTables tools to target the partitions with the most
tombstones :
https://github.com/instaclustr/cassandra-sstable-tools/tree/cassandra-2.2#ic-purge
It generates a report like this:
Summary:
My bad on date formatting, it should have been : %Y/%m/%d
Otherwise the SSTables aren't ordered properly.
You have 2 SSTables that claim to cover timestamps from 1940 to 2262, which
is weird.
Aside from that, you have big overlaps all over the SSTables, so that's
probably why your tombstones are s
On Thu, Jun 20, 2019 at 7:37 AM Alexander Dejanovski
wrote:
Hello !
I believe they have changed. I
Hi Leo,
The overlapping SSTables are indeed the most probable cause as suggested by
Jeff.
Do you know if the tombstone compactions actually triggered? (did the
SSTables name change?)
Could you run the following command to list SSTables and provide us the
output? It will display both their timesta
Probably overlapping sstables
Which compaction strategy?
> On Jun 19, 2019, at 9:51 PM, Léo FERLIN SUTTON
> wrote:
>
> I have used the following command to check if I had droppable tombstones :
> `/usr/bin/sstablemetadata --gc_grace_seconds 259200
> /var/lib/cassandra/data/stats/tablename/md
Enjoy!
From: Ayub M [mailto:hia...@gmail.com]
Sent: Saturday, February 23, 2019 4:36 AM
To: user@cassandra.apache.org
Subject: Re: tombstones threshold warning
Thanks Ken, further investigating what I found is the tombstones which I am
seeing are from null values in the collection
Given your data model, there’s two ways you may read a tombstone:
You select an expired row, or you scan the whole table.
If you select an expired row, you’re going to scan one tombstone. With
sufficiently high read rate, that’ll look like you’re scanning a lot - each
read will add one to the h
Thanks Jeff. I'm trying to figure out why the tombstone scans are
happening and, if possible, eliminate them.
On Sat, Feb 23, 2019, 10:50 PM Jeff Jirsa wrote:
When the CPU utilization spikes from 5-10% to 50%, how many nodes does it
happen to at the same time?
From: Rahul Reddy [mailto:rahulreddy1...@gmail.com]
Sent: Saturday, February 23, 2019 7:26 PM
To: user@cassandra.apache.org
Subject: Re: Tombstones in memtable
G1GC with an 8g heap may be slower than CMS. Also you don’t typically set new
gen size on G1.
Again though - what problem are you solving here? If you’re serving reads and
sitting under 50% cpu, it’s not clear to me what you’re trying to fix.
Tombstones scanned won’t matter for your table, so i
```jvm setting
-XX:+UseThreadPriorities
-XX:ThreadPriorityPolicy=42
-XX:+HeapDumpOnOutOfMemoryError
-Xss256k
-XX:StringTableSize=103
-XX:+AlwaysPreTouch
-XX:-UseBiasedLocking
-XX:+UseTLAB
-XX:+ResizeTLAB
-XX:+UseNUMA
-XX:+PerfDisableSharedMem
-Djava.net.preferIPv4Stack=true
-XX:+UseG1GC
-XX:G1
```
Thanks Jeff,
Since we have low writes and high reads, the data is in memtables only most
of the time. When I initially noticed the issue there were no SSTables on
disk; everything was in the memtable only.
On Sat, Feb 23, 2019, 10:01 PM Jeff Jirsa wrote:
> Also given your short ttl and low write rate, you may want to think about
> how
Also given your short ttl and low write rate, you may want to think about how
you can keep more in memory - this may mean larger memtable and high flush
thresholds (reading from the memtable), or perhaps the partition cache (if you
are likely to read the same key multiple times). You’ll also pro
You’ll only ever have one tombstone per read, so your load is based on normal
read rate not tombstones. The metric isn’t wrong, but it’s not indicative of a
problem here given your data model.
You’re using STCS do you may be reading from more than one sstable if you
update column2 for a given
Sent: Saturday, February 23, 2019 5:56 PM
To: user@cassandra.apache.org
Subject: Re: Tombstones in memtable
Do you see anything wrong with this metric?
Metric to scan tombstones:
increase(cassandra_Table_TombstoneScannedHistogram{keyspace="mykeyspace",Table="tablename",function="Count"}[5m])
And at the same time CPU spikes to 50% whenever I see a high tombstone alert.
On Sat, Feb 23, 2019, 9:25 PM Jeff Jirsa wro
Your schema is such that you’ll never read more than one tombstone per select
(unless you’re also doing range reads / table scans that you didn’t mention) -
I’m not quite sure what you’re alerting on, but you’re not going to have
tombstone problems with that table / that select.
--
Jeff Jirsa
-patterns-queues-and-queue-like-datasets
Kenneth Brotman
From: Jeff Jirsa [mailto:jji...@gmail.com]
Sent: Saturday, February 23, 2019 4:47 PM
To: user@cassandra.apache.org
Subject: Re: Tombstones in memtable
Would also be good to see your schema (anonymized if needed) and the select
Changing gcgs didn't help
CREATE KEYSPACE ksname WITH replication = {'class':
'NetworkTopologyStrategy', 'dc1': '3', 'dc2': '3'} AND durable_writes =
true;
```
CREATE TABLE keyspace."table" (
    "column1" text PRIMARY KEY,
    "column2" text
) WITH bloom_filter_fp_chance = 0.01
    AND caching
```
Would also be good to see your schema (anonymized if needed) and the select
queries you’re running
--
Jeff Jirsa
> On Feb 23, 2019, at 4:37 PM, Rahul Reddy wrote:
I’m not parsing this - did the lower gcgs help or not ? Seeing the table
histograms is the next step if this is still a problem
The table level TTL doesn’t matter if you set a TTL on each insert
--
Jeff Jirsa
> On Feb 23, 2019, at 4:37 PM, Rahul Reddy wrote:
Thanks Jeff,
I have gcgs set to 10 mins and changed the table TTL to 5 hours, compared
to the insert TTL of 4 hours. Tracing doesn't show any tombstone scans for
the reads, and the log doesn't show tombstone scan alerts. The reads are
happening at 5-8k reads per node during the peak h
Executing single-partition query on collsndudt [CoreThread-6] | 2019-02-21 21:41:04.629001 | 10.216.87.180 | 460 | 127.0.0.1
Acquiring sstable references [CoreThread-6] | 2019-02-21 21:41:04.629001 | 10.216.87.
If all of your data is TTL'd and you never explicitly delete a cell without
using a TTL, you can probably drop your GCGS to 1 hour (or less).
Which compaction strategy are you using? You need a way to clear out those
tombstones. There exist tombstone compaction sub properties that can help
enco
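The arithmetic behind "drop your GCGS to 1 hour" can be sketched like this (illustrative only; the safety caveat is that every write must carry a TTL and repairs/hints must complete well within the grace window):

```python
# When a TTL'd cell becomes purgeable: it expires at write_time + ttl, and the
# resulting tombstone can be dropped by compaction only after gc_grace_seconds
# more have passed. (Illustrative arithmetic, not Cassandra internals.)

HOUR = 3600
DAY = 24 * HOUR

def purgeable_at(write_time, ttl_seconds, gc_grace_seconds):
    expires_at = write_time + ttl_seconds     # cell turns into garbage here
    return expires_at + gc_grace_seconds      # ...and may be purged from here on

# 4-hour insert TTL with the default 10-day grace: garbage lingers ~10 days
assert purgeable_at(0, 4 * HOUR, 10 * DAY) == 4 * HOUR + 10 * DAY
# dropping gc_grace_seconds to 1 hour shrinks that to 5 hours total
assert purgeable_at(0, 4 * HOUR, 1 * HOUR) == 5 * HOUR
```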
Can we see the histogram? Why wouldn’t you at times have that many tombstones?
Makes sense.
Kenneth Brotman
From: Rahul Reddy [mailto:rahulreddy1...@gmail.com]
Sent: Thursday, February 21, 2019 7:06 AM
To: user@cassandra.apache.org
Subject: Tombstones in memtable
We have small table
Sent: Tuesday, February 19, 2019 10:12 PM
To: 'user@cassandra.apache.org'
Subject: RE: tombstones threshold warning
Hi Ayub,
Is everything flushing to SSTables? It has to be somewhere right? So is it in
the memtables?
Or is it that there are tombstones that are sometimes detected and sometimes
not detected as described in the very detailed article on The Last Pickle by
Alex Dejanovski called Undetec
No worries! They're a data type that was introduced in 1.2:
http://www.datastax.com/dev/blog/cql3_collections
On Fri, Jan 2, 2015 at 12:07 PM, Nikolay Mihaylov wrote:
Hi Tyler,
sorry for very stupid question - what is a collection ?
Nick
On Wed, Dec 31, 2014 at 6:27 PM, Tyler Hobbs wrote:
Overwriting an entire collection also results in a tombstone being inserted.
On Wed, Dec 24, 2014 at 7:09 AM, Ryan Svihla wrote:
You should probably ask on the Cassandra user mailing list.
However, TTL is the only other case I can think of.
On Tue, Dec 23, 2014 at 1:36 PM, Davide D'Agostino wrote:
> Hi there,
>
> Following this:
> https://groups.google.com/a/lists.datastax.com/forum/#!searchin/java-driver-user/tombstone
On Thu, May 15, 2014 at 1:43 AM, Joel Samuelsson
wrote:
> https://issues.apache.org/jira/browse/CASSANDRA-4314 seems to say that
> tombstones on secondary indexes are not removed by a compaction. Do I need
> to do it manually?
>
The ticket you have pasted says :
"It's not exposed through nodetoo
oh,
it's for Cassandra 1.x, right?
I use 2.0.7.
How could I reset the leveled manifest in this case?
--
View this message in context:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Tombstones-tp7594467p7594559.html
Sent from the cassandra-u...@incubator.apache.org mailing list
Subject: Re: Tombstones
Thanks!
How could I find leveled json manifest?
--
View this message in context:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Tombstones-tp7594467p7594535.html
Sent from the cassandra-u...@incubator.apache.org mailing list archive at
Nabble.com.
Hi Dimetrio,
>From the wiki:
Since 0.6.8, minor compactions also GC tombstones
Regards
Andi
Dimetrio wrote
Does cassandra delete tombstones during simple LCS compaction or I should use
node tool repair?
Thanks.
It will delete them after gc_grace_seconds (set per table) and a compaction.
---
Chris Lohfink
On May 16, 2014, at 9:11 AM, Dimetrio wrote:
> Does cassandra delete tombstones during simple LCS compaction or I should use
> node tool repair?
>
> Thanks.
Note that Cassandra will not compact away some tombstones if you have differing
column TTLs. See the following jira and resolution I filed for this:
https://issues.apache.org/jira/browse/CASSANDRA-6654
On May 16, 2014 4:49 PM, Chris Lohfink wrote:
It will delete them after gc_grace_seconds (se
Yes, but still you need to run 'nodetool cleanup' from time to time to make
sure all tombstones are deleted.
On Fri, May 16, 2014 at 10:11 AM, Dimetrio wrote:
> Does cassandra delete tombstones during simple LCS compaction or I should
> use
> node tool repair?
>
> Thanks.
Nodetool cleanup deletes rows that aren't owned by specific tokens
(shouldn't be on this node). And nodetool repair makes sure data is in sync
between all replicas. It is wrong to say either of these commands cleans up
tombstones. Tombstones are only cleaned up during compactions, and only if
they are exp
excellent, thx
On Mon, Oct 22, 2012 at 10:13 AM, Sylvain Lebresne wrote:
The data does get removed as soon as possible (as soon as it is
compacted with the tombstone that is).
--
Sylvain
On Mon, Oct 22, 2012 at 7:03 PM, Hiller, Dean wrote:
My understanding is any time from that node. Another node may have a
different existing value and tombstone vs. that existing data(most recent
timestamp wins). Ie. The data is not needed on that node so compaction
should be getting rid of it, but I never confirmed this… I hope you get
confirmatio
Hi Jonathan,
Thanks for your response.
We were running a compact at least once a day over the keyspace. The
gc_grace was set to only 1 hour, so from what you said I would expect that
tombstones should be deleted after max 3 days.
When I inspected the data in the SSTables after a compact, some ro
Removing expired columns actually requires two compaction passes: one
to turn the expired column into a tombstone; one to remove the
tombstone after gc_grace_seconds. (See
https://issues.apache.org/jira/browse/CASSANDRA-1537.)
Perhaps CASSANDRA-2786 was causing things to (erroneously) be cleaned
u
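The two-pass behavior described above (CASSANDRA-1537) can be sketched as a toy model (plain Python, not Cassandra code):

```python
# Pass 1 of compaction converts an expired TTL cell into a tombstone;
# only a later pass, once gc_grace_seconds have elapsed since then,
# removes the tombstone itself. (Illustrative model only.)

def compact_cell(cell, now, gc_grace):
    kind, ts = cell
    if kind == "live_ttl" and now >= ts:
        return ("tombstone", now)        # pass 1: expiry becomes a tombstone
    if kind == "tombstone" and now - ts > gc_grace:
        return None                      # pass 2: tombstone finally purged
    return cell                          # otherwise the cell is kept as-is

cell = ("live_ttl", 100)                                      # expires at t=100
cell = compact_cell(cell, now=150, gc_grace=3600)
assert cell == ("tombstone", 150)        # first compaction: still on disk
cell = compact_cell(cell, now=200, gc_grace=3600)
assert cell == ("tombstone", 150)        # within gc_grace: still kept
assert compact_cell(cell, now=4000, gc_grace=3600) is None    # second pass purges
```

This is why data can survive noticeably longer than TTL + gc_grace_seconds: the clock on the grace period only starts once a compaction has actually turned the expired cell into a tombstone.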
Hi Radim,
I am hunting for what I believe is a bug in Cassandra and tombstone
handling that may be triggered by our particular application usage.
I appreciate your attempt to help, but without you actually knowing what
our application is doing and why, your advice to change our application is
poin
On 28.3.2012 13:14, Ross Black wrote:
Radim,
We are only deleting columns. *Rows are never deleted.*
i suggest to change app to delete rows. try composite keys.
Radim,
We are only deleting columns. *Rows are never deleted.*
We are continually adding new columns that are then deleted. *Existing
columns (deleted or otherwise) are never updated.*
Ross
On 28 March 2012 13:51, John Laban wrote:
> (Radim: I'm assuming you mean "do not delete already d
(Radim: I'm assuming you mean "do not delete already deleted columns" as
Ross doesn't delete his rows.)
Just to be clear about Ross' situation: he continually inserts columns and
later deletes columns from the same set of rows. As long as he *doesn't* *keep
deleting already-deleted columns* (wh
On 27.3.2012 11:13, Ross Black wrote:
Any pointers on what I should be looking for in our application that
would be stopping the deletion of tombstones?
do not delete already deleted rows. On read cassandra returns deleted
rows as empty in range slices.
-Original Message-
From: Radim Kolar [mailto:h...@filez.com]
Sent: Sunday, March 25, 2012 13:20
To: user@cassandra.apache.org
Subject: Re: tombstones problem with 1.0.8
Scenario 4
T1 write column
T2 Flush memtable to S1
T3 del row
Scenario 4
T1 write column
T2 Flush memtable to S1
T3 del row
T4 flush memtable to S5
T5 tombstone S5 expires
T6 S5 is compacted but not with S1
Result?
-Original Message-
From: Radim Kolar [mailto:h...@filez.com]
Sent: Friday, March 23, 2012 13:28
To: user@cassandra.apache.org
Subject: Re: tombstones problem with 1.0.8
Example:
T1 < T2 < T3
at T1 write column
at T2 delete row
at T3 > tombstone expiration do compact ( T1 + T2 ) and drop expired
tombstone
column from T1 will be alive again?
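Scenario 4 can be simulated with a toy model (plain Python; it deliberately omits Cassandra's actual safeguard of checking overlapping SSTables before purging, which is exactly what prevents this in practice):

```python
# Scenario 4 as a simulation: if the expired row tombstone in S5 is purged by
# a compaction that does NOT include the older SSTable S1 holding the shadowed
# column, a later read revives the column.

def compact(sstables, names, gc_grace, now):
    """Merge the named sstables, dropping tombstones older than gc_grace."""
    merged = {}
    for name in names:
        for key, cell in sstables.pop(name).items():
            value, ts, is_tombstone = cell
            if key not in merged or ts > merged[key][1]:
                merged[key] = cell                   # newest timestamp wins
    sstables["+".join(names)] = {
        k: c for k, c in merged.items()
        if not (c[2] and now - c[1] > gc_grace)      # purge expired tombstones
    }

def read(sstables, key):
    cells = [t[key] for t in sstables.values() if key in t]
    return max(cells, key=lambda c: c[1]) if cells else None

sstables = {
    "S1": {"row": ("column", 1, False)},  # T1: live write, flushed to S1
    "S5": {"row": (None, 3, True)},       # T3: row delete, flushed to S5
}
compact(sstables, ["S5"], gc_grace=10, now=100)        # T6: S5 compacted alone
assert read(sstables, "row") == ("column", 1, False)   # the T1 column is back
```

This is why a tombstone may only be dropped during compaction when no SSTable outside the compaction could still contain older data for that partition; compacting S1 and S5 together removes both the tombstone and the shadowed column.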
> You are explaining that if i have expired row tombstone and there exists
> later timestamp on this row that tombstone is not deleted? If this works that
> way, it will be never deleted.
Exactly. It is merged with new one.
Example 1: a row with 1 column in sstable. delete a row, not a column.
During compaction of selected sstables Cassandra checks the whole Column
Family for the latest timestamp of the column/row, including other
sstables and memtable.
You are explaining that if i have expired row tombstone and there exists
later timestamp on this row that tombstone is not deleted
From: Ross Black [mailto:ross.w.bl...@gmail.com]
Sent: Friday, March 23, 2012 07:16
To: user@cassandra.apache.org
Subject: Re: tombstones problem with 1.0.8
Hi Victor,
Thanks for your response.
Is there a possibility that continual deletions during compact could be
blocking removal of the tombstones? The full manual compact takes about 4
hours per node for our data, so there is a large number of deletes
occurring during that time.
This is the descr
Just tested 1.0.8 before upgrading from 1.0.7: tombstones created by TTL or by
delete operation are perfectly deleted after either compaction or cleanup.
Have no idea about any other settings than gc_grace_seconds; check your schema
from cassandra-cli.
Best regards/ Pagarbiai
Viktor Jevdo
On Wed, 20-04-2011 at 23:00 +1200, aaron morton wrote:
That was fast! Thanks Aaron
Looks like a bug, I've added a patch here
https://issues.apache.org/jira/browse/CASSANDRA-2519
Aaron
On 20 Apr 2011, at 13:15, aaron morton wrote:
That's what I was looking for, thanks.
At first glance the behaviour looks inconsistent, we count the number of
columns in the delete mutation. But when deleting a row the column count is
zero. I'll try to take a look later.
In the meantime you can force a memtable flush via JConsole, navigate down
On Wed, 20-04-2011 at 09:08 +1200, aaron morton wrote:
Oh, sorry. Di
Yes, I saw that.
Wanted to know what "issue deletes through pelops" means so I can work out what
command it's sending to cassandra and hopefully I don't waste my time looking
in the wrong place.
Aaron
On 20 Apr 2011, at 09:04, Héctor Izquierdo Seliva wrote:
I posted it a couple of messages back, but here it is again:
I'm using 0.7.4. I have a file with all the row keys I have to delete
(around 100 million) and I just go through the file and issue deletes
through pelops. Should I manually issue flushes with a cron every x
time?
How do you do the deletes ?
Aaron
On 20 Apr 2011, at 08:39, Héctor Izquierdo Seliva wrote:
On Tue, 19-04-2011 at 23:33 +0300, shimi wrote:
> You can use memtable_flush_after_mins instead of the cron
>
>
> Shimi
>
Good point! I'll try that.
Wouldn't it be better to count a delete as a one-column operation so it
contributes to flush by operations?
> 2011/4/19 Héctor Izquierdo S
You can use memtable_flush_after_mins instead of the cron
Shimi
2011/4/19 Héctor Izquierdo Seliva
On Wed, 20-04-2011 at 08:16 +1200, aaron morton wrote:
I think there may be an issue here; we are counting the number of columns in
the operation. When deleting an entire row we do not have a column count.
Can you let us know what version you are using and how you are doing the
delete?
Thanks
Aaron
On 20 Apr 2011, at 04:21, Héctor Izquierdo Sel
Ok, I've read about gc grace seconds, but I'm not sure I understand it
fully. Until gc grace seconds have passed and there is a compaction, do
the tombstones live in memory? I have to delete 100 million rows and my
insert rate is very low, so I don't have a lot of compactions. What
should I do in th