already the newest version.
>>> 0 upgraded, 0 newly installed, 0 to remove and 171 not upgraded.
>>> Selecting previously unselected package opscenter-agent.
>>> dpkg: regarding .../opscenter-agent.deb containing opscenter-agent:
>>> datastax-agent conflicts with opscenter-agent
>>> opscenter-agent (version 3.2.2) is to be installed.
>>> opscenter-agent provides opscenter-agent and is to be installed.
>>> dpkg: error processing opscenter_agent_setup.vYRzL0Tevn/opscenter-agent.deb
>>> (--install):
>>> conflicting packages - not installing opscenter-agent
>>> Errors were encountered while processing:
>>> opscenter_agent_setup.vYRzL0Tevn/opscenter-agent.deb
>>> FAILURE: Unable to install the opscenter-agent package. Please check
>>> your apt-get configuration as well as the agent install log
>>> (/var/log/opscenter-agent/installer.log).
>>>
>>> Exit code: 1
>>>
>>
>>
>
--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.org.au | www.sirca.org.au
Tel: +61 2 8355 2514
Level 4, 55 Harrington St, The Rocks NSW 2000
PO Box H58, Australia Square, Sydney NSW 1215
ct like this:
>>
>> select * from user_activity order by ts;
>>
>> as it fails with "ORDER BY is only supported when the partition key is
>> restricted by an EQ or an IN".
>>
>> How would you model the thing? Just need to have a list of users b
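For reference, a minimal CQL sketch of the usual per-partition ordering approach (column names are only illustrative, since the original schema isn't shown): make the timestamp a clustering column so rows come back time-ordered once the partition key is restricted.

    CREATE TABLE user_activity (
        user_id text,
        ts      timestamp,
        action  text,
        PRIMARY KEY (user_id, ts)
    );

    -- valid, because the partition key is restricted by an EQ
    SELECT * FROM user_activity WHERE user_id = 'some-user' ORDER BY ts DESC;

If the goal is instead a single time-ordered list across all users, a different partitioning scheme (e.g. bucketing by day) is needed, since Cassandra will not sort across partitions.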
eeing exaggerate the
> performance hit we'd see if we moved to spinners?
>
> 2) Have you successfully used a SAN or a hybrid SAN solution (some local,
> some SAN-based) to dynamically add storage to the cluster? What type of SAN
> have you used, and what issues have you run into?
were any "recommended hardware specs" someone
> could point me to for both physical and virtual (cloud) type environments.
>
> Thank you,
> Tim
> Sent from my iPhone
--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.org.au | www.sirca.org.au
T
tely fine. I find it odd
>
> 1. No logs of why it exited at all
> 2. No heap dump which would imply there would be no logs as it crashed
>
> Is there any other way a process can die and linux would log it somehow?
> (like running out of memory)
>
> Thanks,
> Dean
> Aaron Morton
> Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 6/08/2013, at 6:48 PM, Franc Carter wrote:
>
>
> I've been thinking through some cases that I can see happening at some
> point and thoug
p is filled.
Have I understood correctly ?
thanks
--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.org.au | www.sirca.org.au
Tel: +61 2 8355 2514
Level 4, 55 Harrington St, The Rocks NSW 2000
PO Box H58, Australia Square, Sydney NSW 1215
scripts for this project and then look
in to Cassandra-Mutagen
cheers
>
>
> On Mon, Jul 1, 2013 at 5:55 PM, Franc Carter wrote:
>
>> On Tue, Jul 2, 2013 at 10:33 AM, Todd Fast wrote:
>>
>>> Franc--
>>>
>>> I think you will find Mutagen Cassandra ver
>
> https://github.com/toddfast/mutagen-cassandra
>
> Todd
>
>
> On Mon, Jul 1, 2013 at 5:23 PM, sankalp kohli wrote:
>
>> You can generate schema through the code. That is also one option.
>>
>>
>> On Mon, Jul 1, 2013 at 4:10 PM, Franc Carter
Hi,
I've been giving some thought to the way we deploy schemas and am looking
for something better than our current approach, which is to use
cassandra-cli scripts.
What do people use for this ?
cheers
--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.o
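A minimal sketch of the versioned-script idea that tools like Mutagen Cassandra automate: keep numbered CQL files in the repository and apply, in order, any that haven't been run yet. File names and the table below are hypothetical.

    -- 001_create_events.cql
    CREATE TABLE events (
        id   uuid PRIMARY KEY,
        ts   timestamp,
        body text
    );

    -- 002_add_source_column.cql
    ALTER TABLE events ADD source text;

Each script is applied exactly once, so the schema a cluster ends up with is a function of which numbered scripts have run rather than of hand-run cassandra-cli sessions.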
> In theory you could probably :
>
> 1) start out with the largest size you want to test
> 2) stop your node
> 3) use sstable_split [1] to split sstables
> 4) start node, test
> 5) repeat 2-4
>
> I am not sure if there is anything about level compaction which makes
> thi
how to recover from that.
cheers
>
>
>
>
>
> On Fri, Jun 21, 2013 at 3:22 PM, Franc Carter
> wrote:
>
>>
>> Hi,
>>
>> I am experimenting with Cassandra-1.2.4, and got a crash while running
>> repair. The nodes have 24GB of RAM with an 8GB heap.
:05,865 FileUtils.java (line 375)
Stopping gossiper
thanks
--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.org.au | www.sirca.org.au
Tel: +61 2 8355 2514
Level 4, 55 Harrington St, The Rocks NSW 2000
PO Box H58, Australia Square, Sydney NSW 1215
tern
cheers
>
> Cheers
>
>-
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 20/06/2013, at 9:41 PM, Franc Carter wrote:
>
> On Thu, Jun 20, 2013 at 7:27 PM, aaron morton wrote:
ealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 19/06/2013, at 11:41 AM, Franc Carter
> wrote:
>
> On Wed, Jun 19, 2013 at 9:34 AM, Bryan Talbot wrote:
>
>> Manual compaction for LCS doesn't really do much. It certainly doesn't
>> compac
sk and that's a reasonable trade-off
cheers
--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.org.au | www.sirca.org.au
Tel: +61 2 8355 2514
Level 4, 55 Harrington St, The Rocks NSW 2000
PO Box H58, Australia Square, Sydney NSW 1215
CF than I 'manually compacted', and when the
pending tasks reached low numbers (stuck on 9) then latencies were back to
low milliseconds
cheers
> -Bryan
>
>
>
> On Tue, Jun 18, 2013 at 3:59 PM, Franc Carter
> wrote:
>
>> On Sat, Jun 15, 2013 at 11:49 AM, Fra
On Sat, Jun 15, 2013 at 11:49 AM, Franc Carter wrote:
> On Sat, Jun 15, 2013 at 8:48 AM, Robert Coli wrote:
>
>> On Wed, Jun 12, 2013 at 3:26 PM, Franc Carter
>> wrote:
>> > We are running a test system with Leveled compaction on Cassandra-1.2.4.
>> > While d
On Mon, Jun 17, 2013 at 3:37 PM, Franc Carter wrote:
> On Mon, Jun 17, 2013 at 3:28 PM, Wei Zhu wrote:
>
>> default value of 5MB is way too small in practice. Too many files in one
>> directory is not a good thing. It's not clear what should be a good number.
>>
> find a "right" number.
>
Interesting - 50MB is the low end of what people are using - 5MB is a lot
lower. I'll try a 50MB set
cheers
> -Wei
>
> ------
> *From: *"Franc Carter"
> *To: *user@cassandra.apache.org
> *Sent: *Sun
a situation like the one shown in
> figure 4.
>
to mean that once a level fills up it gets compacted into a higher level
cheers
> Cheers
> Manoj
>
>
> On Mon, Jun 17, 2013 at 1:54 PM, Franc Carter
> wrote:
>
>> On Mon, Jun 17, 2013 at 2:47 PM, Manoj Mainali wrote
sstables ?
thanks
> Cheers
>
> Manoj
>
>
> On Fri, Jun 7, 2013 at 1:44 PM, Franc Carter wrote:
>
>>
>> Hi,
>>
>> We are trialling Cassandra-1.2(.4) with Leveled compaction as it looks
>> like it may be a win for us.
>>
>> The first step of t
On Fri, Jun 7, 2013 at 2:44 PM, Franc Carter wrote:
>
> Hi,
>
> We are trialling Cassandra-1.2(.4) with Leveled compaction as it looks
> like it may be a win for us.
>
> The first step of testing was to push a fairly large slab of data into the
> Column Family - we did
On Sat, Jun 15, 2013 at 8:48 AM, Robert Coli wrote:
> On Wed, Jun 12, 2013 at 3:26 PM, Franc Carter
> wrote:
> > We are running a test system with Leveled compaction on Cassandra-1.2.4.
> > While doing an initial load of the data one of the nodes ran out of file
> > de
Hi,
We are running a test system with Leveled compaction on Cassandra-1.2.4.
While doing an initial load of the data one of the nodes ran out of file
descriptors and since then it hasn't been automatically compacting.
Any suggestions on how to fix this ?
thanks
--
*Franc Carter* | Sy
-- Forwarded message --
From: "Mark Lewandowski"
Date: Jun 8, 2013 8:03 AM
Subject: Cassandra (1.2.5) + Pig (0.11.1) Errors with large column families
To:
Cc:
> I'm currently trying to get Cassandra (1.2.5) and Pig (0.11.1) to play
nice together. I'm running a basic script:
>
>
all nodes.
Is this number of files expected/normal ?
cheers
--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.org.au | www.sirca.org.au
Tel: +61 2 8355 2514
Level 4, 55 Harrington St, The Rocks NSW 2000
PO Box H58, Australia Square, Sydney NSW 1215
increases the instance size to get more
CPU/Memory. If you use EBS with provisioned IOPS you should be able to make
the transition reasonably quickly.
cheers
--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.org.au | www.sirca.org.au
Tel: +61 2 9236 9118
Level 9, 80 Clarence St, Sydney NSW 2000
PO Box H58, Australia Square, Sydney NSW 1215
terminology, I guess you can consider me a hard-liner as I have
> a few problems with calling a column family a table. I might be in the
> minority, but I know I am not alone. On one hand aliases make the
> integration easier
> https://issues.apache.org/jira/browse/CASSANDRA-2743, but
the historical data that is pretty large in a short
period of time.
>
> Cheers
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 10/05/2012, at 3:03 PM, Franc Carter wrote:
>
>
>
> On Tue, May 8, 2
Python client - PyCassa (
https://github.com/pycassa/pycassa) which works well
cheers
> Regards
> Arshad
>
--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.org.au | www.sirca.org.au
Tel: +61 2 9236 9118
Level 9, 80 Clarence St, Sydney NSW 2000
PO Box H58, Australia Square, Sydney NSW 1215
's new(ish)
>
> It feels a bit like a premature optimisation.
>
Yep, that's certainly possible - it's a habit I tend towards ;-(
cheers
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 2
mailinglist, I
>>> remember this has been asked before.
>>>
>>> Cheers!
>>>
>>> 2012/5/21 Luís Ferreira
>>>
>>>> Hi,
>>>>
>>>> Does the number of keyspaces affect the overall cassandra performance?
> 2012/5/21 Luís Ferreira
>
>> Hi,
>>
>> Does the number of keyspaces affect the overall cassandra performance?
>>
>>
>> Cumprimentos,
>> Luís Ferreira
>>
>>
>>
>>
>
>
> --
> With kind regards,
>
> Robin Verlangen
in -
which is interesting because I am not a database guy, yet I still have
these ingrained ways of thinking
cheers
>
> Cheers
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 10/05/2012, at 3:03 PM
On Tue, May 8, 2012 at 8:21 PM, Franc Carter wrote:
> On Tue, May 8, 2012 at 8:09 PM, aaron morton wrote:
>
>> Can you store the corrections in a separate CF?
>>
>
We sat down and thought about this harder - it looks like a good solution
for us that may make other hard prob
race conditions . . .
cheers
>
> Cheers
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 8/05/2012, at 9:35 PM, Franc Carter wrote:
>
>
> Hi,
>
> I'm wondering if there is a common '
Cassandra doesn't support - correct ?)
thanks
--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.org.au | www.sirca.org.au
Tel: +61 2 9236 9118
Level 9, 80 Clarence St, Sydney NSW 2000
PO Box H58, Australia Square, Sydney NSW 1215
t will be?
>
The system is batch; jobs could range from very small up to a moderate
percentage of the data set. It's even possible that we could need to read
the entire data set. How much we get resident is a cost/performance
trade-off we need to make
cheers
>
> -Jake
>
>
> http://www.thelastpickle.com
>
> On 20/04/2012, at 12:55 AM, Dave Brosius wrote:
>
> I think your math is 'relatively' correct. It would seem to me you should
> focus on how you can reduce the amount of storage you are using per item,
> if at all possible,
on what budget you have.
>
The bit I am trying to understand is whether my figure of 400GB/node in
practice for Cassandra is correct, or whether we can push the GB/node
higher and if so how high
cheers
> -- Y.
>
>
> On Thu, Apr 19, 2012 at 7:54 AM, Franc Carter
> wr
On Thu, Apr 19, 2012 at 10:07 PM, John Doe wrote:
> Franc Carter
>
> > One of the projects I am working on is going to need to store about
> 200TB of data - generally in manageable binary chunks. However, after doing
> some rough calculations based on rules of thumb I have
On Thu, Apr 19, 2012 at 9:38 PM, Romain HARDOUIN
wrote:
>
> Cassandra supports data compression and depending on your data, you can
> gain a reduction in data size up to 4x.
>
The data is gzip'd already ;-)
> 600 TB is a lot, hence requires lots of servers...
>
>
>
600TB = 600,000GB
Which is 1000 nodes at 600GB per node
I'm hoping I've missed something as 1000 nodes is not viable for us.
cheers
--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.org.au | www.sirca.org.au
Tel: +61 2 9236 9118
Level 9, 80 Clarence St, Sydney N
On Wed, Apr 4, 2012 at 8:56 AM, Jonathan Ellis wrote:
> We use 2MB chunks for our CFS implementation of HDFS:
> http://www.datastax.com/dev/blog/cassandra-file-system-design
>
thanks
>
> On Mon, Apr 2, 2012 at 4:23 AM, Franc Carter
> wrote:
> >
> > Hi,
is problem.
>
> As with everything else, you'll probably need to test your specific use
> case to see what 'too big' is for you.
>
> On Mon, Apr 2, 2012 at 9:23 AM, Franc Carter wrote:
>
>>
>> Hi,
>>
>> We are in the early stages of thinking about
couple of comments that you shouldn't put large chunks into a
value - however 'large' is not well defined for the range of people using
these solutions ;-)
Does anyone have a rough rule of thumb for how big a single value can be
before we are outside sanity?
thanks
--
*Franc Carter*
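A common way to stay under whatever the practical limit turns out to be is to split large objects into fixed-size chunks (the 2MB used by CFS, mentioned above, is one data point) and store one chunk per row under a composite key. A rough CQL sketch with hypothetical names:

    CREATE TABLE blob_chunks (
        object_id text,
        chunk_id  int,
        data      blob,
        PRIMARY KEY (object_id, chunk_id)
    );

    -- chunks come back in order, ready to be reassembled client-side
    SELECT chunk_id, data FROM blob_chunks WHERE object_id = 'some-object';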
which also includes a single index for the row keys).
>
> So with compression switched on, in this specific case the storage
> requirements are roughly the same on Cassandra and MySQL.
Good to know - thanks
>
>
>
>
>
>> * Is data in an sstable sorted by key then column
sorted by key then column or column then key
cheers
>
> Hope that helps.
>
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 28/02/2012, at 8:07 PM, Franc Carter wrote:
>
>
> Hi,
>
> d
Hi,
does anyone know of a picture/image that shows the layout of
keys/columns/values in an sstable - I haven't been able to find one and am
having a hard time visualising the layout from various descriptions and
various overviews
thanks
--
*Franc Carter* | Systems architect | Sirc
like the following:
>>
>> Entity.Day1.TypeA: {col1:val1, col2:val2, . . . }
>> Entity.Day1.TypeB: {col1:val1, col2:val2, . . . }
>> .
>> .
>> Entity.DayN.TypeA: {col1:val1, col2:val2, . . . }
>> Entity.DayN.TypeB: {col1:val1, col2:val2, . . . }
>>
, col2:val2, . . . }
>> .
>> .
>> Entity.DayN.TypeA: {col1:val1, col2:val2, . . . }
>> Entity.DayN.TypeB: {col1:val1, col2:val2, . . . }
>>
>> It is better to avoid super columns..
>>
>> -indra
>>
>> On Thu, Feb 23, 2012 at 6:36 PM, Franc Carter
>> wrote:
to do this in my simplistic approach as the Days are super columns, the
types are columns and then I don't have a col/val map left for data.
Does anyone have advice on a good approach ?
thanks
--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.org.au | www.sirca.org.a
On Mon, Feb 20, 2012 at 9:42 PM, Franc Carter wrote:
> On Mon, Feb 20, 2012 at 12:00 PM, aaron morton wrote:
>
>> Aside from iostats..
>>
>> nodetool cfstats will give you read and write latency for each CF. This
>> is the latency for the operation on each node. Chec
>
Does this help ?
http://wiki.apache.org/cassandra/FAQ#iter_world
cheers
> Thanks
> Flavio
>
--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.org.au | www.sirca.org.au
Tel: +61 2 9236 9118
Level 9, 80 Clarence St, Sydney NSW 2000
PO Box H58, Australia Square, Sydney NSW 1215
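For the record, the CQL3 version of "iterating the world" pages through partitions by token rather than by key order (table and column names are assumed):

    SELECT id FROM mytable LIMIT 1000;

    -- next page: start after the last partition key seen
    SELECT id FROM mytable WHERE token(id) > token('last-id-seen') LIMIT 1000;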
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 20/02/2012, at 9:31 AM, Franc Carter wrote:
>
> On Mon, Feb 20, 2012 at 4:10 AM, Philippe wrote:
>
>> Perhaps your dataset can no longer be held in memory.
s keys are now 'far
enough away' that they are not being included in the previous read and
hence the seek penalty has to be paid a lot more often - viable ?
cheers
> Le 19 févr. 2012 11:24, "Franc Carter" a
> écrit :
>
>
>> I've been testing Cass
?
thanks
--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.org.au | www.sirca.org.au
Tel: +61 2 9236 9118
Level 9, 80 Clarence St, Sydney NSW 2000
PO Box H58, Australia Square, Sydney NSW 1215
On 17/02/2012 8:53 AM, "Eran Chinthaka Withana"
wrote:
>
> Hi Jonathan,
>
> Thanks for the reply. Yes there is a possibility that the keys can be
distributed in multiple SSTables, but my data access patterns are such that
I always read/write the whole row. So I expect all the data to be in the
sam
---
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 15/02/2012, at 12:42 AM, Franc Carter wrote:
>
>
> Hi,
>
> I'm running the DataStax 1.0.7 AMI in ec2. I started with two nodes and
> have just added a thir
On Wed, Feb 15, 2012 at 9:25 AM, Rob Coli wrote:
> On Tue, Feb 14, 2012 at 2:02 PM, Franc Carter
> wrote:
>
>> On Wed, Feb 15, 2012 at 8:49 AM, Brandon Williams wrote:
>>
>>> Before 1.0.8, use https://issues.apache.org/jira/browse/CASSANDRA-3337
>>> to rem
On Wed, Feb 15, 2012 at 8:49 AM, Brandon Williams wrote:
> Before 1.0.8, use https://issues.apache.org/jira/browse/CASSANDRA-3337
> to remove it.
>
I'm missing something ;-( I don't see a solution in this link . .
cheers
>
> On Tue, Feb 14, 2012 at 3:44 PM, Franc C
java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Any ideas on how to deal with this ?
thanks
--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.org.au | www.sirca.org.au
Tel: +61
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 14/02/2012, at 12:43 PM, Franc Carter wrote:
>
> On Tue, Feb 14, 2012 at 6:06 AM, aaron morton wrote:
>
>> What CL are you reading at ?
tream(IncomingTcpConnection.java:185)
at
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:81)
Any advice on how to resolve this ?
thanks
--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.org.au | www.sirca.org.au
Tel: +61 2 9236 9118
Level
s to the same
server). The requested keys don't overlap and I would expect/assume the
keys are in the keycache
I am looking at the output of nodetool -h tpstats
cheers
> Cheers
>
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> htt
de and the other two go to a different node
then the pending queue on the node gets much longer than if they all go to
the one node.
I'm clearly missing something here as I would have expected the opposite
cheers
--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car.
low on heap.
> Watch cassandra.log for messages to that effect (don't remember the
> exact message right now).
>
>
I just grep'd the logs and couldn't see anything that looked like that
> --
> / Peter Schuller (@scode, http://worldmodscode.wordpress.com)
>
--
you have
> something specifically ensuring that it is entirely smooth. A
> completely random distribution over time for example would look very
> even on almost any graph you can imagine unless you have sub-second
resolution, but would still exhibit un-evenness and have an effect
somehow got to a rather odd number
>
> --
> / Peter Schuller (@scode, http://worldmodscode.wordpress.com)
>
--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.org.au | www.sirca.org.au
Tel: +61 2 9236 9118
Level 9, 80 Clarence St, Sydney NSW 2000
PO Box H58, Australia Square, Sydney NSW 1215
to explain why it is good sometimes in an environment that is pretty well
controlled - other than being on ec2
>
> --
> / Peter Schuller (@scode, http://worldmodscode.wordpress.com)
>
--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.org.au | www.sirca.o
while
> where you sometimes see consistently good latencies, that sounds
> different but would hopefully be observable somehow.
>
> --
> / Peter Schuller (@scode, http://worldmodscode.wordpress.com)
>
--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.org.au | www.sirca.org.au
Tel: +61 2 9236 9118
Level 9, 80 Clarence St, Sydney NSW 2000
PO Box H58, Australia Square, Sydney NSW 1215
't help to eliminate seeking to get the data (but as usual, it
> may still be in the operating system page cache).
>
Yep - I haven't enabled row caches, my calculations at the moment indicate
that the hit-ratio won't be great - but I'll be testing that later
>
>
they are in the cache. I have my keycache set to 2 million, and am
only querying ~900,000 keys, so after the first time I'm assuming they are
in the cache.
cheers
>
>
> 2012/2/13 Franc Carter
>
>> 2012/2/13 R. Verlangen
>>
>>> This is because of the "warm
Observe. Figure out what the bottleneck is: iostat, top, nodetool
> tpstats, nodetool netstats, nodetool compactionstats.
>
I know why it is slow - it's clearly I/O bound. I am trying to hunt down why
it is sometimes much faster even though I have (tried) to replicate the
same conditi
shut down Cassandra, flushed the O/S buffer cache and
then brought it back up. The performance wasn't significantly different to
the pre-flush performance
cheers
>
>
> 2012/2/13 Franc Carter
>
>> On Mon, Feb 13, 2012 at 5:03 PM, zhangcheng wrote:
>>
>>> **
>
>
> 2012-02-13
> --
> zhangcheng
> ------
> *From:* Franc Carter
> *Sent:* 2012-02-13 13:53:56
> *To:* user
> *Cc:*
> *Subject:* keycache persisted to disk ?
>
> Hi,
>
> I am testing Cassandra on Ama
Hi,
I am testing Cassandra on Amazon and finding performance can vary fairly
wildly. I'm leaning towards it being an artifact of the AWS I/O system but
have one other possibility.
Are keycaches persisted to disk and restored on a clean shutdown and
restart ?
cheers
--
*Franc C
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Read-Latency-td5636553.html#a5652476
>
thanks
>
> Cheers
>
>
> -----
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 7/02/2012, at 2:28 PM, Franc Carter w
r very
> large.
>
> I've modeled this with a simple column family for the keys with the row
> key being the concatenation of the entity and date. My first go used only
> the entity as the row key and then used a supercolumn for each date. I
> decided against this mostly because
this mostly because it seemed more complex for a gain I didn't
really understand.
Does this seem sensible ?
thanks
--
*Franc Carter* | Systems architect | Sirca Ltd
franc.car...@sirca.org.au | www.sirca.org.au
Tel: +61 2 9236 9118
Level 9, 80 Clarence St, Sydney NSW 2000
PO Box H58, A