Is there a concern about a large falloff in commit log (sequential) write
performance when sharing 2 drives (RAID 1) with the OS (the OS and services
writing their own logs, etc.)? Or do you expect the hit to be marginal?
On Tue, Oct 30, 2012 at 7:58 PM, aaron morton wrote:
> We also have 4-disk nodes, and we use the following layout:
> 2 x OS + Commit in RAID 1
> 2 x Data disk in RAID 0
+1
You are replicating data at the application level and want the fastest possible
IO performance per node.
> You can already distribute the
> individual Cassandra column familie
On Mon, May 21, 2012 at 7:08 AM, Alain RODRIGUEZ wrote:
> Here are my 2 nodes' startup logs, I hope it can help...
>
> https://gist.github.com/2762493
> https://gist.github.com/2762495
I see in these logs that you replay 2 mutations per node, despite
doing nodetool drain before restarting. However
Yes. Any FUSE filesystem is going to be substantially slower than a native
one like ext4.
-Tupshin
On Oct 30, 2012 2:09 PM, "Brian Tarbox" wrote:
> I got some new ubuntu servers to add to my cluster and found that the file
> system is "fuseblk" which really means NTFS.
>
> All else being equal
One follow-up question.
I have been told that it's much easier to scale the cluster by doubling the
number of nodes, since no token changes are needed on the existing nodes.
But if the number of nodes is substantial, it's not realistic to double it
every time. How easy is it to add, let's say, 3 additional
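For illustration (not from the thread): with RandomPartitioner, a balanced
N-node cluster puts node i at token i * 2^127 / N. Doubling works without
touching existing nodes because every old token is still on the new grid;
going from, say, 9 to 12 nodes changes almost every target token, so the
existing nodes need nodetool move plus nodetool cleanup. A minimal Java
sketch of the arithmetic (class name invented):

import java.math.BigInteger;

// Evenly spaced initial_token values for RandomPartitioner,
// whose token range is 0 .. 2^127 - 1.
public class TokenCalc {
    public static void main(String[] args) {
        int nodeCount = Integer.parseInt(args[0]);
        BigInteger range = BigInteger.valueOf(2).pow(127);
        for (int i = 0; i < nodeCount; i++) {
            // token_i = i * 2^127 / N
            BigInteger token = BigInteger.valueOf(i).multiply(range)
                    .divide(BigInteger.valueOf(nodeCount));
            System.out.println("node " + i + ": initial_token = " + token);
        }
    }
}

Running it for N=4 and N=8 shows every N=4 token reappearing in the N=8
output (so doubling assigns tokens only to the new nodes), while N=4 and
N=7 share only token 0.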
I got some new ubuntu servers to add to my cluster and found that the file
system is "fuseblk" which really means NTFS.
All else being equal would I expect to get any performance boost if I
converted the file system to EXT4? Edward Capriolo's "Cookbook" book seems
to suggest so.
Thanks.
Brian T
Out of interest, can you quantify the throughput reduction? It looks like less
than 10%.
Nice to see it roughly correspond to SEDA throughput :)
> Check how many concurrent real requests you have vs size of thread pools.
Did you change the defaults for the hsha settings below?
How many concurre
Hi,
I have the exact same problem with 1.1.6. HintsColumnFamily consists
of one row (Rowkey 00, nothing more). The "problem" started after
upgrading from 1.1.4 to 1.1.6. Every ten minutes
HintedHandoffManager starts and finishes after sending "0 rows".
.vegard,
- Original Message -
> My use case relied on "mangling" the column names for various
Can you provide details on the use case?
CQL 3 is not locked; the more feedback, the better.
> That's not a problem, but I still needed a definitive statement that
> "schema-free tables" (not column families obviously) are now
> impo
Maybe enable debug logging in log4j-server.properties and go through the log
to see what actually happened?
On Tue, Oct 30, 2012 at 7:31 PM, Alain RODRIGUEZ wrote:
> Hi,
>
> I have an issue with counters: yesterday I had a lot of unexplainable
> reads/sec on one server. I finally restarted Cassand
On Tue, Oct 30, 2012 at 11:56 AM, Timmy Turner wrote:
> Does the cell transposition that is necessary for CQL3 happen on the
> server side after the query execution, or is it something that the
> Cassandra/CQL-client does before ultimately handing over the result
> set to the caller?
The former (
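In other words, the pivot happens on the server before the result set is
handed back. A toy sketch of the idea in plain Java (not Cassandra source;
the cell naming is invented for illustration): one internal wide row whose
cells are named "<clustering>:<field>" becomes one CQL3 row per clustering
value.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

// Toy model of CQL3 cell transposition for partition key "user1".
public class TransposeDemo {
    public static void main(String[] args) {
        // internal storage: one wide row, composite-named cells
        SortedMap<String, String> cells = new TreeMap<>();
        cells.put("2012-10-30:body", "hello");
        cells.put("2012-10-30:tag", "greeting");
        cells.put("2012-10-31:body", "bye");
        cells.put("2012-10-31:tag", "farewell");

        // pivot: group cells by their clustering component
        Map<String, Map<String, String>> rows = new LinkedHashMap<>();
        for (Map.Entry<String, String> cell : cells.entrySet()) {
            String[] parts = cell.getKey().split(":", 2);
            rows.computeIfAbsent(parts[0], k -> new LinkedHashMap<>())
                .put(parts[1], cell.getValue());
        }

        // one printed line per CQL3 result row:
        // user1 | 2012-10-30 | {body=hello, tag=greeting} ...
        rows.forEach((ck, cols) ->
                System.out.println("user1 | " + ck + " | " + cols));
    }
}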
1. High availability
2. You can hold much much more data
3. Better performance
4. You can do disaster recovery with live-live datacenters (if you configure
Cassandra accordingly)
On 10/29/12 4:02 PM, "Andrey Ilinykh" wrote:
>This is how cassandra scales. More nodes means better performance.
>
>thank you,
> And
If you want to avoid the opaque type 1 UUIDs, PlayOrm also does a simple
key generator if you do
@NoSqlId
private String id;
It uses the simple hostname (not the MAC address), like a1, a2, a3, which are
the names in our cluster, plus a unique timestamp within the node. We do not
expose the subdomain tha
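For context, a minimal sketch of such an entity (class and field names
invented; the import paths are from memory and may differ across PlayOrm
versions):

import com.alvazan.orm.api.base.anno.NoSqlEntity;
import com.alvazan.orm.api.base.anno.NoSqlId;

// Sketch only: @NoSqlId lets PlayOrm generate the id described above,
// a hostname prefix such as "a1" plus a node-local unique timestamp.
@NoSqlEntity
public class Activity {
    @NoSqlId
    private String id;

    private String payload;

    public String getId() { return id; }
    public String getPayload() { return payload; }
}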
Hi,
We have several Cassandra clusters in our department, each for a single
application.
Now we're considering merging these clusters into a single one, and this
single one will serve
all applications using Cassandra, each with a single keyspace.
We tried unifying all cluster names to the same one, but
Does the cell transposition that is necessary for CQL3 happen on the
server side after the query execution, or is it something that the
Cassandra/CQL-client does before ultimately handing over the result
set to the caller?
We also have 4-disk nodes, and we use the following layout:
2 x OS + Commit in RAID 1
2 x Data disk in RAID 0
This gives us the advantage that we never have to reinstall the node when a
drive crashes.
Kind regards,
Pieter
From: Ran User [mailto:ranuse...@gmail.com]
Sent: Tuesday, 30 October 2012 4:3
Check how many concurrent real requests you have vs size of thread pools.
Regards,
Terje
On 30 Oct 2012, at 13:28, Peter Bailis wrote:
>> I'm using YCSB on EC2 with one m1.large instance to drive client load
>
> To add, I don't believe this is due to YCSB. I've done a fair bit of
> client-sid
> Is your use case covered in the article above?
My use case relied on "mangling" the column names for various
purposes, and CQL3 with its transposed columns does of course still
allow for that, but it means rewriting the part of the application
that dealt with CQL so it can handle the new syntax