Hi,

I tried using n=... instead of duration=..., as one of the users suggested.
Indeed, I now have more data on the disks.

I managed to fill the data disks up to 4%, and then got an error.

Questions:

1)      Maybe you can hint at what that error might mean?

2)       When everything printed during the test appears 3 times, does that mean I 
actually have 3 stress "engines" running, each reporting its own stats?
Or am I seeing reports from 3 actually connected nodes?


My test was as follows:

6 nodes in a ring
RF=2

cassandra-stress user profile=./test.yaml ops\(insert=100, 
get300spartaworriers=1\) n=1000000000000  no-warmup cl=ONE -rate threads=400 
-node node1, node2, node3, node4, node5, node6 -log file=./stress.log


test.yaml:

---
columnspec:
  -
    name: SECURITY_ID
    population: uniform(1..300)
    size: gaussian(10..15)
  -
    name: MARKET_SEGMENT_ID
    population: uniform(50..100)
    size: fixed(10)
  -
    name: EMS_INSTANCE_ID
    population: fixed(10)
    size: fixed(4)
  -
    name: PUB_DATE_ONLY
    population: fixed(1)
  -
    name: LP_DEAL_CODE
    population: fixed(300)
    size: fixed(4)
  -
    name: PUB_TIMESTAMP
    cluster: UNIFORM(1..10000000000)
    size: fixed(10)
  -
    name: PUB_TIME_ONLY
    cluster: UNIFORM(1..10000000000)
    size: fixed(10)
  -
    name: PUB_SEQ
    size: fixed(10)
  -
    name: PUB_TIME_MICROS
    population: UNIFORM(1..100B)
    size: fixed(10)
  -
    name: PAYLOAD_TYPE
    population: uniform(1..5)
    size: fixed(10)
  -
    name: PAYLOAD_SERIALIZED
    population: uniform(1..500)
    size: fixed(256)
  -
    name: EMS_LOG_TIMESTAMP
    population: uniform(10..100000000)
    size: fixed(10)
  -
    name: EMS_LOG_TYPE
    population: uniform(1..5)
    size: fixed(10)
insert:
  batchtype: UNLOGGED
  partitions: fixed(1)
  select: uniform(1..10)/10
keyspace: marketdata_ttl2
keyspace_definition: "CREATE KEYSPACE marketdata_ttl2 with replication = 
{'class':'NetworkTopologyStrategy','NY':2};\n"
queries:
  get300spartaworriers:
    cql: "select PAYLOAD_SERIALIZED,PUB_TIME_MICROS from ems_md_esp_var01_ttl2 
where SECURITY_ID = ? and MARKET_SEGMENT_ID=? and EMS_INSTANCE_ID=? and 
PUB_DATE_ONLY=? and LP_DEAL_CODE=? LIMIT 100"
    fields: samerow
table: ems_md_esp_var01_ttl2
table_definition: |
    CREATE TABLE ems_md_esp_var01_ttl2 (
        SECURITY_ID bigint,
        MARKET_SEGMENT_ID int,
        EMS_INSTANCE_ID int,
        LP_DEAL_CODE ascii,
        PUB_TIME_MICROS bigint,
        PUB_SEQ text,
        PUB_TIMESTAMP text,
        PUB_DATE_ONLY date,
        PUB_TIME_ONLY text,
        PAYLOAD_TYPE int,
        PAYLOAD_SERIALIZED blob,
        EMS_LOG_TIMESTAMP timestamp,
        EMS_LOG_TYPE int,
        PRIMARY KEY ((SECURITY_ID, MARKET_SEGMENT_ID, EMS_INSTANCE_ID, PUB_DATE_ONLY, LP_DEAL_CODE), PUB_TIMESTAMP, PUB_TIME_ONLY)
    ) WITH CLUSTERING ORDER BY (PUB_TIMESTAMP ASC, PUB_TIME_ONLY ASC)
      AND default_time_to_live = 2419200
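
As a sanity check on the disk-fill question, here is a rough upper bound on the number of distinct partition keys this columnspec can generate. This is a sketch based on my reading of the profile, not something I have confirmed: I am assuming population fixed(v) always yields the single value v, while uniform(a..b) can yield b - a + 1 distinct values.

```python
# Upper bound on distinct partition keys implied by the columnspec above.
# Assumption: fixed(v) populations contribute one distinct value;
# uniform(a..b) populations contribute b - a + 1 distinct values.
partition_key_cardinalities = {
    "SECURITY_ID": 300,       # uniform(1..300)
    "MARKET_SEGMENT_ID": 51,  # uniform(50..100)
    "EMS_INSTANCE_ID": 1,     # fixed(10)
    "PUB_DATE_ONLY": 1,       # fixed(1)
    "LP_DEAL_CODE": 1,        # fixed(300)
}

max_partitions = 1
for cardinality in partition_key_cardinalities.values():
    max_partitions *= cardinality

print(max_partitions)  # 15300
```

If that reading is right, every insert lands in one of at most ~15,300 partitions, so new writes mostly extend (or overwrite rows in) existing partitions rather than creating new ones.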


Last lines from the test log:


get300spartaworriers,  22611773,     772,       5,       5,    49.3,     3.2,   
285.8,   746.7,  1961.9,  2576.5,23072.2,  0.00344,      0,      0,       0,    
   0,       0,       0
get300spartaworriers,  22611773,     772,       5,       5,    49.3,     3.2,   
285.8,   746.7,  1961.9,  2576.5,23072.2,  0.00344,      0,      0,       0,    
   0,       0,       0
get300spartaworriers,  22611773,     772,       5,       5,    49.3,     3.2,   
285.8,   746.7,  1961.9,  2576.5,23072.2,  0.00344,      0,      0,       0,    
   0,       0,       0

insert,   2259485753,   78396,   78396,   78396,     4.6,     1.6,     7.5,    
51.8,   508.0,   709.5,23072.2,  0.00344,      0,      0,       0,       0,     
  0,       0
insert,   2259485753,   78396,   78396,   78396,     4.6,     1.6,     7.5,    
51.8,   508.0,   709.5,23072.2,  0.00344,      0,      0,       0,       0,     
  0,       0
insert,   2259485753,   78396,   78396,   78396,     4.6,     1.6,     7.5,    
51.8,   508.0,   709.5,23072.2,  0.00344,      0,      0,       0,       0,     
  0,       0

total,    2282097526,   79166,   78399,   78399,     5.0,     1.6,     7.8,    
61.7,   577.1,  2576.5,23072.2,  0.00344,      0,      0,       0,       0,     
  0,       0
total,    2282097526,   79166,   78399,   78399,     5.0,     1.6,     7.8,    
61.7,   577.1,  2576.5,23072.2,  0.00344,      0,      0,       0,       0,     
  0,       0
total,    2282097526,   79166,   78399,   78399,     5.0,     1.6,     7.8,    
61.7,   577.1,  2576.5,23072.2,  0.00344,      0,      0,       0,       0,     
  0,       0

java.io.IOException: Operation x10 on key(s) 
[38127572172|905017789|1736561174|1639597-07-18|u-?]: Error executing: 
(NoSuchElementException)
                at org.apache.cassandra.stress.Operation.error(Operation.java:138)
                at org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
                at org.apache.cassandra.stress.operations.userdefined.SchemaQuery.run(SchemaQuery.java:156)
                at org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:321)

java.io.IOException: Operation x10 on key(s) 
[38127572172|905017789|1736561174|1639597-07-18|u-?]: Error executing: 
(NoSuchElementException)
                at org.apache.cassandra.stress.Operation.error(Operation.java:138)
                at org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
                at org.apache.cassandra.stress.operations.userdefined.SchemaQuery.run(SchemaQuery.java:156)
                at org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:321)

java.io.IOException: Operation x10 on key(s) 
[38127572172|905017789|1736561174|1639597-07-18|u-?]: Error executing: 
(NoSuchElementException)
                at org.apache.cassandra.stress.Operation.error(Operation.java:138)
                at org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
                at org.apache.cassandra.stress.operations.userdefined.SchemaQuery.run(SchemaQuery.java:156)
                at org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:321)

get300spartaworriers,  22617002,     225,       2,       2,   137.0,     3.5,   
883.6,  2624.1,  3429.8,  5031.7,23095.4,  0.00349,      1,      0,       0,    
   0,       0,       0
get300spartaworriers,  22617002,     225,       2,       2,   137.0,     3.5,   
883.6,  2624.1,  3429.8,  5031.7,23095.4,  0.00349,      1,      0,       0,    
   0,       0,       0
get300spartaworriers,  22617002,     225,       2,       2,   137.0,     3.5,   
883.6,  2624.1,  3429.8,  5031.7,23095.4,  0.00349,      1,      0,       0,    
   0,       0,       0

insert,   2259648876,    7025,    7025,    7025,    10.1,     1.5,     9.2,    
87.5,  2445.5,  2961.1,23095.4,  0.00349,      0,      0,       0,       0,     
  0,       0
insert,   2259648876,    7025,    7025,    7025,    10.1,     1.5,     9.2,    
87.5,  2445.5,  2961.1,23095.4,  0.00349,      0,      0,       0,       0,     
  0,       0
insert,   2259648876,    7025,    7025,    7025,    10.1,     1.5,     9.2,    
87.5,  2445.5,  2961.1,23095.4,  0.00349,      0,      0,       0,       0,     
  0,       0

total,    2282265878,    7249,    7026,    7026,    14.0,     1.5,    11.0,   
221.3,  2447.1,  5031.7,23095.4,  0.00349,      1,      0,       0,       0,    
   0,       0
total,    2282265878,    7249,    7026,    7026,    14.0,     1.5,    11.0,   
221.3,  2447.1,  5031.7,23095.4,  0.00349,      1,      0,       0,       0,    
   0,       0
total,    2282265878,    7249,    7026,    7026,    14.0,     1.5,    11.0,   
221.3,  2447.1,  5031.7,23095.4,  0.00349,      1,      0,       0,       0,    
   0,       0

FAILURE
FAILURE
FAILURE



From: Peter Kovgan
Sent: Wednesday, June 15, 2016 3:25 PM
To: 'user@cassandra.apache.org'
Subject: how to force cassandra-stress to actually generate enough data

Hi,

cassandra-stress is not really helping to populate the disks sufficiently.

I tried several table structures, providing

cluster: UNIFORM(1..10000000000)  on clustering parts of the PK.

The partition part of the PK produces about 660,000 partitions.

The hope was to create enough cells in each row to make the rows really WIDE.

No matter what I tried, and no matter how long it runs, I see at most 2-3 
SSTables per node and at most 300 MB of data per node.

(I have 6 nodes and a very active 400-thread stress run.)

It looks like it is impossible to make the rows really wide and the disks 
really full.
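
My rough arithmetic on why (assuming, as I read the docs, that the per-partition row count is the product of the clustering columns' cluster distributions — I have not confirmed this):

```python
# Sketch: mean rows per partition implied by two clustering columns whose
# cluster distributions are each UNIFORM(1..10000000000).
# Assumption: rows per partition = product of the per-column value counts.
mean_values_per_column = (1 + 10_000_000_000) / 2   # mean of uniform(1..10^10)
mean_rows_per_partition = mean_values_per_column ** 2

print(f"{mean_rows_per_partition:.1e}")  # ~2.5e+19 rows per partition
```

At that scale, even billions of inserts touch only a negligible slice of any single partition, which would explain the small on-disk footprint.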

Is it intentional?

I mean, if there was an intention to avoid really wide rows, why is there no 
hint about this in the docs?

Do you have similar experience, and do you know how to resolve this?

Thanks.

**************************************************************************************************************************************************************
This communication and all or some of the information contained therein may be 
confidential and is subject to our Terms and Conditions. If you have received 
this
communication in error, please destroy all electronic and paper copies and 
notify the sender immediately. Unless specifically indicated, this 
communication is 
not a confirmation, an offer to sell or solicitation of any offer to buy any 
financial product, or an official statement of ICAP or its affiliates. 
Non-Transactable Pricing Terms and Conditions apply to any non-transactable 
pricing provided. All terms and conditions referenced herein available
at www.icapterms.com. Please notify us by reply message if this link does not 
work.
**************************************************************************************************************************************************************
