Hello, I am testing the performance of Cassandra. We write 200k records to the database, each record 1 KB in size. Then we read those 200k records back. The read takes more than 400 s to finish, which is much slower than MySQL (around 20 s). I read some discussion online and someone suggested making multiple connections to speed it up, but I am not sure how to do that. Do I need to change my storage configuration file, or just the Java client code?
Here is my read code:

Properties info = new Properties();
info.put(DriverManager.CONSISTENCY_LEVEL, ConsistencyLevel.ONE.toString());
IConnection connection = DriverManager.getConnection(
        "thrift://localhost:9160", info);

// 2. Get a KeySpace by name
IKeySpace keySpace = connection.getKeySpace("Keyspace1");

// 3. Get a ColumnFamily by name
IColumnFamily cf = keySpace.getColumnFamily("Standard2");

ByteArray nameFirst = ByteArray.ofASCII("first");
ICriteria criteria = cf.createCriteria();
long readBytes = 0;
long start = System.currentTimeMillis();
for (int i = 0; i < numOfRecords; i++) {
    int n = random.nextInt(numOfRecords);
    userName = keySet[n];
    criteria.keyList(Lists.newArrayList(userName)).columnRange(nameFirst, nameFirst, 10);
    Map<String, List<IColumn>> map = criteria.select();
    List<IColumn> list = map.get(userName);
    ByteArray bloc = list.get(0).getValue();
    byte[] byteArrayloc = bloc.toByteArray();
    loc = new String(byteArrayloc);
    // System.out.println(userName + " " + loc);
    readBytes = readBytes + loc.length();
}
long finish = System.currentTimeMillis();

I once tried commenting out these lines:

ByteArray bloc = list.get(0).getValue();
byte[] byteArrayloc = bloc.toByteArray();
loc = new String(byteArrayloc);
// System.out.println(userName + " " + loc);
readBytes = readBytes + loc.length();

and the performance didn't improve much. Any suggestions are welcome. Thanks.
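For what it's worth, here is a rough sketch of what I imagine the multi-connection version would look like on the client side: each worker thread opens its own connection and issues its own share of the random reads. The thread count (NUM_THREADS), the one-connection-per-thread layout, and the readAll method name are just my assumptions; the client calls themselves are the same ones used in my code above, and the client imports (DriverManager, IConnection, IKeySpace, IColumnFamily, ICriteria, IColumn, ByteArray, ConsistencyLevel, Lists) are the same as in that code.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.Random;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelReadTest {

    private static final int NUM_THREADS = 8; // my guess; would need tuning

    // keySet is assumed to be populated the same way as in my current test
    public static long readAll(final String[] keySet) throws Exception {
        final int numOfRecords = keySet.length;
        final ByteArray nameFirst = ByteArray.ofASCII("first");

        ExecutorService pool = Executors.newFixedThreadPool(NUM_THREADS);
        List<Future<Long>> results = new ArrayList<Future<Long>>();

        long start = System.currentTimeMillis();
        for (int t = 0; t < NUM_THREADS; t++) {
            results.add(pool.submit(new Callable<Long>() {
                public Long call() throws Exception {
                    // one connection per thread instead of a single shared connection
                    Properties info = new Properties();
                    info.put(DriverManager.CONSISTENCY_LEVEL,
                             ConsistencyLevel.ONE.toString());
                    IConnection connection = DriverManager.getConnection(
                            "thrift://localhost:9160", info);
                    IKeySpace keySpace = connection.getKeySpace("Keyspace1");
                    IColumnFamily cf = keySpace.getColumnFamily("Standard2");
                    ICriteria criteria = cf.createCriteria();

                    // each thread performs its share of the random reads
                    Random random = new Random();
                    long readBytes = 0;
                    for (int i = 0; i < numOfRecords / NUM_THREADS; i++) {
                        String userName = keySet[random.nextInt(numOfRecords)];
                        criteria.keyList(Lists.newArrayList(userName))
                                .columnRange(nameFirst, nameFirst, 10);
                        Map<String, List<IColumn>> map = criteria.select();
                        readBytes += map.get(userName).get(0).getValue().toByteArray().length;
                    }
                    return readBytes;
                }
            }));
        }

        long totalBytes = 0;
        for (Future<Long> f : results) {
            totalBytes += f.get(); // wait for every thread and sum the bytes read
        }
        long finish = System.currentTimeMillis();
        System.out.println("read " + totalBytes + " bytes in " + (finish - start) + " ms");
        pool.shutdown();
        return totalBytes;
    }
}

Would something along these lines be the right direction, or does it also require changes on the server side?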