Hi Sylvain,

thanks for the fast answer. I have updated the keyspace definition and cassandra-topology.properties on all 3 nodes and restarted each node. Both problems are still reproducible: I'm not able to read my own writes, and the selects show the same data as in my previous email.
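
As an extra check that the keyspace definition was really fixed (per CASSANDRA-5292), I believe the DC name as actually stored in the schema can be inspected from cqlsh with something like this (assuming the 1.2 system tables):

    SELECT keyspace_name, strategy_class, strategy_options
    FROM system.schema_keyspaces;

If strategy_options still shows dc-toronto instead of DC-TORONTO, the definition doesn't match the snitch.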

For write and read I'm using:

private static final String WRITE_STATEMENT =
        "INSERT INTO avatars (id, image_type, avatar) VALUES (?,?,?);";
private static final String READ_STATEMENT =
        "SELECT avatar, image_type FROM avatars WHERE id=?";

I'm using the java-driver (1.0.0-beta1) with prepared statements and synchronous calls.
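
(The cassandraSession helper is not shown below; for completeness, a minimal sketch of how the Session is obtained, assuming a plain Cluster with one contact point, per the 1.0.0-beta1 API:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    Cluster cluster = Cluster.builder()
            .addContactPoint("10.11.1.108")  // any of the three nodes
            .build();
    Session session = cluster.connect("backend");

The actual helper presumably wraps something equivalent behind getSession()/releaseSession().)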

Write snippet:

Session session;
try {
    session = cassandraSession.getSession();
    BoundStatement stmt = session.prepare(WRITE_STATEMENT)
            .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM)
            .bind();
    stmt.enableTracing();
    stmt.setLong("id", accountId);
    stmt.setString("image_type", image.getType());
    stmt.setBytes("avatar", ByteBuffer.wrap(image.getBytes()));
    ResultSet result = session.execute(stmt);
    LOG.info("UPLOAD COORDINATOR: {}", result.getQueryTrace()
            .getCoordinator().getCanonicalHostName());
} catch (NoHostAvailableException e) {
    LOG.error("Could not prepare the statement.", e);
    throw new StorageUnavailableException(e);
} finally {
    cassandraSession.releaseSession();
}
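
Side note: the snippet above re-prepares the statement on every call. I don't think that explains the lost reads, but preparing once and re-binding per request is the usual pattern; a minimal sketch, assuming the prepared statement can live in a field:

    // prepared once, e.g. at startup
    private final PreparedStatement writeStatement = session.prepare(WRITE_STATEMENT)
            .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);

    // per request: only bind and execute
    BoundStatement stmt = writeStatement.bind();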

Read snippet:

Session session = null;
byte[] imageBytes = null;
String imageType = "png";
try {
    session = cassandraSession.getSession();
    BoundStatement stmt = session.prepare(READ_STATEMENT)
            .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM)
            .bind();
    stmt.setLong("id", accountId);
    ResultSet result = session.execute(stmt);
    Iterator<Row> it = result.iterator();
    ByteBuffer avatar = null;

    while (it.hasNext()) {
        Row row = it.next();
        avatar = row.getBytes("avatar");
        imageType = row.getString("image_type");
    }
    if (avatar == null) {
        throw new AvatarNotFoundException("Avatar hasn't been found");
    }
    int length = avatar.remaining();
    imageBytes = new byte[length];
    avatar.get(imageBytes, 0, length);
} catch (NoHostAvailableException e) {
    LOG.error("Could not prepare the statement.", e);
    throw new StorageUnavailableException(e);
} finally {
    cassandraSession.releaseSession();
}
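
To narrow down whether any replica has the missing rows at all, the same read can be re-run at ConsistencyLevel.ALL with tracing enabled; this is just a diagnostic sketch, only the consistency level changes:

    BoundStatement stmt = session.prepare(READ_STATEMENT)
            .setConsistencyLevel(ConsistencyLevel.ALL)  // every replica must answer
            .bind();
    stmt.enableTracing();
    stmt.setLong("id", accountId);
    ResultSet result = session.execute(stmt);
    LOG.info("READ COORDINATOR: {}", result.getQueryTrace()
            .getCoordinator().getCanonicalHostName());

If rows show up at ALL but not at LOCAL_QUORUM, the replicas are simply out of sync.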

Let me know what other information is needed.

Thanks,
Gabi


On 3/5/13 12:52 PM, Sylvain Lebresne wrote:
Without looking too closely into the details, I'd say you're probably hitting https://issues.apache.org/jira/browse/CASSANDRA-5292 (since you use NTS + PropertyFileSnitch + a DC name in caps).

Long story short, CREATE KEYSPACE interprets your DC-TORONTO as dc-toronto, which then probably doesn't match what you have in your property file. This will be fixed in 1.2.3. In the meantime, a workaround is to use the cassandra-cli to create/update your keyspace definition.
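
Something along these lines should do it (written from memory, so the exact quoting of the DC name in cassandra-cli may differ):

    update keyspace backend
      with placement_strategy = 'NetworkTopologyStrategy'
      and strategy_options = {'DC-TORONTO': 3};

The cli does not down-case the option keys, so the DC name will match the property file.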

--
Sylvain


On Tue, Mar 5, 2013 at 11:24 AM, Gabriel Ciuloaica <gciuloa...@gmail.com> wrote:

    Hello,

    I'm trying to find out what the problem is and where it is located.
    I have a 3-node Cassandra cluster (1.2.1), RF=3.
    I have a keyspace and a cf defined as follows (using PropertyFileSnitch):

    CREATE KEYSPACE backend WITH replication = {
      'class': 'NetworkTopologyStrategy',
      'DC-TORONTO': '3'
    };

    USE backend;

    CREATE TABLE avatars (
      id bigint PRIMARY KEY,
      avatar blob,
      image_type text
    ) WITH
      bloom_filter_fp_chance=0.010000 AND
      caching='KEYS_ONLY' AND
      comment='' AND
      dclocal_read_repair_chance=0.000000 AND
      gc_grace_seconds=864000 AND
      read_repair_chance=0.100000 AND
      replicate_on_write='true' AND
      compaction={'class': 'SizeTieredCompactionStrategy'} AND
      compression={'sstable_compression': 'SnappyCompressor'};

    Status of the cluster:
    Datacenter: DC-TORONTO
    ======================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address      Load      Tokens  Owns   Host ID                               Rack
    UN  10.11.1.109  44.98 MB  256     46.8%  726689df-edc3-49a0-b680-370953994a8c  RAC2
    UN  10.11.1.200  6.57 MB   64      10.3%  d6d700d4-28aa-4722-b215-a6a7d304b8e7  RAC3
    UN  10.11.1.108  54.32 MB  256     42.8%  73cd86a9-4efb-4407-9fe8-9a1b3a277af7  RAC1

    I'm trying to read my writes using CQL (datastax-java-driver),
    with LOCAL_QUORUM for both reads and writes. For some reason, some of
    the writes are lost. Not sure if it is a driver issue or a cassandra
    issue.
    Digging further, using the cqlsh client (1.2.1), I found a strange
    situation:

    select count(*) from avatars;

     count
    -------
       226

    select id from avatars;

     id
    ---------
         314
         396
          19
     .........    ->  77 rows in result

    select id, image_type from avatars;

     id      | image_type
    ---------+------------
         332 |        png
         314 |        png
         396 |       jpeg
          19 |        png
     1250014 |       jpeg
    ........ -> 226 rows in result.

    I do not understand why the second select retrieves only part of
    the rows and not all of them.

    Not sure if this is related or not to the initial problem.

    Any help is really appreciated.
    Thanks,
    Gabi





