You have to wait for schema agreement, which most drivers should do by default; at minimum they expose a check-schema-agreement method you can call yourself.
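That agreement check amounts to a polling loop over the schema versions each node reports. A minimal sketch of the idea, with a hypothetical `fetch_versions()` callable standing in for the driver call that reads one schema-version string per node (not a real driver API):

```python
import time

def wait_for_schema_agreement(fetch_versions, timeout=10.0, interval=0.2):
    """Poll until every node reports the same schema version, or time out.

    `fetch_versions` is a hypothetical callable returning one schema-version
    string per node; a real implementation would query the driver's cluster
    metadata instead.
    """
    deadline = time.monotonic() + timeout
    while True:
        if len(set(fetch_versions())) == 1:
            return True  # all nodes report the same version
        if time.monotonic() >= deadline:
            return False  # disagreement persisted past the timeout
        time.sleep(interval)
```

The real drivers do the equivalent of this internally after DDL statements, which is why the links below are the right place to look.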
https://datastax.github.io/java-driver/2.1.9/features/metadata/

The new cqlsh uses the Python driver, so the same should apply:
https://datastax.github.io/python-driver/api/cassandra/cluster.html

Also check 'nodetool describecluster' to confirm that all nodes have the same schema version.

Note: this will not help you in the concurrency / multiple-writers scenario.

all the best,

Sebastián

On Jan 23, 2016 7:29 PM, "Kevin Burton" <bur...@spinn3r.com> wrote:

> Once the CREATE TABLE returns in cqlsh (or programmatically), is it safe
> to assume it's on all nodes at that point?
>
> If not, I'll have to put in even more logic to handle this case.
>
> On Fri, Jan 22, 2016 at 9:22 PM, Jack Krupansky <jack.krupan...@gmail.com>
> wrote:
>
>> I recall that there was some discussion last year about this issue of
>> how risky it is to do an automated CREATE TABLE IF NOT EXISTS, due to
>> the unpredictable amount of time it takes for the table creation to
>> fully propagate around the full cluster. I think it was recognized as a
>> real problem, but without an immediate solution, so the recommended
>> practice for now is to perform the operation only manually (sure, it
>> can be scripted, but only under manual control) to ensure that the
>> operation completes and that only one attempt is made to create the
>> table. I don't recall whether a specific Jira was assigned, and the
>> antipattern doc doesn't appear to reference this scenario. Maybe a
>> committer can shed some more light.
>>
>> -- Jack Krupansky
>>
>> On Fri, Jan 22, 2016 at 10:29 PM, Kevin Burton <bur...@spinn3r.com>
>> wrote:
>>
>>> I sort of agree... but we are also considering migrating to hourly
>>> tables, and what if the single script doesn't run?
>>>
>>> I like having N nodes make changes like this because, in my
>>> experience, that central / single box will usually fail at the wrong
>>> time :-/
>>>
>>> On Fri, Jan 22, 2016 at 6:47 PM, Jonathan Haddad <j...@jonhaddad.com>
>>> wrote:
>>>
>>>> Instead of using ZK, why not solve your concurrency problem by
>>>> removing it? By that, I mean simply have one process that creates all
>>>> your tables instead of intentionally creating a race condition.
>>>>
>>>> On Fri, Jan 22, 2016 at 6:16 PM Kevin Burton <bur...@spinn3r.com>
>>>> wrote:
>>>>
>>>>> Not sure if this is a bug or kind of a *fuzzy* area.
>>>>>
>>>>> In 2.0 this worked fine.
>>>>>
>>>>> We have a bunch of automated scripts that go through and create
>>>>> tables... one per day.
>>>>>
>>>>> At midnight UTC our entire CQL went offline... took down our whole
>>>>> app. ;-/
>>>>>
>>>>> The resolution was a full CQL shutdown and then a DROP TABLE to
>>>>> remove the bad tables...
>>>>>
>>>>> Pretty sure the issue was schema disagreement.
>>>>>
>>>>> All our CREATE TABLE statements use IF NOT EXISTS... but I think the
>>>>> IF NOT EXISTS only checks locally?
>>>>>
>>>>> My workaround is going to be to use ZooKeeper to create a mutex lock
>>>>> during this operation.
>>>>>
>>>>> Any other things I should avoid?
>>>>>
>>>>> --
>>>>>
>>>>> We’re hiring if you know of any awesome Java Devops or Linux
>>>>> Operations Engineers!
>>>>>
>>>>> Founder/CEO Spinn3r.com
>>>>> Location: *San Francisco, CA*
>>>>> blog: http://burtonator.wordpress.com
>>>>> … or check out my Google+ profile
>>>>> <https://plus.google.com/102718274791889610666/posts>
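The mutex workaround Kevin describes reduces to a small pattern: take a lock, check-then-create, and hold the lock until schema agreement so no peer races the DDL. A generic sketch under stated assumptions: every name here (`lock`, `table_exists`, `create_table`, `wait_for_agreement`) is a hypothetical injected dependency, where in production `lock` would be a distributed lock (e.g. a ZooKeeper lock recipe) and the other callables would talk to Cassandra:

```python
import threading

def create_table_once(lock, table_exists, create_table, wait_for_agreement):
    """Guard DDL with a mutex so only one writer ever issues the CREATE.

    All four arguments are placeholders: `lock` is any context-manager lock
    (a distributed lock in production), `table_exists`/`create_table` would
    hit Cassandra, and `wait_for_agreement` would block until all nodes
    report one schema version.
    """
    with lock:
        if not table_exists():
            create_table()
        # Hold the lock through agreement so no other writer starts its own
        # CREATE while the schema change is still propagating.
        wait_for_agreement()
```

With a plain `threading.Lock` and eight concurrent callers against fake callables, exactly one `create_table()` fires; the same shape applies when the lock spans processes instead of threads.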