Re: How to remove huge files with all expired data sooner?

2015-09-28 Thread Erick Ramirez
Hello,

You should never run `nodetool compact`, since this will result in a massive
SSTable that will almost never get compacted again, or will take a very long
time to do so.

You are correct that there need to be 4 similar-sized SSTables for them to
get compacted. If you want the expired data to be deleted more quickly, try
lowering the STCS `min_threshold` to 3 or even 2. Good luck!
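For example, lowering it to 2 might look like this (a sketch, assuming a
keyspace `ks` and table `tbl`; substitute your own names):

    ALTER TABLE ks.tbl
    WITH compaction = {'class': 'SizeTieredCompactionStrategy',
                       'min_threshold': '2'};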

Cheers,
Erick


On Sat, Sep 26, 2015 at 4:40 AM, Dongfeng Lu  wrote:

> Hi, I have a table where I set the TTL to only 7 days for all records, and
> we keep pumping records in every day. In general, I would expect all data
> files for that table to have timestamps less than, say, 8 or 9 days old,
> giving the system some time to work its magic. However, I occasionally see
> some files more than 9 days old. Last Friday, I saw 4 large files, each
> about 10G in size, with timestamps about 5, 4, 3, and 2 weeks old.
> Interestingly, they were all gone this Monday, leaving 1 new file 9 GB in
> size.
>
> The compaction strategy is SizeTieredCompactionStrategy, and I can
> understand why the above happened. It seems we get 10G of data every week,
> and as SizeTieredCompactionStrategy works to create various tiers, it just
> happens that the file size for the next tier is 10G, so all the data is
> packed into this huge file. Then the next cycle starts. Another week goes
> by, and another 10G file is created. This process continues until the
> minimum number of files of the same size is reached, which I think is 4 by
> default. Then it starts to compact this set of 4 10G files. At this time,
> all data in these 4 files has expired, so we end up with nothing, or a much
> smaller file if there are still some records with TTL left.
>
> I have many tables like this, and I'd like to reclaim that space sooner.
> What would be the best way to do it? Should I run "nodetool compact" when I
> see two large files that are 2 weeks old? Are there configuration
> parameters I can tune to achieve the same effect? I looked through all the
> CQL compaction subproperties for STCS, but I am not sure how they can help
> here. Any suggestion is welcome.
>
> BTW, I am using Cassandra 2.0.6.
>


Re: Running Cassandra on Java 8 u60..

2015-09-28 Thread Nathan Bijnens
We are running OpenJDK 7 with G1GC and have encountered no issues so far. We
took the tuning parameters from the Cassandra 3.0 branch.

Kind regards,
  Nathan

On Mon, Sep 28, 2015 at 6:25 AM Kevin Burton  wrote:

> Possibly for existing apps… we’re running G1 for everything except
> Elasticsearch and Cassandra and are pretty happy with it.
>
> On Sun, Sep 27, 2015 at 10:28 AM, Graham Sanderson 
> wrote:
>
>> IMHO G1 is still buggy on JDK8 (based solely on being subscribed to the
>> gc-dev mailing list)… I think JDK9 will be the one.
>>
>> On Sep 25, 2015, at 7:14 PM, Stefano Ortolani  wrote:
>>
>> I think those were referring to Java7 and G1GC (early versions were
>> buggy).
>>
>> Cheers,
>> Stefano
>>
>>
>> On Fri, Sep 25, 2015 at 5:08 PM, Kevin Burton  wrote:
>>
>>> Any issues with running Cassandra 2.0.16 on Java 8? I remember there is
>>> long-standing advice about not changing the GC, but none about the
>>> underlying version of Java.
>>>
>>> Thoughts?
>>>
>>> --
>>>
>>> We’re hiring if you know of any awesome Java Devops or Linux Operations
>>> Engineers!
>>>
>>> Founder/CEO Spinn3r.com 
>>> Location: *San Francisco, CA*
>>> blog: http://burtonator.wordpress.com
>>> … or check out my Google+ profile
>>> 
>>>
>>>
>>>
>>
>>
>
>
> --
>
> Founder/CEO Spinn3r.com
> Location: *San Francisco, CA*
> blog: http://burtonator.wordpress.com
> … or check out my Google+ profile
> 
>
>


Re: Does Cassandra 2.2.1 works with Java 7?

2015-09-28 Thread Paulo Motta
Yes, the target version for 2.2 is 1.7.

2015-09-28 2:23 GMT-04:00 Lu, Boying :

> Hi, All,
>
>
>
> The latest stable release of Cassandra is 2.2.1, and I notice the following
> line in the “Requirements” section of the README.asc that comes with the
> source code:
>
> Java >=1.7 (OpenJDK and Oracle JVMS have been tested)
>
>
>
> Does this mean that Cassandra 2.2.1 (binary release) can work with Java 7?
>
>
>
> Thanks
>
>
> Boying
>


DC's versions compatibility

2015-09-28 Thread Carlos Alonso
Hi guys.

I have a very old Cassandra cluster (1.2.19) and I'm looking to add a new
datacenter to it, for analytics purposes, running a newer version, let's say
2.1.8. Will those DCs communicate properly?

Regards

Carlos Alonso | Software Engineer | @calonso 


Re: How to remove huge files with all expired data sooner?

2015-09-28 Thread Ken Hancock
On Mon, Sep 28, 2015 at 2:59 AM, Erick Ramirez  wrote:

> I have many tables like this, and I'd like to reclaim that space sooner.
> What would be the best way to do it? Should I run "nodetool compact" when I
> see two large files that are 2 weeks old? Are there configuration
> parameters I can tune to achieve the same effect? I looked through all the
> CQL compaction subproperties for STCS, but I am not sure how they can help
> here. Any suggestion is welcome.


You can use the JMX operation forceTableCompaction on the
org.apache.cassandra.db:type=StorageService MBean to compact a single table.

Last time this came up, Robert Coli also indicated he thought nodetool
cleanup would trigger the same thing, but I never got a chance to confirm
that as I'd already done something with forceTableCompaction.  If you have
the data and try a cleanup, please report back your findings.


Re: DC's versions compatibility

2015-09-28 Thread Jonathan Haddad
No, they won't.  Always run the same version across your cluster.

On Mon, Sep 28, 2015 at 5:29 AM Carlos Alonso  wrote:

> Hi guys.
>
> I have a very old Cassandra cluster (1.2.19) and I'm looking to add a new
> datacenter to it, for analytics purposes, running a newer version, let's say
> 2.1.8. Will those DCs communicate properly?
>
> Regards
>
> Carlos Alonso | Software Engineer | @calonso 
>


Re: Running Cassandra on Java 8 u60..

2015-09-28 Thread Jonathan Haddad
There are plenty of people running huge clusters on G1.

On Mon, Sep 28, 2015 at 12:30 AM Nathan Bijnens  wrote:

> We are running OpenJDK 7 with G1GC and have encountered no issues so far.
> We took the tuning parameters from the Cassandra 3.0 branch.
>
> Kind regards,
>   Nathan
>
> On Mon, Sep 28, 2015 at 6:25 AM Kevin Burton  wrote:
>
>> Possibly for existing apps… we’re running G1 for everything except
>> Elasticsearch and Cassandra and are pretty happy with it.
>>
>> On Sun, Sep 27, 2015 at 10:28 AM, Graham Sanderson 
>> wrote:
>>
>>> IMHO G1 is still buggy on JDK8 (based solely on being subscribed to the
>>> gc-dev mailing list)… I think JDK9 will be the one.
>>>
>>> On Sep 25, 2015, at 7:14 PM, Stefano Ortolani 
>>> wrote:
>>>
>>> I think those were referring to Java7 and G1GC (early versions were
>>> buggy).
>>>
>>> Cheers,
>>> Stefano
>>>
>>>
>>> On Fri, Sep 25, 2015 at 5:08 PM, Kevin Burton 
>>> wrote:
>>>
 Any issues with running Cassandra 2.0.16 on Java 8? I remember there is
 long-standing advice about not changing the GC, but none about the
 underlying version of Java.

 Thoughts?

 --

 We’re hiring if you know of any awesome Java Devops or Linux Operations
 Engineers!

 Founder/CEO Spinn3r.com 
 Location: *San Francisco, CA*
 blog: http://burtonator.wordpress.com
 … or check out my Google+ profile
 



>>>
>>>
>>
>>
>> --
>>
>> Founder/CEO Spinn3r.com
>> Location: *San Francisco, CA*
>> blog: http://burtonator.wordpress.com
>> … or check out my Google+ profile
>> 
>>
>>


Re: How to remove huge files with all expired data sooner?

2015-09-28 Thread Jeff Jirsa
There’s a seldom-discussed parameter called:

unchecked_tombstone_compaction

The documentation describes the option as follows:

True enables more aggressive than normal tombstone compactions. A single 
SSTable tombstone compaction runs without checking the likelihood of success. 
Cassandra 2.0.9 and later.

You’d need to upgrade to 2.0.9 or newer, but by doing so and enabling
unchecked_tombstone_compaction, you could encourage Cassandra to compact just
one single large SSTable to purge tombstones.
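For example, a minimal sketch of enabling it on an STCS table (assuming a
keyspace `ks` and table `tbl`; substitute your own names):

    ALTER TABLE ks.tbl
    WITH compaction = {'class': 'SizeTieredCompactionStrategy',
                       'unchecked_tombstone_compaction': 'true'};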



From:   on behalf of Erick Ramirez
Reply-To:  "user@cassandra.apache.org"
Date:  Sunday, September 27, 2015 at 11:59 PM
To:  "user@cassandra.apache.org", Dongfeng Lu
Subject:  Re: How to remove huge files with all expired data sooner?

Hello,

You should never run `nodetool compact`, since this will result in a massive
SSTable that will almost never get compacted again, or will take a very long
time to do so.

You are correct that there need to be 4 similar-sized SSTables for them to get
compacted. If you want the expired data to be deleted more quickly, try
lowering the STCS `min_threshold` to 3 or even 2. Good luck!

Cheers,
Erick


On Sat, Sep 26, 2015 at 4:40 AM, Dongfeng Lu  wrote:
Hi, I have a table where I set the TTL to only 7 days for all records, and we
keep pumping records in every day. In general, I would expect all data files
for that table to have timestamps less than, say, 8 or 9 days old, giving the
system some time to work its magic. However, I occasionally see some files
more than 9 days old. Last Friday, I saw 4 large files, each about 10G in
size, with timestamps about 5, 4, 3, and 2 weeks old. Interestingly, they were
all gone this Monday, leaving 1 new file 9 GB in size.

The compaction strategy is SizeTieredCompactionStrategy, and I can understand
why the above happened. It seems we get 10G of data every week, and as
SizeTieredCompactionStrategy works to create various tiers, it just happens
that the file size for the next tier is 10G, so all the data is packed into
this huge file. Then the next cycle starts. Another week goes by, and another
10G file is created. This process continues until the minimum number of files
of the same size is reached, which I think is 4 by default. Then it starts to
compact this set of 4 10G files. At this time, all data in these 4 files has
expired, so we end up with nothing, or a much smaller file if there are still
some records with TTL left.

I have many tables like this, and I'd like to reclaim that space sooner. What
would be the best way to do it? Should I run "nodetool compact" when I see two
large files that are 2 weeks old? Are there configuration parameters I can
tune to achieve the same effect? I looked through all the CQL compaction
subproperties for STCS, but I am not sure how they can help here. Any
suggestion is welcome.

BTW, I am using Cassandra 2.0.6.






Re: How to remove huge files with all expired data sooner?

2015-09-28 Thread Dongfeng Lu
Thanks, Erick, Ken, and Jeff.

Erick,

I thought about min_threshold. The documentation says it "Sets the minimum
number of SSTables to trigger a minor compaction." I thought removing those
large files would be considered a major compaction, so this parameter may not
help. Am I wrong?

I also wondered what side effects lowering the min_threshold value may have.
Will there be more compactions? I understand it is sometimes a balance between
having multiple small compactions or a single big compaction.

About your comment "never run nodetool compact": isn't that what Cassandra
does when it finally compacts those 4 files? I don't really see the difference
between what Cassandra does programmatically and me running it once every two
weeks to reclaim the disk space.

Ken,

Interesting way to do it. I will think about it.

Jeff,

That would be an ideal solution. Actually I am planning to migrate to the 
latest 2.1 version, and hopefully it will be solved then.

Thanks again, everyone, for your responses.

Dongfeng 


On Monday, September 28, 2015 10:36 AM, Jeff Jirsa wrote:

There’s a seldom-discussed parameter called:

unchecked_tombstone_compaction

The documentation describes the option as follows:

True enables more aggressive than normal tombstone compactions. A single
SSTable tombstone compaction runs without checking the likelihood of success.
Cassandra 2.0.9 and later.

You’d need to upgrade to 2.0.9 or newer, but by doing so and enabling
unchecked_tombstone_compaction, you could encourage Cassandra to compact just
one single large SSTable to purge tombstones.


From:   on behalf of Erick Ramirez
Reply-To:  "user@cassandra.apache.org"
Date:  Sunday, September 27, 2015 at 11:59 PM
To:  "user@cassandra.apache.org", Dongfeng Lu
Subject:  Re: How to remove huge files with all expired data sooner?

Hello,

You should never run `nodetool compact`, since this will result in a massive
SSTable that will almost never get compacted again, or will take a very long
time to do so.

You are correct that there need to be 4 similar-sized SSTables for them to get
compacted. If you want the expired data to be deleted more quickly, try
lowering the STCS `min_threshold` to 3 or even 2. Good luck!

Cheers,
Erick

On Sat, Sep 26, 2015 at 4:40 AM, Dongfeng Lu  wrote:

Hi, I have a table where I set the TTL to only 7 days for all records, and we
keep pumping records in every day. In general, I would expect all data files
for that table to have timestamps less than, say, 8 or 9 days old, giving the
system some time to work its magic. However, I occasionally see some files
more than 9 days old. Last Friday, I saw 4 large files, each about 10G in
size, with timestamps about 5, 4, 3, and 2 weeks old. Interestingly, they were
all gone this Monday, leaving 1 new file 9 GB in size.

The compaction strategy is SizeTieredCompactionStrategy, and I can understand
why the above happened. It seems we get 10G of data every week, and as
SizeTieredCompactionStrategy works to create various tiers, it just happens
that the file size for the next tier is 10G, so all the data is packed into
this huge file. Then the next cycle starts. Another week goes by, and another
10G file is created. This process continues until the minimum number of files
of the same size is reached, which I think is 4 by default. Then it starts to
compact this set of 4 10G files. At this time, all data in these 4 files has
expired, so we end up with nothing, or a much smaller file if there are still
some records with TTL left.

I have many tables like this, and I'd like to reclaim that space sooner. What
would be the best way to do it? Should I run "nodetool compact" when I see two
large files that are 2 weeks old? Are there configuration parameters I can
tune to achieve the same effect? I looked through all the CQL compaction
subproperties for STCS, but I am not sure how they can help here. Any
suggestion is welcome.

BTW, I am using Cassandra 2.0.6.





Re: memory usage problem of Metadata.tokenMap.tokenToHost

2015-09-28 Thread Alex Popescu
Besides the others' advice that 2000+ keyspaces might be too much, the
latest Java driver (2.0.11) includes an option to disable the Metadata API
(see http://www.datastax.com/dev/blog/datastax-java-driver-2-0-11-released).
I'm not sure at this moment whether this has been merged into 2.1 already.

On Sun, Sep 20, 2015 at 9:22 AM, joseph gao  wrote:

> cassandra: 2.1.7
> java driver: datastax java driver 2.1.6
>
> Here is the problem:
> My application uses 2000+ keyspaces and dynamically creates keyspaces and
> tables. In the Java client, Metadata.tokenMap.tokenToHost then uses about
> 1 GB of memory, which causes a lot of full GCs.
> As far as I can see, the key of tokenToHost is the keyspace, and the value
> is a tokenId-to-replica-nodes map.
>
> While trying to solve this problem, I noticed something I am not sure
> about: all keyspaces have the same 'tokenId_to_replicateNodes' map. The
> replication strategy of all my keyspaces is SimpleStrategy with a
> replication factor of 3.
>
> So would it be possible, when keyspaces use the same strategy, for the
> tokenToHost values to share a single map? That would greatly reduce the
> memory usage.
>
> Thanks a lot
>
> --
> --
> Joseph Gao
> PhoneNum:15210513582
> QQ: 409343351
>



-- 
Bests,

Alex Popescu | @al3xandru
Sen. Product Manager @ DataStax





Re: How to remove huge files with all expired data sooner?

2015-09-28 Thread Robert Coli
On Sun, Sep 27, 2015 at 11:59 PM, Erick Ramirez 
wrote:

> You should never run `nodetool compact`, since this will result in a
> massive SSTable that will almost never get compacted again, or will take a
> very long time to do so.
>

Respectfully disagree. There are various cases where nodetool compact will
result in a small SSTable.

There are other cases where one might wish to major compact and then stop
the node and run sstablesplit.

I agree that in modern Cassandra, if one has not made an error, one should
rarely wish to run nodetool compact, but "never" is too strong.

=Rob


Re: DC's versions compatibility

2015-09-28 Thread Robert Coli
On Mon, Sep 28, 2015 at 5:29 AM, Carlos Alonso  wrote:

> I have a very old Cassandra cluster (1.2.19) and I'm looking to add a new
> datacenter to it, for analytics purposes, running a newer version, let's say
> 2.1.8. Will those DCs communicate properly?
>

As Jonathan suggests:

1) Upgrade your existing cluster to 2.1.X
2) Then add an additional DC for analytics

=Rob


INSERT JSON TimeStamp

2015-09-28 Thread Ashish Soni
Can anyone help with the below? I am getting an error.

effectiveStartDate and effectiveEndDate are of type timestamp.

INSERT INTO model.RuleSetSchedule JSON '{
    "ruleSetName": "BOSTONRATES",
    "ruleSetId": "829aa84a-4bba-411f-a4fb-38167a987cda",
    "scheduleId": 1,
    "effectiveStartDate": "01/01/2015",
    "effectiveEndDate": "12/31/2015",
    "rules": {
        "1": {
            "condition": "BoardStation",
            "action": "FareAmount=9.25",
            "ruleOrder": "1"
        }
    }
}';

message="Error decoding JSON value for effectivestartdate: Unable to coerce
'01/01/2015' to a formatted date (long)"


Re: INSERT JSON TimeStamp

2015-09-28 Thread Russell Bradberry
That is not a valid date in CQL, and JSON does not enforce a specific date 
format.  A correctly formatted date would look something like “2015-01-01 
00:00:00”. 
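For instance, a sketch of the failing statement with the two dates rewritten
in that format (assuming the intent was to cover calendar year 2015; adjust
as needed):

    INSERT INTO model.RuleSetSchedule JSON '{
        "ruleSetName": "BOSTONRATES",
        "ruleSetId": "829aa84a-4bba-411f-a4fb-38167a987cda",
        "scheduleId": 1,
        "effectiveStartDate": "2015-01-01 00:00:00",
        "effectiveEndDate": "2015-12-31 00:00:00",
        "rules": {
            "1": {
                "condition": "BoardStation",
                "action": "FareAmount=9.25",
                "ruleOrder": "1"
            }
        }
    }';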

From:  Ashish Soni
Reply-To:  
Date:  Monday, September 28, 2015 at 3:51 PM
To:  
Subject:  INSERT JSON TimeStamp

Can anyone help with the below? I am getting an error.

effectiveStartDate and effectiveEndDate are of type timestamp.

INSERT INTO model.RuleSetSchedule JSON '{
    "ruleSetName": "BOSTONRATES",
    "ruleSetId": "829aa84a-4bba-411f-a4fb-38167a987cda",
    "scheduleId": 1,
    "effectiveStartDate": "01/01/2015",
    "effectiveEndDate": "12/31/2015",
    "rules": {
        "1": {
            "condition": "BoardStation",
            "action": "FareAmount=9.25",
            "ruleOrder": "1"
        }
    }
}';

message="Error decoding JSON value for effectivestartdate: Unable to coerce 
'01/01/2015' to a formatted date (long)"



Re: INSERT JSON TimeStamp

2015-09-28 Thread Steve Robenalt
Hi Ashish,

Most JSON parsers expect either a raw long integer value or some version of
an ISO-8601 date or timestamp.

See https://en.wikipedia.org/wiki/ISO_8601 for a good reference.
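To make that concrete, a small sketch of both accepted forms (assuming a
hypothetical table ks.events with columns id int PRIMARY KEY and ts
timestamp; the values are made up for illustration):

    INSERT INTO ks.events JSON '{"id": 1, "ts": "2015-01-01 00:00:00"}';
    INSERT INTO ks.events JSON '{"id": 2, "ts": 1420070400000}';

The second form is milliseconds since the Unix epoch.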

Steve


On Mon, Sep 28, 2015 at 1:08 PM, Russell Bradberry 
wrote:

> That is not a valid date in CQL, and JSON does not enforce a specific date
> format.  A correctly formatted date would look something like “2015-01-01
> 00:00:00”.
>
> From: Ashish Soni
> Reply-To: 
> Date: Monday, September 28, 2015 at 3:51 PM
> To: 
> Subject: INSERT JSON TimeStamp
>
> Can anyone help with the below? I am getting an error.
>
> effectiveStartDate and effectiveEndDate are of type timestamp.
>
> INSERT INTO model.RuleSetSchedule JSON '{
>     "ruleSetName": "BOSTONRATES",
>     "ruleSetId": "829aa84a-4bba-411f-a4fb-38167a987cda",
>     "scheduleId": 1,
>     "effectiveStartDate": "01/01/2015",
>     "effectiveEndDate": "12/31/2015",
>     "rules": {
>         "1": {
>             "condition": "BoardStation",
>             "action": "FareAmount=9.25",
>             "ruleOrder": "1"
>         }
>     }
> }';
>
> message="Error decoding JSON value for effectivestartdate: Unable to
> coerce '01/01/2015' to a formatted date (long)"
>



-- 
Steve Robenalt
Software Architect
sroben...@highwire.org 
(office/cell): 916-505-1785

HighWire Press, Inc.
425 Broadway St, Redwood City, CA 94063
www.highwire.org

Technology for Scholarly Communication


Re: INSERT JSON TimeStamp

2015-09-28 Thread Ashish Soni
Thanks a lot. Also, I have a single quote in the JSON, but CQL doesn't like
it even when I escape it:

  "condition": "BoardStation =='Lowell' ",

I tried:

  "condition": "BoardStation ==\'Lowell\' ",

INSERT INTO model.RuleSetSchedule JSON '{
    "ruleSetName": "BOSTONRATES",
    "ruleSetId": "829aa84b-4bba-411f-a4fb-38167a987cda",
    "scheduleId": 1,
    "effectiveStartDate": "2015-02-01 00:00:00",
    "effectiveEndDate": "2015-03-01 00:00:00",
    "rules": {
        "1": {
            "condition": "BoardStation =='Lowell' ",
            "action": "FareAmount=9.25",
            "ruleOrder": "1"
        }
    }
}';

On Mon, Sep 28, 2015 at 4:11 PM, Steve Robenalt 
wrote:

> Hi Ashish,
>
> Most JSON parsers expect either a raw long integer value or some version
> of an ISO-8601 date or timestamp.
>
> See https://en.wikipedia.org/wiki/ISO_8601 for a good reference.
>
> Steve
>
>
> On Mon, Sep 28, 2015 at 1:08 PM, Russell Bradberry 
> wrote:
>
>> That is not a valid date in CQL, and JSON does not enforce a specific
>> date format.  A correctly formatted date would look something like
>> “2015-01-01 00:00:00”.
>>
>> From: Ashish Soni
>> Reply-To: 
>> Date: Monday, September 28, 2015 at 3:51 PM
>> To: 
>> Subject: INSERT JSON TimeStamp
>>
>> Can anyone help with the below? I am getting an error.
>>
>> effectiveStartDate and effectiveEndDate are of type timestamp.
>>
>> INSERT INTO model.RuleSetSchedule JSON '{
>>     "ruleSetName": "BOSTONRATES",
>>     "ruleSetId": "829aa84a-4bba-411f-a4fb-38167a987cda",
>>     "scheduleId": 1,
>>     "effectiveStartDate": "01/01/2015",
>>     "effectiveEndDate": "12/31/2015",
>>     "rules": {
>>         "1": {
>>             "condition": "BoardStation",
>>             "action": "FareAmount=9.25",
>>             "ruleOrder": "1"
>>         }
>>     }
>> }';
>>
>> message="Error decoding JSON value for effectivestartdate: Unable to
>> coerce '01/01/2015' to a formatted date (long)"
>>
>
>
>
> --
> Steve Robenalt
> Software Architect
> sroben...@highwire.org 
> (office/cell): 916-505-1785
>
> HighWire Press, Inc.
> 425 Broadway St, Redwood City, CA 94063
> www.highwire.org
>
> Technology for Scholarly Communication
>


Re: INSERT JSON TimeStamp

2015-09-28 Thread Russell Bradberry
You escape single quotes by doubling them.

E.g.:

  "condition": "BoardStation ==''Lowell'' ",

That is not double quotes around 'Lowell' but in fact 4 single quotes, 2
before and 2 after.
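Applied to your statement, a sketch with only the condition line changed:

    INSERT INTO model.RuleSetSchedule JSON '{
        "ruleSetName": "BOSTONRATES",
        "ruleSetId": "829aa84b-4bba-411f-a4fb-38167a987cda",
        "scheduleId": 1,
        "effectiveStartDate": "2015-02-01 00:00:00",
        "effectiveEndDate": "2015-03-01 00:00:00",
        "rules": {
            "1": {
                "condition": "BoardStation ==''Lowell'' ",
                "action": "FareAmount=9.25",
                "ruleOrder": "1"
            }
        }
    }';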


From:  Ashish Soni
Reply-To:  
Date:  Monday, September 28, 2015 at 4:32 PM
To:  
Subject:  Re: INSERT JSON TimeStamp

Thanks a lot. Also, I have a single quote in the JSON, but CQL doesn't like
it even when I escape it:

  "condition": "BoardStation =='Lowell' ",

I tried:

  "condition": "BoardStation ==\'Lowell\' ",

INSERT INTO model.RuleSetSchedule JSON '{
    "ruleSetName": "BOSTONRATES",
    "ruleSetId": "829aa84b-4bba-411f-a4fb-38167a987cda",
    "scheduleId": 1,
    "effectiveStartDate": "2015-02-01 00:00:00",
    "effectiveEndDate": "2015-03-01 00:00:00",
    "rules": {
        "1": {
            "condition": "BoardStation =='Lowell' ",
            "action": "FareAmount=9.25",
            "ruleOrder": "1"
        }
    }
}';

On Mon, Sep 28, 2015 at 4:11 PM, Steve Robenalt  wrote:
Hi Ashish,

Most JSON parsers expect either a raw long integer value or some version of an
ISO-8601 date or timestamp.

See https://en.wikipedia.org/wiki/ISO_8601 for a good reference.

Steve


On Mon, Sep 28, 2015 at 1:08 PM, Russell Bradberry  wrote:
That is not a valid date in CQL, and JSON does not enforce a specific date 
format.  A correctly formatted date would look something like “2015-01-01 
00:00:00”. 

From:  Ashish Soni
Reply-To:  
Date:  Monday, September 28, 2015 at 3:51 PM
To:  
Subject:  INSERT JSON TimeStamp

Can anyone help with the below? I am getting an error.

effectiveStartDate and effectiveEndDate are of type timestamp.

INSERT INTO model.RuleSetSchedule JSON '{
    "ruleSetName": "BOSTONRATES",
    "ruleSetId": "829aa84a-4bba-411f-a4fb-38167a987cda",
    "scheduleId": 1,
    "effectiveStartDate": "01/01/2015",
    "effectiveEndDate": "12/31/2015",
    "rules": {
        "1": {
            "condition": "BoardStation",
            "action": "FareAmount=9.25",
            "ruleOrder": "1"
        }
    }
}';

message="Error decoding JSON value for effectivestartdate: Unable to coerce 
'01/01/2015' to a formatted date (long)"



-- 
Steve Robenalt 
Software Architect
sroben...@highwire.org 
(office/cell): 916-505-1785

HighWire Press, Inc.
425 Broadway St, Redwood City, CA 94063
www.highwire.org

Technology for Scholarly Communication