Unfortunately, no messages at ERROR level:

INFO [Thread-460] 2011-05-04 21:31:14,427 StreamInSession.java (line 121) Streaming of file /raiddrive/MDR/MeterRecords-f-2264-Data.db/(98339515276,197218618166) progress=41536315392/98879102890 - 42% from org.apache.cassandra.streaming.StreamInSession@4eef9d00 failed: requesting a retry.
DEBUG [Thread-460] 2011-05-04 21:31:14,427 FileUtils.java (line 48) Deleting MeterRecords-tmp-f-3522-Data.db
DEBUG [Thread-460] 2011-05-04 21:31:16,410 IncomingTcpConnection.java (line 125) error reading from socket; closing
java.io.IOException: No space left on device
        at sun.nio.ch.FileDispatcher.pwrite0(Native Method)
        at sun.nio.ch.FileDispatcher.pwrite(FileDispatcher.java:45)
        at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:96)
        at sun.nio.ch.IOUtil.write(IOUtil.java:56)
        at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:648)
        at sun.nio.ch.FileChannelImpl.transferFromArbitraryChannel(FileChannelImpl.java:569)
        at sun.nio.ch.FileChannelImpl.transferFrom(FileChannelImpl.java:603)
        at org.apache.cassandra.streaming.IncomingStreamReader.readFile(IncomingStreamReader.java:86)
        at org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:61)
        at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:91)
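(The cause is visible in the last frames of the trace: the receiving node's data volume filled up mid-stream. As a minimal illustration of the kind of pre-check that would have caught this before streaming began -- the path and byte count are taken from the log above; the class itself is hypothetical:)

    import java.io.File;

    // Illustrative pre-check only: compare free space on the data volume
    // against the size of the incoming stream before accepting it.
    public class StreamSpaceCheck {
        public static void main(String[] args) {
            File dataDir = new File("/raiddrive/MDR");   // receiving node's data directory (from the log)
            long incomingBytes = 98879102890L;           // total stream size reported in the log
            long usableBytes = dataDir.getUsableSpace(); // bytes actually available to this process
            if (usableBytes < incomingBytes) {
                System.err.printf("Refusing stream: need %,d bytes but only %,d free on %s%n",
                        incomingBytes, usableBytes, dataDir);
            } else {
                System.out.println("Enough space to receive the stream.");
            }
        }
    }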
Not sure why we didn't think to check available disk space to begin with, but it would have been nice to get an error regardless. Thanks again for your help!

From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Thursday, May 05, 2011 4:54 PM
To: user@cassandra.apache.org
Subject: Re: Decommissioning node is causing broken pipe error

Could you provide some of the log messages from when the receiver ran out of disk space? It sounds like something that should be logged at ERROR level.

Thanks
-----------------
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 6 May 2011, at 09:16, Sameer Farooqui wrote:

Just wanted to update you guys that we turned on DEBUG-level logging on the decommissioned node and on the node receiving the decommissioned node's range. We did this by editing <cassandra-home>/conf/log4j-server.properties and changing log4j.rootLogger to DEBUG (see the properties sketch at the end of this thread). We ran decommission again and saw that the receiving node was running out of disk space: the 184 GB file could not fully stream to it. We simply added more disk space to the receiving node, and decommission then ran successfully.

Thanks for your help, Aaron, and thanks as well for all those Cassandra articles on your blog. We found them helpful.

- Sameer
Accenture Technology Labs

On Thu, May 5, 2011 at 3:59 AM, aaron morton <aa...@thelastpickle.com> wrote:

Yes, that was what I was trying to say.

thanks
-----------------
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 5 May 2011, at 18:52, Tyler Hobbs wrote:

On Thu, May 5, 2011 at 1:21 AM, Peter Schuller <peter.schul...@infidyne.com> wrote:

>> It's no longer recommended to run nodetool compact regularly, as it can
>> mean that some tombstones do not get purged for a very long time.
>
> I think this is a mis-typing; it used to be that major compactions were
> necessary to remove tombstones, but this is no longer the case in 0.7, so
> the need for major compactions is significantly lessened or even
> eliminated. However, running major compactions won't cause tombstones
> *not* to be removed; it's just not required *in order* for them to be
> removed.

I think he was suggesting that any tombstones *left* in the large sstable generated by the major compaction won't be removed for a long time, because that sstable itself will not participate in any minor compactions for a long time. (In general, rows in that sstable will not be merged for a long time.)

--
Tyler Hobbs
Software Engineer, DataStax (http://datastax.com/)
Maintainer of the pycassa (http://github.com/pycassa/pycassa) Cassandra Python client library
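Tyler's point can be made concrete with a small, self-contained simulation. This is a deliberate simplification of size-tiered minor compaction, not Cassandra's actual code: the similar-size rule and the minimum bucket size only roughly mirror its defaults, and the file sizes are invented:

    import java.util.ArrayList;
    import java.util.List;

    // Simplified model: minor compaction only merges sstables of roughly
    // similar size, and a bucket needs a minimum number of members before
    // it is compacted at all.
    public class TombstoneLingerDemo {
        static final int MIN_THRESHOLD = 4; // members a bucket needs before a minor compaction runs

        public static void main(String[] args) {
            long majorSSTable = 184L * 1024 * 1024 * 1024; // the single big sstable a major compaction leaves
            long[] newFlushes = {64L << 20, 70L << 20, 60L << 20, 66L << 20}; // fresh ~64 MB memtable flushes

            List<Long> bucket = new ArrayList<Long>();
            bucket.add(majorSSTable);
            for (long size : newFlushes) {
                // simplified size-tiered rule: only similar-sized sstables share a bucket
                if (size >= 0.5 * majorSSTable && size <= 1.5 * majorSSTable) {
                    bucket.add(size);
                }
            }
            System.out.println("SSTables bucketed with the 184 GB file: " + bucket.size());
            System.out.println("Eligible for minor compaction: " + (bucket.size() >= MIN_THRESHOLD));
            // Prints 1 / false: the big sstable never reaches the minimum
            // bucket size, so tombstones left in it are not merged away.
        }
    }

With one 184 GB sstable and a stream of ~64 MB flushes, the big file waits until several more sstables of comparable size exist before any minor compaction touches it, which is exactly why its tombstones can linger.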
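For reference, the DEBUG change Sameer describes is a one-line edit to <cassandra-home>/conf/log4j-server.properties. A sketch against the 0.7-era default file (the appender names after the level may differ in your build):

    # before (default)
    log4j.rootLogger=INFO,stdout,R

    # after: DEBUG is very verbose, so revert to INFO once you have what you need
    log4j.rootLogger=DEBUG,stdout,R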