That's what I originally thought, but the OOYALA presentation from C*2012 got me 
confused. Do you guys know what's going on here?

The video: 
http://www.youtube.com/watch?v=r2nGBUuvVmc&feature=player_detailpage#t=790s
The slides: Slide 22 @ 
http://www.datastax.com/wp-content/uploads/2012/08/C2012-Hastur-NoahGibbs.pdf

-- Drew


On Mar 18, 2013, at 6:14 AM, Edward Capriolo <edlinuxg...@gmail.com> wrote:

> 
> Imho it is probably more efficient for wide rows. When you decompress 8k blocks to 
> get at a 200-byte row, you create overhead, particularly in the young gen.
> On Monday, March 18, 2013, Sylvain Lebresne <sylv...@datastax.com> wrote:
> > The way compression is implemented, it is oblivious to the CF being 
> > wide-row or narrow-row. There is nothing intrinsically less efficient in 
> > the compression for wide-rows.
> > --
> > Sylvain
> >
> > On Fri, Mar 15, 2013 at 11:53 PM, Drew Kutcharian <d...@venarc.com> wrote:
> >>
> >> Hey Guys,
> >>
> >> I remember reading somewhere that C* compression is not very effective 
> >> when most of the CFs are in wide-row format, and some folks turn 
> >> compression off and use disk-level compression as a workaround. 
> >> Considering that wide rows with composites are "first class citizens" in 
> >> CQL3, is this still the case? Have there been any improvements on this?
> >>
> >> Thanks,
> >>
> >> Drew
> >
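For what it's worth, the per-read decompression overhead Edward describes can be tuned through the compression chunk size. A minimal CQL3 sketch, assuming Cassandra 1.2-era option names (`sstable_compression`, `chunk_length_kb`) and a hypothetical `metrics` table:

```cql
-- Hypothetical table; option names assume Cassandra 1.2-era CQL3.
-- A smaller chunk_length_kb means less data is decompressed to serve a
-- small point read, at the cost of a somewhat worse compression ratio.
CREATE TABLE metrics (
    key   text,
    ts    timestamp,
    value blob,
    PRIMARY KEY (key, ts)
) WITH compression = {
    'sstable_compression': 'SnappyCompressor',
    'chunk_length_kb': 4
};
```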
