Hello Marcelo,

Monday, September 8, 2008, 1:51:09 PM, you wrote:
ML> If I understand correctly, the recordsize is really important for big
ML> files, because with small files and small updates we have a good
ML> chance of the data being well organized on disk. I think the problem
ML> is the big files, where we have tiny updates. At the pool's creation
ML> time the recordsize is 128k, but I don't know if that limit is real
ML> for, let's say, copying a DVD image. I think the recordsize can be
ML> larger. If so, could larger files have a recordsize of... 1MB? And
ML> what happens if we change it afterwards, to 1k?

The maximum record size currently supported is 128KB.

--
Best regards,
Robert Milkowski
mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
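[As an illustration of the property being discussed: recordsize is an ordinary per-dataset property, so it can be inspected and tuned with the standard zfs commands. A minimal sketch, assuming a hypothetical filesystem tank/media; note that changing recordsize only affects blocks written after the change, so existing files keep the record size they were written with.

  # show the current recordsize of the dataset
  zfs get recordsize tank/media

  # set it to the current maximum of 128KB
  zfs set recordsize=128K tank/media

Values above 128K are rejected by zfs set at this point, which is the limit referred to in the reply above.]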