John Hodrien wrote:
> On Wed, 9 Mar 2011, Ross Walker wrote:
>
>> On Mar 8, 2011, at 12:02 PM, John Hodrien wrote:
>>
>>> The absolute definition of safe here is quite important. In the event
>>> of a power loss, and a failure of the UPS, quite possibly also
>>> followed by a failure of the RAID battery [...]

On Mar 9, 2011, at 8:44 AM, John Hodrien wrote:
> On Wed, 9 Mar 2011, Ross Walker wrote:
>
>> On Mar 8, 2011, at 12:02 PM, John Hodrien wrote:
>>
>>> The absolute definition of safe here is quite important. In the event of a
>>> power loss, and a failure of the UPS, quite possibly also followed by a
>>> failure of the RAID battery [...]

On Wed, 9 Mar 2011, Ross Walker wrote:
> On Mar 8, 2011, at 12:25 PM, John Hodrien wrote:
>>
>> I think you're right that this is how it should work, I'm just not entirely
>> sure that's actually generally the case (whether that's because typical
>> applications try to do sync writes or if it's [...]

On Wed, 9 Mar 2011, Ross Walker wrote:
> On Mar 8, 2011, at 12:02 PM, John Hodrien wrote:
>
>> The absolute definition of safe here is quite important. In the event of a
>> power loss, and a failure of the UPS, quite possibly also followed by a
>> failure of the RAID battery you'll get data loss, [...]

On Tue, 8 Mar 2011, wessel van der aart wrote:
> does anyone here use nfs without sync in production? does data get
> corrupted often?
Yes, I use it. If you had an NFS server that regularly died due to hardware
faults or kernel panics, then I wouldn't consider using it.
> all the data sent from the [...]

On Mar 8, 2011, at 12:02 PM, John Hodrien wrote:
> The absolute definition of safe here is quite important. In the event of a
> power loss, and a failure of the UPS, quite possibly also followed by a
> failure of the RAID battery you'll get data loss, as some writes won't be
> committed to disk [...]

On Mar 8, 2011, at 12:25 PM, John Hodrien wrote:
> On Tue, 8 Mar 2011, Ross Walker wrote:
>
>> Well on my local disk I don't cache the data of tens or hundreds of clients
>> and a server can have a memory fault and oops just as easily as any client.
>>
>> Also I believe it doesn't sync every single write [...]

On 3/8/2011 3:14 PM, wessel van der aart wrote:
>
> the software we're using to distribute our renders is RoyalRender, i'm not
> sure if any optimization is possible, i'll check it out.
> so far it seems that the option of using nfs stands or falls with the use
> of sync.
> does anyone here use nfs without sync in production? [...]

thanks for all the responses, really gives me a good idea of what to pay
attention to.
the software we're using to distribute our renders is RoyalRender, i'm not
sure if any optimization is possible, i'll check it out.
so far it seems that the option of using nfs stands or falls with the use
of sync.

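For anyone weighing that trade-off, the sync/async choice is made per export
in /etc/exports on the NFS server. A minimal sketch (the paths and subnet
below are invented):

  # /etc/exports -- "sync" has been the default since nfs-utils 1.0
  # sync:  server commits data to stable storage before replying
  # async: server may reply before data reaches disk; faster, but
  #        unsynced writes are lost if the server crashes
  /export/renders  192.168.0.0/24(rw,sync,no_subtree_check)
  /export/scratch  192.168.0.0/24(rw,async,no_subtree_check)

Run exportfs -ra after editing to apply the change.
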
On Tue, 8 Mar 2011, Ross Walker wrote:
> Well on my local disk I don't cache the data of tens or hundreds of clients
> and a server can have a memory fault and oops just as easily as any client.
>
> Also I believe it doesn't sync every single write (unless mounted on the
> client sync which is [...]

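Note that the "mounted on the client sync" case above is a client-side mount
option, separate from the server's export option. Roughly (server name and
paths invented):

  # default: the client caches writes and flushes them on close(),
  # fsync(), or memory pressure
  mount -t nfs server:/export/renders /mnt/renders

  # -o sync: every write() goes to the server before returning --
  # safe, but much slower for small writes
  mount -t nfs -o sync server:/export/renders /mnt/renders
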
On Tue, 8 Mar 2011, Ross Walker wrote:
> The OP wanted 90MB/s per node and we have no clue whether the application he
> is using is capable of driving 1MB block sizes.
I thought he wanted 90MB/s reads per node (and I've demonstrated that's doable
with NFS). The only reason I'm not showing it [...]

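A quick way to check whether large sequential reads can sustain ~90MB/s over
a given mount is a dd run with big request sizes; a sketch, assuming a large
test file already exists (the path is made up), run as root:

  # drop the client's page cache so the read really crosses the wire
  echo 3 > /proc/sys/vm/drop_caches

  # stream 4GB in 1MB requests; dd reports throughput at the end
  dd if=/mnt/renders/testfile of=/dev/null bs=1M count=4096
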
On Mar 8, 2011, at 9:48 AM, Les Mikesell wrote:
> On 3/8/11 8:32 AM, Ross Walker wrote:
>>
>> Why wouldn't you want safe writes? Is that like saying, and if you care for
>> your data?
>
> You don't fsync every write on a local disk. Why demand it over NFS where
> the
> server is probably less likely to crash than the writing node? [...]

On 3/8/11 8:32 AM, Ross Walker wrote:
>
> Why wouldn't you want safe writes? Is that like saying, and if you care for
> your data?
You don't fsync every write on a local disk. Why demand it over NFS where the
server is probably less likely to crash than the writing node? That's like
saying [...]

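The cost being argued about here is easy to measure locally with dd (the
file name is arbitrary); the same contrast applies over NFS:

  # buffered writes, no sync at all -- what most applications do
  dd if=/dev/zero of=/tmp/testfile bs=4k count=25600

  # one fsync when the file is complete
  dd if=/dev/zero of=/tmp/testfile bs=4k count=25600 conv=fsync

  # O_SYNC: wait for stable storage on every single write
  dd if=/dev/zero of=/tmp/testfile bs=4k count=25600 oflag=sync
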
On Mar 7, 2011, at 9:55 AM, John Hodrien wrote:
> On Mon, 7 Mar 2011, Ross Walker wrote:
>
>> 1Gbe can do 115MB/s @ 64K+ IO size, but at 4k IO size (NFS) 55MB/s is about
>> it.
>>
>> If you need each node to be able to read 90-100MB/s you would need to setup
>> a cluster file system using iSCSI or FC [...]

On Mon, 7 Mar 2011, Ross Walker wrote:
> 1Gbe can do 115MB/s @ 64K+ IO size, but at 4k IO size (NFS) 55MB/s is about
> it.
>
> If you need each node to be able to read 90-100MB/s you would need to setup
> a cluster file system using iSCSI or FC and make sure the cluster file
> system can handle [...]

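On the IO-size point: the request size NFS puts on the wire is capped by the
rsize/wsize mount options (client and server negotiate down from what you
ask for). A sketch with invented names:

  # ask for 64k read/write requests instead of small defaults
  mount -t nfs -o rsize=65536,wsize=65536 server:/export /mnt/export

  # confirm what was actually negotiated
  nfsstat -m
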
On Mar 7, 2011, at 6:12 AM, wessel van der aart wrote:
> Hi All,
>
> I've been asked to set up a 3d renderfarm at our office; at the start it
> will contain about 8 nodes but it should be built for growth. now the
> setup i had in mind is as follows:
> All the data is already stored on a Stor[...]

Hi :)
On Mon, Mar 7, 2011 at 12:12 PM, wessel van der aart wrote:
> Hi All,
>
> I've been asked to set up a 3d renderfarm at our office; at the start it
> will contain about 8 nodes but it should be built for growth. now the
> setup i had in mind is as follows:
> All the data is already stored on a Stor[...]