Re: [ceph-users] Directly addressing files on individual OSD

2017-03-19 Thread Youssef Eldakar
On 16.03.2017 08:26, Youssef Eldakar wrote: > Thanks for the reply, Anthony, and I am sorry my question did not give sufficient background. > This is the cluster behind archive.bibalex.org ...

Re: [ceph-users] Directly addressing files on individual OSD

2017-03-16 Thread Ronny Aasen
On 16.03.2017 08:26, Youssef Eldakar wrote: Thanks for the reply, Anthony, and I am sorry my question did not give sufficient background. This is the cluster behind archive.bibalex.org. Storage nodes keep archived webpages as multi-member GZIP files on the disks, which are formatted using XFS ...
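
The multi-member (concatenated) GZIP layout described above can be read as a single stream with standard tooling. Below is a minimal Python sketch, assuming the archive files are ordinary concatenated GZIP members readable from the local XFS mount; the path is hypothetical and not taken from the thread.

    import gzip

    # Hypothetical path on one of a node's data disks (mounted as /0 .. /3).
    path = "/0/archive/example.warc.gz"

    # Python's gzip module decompresses concatenated (multi-member) GZIP
    # streams transparently, so the whole file reads as one byte stream.
    with gzip.open(path, "rb") as f:
        data = f.read()

    print(len(data), "bytes decompressed")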

Re: [ceph-users] Directly addressing files on individual OSD

2017-03-16 Thread Youssef Eldakar
Thanks for the reply, Anthony, and I am sorry my question did not give sufficient background. This is the cluster behind archive.bibalex.org ...

Re: [ceph-users] Directly addressing files on individual OSD

2017-03-16 Thread Youssef Eldakar
Quoting Anthony D'Atri (Thursday, March 16, 2017 01:37): As I parse Youssef’s message, I believe there are some misconceptions ...

Re: [ceph-users] Directly addressing files on individual OSD

2017-03-15 Thread Anthony D'Atri
As I parse Youssef’s message, I believe there are some misconceptions. It might help if you could give a bit more info on what your existing ‘cluster’ is running. NFS? CIFS/SMB? Something else? 1) Ceph regularly runs scrubs to ensure that all copies of data are consistent. The checksumming ...
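
For context on the scrub point above: deep scrubs re-read and checksum every replica and flag inconsistencies for repair. A small sketch of how one might trigger and check this with the standard ceph CLI (assuming the CLI and an admin keyring are available on the host; OSD id 0 is just an example):

    import subprocess

    # Ask OSD 0 to deep-scrub: Ceph re-reads and checksums each object replica
    # held by that OSD and reports any inconsistencies.
    subprocess.run(["ceph", "osd", "deep-scrub", "0"], check=True)

    # Show cluster health, including any inconsistent placement groups found.
    subprocess.run(["ceph", "health", "detail"], check=True)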

[ceph-users] Directly addressing files on individual OSD

2017-03-14 Thread Youssef Eldakar
We currently run a commodity cluster that supports a few petabytes of data. Each node in the cluster has 4 drives, currently mounted as /0 through /3. We have been researching alternatives for managing the storage, Ceph being one possibility, iRODS being another. For preservation purposes, we ...
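
Relating this to the thread subject: in Ceph, data is not addressed as files on a particular drive but as named objects in a pool, with CRUSH deciding which OSDs hold each replica. A minimal python-rados sketch follows; the pool name "webarchive", the object name, and the config/keyring paths are assumptions for illustration, not details from the thread.

    import rados

    # Connect using the usual client configuration (paths are assumptions).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("webarchive")  # hypothetical pool
        try:
            # Objects are addressed by name within the pool; CRUSH, not the
            # client, decides which OSDs/drives store the replicas.
            ioctx.write_full("example.warc.gz", b"gzip bytes ...")
            size, _mtime = ioctx.stat("example.warc.gz")
            data = ioctx.read("example.warc.gz", length=size)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

To see which OSDs a given object maps to, "ceph osd map <pool> <object>" prints the placement, which is the closest equivalent to directly addressing a file on an individual OSD.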