> So is there any other alternative for an over-the-WAN deployment? I have a
> use case connecting two Swedish universities (a few hundred km apart).
> The target is that a user from univ A can write to the cluster at univ B and
> can read the data from other users.
You could have a look at OpenStack Swift.
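For the two-university use case above, the usual Swift mechanism is container sync: each container is told to push new objects to its peer, protected by a shared key. A minimal sketch with python-swiftclient follows; the endpoints, accounts, and the "unis" realm are placeholders, and container sync has to be enabled by the operators of both clusters (container-sync-realms.conf plus the sync daemon).

# Minimal sketch, not a drop-in config: endpoints, accounts, and the realm
# below are placeholders.
from swiftclient import client as swift

# Authenticate against each university's Swift proxy (tempauth v1 shown).
url_a, tok_a = swift.get_auth('https://swift.univ-a.example/auth/v1.0',
                              'univa:researcher', 'SECRET_A')
url_b, tok_b = swift.get_auth('https://swift.univ-b.example/auth/v1.0',
                              'univb:researcher', 'SECRET_B')

sync_key = 'shared-sync-key'  # must be identical on both containers

# The container at univ A pushes new objects to its peer at univ B ...
swift.put_container(url_a, tok_a, 'shared-data', headers={
    'X-Container-Sync-To': '//unis/univ-b/AUTH_univb/shared-data',
    'X-Container-Sync-Key': sync_key,
})

# ... and the peer points back, so writes on either side propagate.
swift.put_container(url_b, tok_b, 'shared-data', headers={
    'X-Container-Sync-To': '//unis/univ-a/AUTH_univa/shared-data',
    'X-Container-Sync-Key': sync_key,
})

The sync is asynchronous: a write at univ A shows up at univ B on the sync daemon's next pass rather than immediately, which is usually acceptable when the WAN is the slow part.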
Thanks James, I will look into it
Zeeshan
On Tue, Jan 13, 2015 at 2:00 PM, James wrote:
> Gregory Farnum writes:
>
>
> > Ceph isn't really suited for WAN-style distribution. Some users have
> > high-enough and consistent-enough bandwidth (with low enough latency)
> > to do it, but otherwise you probably want to use Ceph within the data
> > centers and layer something else on top of it.
Gregory Farnum writes:
> Ceph isn't really suited for WAN-style distribution. Some users have
> high-enough and consistent-enough bandwidth (with low enough latency)
> to do it, but otherwise you probably want to use Ceph within the data
> centers and layer something else on top of it.
> -Greg
So is there any other alternative for an over-the-WAN deployment? I have a
use case connecting two Swedish universities (a few hundred km apart).
The target is that a user from univ A can write to the cluster at univ B and
can read the data from other users.
/Zee
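One concrete reading of Greg's "layer something else on top" for this use case: run an independent Ceph cluster at each university, front each with a RADOS Gateway, and move objects between the two S3 endpoints asynchronously. The sketch below does the copy by hand with boto; hostnames, keys, and the bucket name are placeholders, and radosgw's federated multi-zone setup (radosgw-agent) is the built-in way to do the same sync.

# Hand-rolled sketch of the "layer on top" idea: each site runs its own Ceph
# cluster behind a RADOS Gateway, and objects are copied between the two S3
# endpoints out of band. All names and keys below are placeholders.
import boto
import boto.s3.connection

def rgw_conn(host, access_key, secret_key):
    """S3 connection pointed at one site's RADOS Gateway."""
    return boto.connect_s3(
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        host=host, port=7480, is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )

site_a = rgw_conn('rgw.univ-a.example', 'KEY_A', 'SECRET_A')
site_b = rgw_conn('rgw.univ-b.example', 'KEY_B', 'SECRET_B')

bucket_a = site_a.create_bucket('shared')
bucket_b = site_b.create_bucket('shared')

# A user at univ A writes locally; the local cluster acknowledges the write
# without touching the WAN at all.
bucket_a.new_key('results/run-42.csv').set_contents_from_string('a,b,c\n1,2,3\n')

# The copy to univ B happens afterwards, so WAN hiccups only delay the sync.
def replicate(name):
    data = bucket_a.get_key(name).get_contents_as_string()
    bucket_b.new_key(name).set_contents_from_string(data)

replicate('results/run-42.csv')

The point of this design is that the WAN never sits in the local write path; it only determines how soon the other site sees the object.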
On Tue, Jan 13, 2015 at 7:41 AM, Robert van
>> However, for geographically distributed datacentres, especially when the
>> network fluctuates, how do we handle that? From what I read, it seems Ceph
>> needs a big network pipe.
>Ceph isn't really suited for WAN-style distribution. Some users have
>high-enough and consistent-enough bandwidth (with low enough latency)
>to do it, but otherwise you probably want to use Ceph within the data
>centers and layer something else on top of it.
On Mon, Jan 12, 2015 at 3:55 AM, Zeeshan Ali Shah wrote:
> Thanks Greg. No, I am more interested in a large-scale RADOS system, not the
> filesystem.
>
> However, for geographically distributed datacentres, especially when the
> network fluctuates, how do we handle that? From what I read, it seems Ceph
> needs a big network pipe.
Ceph isn't really suited for WAN-style distribution. Some users have
high-enough and consistent-enough bandwidth (with low enough latency)
to do it, but otherwise you probably want to use Ceph within the data
centers and layer something else on top of it.
-Greg
Thanks Greg. No, I am more interested in a large-scale RADOS system, not the
filesystem.
However, for geographically distributed datacentres, especially when the
network fluctuates, how do we handle that? From what I read, it seems Ceph
needs a big network pipe.
/Zee
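To put a number on the "big pipe" concern: RADOS replication is synchronous, so a client's write returns only after every replica is safe; stretch one replica across a flaky WAN and every write pays that round trip. A rough probe with the librados Python bindings, where the pool name and conf path are placeholders:

# Time one synchronous RADOS write; on a stretched cluster this roughly
# tracks the round trip to the slowest replica.
import time
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')  # any replicated pool

payload = b'x' * (4 * 1024 * 1024)          # one 4 MB object
start = time.time()
ioctx.write_full('latency-probe', payload)  # blocks until all replicas ack
print('synchronous write took %.1f ms' % ((time.time() - start) * 1000.0))

ioctx.close()
cluster.shutdown()

Run from a client at either site, the printed latency climbs toward the inter-site RTT once replicas are spread across both datacentres, which is why Greg suggests keeping each cluster local.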
On Fri, Jan 9, 2015 at 7:15 PM, Gregory Farnum wrote:
> On Thu, Jan 8, 2015 at 5:46 AM, Zeeshan Ali Shah wrote:
On Thu, Jan 8, 2015 at 5:46 AM, Zeeshan Ali Shah wrote:
> I just finished configuring Ceph up to 100 TB with OpenStack. Since we are
> also using Lustre on our HPC machines, I am just wondering what the
> bottleneck is for Ceph at petabyte scale like Lustre.
>
> Any ideas? Has someone tried it?
I
I just finished configuring Ceph up to 100 TB with OpenStack. Since we are
also using Lustre on our HPC machines, I am just wondering what the
bottleneck is for Ceph at petabyte scale like Lustre.
Any ideas? Has someone tried it?
--
Regards
Zeeshan Ali Shah
System Administrator - PDC HPC
PhD