On Sat, Apr 26, 2014 at 9:56 AM, Jingyuan Luke wrote:
> Hi Greg,
>
> Actually our cluster is pretty empty, but we suspect we had a temporary
> network disconnection to one of our OSDs; we're not sure if this caused the
> problem.
>
> Anyway, we don't mind trying the method you mentioned. How can we do that?
I have done it. Ceph can be built on an isolated system if you create
your own OS and Ceph repositories.
You can also build Ceph from a tarball on an isolated system.
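For example, a rough sketch of the tarball route (the version and URL below
are just placeholders; use the release you actually need, and install your
distro's build dependencies first):

    # On a machine with internet access, fetch the source tarball,
    # then copy it to the isolated system.
    wget http://ceph.com/download/ceph-0.79.tar.gz

    # On the isolated system: unpack and build with the autotools scripts
    # shipped in the release tarball.
    tar xzf ceph-0.79.tar.gz
    cd ceph-0.79
    ./configure
    make -j4
    sudo make install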
Eric
Hi, I'm on a site with no access to the internet and I'm trying to install
Ceph.
During the installation it tries to download files from the internet and
then I get an error.
I tried to download the files and make my own repository; I have also
changed the installation code to point to a different path.
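For reference, the local repository I set up looks roughly like this (this is
a Debian/Ubuntu sketch; the paths are placeholders, and on RPM-based distros
you would use createrepo and a .repo file instead):

    # On a machine with internet access: download the Ceph .deb packages
    # (ceph, ceph-common, librados2, librbd1, ...), then copy the
    # directory to the offline site.
    mkdir -p /srv/ceph-local && cd /srv/ceph-local

    # On the offline site: index the packages and point APT at the directory.
    cd /srv/ceph-local
    dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
    echo "deb [trusted=yes] file:///srv/ceph-local ./" \
        > /etc/apt/sources.list.d/ceph-local.list
    apt-get update && apt-get install ceph

If you are installing with ceph-deploy, I believe its install command also
takes --repo-url (and --gpg-url) options, which may save you from editing the
installation code at all.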
Hi,
I also did a few other experiments, trying to measure the maximum bandwidth we
can get from each data disk. The results are not encouraging: for disks that
can provide 150 MB/s of block-level sequential read bandwidth, we can only get
about 90 MB/s from each disk. Something that is particular i
Hi Gregory,
Thanks very much for your quick reply. When I started to look into Ceph,
Bobtail was the latest stable release and that was why I picked that version
and started to make a few modifications. I have not ported my changes to 0.79
yet. The plan is if v-0.79 can provide a higher disk ba
Hi Greg,
Actually our cluster is pretty empty, but we suspect we had a temporary
network disconnection to one of our OSDs; we're not sure if this caused the
problem.
Anyway, we don't mind trying the method you mentioned. How can we do that?
Regards,
Luke
On Saturday, April 26, 2014, Gregory Farnum wrote:
Hi Mark,
Thanks for sharing this. I did read these blogs earlier. If we look at the
aggregated bandwidth, 600-700 MB/s of reads for 6 disks is quite good.
But considering it is shared among 256 concurrent read streams, each one
gets as little as 2-3 MB/s of bandwidth. That does not sound right.
Hi Mark,
That seems pretty good. What is the block-level sequential read
bandwidth of your disks? What configuration did you use? What were the
replica size and read_ahead for your RBDs, and how many workloads did
you run? I used btrfs in my experiments as well.
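For comparison, this is roughly how I check those values on my side (the
device and pool names are just examples):

    # Block-level sequential read bandwidth of a raw data disk:
    dd if=/dev/sdb of=/dev/null bs=4M count=1024 iflag=direct

    # Replica size of the pool:
    ceph osd pool get rbd size

    # read_ahead of a mapped rbd device (value is in 512-byte sectors):
    blockdev --getra /dev/rbd0
    blockdev --setra 4096 /dev/rbd0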
Thanks,
Xing
On 04/25
This usually means that your OSDs all stopped running at the same time, and
will eventually be marked down by the monitors. You should verify that
they're running.
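Something like this will tell you (the init-script form below is just one
example; use whatever init system your distro has):

    # From any node with a client keyring: what does the cluster think?
    ceph osd stat
    ceph osd tree

    # On each OSD host, check the daemons themselves.
    sudo /etc/init.d/ceph status osd      # sysvinit-style installs
    sudo systemctl status ceph-osd@0      # systemd installs; 0 is an example ID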
-Greg
On Saturday, April 26, 2014, Srinivasa Rao Ragolu wrote:
> Hi,
>
> My monitor node and osd nodes are running fine. But my clus
I've not defined cluster IPs for each OSD server, only the whole subnet.
Should I define each IP for each OSD? This is not written in the docs, and
it could be tricky to do in big environments with hundreds of nodes.
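For what it's worth, my ceph.conf only carries the subnet-level settings,
something like the sketch below (the addresses are made up); my understanding
is that per-OSD addresses are optional overrides:

    [global]
        public network  = 192.168.0.0/24
        cluster network = 10.10.0.0/24

    # Only needed if you really want to pin a specific OSD to specific IPs:
    [osd.0]
        public addr  = 192.168.0.11
        cluster addr = 10.10.0.11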
2014-04-24 20:04 GMT+02:00 McNamara, Bradley :
> Do you have all of the cluster IP'
Some discussion about this can be found here:
http://ceph.com/dev-notes/incremental-snapshots-with-rbd/
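Roughly, the incremental approach looks like this (the pool, image, snapshot
and host names are placeholders):

    # Initial full copy: export a snapshot and import it on the other cluster.
    rbd snap create rbd/myimage@snap1
    rbd export rbd/myimage@snap1 - | ssh otherhost 'rbd import - rbd/myimage'
    ssh otherhost 'rbd snap create rbd/myimage@snap1'

    # Later, ship only the blocks changed since snap1.
    rbd snap create rbd/myimage@snap2
    rbd export-diff --from-snap snap1 rbd/myimage@snap2 - | \
        ssh otherhost 'rbd import-diff - rbd/myimage'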
Cheers
Mark
On 25/04/14 08:25, Brian Rak wrote:
Is there a recommended way to copy an RBD image between two different
clusters?
My initial thought was 'rbd export - | ssh "rbd import -"', b
Hi,
My monitor node and osd nodes are running fine. But my cluster health is
"stale+active+clean"
root@node1:/etc/ceph# ceph status
cluster a7f64266-0894-4f1e-a635-d0aeaca0e993
health HEALTH_WARN 2856 pgs stale; 2856 pgs stuck stale
monmap e1: 1 mons at {mon=192.168.0.102:6789/0}, e