On 2019-01-31 6:05 a.m., M Ranga Swami Reddy wrote:
My thought was: a Ceph block volume with RAID 0 (meaning I mount Ceph
block volumes to an instance/VM and configure RAID 0 across them).
Just to know, is anyone doing the same as above, and if so, what are the
constraints?
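For readers following along, a minimal sketch of what such a setup usually looks
like inside the VM; the device names are examples only, not taken from the
original post:

# two RBD-backed volumes attached to the VM, e.g. /dev/vdb and /dev/vdc
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/vdb /dev/vdc
mkfs.xfs /dev/md0
mount /dev/md0 /mnt/data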
I tried to run this on the monitor node itself.
Yes, the dashboard is enabled:
# ceph mgr services
{
"dashboard": "https://ip-10-8-36-16.internal:8443/";,
"restful": "https://ip-10-8-36-16.internal:8003/";
}
# curl -k https://ip-10-8-36-16.eu-west-2.compute.internal:8443/api/health
{"status": "404 Not Foun
On 30/01/19 17:04, Amit Ghadge wrote:
A better way is to increase osd set-full-ratio slightly (e.g. to .97) and then
remove the buckets.
Many thanks
On 30/01/19 17:00, Paul Emmerich wrote:
Quick and dirty solution: take the full OSD down to issue the deletion
command ;)
Better solutions: temporarily increase the full limit (ceph osd
set-full-ratio) or reduce the OSD's reweight (ceph osd reweight).
Paul
Many thanks
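For reference, a minimal sketch of the workaround Paul describes; the ratio and
the OSD id below are illustrative, and the threshold should be lowered back to
its previous value once the deletion is done:

# temporarily raise the full threshold so the deletion can go through
ceph osd set-full-ratio 0.97
# ... delete the buckets/objects ...
ceph osd set-full-ratio 0.95

# alternatively, lower the reweight of the full OSD so data moves off it
ceph osd reweight <osd-id> 0.9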
Hello, ceph users,
I see the following HEALTH_ERR during cluster rebalance:
Degraded data redundancy (low space): 8 pgs backfill_toofull
Detailed description:
I have upgraded my cluster to mimic and added 16 new bluestore OSDs
on 4 hosts. The hosts are in a separate region in my
Hi Jan,
You might be hitting the same issue as Wido here:
https://www.spinics.net/lists/ceph-users/msg50603.html
Kind regards,
Caspar
On Thu, 31 Jan 2019 at 14:36, Jan Kasprzak wrote:
> Hello, ceph users,
>
> I see the following HEALTH_ERR during cluster rebalance:
>
> Degraded data redundancy (low space): 8 pgs backfill_toofull
"...Dashboard is a dashboard so could not get health thru curl..."
If I didn't miss the question, IMHO "dashboard" does this job adequately:
curl -s -XGET :7000/health_data | jq -C ".health.status"
ceph version 12.2.10
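If the dashboard endpoint keeps returning 404, a hedged alternative that only
needs an admin keyring is to ask the cluster directly:

ceph health detail
ceph status --format json | jq -r '.health.status'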
On Thu, 31 Jan 2019 at 11:02, PHARABOT Vincent <
vincent.phara.
Hi,
I finally figured out how to measure the statistics of a specific RBD volume:
$ ceph --admin-daemon <path-to-client-asok> perf dump
It outputs a lot, but I don't know what it means. Is there any documentation
about the output?
For now the most important values are:
- bytes read
- bytes written
I think I n
Hi!
I have seen the same several times when I added a new OSD to the cluster: one or
two PGs in the "backfill_toofull" state.
This happens in all versions of Mimic.
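For reference, a few stock commands that show which PGs and OSDs are involved;
nothing here is specific to this cluster:

ceph health detail
ceph pg dump pgs_brief | grep backfill_toofull
ceph osd df tree    # per-OSD utilisation, weights and reweights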
- Original Message -
From: "Caspar Smit"
To: "Jan Kasprzak"
Cc: "ceph-users"
Sent: Thursday, 31 January, 2019 15:43:07
Subject: Re: [ceph-users] b
Fyodor Ustinov wrote:
: Hi!
:
: I saw the same several times when I added a new osd to the cluster. One-two
pg in "backfill_toofull" state.
:
: In all versions of mimic.
Yep. In my case it is not (only) after adding the new OSDs.
An hour or so ago my cluster reached the HEALTH_OK state,
Hi all,
Trying to utilize the 'ceph-ansible' project
(https://github.com/ceph/ceph-ansible) to deploy some Ceph servers in a
Vagrant testbed; I'm hitting some issues with some of the plays. Where is the
right (best) venue to ask questions about this?
Thanks,
Will
Okay, now I have changed the CRUSH rule also on a pool with
the real data, and it seems all the client I/O on that pool has stopped.
The recovery continues, but things like qemu I/O, "rbd ls", and so on
are just stuck doing nothing.
Can I unstick it somehow (faster than waiting for all
Hi,
On 31/01/2019 16:06, Will Dennis wrote:
Trying to utilize the ‘ceph-ansible’ project
(https://github.com/ceph/ceph-ansible)
to deploy some Ceph servers in a Vagrant testbed; hitting some issues
with some of the plays – where is the right (best) venue to ask
questions about this?
There'
> : We're currently co-locating our mons with the head node of our Hadoop
> : installation. That may be giving us some problems, we don't know yet, but
> : thus I'm speculating about moving them to dedicated hardware.
Would it be OK to run them on KVM VMs - of course not backed by Ceph?
Jesper
Jan Kasprzak wrote:
: OKay, now I changed the crush rule also on a pool with
: the real data, and it seems all the client i/o on that pool has stopped.
: The recovery continues, but things like qemu I/O, "rbd ls", and so on
: are just stuck doing nothing.
:
: Can I unstuck it somehow (
"perf schema" has a description field that may or may not contain
additional information.
My best guess for these fields would be bytes read/written since
startup of this particular librbd instance. (Based on how these
counters usually work)
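A hedged way to cross-check is to pull the schema next to the counters from the
same admin socket; the socket path below is only an example:

ceph --admin-daemon /var/run/ceph/ceph-client.admin.1234.94123456789.asok perf schema
ceph --admin-daemon /var/run/ceph/ceph-client.admin.1234.94123456789.asok perf dump
# the librbd-* section of the dump holds the per-image counters (rd/wr ops and bytes)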
Paul
--
Paul Emmerich
Looking for help with your Cep
Has anyone automated the ability to generate S3 keys for OpenStack users in
Ceph? Right now we take in a user's request manually ("Hey, we need an S3 API
key for our OpenStack project 'X', can you help?"). We as cloud/Ceph admins
just use radosgw-admin to create them an access/secret key pair for their
Hi,
There is an admin API for RGW:
http://docs.ceph.com/docs/master/radosgw/adminops/
You can check out rgwadmin [1] to see how to use it.
Best regards,
[1] https://github.com/UMIACS/rgwadmin
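For completeness, the manual flow described in the question boils down to
commands like these (the uid and display name are placeholders); the admin ops
API linked above exposes the same operations over HTTP, which is what rgwadmin
wraps:

radosgw-admin user create --uid="openstack-project-x" --display-name="Project X"
radosgw-admin key create --uid="openstack-project-x" --key-type=s3 --gen-access-key --gen-secret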
On 01/31/2019 06:11 PM, shubjero wrote:
> Has anyone automated the ability to generate S3 keys for OpenSt
Hey guys!
First post to the list and new Ceph user, so I might say/ask some stupid stuff ;)
I've set up a Ceph storage cluster (and crashed it 2 days after), with 2 ceph-mon, 2 ceph-osd (same host), 2 ceph-mgr and 1 ceph-mgs. Everything is up
and running and works great.
Now I'm trying to integrate the C
Hi Carlos - just a guess, but you might need your credentials from
/etc/ceph on the host mounted inside the container.
-- jacob
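Concretely, the suggestion amounts to a bind mount along these lines; the image
and the command are placeholders for whatever is actually run in the container:

docker run --rm -it \
  -v /etc/ceph:/etc/ceph:ro \
  your-image:latest \
  ceph -s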
Hey guys!
First post to the list and new Ceph user so I might say/ask some
stupid stuff ;)
I've setup a Ceph Storage (and crashed it 2 days after), with 2
cep
Hey everyone,
Just a last-minute reminder: if you're considering presenting at
Cephalocon Barcelona 2019, the CFP will be ending tomorrow.
Early bird ticket rate ends February 15.
https://ceph.com/cephalocon/barcelona-2019/
--
Mike Perez (thingee)
Hi Will,
there is a dedicated mailing list for ceph-ansible:
http://lists.ceph.com/listinfo.cgi/ceph-ansible-ceph.com
Best,
Martin
On Thu, Jan 31, 2019 at 5:07 PM Will Dennis wrote:
>
> Hi all,
>
>
>
> Trying to utilize the ‘ceph-ansible’ project
> (https://github.com/ceph/ceph-ansible ) to de
On 31/01/2019 18:51, Jacob DeGlopper wrote:
Hi Carlos - just a guess, but you might need your credentials from /etc/ceph on
the host mounted inside the container.
-- jacob
Hi Jacob!
That's not the case AFAIK. The Docker daemon itself mounts the target, so it's still the host here, and th
On Thu, Jan 31, 2019 at 12:16 PM Paul Emmerich wrote:
>
> "perf schema" has a description field that may or may not contain
> additional information.
>
> My best guess for these fields would be bytes read/written since
> startup of this particular librbd instance. (Based on how these
> counters us
We have a public object storage cluster running Ceph RADOS Gateway Luminous
12.2.4, which we plan to update soon.
My question concerns some multipart objects that appear to upload
successfully, but when retrieving the object the client can only get 4 MB.
An example would be
radosgw-admin object stat --
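For anyone hitting the same thing, the manifest section of the stat output is
where the multipart layout shows up; the bucket and object names below are
placeholders:

radosgw-admin object stat --bucket=<bucket> --object=<key>
# for a multipart upload, the "manifest" part of the JSON output lists the parts
# and their sizes, which helps confirm whether the whole object was written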
Here the user requirement is fewer writes and more reads, so we are not much
worried about performance.
Thanks
Swami
On Thu, Jan 31, 2019 at 1:55 PM Piotr Dałek wrote:
>
> On 2019-01-31 6:05 a.m., M Ranga Swami Reddy wrote:
> > My thought was - Ceph block volume with raid#0 (means I mounted a ceph
> > block
Thank you - we were expecting that, but wanted to be sure.
By the way - we are running our clusters on IPv6-BGP, to achieve massive
scalability and load-balancing ;-)
Kind regards,
Carsten Buchberger
WiTCOM Wiesbadener Informations-
und Telekommunikations
Thanks for the clarification!
It's great that the next release will include the feature. We are running Red Hat
Ceph, so we might have to wait longer before the feature becomes available.
Another related (simple) question:
We are using
/var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
in ceph.conf,
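For anyone following the thread, that path is the value of the admin socket
option, roughly like this (a sketch of the usual setting, not the poster's
exact config):

[client]
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok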
We are glad to announce the eleventh bug fix release of the Luminous
v12.2.x long term stable release series. We recommend that all users
upgrade to this release. Please note the following precautions while
upgrading.
Notable Changes
---
* This release fixes the pg log hard limit
On 2/1/19 8:44 AM, Abhishek wrote:
> We are glad to announce the eleventh bug fix release of the Luminous
> v12.2.x long term stable release series. We recommend that all users
> upgrade to this release. Please note the following precautions while
> upgrading.
>
> Notable Changes
> -
On Fri, 1 Feb 2019 at 06:30, M Ranga Swami Reddy wrote:
> Here the user requirement is fewer writes and more reads, so we are not much
> worried about performance.
>
So why go for RAID 0 at all?
It is the least safe way to store data.
--
May the most significant bit of your life be positive.