Hi,
I just wanted to give a friendly reminder about this issue. I would
appreciate it if someone could help me out here. Also, please let me
know in case more information is required.
On Thu, Aug 10, 2017 at 2:41 PM, Mandar Naik wrote:
> Hi Peter,
> Thanks a lot for the reply. Please find
Leaving aside the obvious - that the crush map just doesn't look
correct or even sane, and that the policy itself doesn't sound very
sane (but I'm sure you understand the caveats and issues it may
present) - what's most probably happening is that one (or several)
pools are using those same OSDs
Hi,
Your crushmap has issues: you don't have any root, and you have
duplicate entries. Currently you store data on a single OSD.
You can fix the crushmap manually by decompiling, editing and recompiling it:
http://docs.ceph.com/docs/hammer/rados/operations/crush-map/#editing-a-crush-map
(if you
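For reference, a minimal sketch of that decompile/edit/recompile cycle
(commands as documented on the page linked above; file names are just
examples):

ceph osd getcrushmap -o crushmap.bin        # dump the current compiled crush map
crushtool -d crushmap.bin -o crushmap.txt   # decompile it to editable text
# edit crushmap.txt: add a root bucket, remove the duplicate entries
crushtool -c crushmap.txt -o crushmap.new   # recompile
ceph osd setcrushmap -i crushmap.new        # inject the fixed map back into the cluster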
Hello,
How can I delete a PG completely from a Ceph server? I think I have
already deleted all the data manually from the server, but a ceph pg query
still shows the PG. A ceph pg force_create_pg doesn't create the PG:
Ceph says it has created it, but the PG is stuck for more than 300 sec.
Thanks for
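As a hedged aside, before forcing creation it usually helps to see why
the PG is stuck; a sketch using standard commands (the pg id 1.2f is
just a placeholder):

ceph health detail            # lists the stuck PGs and how long they have been stuck
ceph pg dump_stuck inactive   # show PGs stuck inactive
ceph pg 1.2f query            # per-PG state, including which OSDs it is waiting for
ceph pg force_create_pg 1.2f  # as already tried; the PG stays "creating" until CRUSH can map it to OSDs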
You can check the Linux kernel source code to see the features supported
by the kernel client, e.g. Linux 4.13-rc5
(https://github.com/torvalds/linux/blob/v4.13-rc5/drivers/block/rbd.c)
in drivers/block/rbd.c:
/* Feature bits */
#define RBD_FEATURE_LAYERING (1ULL<<0)
#define RBD_FEATURE_STRIPINGV2
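In practice (a sketch, not from the original mail; pool/image names are
placeholders and the exact feature set to disable depends on your kernel
version) you can compare those bits against what an image has enabled and
strip the unsupported ones before mapping:

rbd info rbd/myimage        # the "features:" line lists what the image uses
rbd feature disable rbd/myimage object-map fast-diff deep-flatten exclusive-lock
rbd map rbd/myimage         # should now succeed with an older kernel client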
Hi,
We are currently running two Proxmox/Ceph clusters that have worked
perfectly since 2014, and thanks to this successful experience we plan to
install a new Ceph cluster for storage of our computing cluster.
Until now we have only used RBD (virtualization context), but now we want
to use CephFS for t
On Tue, 15 Aug 2017, Gregory Farnum said:
> On Tue, Aug 15, 2017 at 4:23 AM Sean Purdy wrote:
> > I have a three node cluster with 6 OSD and 1 mon per node.
> >
> > I had to turn off one node for rack reasons. While the node was down, the
> > cluster was still running and accepting files via rado
On Tue, Aug 15, 2017 at 10:35 PM, Matt Benjamin wrote:
> I think we need a v12.1.5 including #17040
*I* think that this is getting to a point where we should just have
nightly development releases.
What is the benefit of waiting for each RC every two weeks (or so) otherwise?
On one side we are
Hi David,
We are running 10.2.7, but it seems to be OK now and all the
changes are reflected.
Thank you!
Regards,
Ossi
From: "David Turner"
To: "Osama Hasebou" , "ceph-users"
Sent: Tuesday, 8 August, 2017 23:31:17
Subject: Re: [ceph-users] Running commands on Mon or OSD nodes
Reg
Hi Nick,
Thanks for replying! If Ceph is combined with OpenStack, does that mean
that when OpenStack writes are happening, the data is not fully synced (as
in written to disk) before it starts accepting more data, i.e. it is acting as async?
In that scenario there is a chance for data loss i
Alfredo Deza writes:
> On Tue, Aug 15, 2017 at 10:35 PM, Matt Benjamin wrote:
>> I think we need a v12.1.5 including #17040
We discussed this in the RGW standups today and may not need one more RC
for the bug above; we should be fine as long as the fix is in 12.2.0
>
> *I* think that this is gett
Hi!
I have the following issue: While “radosgw bucket list” shows me my buckets, S3
API clients only get a “404 Not Found”. With debug level 20, I see the
following output of the radosgw service:
2017-08-16 14:02:21.725959 7fc7f5317700 20 rgw::auth::s3::LocalEngine granted
access
2017-08-16 14
Hi Matt,
Well behaved applications are the problem here. ESXi sends all writes as sync
writes, so although OSes will still do their own buffering, any ESXi-level
operation is done as sync. This is probably seen most clearly when
migrating VMs between datastores, where everything gets done as
Would reads and writes to the SSD on another server be faster than reads
and writes to HDD on the local server? If the answer is no, then even if
this was possible it would be worse than just putting your WAL and DB on
the same HDD locally. I don't think this is a use case the devs planned
for.
Y
Hello,
I have a use case for billions of small files (~1KB) on CephFS, and as in
my experience having billions of objects in a pool is not a very good idea
(ops slow down, large memory usage, etc.), I decided to test CephFS
inline_data. After activating this feature and starting the copy process I
notic
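(For anyone following along: inline_data is enabled per filesystem,
roughly as in the sketch below; "cephfs" is a placeholder name and,
depending on the release, an extra confirmation flag may be required
since the feature is considered experimental.)

ceph fs set cephfs inline_data true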
You need to fix your endpoint URLs. The line that shows this is:
2017-08-16 14:02:21.725967 7fc7f5317700 10 s->object=s3testbucket-1
s->bucket=ceph-kl-mon1.de.empolis.com
It thinks your bucket is your domain name and your object is your bucket
name. If you did this using an IP instead of a URL it
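A sketch of the usual fix (host names here are placeholders, and the RGW
section name in ceph.conf is an assumption): tell RGW its own DNS name so
that virtual-hosted-style bucket URLs are parsed correctly, and point the
client at that same name.

# ceph.conf on the gateway node (restart radosgw afterwards)
[client.rgw.gateway]
rgw dns name = rgw.example.com

# matching s3cmd client settings (~/.s3cfg)
host_base = rgw.example.com
host_bucket = %(bucket)s.rgw.example.com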
Thanks for the hint!
If I use the IP address of the rados gateway or the DNS name configured under
„rgw dns name”, I get a 403 instead of 404. And that could be remedied by using
the user that initially created the bucket.
Cheers,
Martin
From: David Turner
Date: Wednesday, 16 August 2017 at
On Wed, Aug 16, 2017 at 7:28 AM Henrik Korkuc wrote:
> Hello,
>
> I have use case for billions of small files (~1KB) on CephFS and as to
> my experience having billions of objects in a pool is not very good idea
> (ops slow down, large memory usage, etc) I decided to test CephFS
> inline_data. Af
Hello,
we have:
Ceph version: Jewel
Hosts: 6
OSDs per Host: 12
OSDs type: 6 SATA / 6 SSD
We started with a "generic" pool on our SSDs. Now we have added the SATA
OSDs on the same hosts and reorganized the hierarchy:
==
ID WEIGHT TYPE NAME UP/DOW
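For context, the Jewel-era way to split SSDs and SATA into separate CRUSH
trees and steer pools at them looks roughly like this sketch (bucket, rule
and pool names are placeholders, not taken from the thread):

ceph osd crush add-bucket ssd-root root                     # separate root for the SSD tree
ceph osd crush add-bucket host1-ssd host
ceph osd crush move host1-ssd root=ssd-root
ceph osd crush set osd.12 1.0 root=ssd-root host=host1-ssd  # place an SSD OSD under it
ceph osd crush rule create-simple ssd-rule ssd-root host    # replicate across hosts in that root
ceph osd pool set ssd-pool crush_ruleset 1                  # rule id from 'ceph osd crush rule dump'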
Thanks a lot for the reply. To eliminate the issues of the root not being
present and the duplicate entries in the crush map, I have updated my
crush map. Now I have a default root and a crush hierarchy without
duplicate entries.
I have now created one pool local to host "ip-10-0-9-233" while the other
pool is local to
:( no suggestions or recommendations on this?
On 14 August 2017 16:50:15 CEST, Mehmet wrote:
>Hi friends,
>
>my actual hardware setup per OSD-node is as follow:
>
># 3 OSD-Nodes with
>- 2x Intel(R) Xeon(R) CPU E5-2603 v3 @ 1.60GHz ==> 12 Cores, no
>Hyper-Threading
>- 64GB RAM
>- 12x 4TB HGST
On Wed, Aug 16, 2017 at 3:27 PM, Henrik Korkuc wrote:
> Hello,
>
> I have use case for billions of small files (~1KB) on CephFS and as to my
> experience having billions of objects in a pool is not very good idea (ops
> slow down, large memory usage, etc) I decided to test CephFS inline_data.
> Af
Honestly there isn't enough information about your use case. RBD usage
with small IO vs ObjectStore with large files vs ObjectStore with small
files vs any number of things. The answer to your question might be that
for your needs you should look at having a completely different hardware
configur
On 17-08-16 19:40, John Spray wrote:
On Wed, Aug 16, 2017 at 3:27 PM, Henrik Korkuc wrote:
Hello,
I have use case for billions of small files (~1KB) on CephFS and as to my
experience having billions of objects in a pool is not very good idea (ops
slow down, large memory usage, etc) I decided t
Hi,
As the ceph-deploy utility does not work properly with named clusters
(other than the default "ceph"), in order to have a named cluster I
created the monitor using the manual procedure:
http://docs.ceph.com/docs/master/install/manual-deployment/#monitor-bootstrapping
In the end, it starts up p
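For reference, the monitor-bootstrapping steps from that page mostly just
need --cluster (and the cluster name in file paths) when the cluster is
not called "ceph"; a sketch, with cluster name, mon id, IP and fsid as
placeholders:

ceph-authtool --create-keyring /tmp/mycluster.mon.keyring --gen-key -n mon. --cap mon 'allow *'
monmaptool --create --add mon1 10.0.0.11 --fsid $(uuidgen) /tmp/monmap   # fsid must match mycluster.conf
ceph-mon --cluster mycluster --mkfs -i mon1 --monmap /tmp/monmap --keyring /tmp/mycluster.mon.keyring
ceph-mon --cluster mycluster -i mon1   # expects the config in /etc/ceph/mycluster.conf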
Hi,
On 16.08.2017 at 19:31, Henrik Korkuc wrote:
> On 17-08-16 19:40, John Spray wrote:
>> On Wed, Aug 16, 2017 at 3:27 PM, Henrik Korkuc wrote:
> maybe you can suggest any recommendations how to scale Ceph for billions
> of objects? More PGs per OSD, more OSDs, more pools? Somewhere in the
> li
Hi Mehmet!
On 08/16/2017 11:12 AM, Mehmet wrote:
:( no suggestions or recommendations on this?
On 14 August 2017 16:50:15 CEST, Mehmet wrote:
Hi friends,
my actual hardware setup per OSD-node is as follow:
# 3 OSD-Nodes with
- 2x Intel(R) Xeon(R) CPU E5-2603 v3 @ 1.60GHz =
On Wed, Aug 16, 2017 at 4:04 AM Sean Purdy wrote:
> On Tue, 15 Aug 2017, Gregory Farnum said:
> > On Tue, Aug 15, 2017 at 4:23 AM Sean Purdy
> wrote:
> > > I have a three node cluster with 6 OSD and 1 mon per node.
> > >
> > > I had to turn off one node for rack reasons. While the node was down
We are using Ceph on NFS for VMWare – we are using SSD tiers in front of SATA
and some direct SSD pools. The datastores are just XFS file systems on RBD
managed by a pacemaker cluster for failover.
Lessons so far are that large datastores quickly run out of IOPS and compete
for performance –
Hello,
On Thu, 17 Aug 2017 00:13:24 + Adrian Saul wrote:
> We are using Ceph on NFS for VMWare – we are using SSD tiers in front of SATA
> and some direct SSD pools. The datastores are just XFS file systems on RBD
> managed by a pacemaker cluster for failover.
>
> Lessons so far are that
Hi,
Is it possible to enable copy-on-read for an RBD child image? I've been
checking around and it looks like the only way to enable copy-on-read is
enabling it for the whole cluster using:
rbd_clone_copy_on_read = true
Can it be enabled just for specific images or pools?
We keep some parent imag
> I'd be interested in details of this small versus large bit.
The smaller shares are simply to distribute the workload over more RBDs so
the bottleneck doesn't become the RBD device. The size itself doesn't
particularly matter; it's just the idea of distributing VMs across many
shares rather t
You should be able to use image-meta to override the configuration
on a particular image:
# rbd image-meta set <pool>/<image> conf_rbd_clone_copy_on_read true
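(A quick hedged addendum: you can confirm the override took effect with
the image-meta listing; pool/image names are placeholders.)

# rbd image-meta list <pool>/<image>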
On Wed, Aug 16, 2017 at 8:36 PM, Xavier Trilla
wrote:
> Hi,
>
>
>
> Is it possible to enable copy on read for a rbd child image? I’ve been
> chec