On 26.02.2018 at 20:31, Gregory Farnum wrote:
> On Mon, Feb 26, 2018 at 11:26 AM Oliver Freyermuth
> <freyerm...@physik.uni-bonn.de> wrote:
>
> On 26.02.2018 at 20:09, Oliver Freyermuth wrote:
> > On 26.02.2018 at 19:56, Gregory Farnum wrote:
> >>
> >>
> >> On Mon, F
On Mon, Feb 26, 2018 at 11:26 AM Oliver Freyermuth <
freyerm...@physik.uni-bonn.de> wrote:
> On 26.02.2018 at 20:09, Oliver Freyermuth wrote:
> > On 26.02.2018 at 19:56, Gregory Farnum wrote:
> >>
> >>
> >> On Mon, Feb 26, 2018 at 8:25 AM Oliver Freyermuth <
> freyerm...@physik.uni-bonn.de
On 26.02.2018 at 20:09, Oliver Freyermuth wrote:
> On 26.02.2018 at 19:56, Gregory Farnum wrote:
>>
>>
>> On Mon, Feb 26, 2018 at 8:25 AM Oliver Freyermuth
>> <freyerm...@physik.uni-bonn.de> wrote:
>>
>> On 26.02.2018 at 16:59, Patrick Donnelly wrote:
>> > On Sun, Feb 25, 2018 at
On 26.02.2018 at 19:56, Gregory Farnum wrote:
>
>
> On Mon, Feb 26, 2018 at 8:25 AM Oliver Freyermuth
> <freyerm...@physik.uni-bonn.de> wrote:
>
> On 26.02.2018 at 16:59, Patrick Donnelly wrote:
> > On Sun, Feb 25, 2018 at 10:26 AM, Oliver Freyermuth
> > <freyerm...@p
On Mon, Feb 26, 2018 at 8:25 AM Oliver Freyermuth <
freyerm...@physik.uni-bonn.de> wrote:
> On 26.02.2018 at 16:59, Patrick Donnelly wrote:
> > On Sun, Feb 25, 2018 at 10:26 AM, Oliver Freyermuth
> > wrote:
> >> Looking with:
> >> ceph daemon osd.2 perf dump
> >> I get:
> >> "bluefs": {
> >>
On Mon, Feb 26, 2018 at 7:59 AM, Patrick Donnelly wrote:
> It seems in the above test you're using about 1KB per inode (file).
> Using that you can extrapolate how much space the data pool needs
s/data pool/metadata pool/
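A rough back-of-the-envelope sketch using the numbers from this thread (~1 KB per
inode, and the ~100,000,000-file test mentioned later) - not an official sizing rule:

  100,000,000 files x ~1 KB/inode  ~= 100 GB of metadata
  with 3x replication              ~= 300 GB of raw metadata-pool capacity

plus some RocksDB overhead, so round up generously.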
--
Patrick Donnelly
On 26.02.2018 at 17:31, David Turner wrote:
> That was a good way to check for the recovery sleep. Does your `ceph status`
> show 128 PGs backfilling (or a number near that at least)? The PGs not
> backfilling will say 'backfill+wait'.
Yes:
pgs: 37778254/593342240 objects degraded (6.
That was a good way to check for the recovery sleep. Does your `ceph
status` show 128 PGs backfilling (or a number near that at least)? The PGs
not backfilling will say 'backfill+wait'.
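A few ways to check this from the CLI (a sketch only; osd.2 and the _hdd variant are
examples, adjust to your OSDs and release):

  ceph status | grep -A2 backfill
  ceph pg dump pgs_brief | grep backfilling | wc -l
  ceph daemon osd.2 config get osd_recovery_sleep_hdd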
On Mon, Feb 26, 2018 at 11:25 AM Oliver Freyermuth <
freyerm...@physik.uni-bonn.de> wrote:
> On 26.02.2018 at
On 26.02.2018 at 16:59, Patrick Donnelly wrote:
> On Sun, Feb 25, 2018 at 10:26 AM, Oliver Freyermuth
> wrote:
>> Looking with:
>> ceph daemon osd.2 perf dump
>> I get:
>> "bluefs": {
>> "gift_bytes": 0,
>> "reclaim_bytes": 0,
>> "db_total_bytes": 84760592384,
>>
Patrick's answer supersedes what I said about RocksDB usage. My knowledge
was more general for actually storing objects, not the metadata inside of
MDS. Thank you for sharing Patrick.
On Mon, Feb 26, 2018 at 11:00 AM Patrick Donnelly
wrote:
> On Sun, Feb 25, 2018 at 10:26 AM, Oliver Freyermuth
On Sun, Feb 25, 2018 at 10:26 AM, Oliver Freyermuth
wrote:
> Looking with:
> ceph daemon osd.2 perf dump
> I get:
> "bluefs": {
> "gift_bytes": 0,
> "reclaim_bytes": 0,
> "db_total_bytes": 84760592384,
> "db_used_bytes": 78920024064,
> "wal_total_bytes":
When a Ceph system is in recovery, it uses much more RAM than it does while
running healthy. This increase is often on the order of 4x more memory (at
least back in the days of filestore, I'm not 100% certain about bluestore,
but I would assume the same applies). You have another thread on the ML
Dear Cephalopodians,
I have to extend my question a bit - in our system with 105,000,000 objects in
CephFS (mostly stabilized now after the stress-testing...),
I observe the following data distribution for the metadata pool:
# ceph osd df | head
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE
Dear Cephalopodians,
as part of our stress test with 100,000,000 objects (all small files) we ended
up with
the following usage on the OSDs on which the metadata pool lives:
# ceph osd df | head
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
[...]
2 ssd 0.21819 1.0 223G 7
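For cross-checking, per-pool usage and object counts can also be viewed with
(a sketch; availability of 'osd df tree' depends on the release):

  ceph df detail
  ceph osd df tree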
Based on this limited info: object storage, if it's behind a proxy. We use Ceph
behind HAProxy and hardware load-balancers at Bloomberg. Our Chef recipes
are at https://github.com/ceph/ceph-chef and
https://github.com/bloomberg/chef-bcs. The chef-bcs cookbooks show the
HAProxy info.
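For illustration, a minimal haproxy.cfg backend for radosgw might look roughly like
this (hostnames, IPs and the civetweb port 7480 are placeholders I'm assuming, not
values from our setup):

  frontend rgw_http
      bind *:80
      default_backend rgw

  backend rgw
      balance roundrobin
      option httpchk GET /
      server rgw1 10.0.0.11:7480 check
      server rgw2 10.0.0.12:7480 check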
Thanks,
Chris
On Wed,
Moving this to the ceph-users list where it'll get some attention.
On Thu, Dec 22, 2016 at 2:08 PM, SIBALA, SATISH wrote:
> Hi,
>
>
>
> Could you please give me a recommendation on the kind of Ceph storage to be
> used with NGINX proxy server (Object / Block / FileSystem)?
>
>
>
> Best Regards
>
> Satis
On Tue, 19 Jul 2016 13:59:48 + Andrey Ptashnik wrote:
> Hi Team,
>
> Is there any way to implement storage tiering in Ceph Jewel?
There is; you may want to re-read the documentation and the various
cache-tier posts here.
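For completeness, the basic cache-tier commands look roughly like this (a sketch;
'cold-pool' and 'hot-pool' are made-up names and the target_max_bytes value is
arbitrary):

  ceph osd tier add cold-pool hot-pool
  ceph osd tier cache-mode hot-pool writeback
  ceph osd tier set-overlay cold-pool hot-pool
  ceph osd pool set hot-pool hit_set_type bloom
  ceph osd pool set hot-pool target_max_bytes 1099511627776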
> I've read about placing different pools on different perfor
Hi Team,
Is there any way to implement storage tiering in Ceph Jewel?
I've read about placing different pools on different performance
hardware; however, is there any automation possible in Ceph that will promote
data from slow hardware to fast hardware and back?
Regards,
Andrey
Hello,
On Mon, 31 Aug 2015 22:44:05 + Stillwell, Bryan wrote:
> We have the following in our ceph.conf to bring in new OSDs with a weight
> of 0:
>
> [osd]
> osd_crush_initial_weight = 0
>
>
> We then set 'nobackfill' and bring in each OSD at full weight one at a
> time (letting things se
On Mon, 31 Aug 2015 08:57:23 +0200 Udo Lembke wrote:
> Hi Christian,
> for my setup "b" takes too long - too much data movement and stress to
> all nodes. I have simply (with replica 3) "set noout", reinstall one
> node (with new filesystem on the OSDs, but leave them in the crushmap)
> and start
We have the following in our ceph.conf to bring in new OSDs with a weight
of 0:
[osd]
osd_crush_initial_weight = 0
We then set 'nobackfill' and bring in each OSD at full weight one at a
time (letting things settle down before bring in the next OSD). Once all
the OSDs are brought in we unset 'no
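In command form, that workflow is roughly (a sketch only; osd.12 and the weight
1.82 are placeholders):

  ceph osd set nobackfill
  ceph osd crush reweight osd.12 1.82    # full weight for that disk size
  # wait for peering to settle, then repeat for the next OSD
  ceph osd unset nobackfill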
When we know we need to take a node out, we weight it down over time. Depending
on your cluster, you may need to do this over days or hours.
In theory, you could do the same when putting OSDs in, by setting noin,
and then setting the weight to something very low, and going up over time. I
haven't tried thi
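A gradual weight-down could look like this (illustrative values only, not a
recommendation for any particular cluster):

  for w in 0.8 0.6 0.4 0.2 0.0; do
      ceph osd crush reweight osd.7 $w
      # wait for backfill to finish / HEALTH_OK before the next step
      sleep 3600
  done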
On Mon, Aug 31, 2015 at 5:07 AM, Christian Balzer wrote:
>
> Hello,
>
> I'm about to add another storage node to small firefly cluster here and
> refurbish 2 existing nodes (more RAM, different OSD disks).
>
> Insert rant about not going to start using ceph-deploy as I would have to
> set the clus
Hi Christian,
for my setup, "b" takes too long - too much data movement and stress on all
nodes.
I simply (with replica 3) "set noout", reinstall one node (with a new
filesystem on the OSDs, but leave them in the
crushmap) and start all OSDs (on Friday night) - it takes approx. less than one day
for
Hello,
I'm about to add another storage node to a small firefly cluster here and
refurbish 2 existing nodes (more RAM, different OSD disks).
Insert rant about not going to start using ceph-deploy as I would have to
set the cluster to no-in since "prepare" also activates things due to the
udev magi
Hi Cephers,
I'm using ceph 0.94 and libvirt 1.2.14. Normally, I can see that the storage pool is
active using virsh pool-list. But when I refresh it, it becomes inactive while
some of the rbd volumes are being deleted.
From the code path for refreshing the storage pool:
storagePoolRefresh
|
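A quick way to inspect and recover the pool state from the CLI (not from the
original mail; 'rbd' is the pool name used above):

  virsh pool-list --all
  virsh pool-refresh rbd
  virsh pool-start rbd    # if it has gone inactive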
On 05/06/14 17:01, yalla.gnan.ku...@accenture.com wrote:
Hi All,
I have a ceph storage cluster with four nodes. I have created block storage
using cinder in openstack and ceph as its storage backend.
So, I see a volume is created in ceph in one of the pools. But how to get
information like o
Hi All,
I have a ceph storage cluster with four nodes. I have created block storage
using cinder in openstack and ceph as its storage backend.
So, I see a volume is created in ceph in one of the pools. But how do I get
information such as which OSDs and which PG the volume was created on?
Thanks
Kumar
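A sketch of how to trace this (the pool name 'volumes' is an assumption; Cinder
typically names images volume-<uuid>):

  rbd -p volumes info volume-<uuid>                 # note the block_name_prefix
  rados -p volumes ls | grep <block_name_prefix> | head
  ceph osd map volumes <one_of_those_object_names>  # prints the PG and the OSDs it maps to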
Hello,
How can I check a Ceph client session on the client side? For example, when
mounting iSCSI or NFS you can check it (NFS: just look at the mount, iSCSI: iscsiadm -m
session), but how can I do that with Ceph? And is there more detailed
documentation about OpenStack and Ceph than
http://ceph.com/docs/master/rbd/rbd-op
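A few things that can help here (a sketch; what applies depends on the client
type and release):

  rbd showmapped                      # kernel RBD mappings on the client
  mount | grep ceph                   # kernel CephFS mounts
  ls /sys/kernel/debug/ceph/          # kernel client session debug info (needs debugfs)
  ceph daemon mds.<name> session ls   # on the cluster side: CephFS client sessions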
Hi,
On 04.06.2014 at 14:51, yalla.gnan.ku...@accenture.com wrote:
> Hi All,
>
>
>
> I have a ceph storage cluster with four nodes. I have created block storage
> using cinder in openstack and ceph as its storage backend.
>
> So, I see a volume is created in ceph in one of the pools. But how
Hi All,
I have a ceph storage cluster with four nodes. I have created block storage
using cinder in openstack and ceph as its storage backend.
So, I see a volume is created in ceph in one of the pools. But how do I get
information such as which OSDs and which PG the volume was created on?
Thanks
Kumar
Jeroen,
Actually this is more a question for the OpenStack ML.
None of the use cases you described are possible at the moment.
The only thing you can get is shared resources across all the tenants; you
can't really pin any resource to a specific tenant.
This could be done, I guess, but not availab
Hello,
Currently I am integrating my ceph cluster into Openstack by using Ceph’s RBD.
I’d like to store my KVM virtual machines on pools that I have made on the ceph
cluster.
I would like to have multiple storage solutions for multiple
tenants. Currently, when I launch an instance, the
Thanks Sebastien.
-Original Message-
From: Sebastien Han [mailto:sebastien@enovance.com]
Sent: Tuesday, February 25, 2014 8:23 PM
To: Gnan Kumar, Yalla
Cc: ceph-users
Subject: Re: [ceph-users] storage
Hi,
RBD blocks are stored as objects on a filesystem usually under:
/var/lib
Hi,
RBD blocks are stored as objects on a filesystem, usually under:
/var/lib/ceph/osd/<cluster>-<id>/current/<pg>_head/
RBD is just an abstraction layer.
Cheers.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Addres
Hi All,
By default, in which directory or directories does Ceph store the block device
files? Is it in /dev or on another filesystem?
Thanks
Kumar
On Tue, 27 Aug 2013, ker can wrote:
> This was very helpful -thanks. However I'm still trying to reconcile this
> with something that Sage mentioned a while back on a similar topic.
> Apparently you can disable the journal if you're using btrfs. Is that
> possible because btrfs takes care of thi
This was very helpful - thanks. However, I'm still trying to reconcile this
with something that Sage mentioned a while back on a similar topic.
Apparently you can disable the journal if you're using btrfs. Is that
possible because btrfs takes care of things like atomic object writes and
updates to
To: Johannes Klarenbeek; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Storage, File Systems and Data Scrubbing
ceph-osd builds a transactional interface on top of the usual posix
operations so that we can do things like atomically perform an object
write and update the osd metadata. The current
ceph-osd builds a transactional interface on top of the usual posix
operations so that we can do things like atomically perform an object
write and update the osd metadata. The current implementation
requires our own journal and some metadata ordering (which is provided
by the backing filesystem's
Let me make a simpler case, to do ACID (https://en.wikipedia.org/wiki/ACID)
which are all properties you want in a filesystem or a database, you need a
journal. You need a journaled filesystem to make the object store's file
operations safe. You need a journal in ceph to make sure the object o
I think you are missing the distinction between metadata journaling and data
journaling. In most cases a journaling filesystem is one that journals its
own metadata, but your data is on its own. Consider the case where you have a
replication level of two: the osd filesystems have journalin
I think you are missing the distinction between metadata journaling and data
journaling. In most cases a journaling filesystem is one that journals its
own metadata, but your data is on its own. Consider the case where you have a
replication level of two: the osd filesystems have journaling d
Dear ceph-users,
I read a lot of documentation today about ceph architecture and Linux file
system benchmarks in particular, and I could not help noticing something that I
would like to clear up for myself. Bear in mind that it has been a while since I
last touched Linux, but I did some programm
Hi again,
[root@xen-blade05 ~]# virsh pool-info rbd
Name: rbd
UUID: ebc61120-527e-6e0a-efdc-4522a183877e
State: running
Persistent: no
Autostart: no
Capacity: 5.28 TiB
Allocation: 16.99 GiB
Available: 5.24 TiB
I managed to get it running. How
Hi,
thanks a lot for your answer. I was successfully able to create the
storage pool with virsh.
However it is not listed when I issue a virsh pool-list:
Name State Autostart
-
If I try to add it again:
[root@xen-blade05 ~]# virsh
Hi,
On 07/21/2013 08:14 AM, Sébastien RICCIO wrote:
Hi !
I'm currently trying to get the xenserver on centos 6.4 tech preview
working against a test ceph cluster and having the same issue.
Some infos: the cluster is named "ceph", the pool is named "rbd".
ceph.xml:
rbd
ceph
Hi !
I'm currently trying to get the xenserver on centos 6.4 tech preview
working against a test ceph cluster and having the same issue.
Some infos: the cluster is named "ceph", the pool is named "rbd".
ceph.xml:
rbd
ceph
secret.xml:
client.admin
[root@xen-b
Hi John,
Could you try without the cat'ing and such?
Could you also try this:
$ virsh secret-define secret.xml
$ virsh secret-set-value
$ virsh pool-create ceph.xml
Could you post both XML files and not use any Xen commands like 'xe'?
I want to verify where this problem is.
Wido
On 07/11/
Wido, Thanks! I tried again with your command syntax but the result is the
same.
[root@xen02 ~]# virsh secret-set-value $(cat uuid) $(cat client.admin.key)
Secret value set
[root@xen02 ~]# xe sr-create type=libvirt name-label=ceph
device-config:xml-filename=ceph.xml
Error code: libvirt
Error para
Hi.
So, the problem here is a couple of things.
First: libvirt doesn't handle RBD storage pools without auth. That's my
bad, but I never resolved that bug: http://tracker.ceph.com/issues/3493
For now, make sure cephx is enabled.
Also, the commands you are using don't seem to be right.
It sh
Hi Dave, Thank you so much for getting back to me.
The command returns the same errors:
[root@xen02 ~]# virsh pool-create ceph.xml
error: Failed to create pool from ceph.xml
error: Invalid secret: virSecretFree
[root@xen02 ~]#
The secret was pre-created for the user admin that I use elsewhere wi
[sorry I didn't manage to reply to the original message; I only just joined
this list.
Sorry if this breaks your threading!]
On 10 Jul 2013 at 16:01 John Shen wrote:
> I was following the tech preview of libvirt/ceph integration in xenserver,
> but ran
> into an issue with ceph auth in setting
I was following the tech preview of libvirt/ceph integration in xenserver,
but ran into an issue with ceph auth in setting up the SR. Any help would
be greatly appreciated.
The uuid was generated per: http://eu.ceph.com/docs/wip-dump/rbd/libvirt/
According to Inktank, the storage pool auth syntax differs