I am running a Ceph benchmark on a 3-node cluster.
I am seeing that the bandwidth goes down and the latencies go
up beyond 16k.
What could be going on? Is there anything I should check?
Thanks in advance for your help.
The hard disk can handle it (dd below), and it is a 1Gb network.
vjujjuri@rg
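For what it's worth, a minimal sketch of how this is often narrowed down; the pool name "bench", the thread count, and the paths/addresses below are assumptions, not taken from the report above:

# sweep rados bench across block sizes to see where the knee is
for bs in 4096 16384 65536 262144; do
    rados bench -p bench 30 write -b $bs -t 16 --no-cleanup
done
rados -p bench cleanup

# then check a single disk and the 1Gb link in isolation
dd if=/dev/zero of=/mnt/osd-disk/ddtest bs=1M count=1024 oflag=direct   # path is a placeholder
iperf -c 10.0.0.2                                                       # hypothetical peer node address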
Thanks for the information Italo.
I think RGW should support all of its pools on top of an EC backend; I'm not sure
whether this is because of bucket-index sharding or not. You should probably raise
a defect with the community.
Regards
Somnath
From: Italo Santos [mailto:okd...@gmail.com]
Sent: Tuesday, May 05, 201
I use RGW with .rgw.buckets as an EC pool and it works fine as well; I’m able to
reach ~300MB/s using a physical RGW server with 4 OSD nodes with SAS 10K drives
w/out SSD journal.
Also, I’ve tested creating all the other pools as EC pools too, but the RGW daemon
doesn’t start, so I realised that the only p
Hi,
I am planning to set up RGW on top of an erasure-coded pool. RGW stores all of its
data in the .rgw.buckets pool, and I am planning to configure this pool as
erasure-coded. I think configuring all the other RGW pools as replicated should be
fine as they don't store a lot of data.
Please let me know if
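For what it's worth, a rough sketch of what that setup could look like on Hammer; the profile name, the k/m values, and the PG counts below are assumptions for illustration, not recommendations:

# erasure-code profile and an EC-backed data pool for RGW
ceph osd erasure-code-profile set rgw-ec k=4 m=2 ruleset-failure-domain=host
ceph osd pool create .rgw.buckets 128 128 erasure rgw-ec
# the small metadata pools (index, .rgw, .users, ...) stay replicated
ceph osd pool create .rgw.buckets.index 32 32 replicated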
On 05/05/2015 08:55 AM, Andrija Panic wrote:
> Hi,
>
> small update:
>
> in 3 months - we lost 5 out of 6 Samsung 128GB 850 PROs (just a few days in
> between each SSD death) - can't believe it - NOT due to wearing out... I
> really hope we got a defective series from the supplier...
>
That's ridiculou
This is the first development release for the Infernalis cycle, and the
first Ceph release to sport a version number from the new numbering
scheme. The "9" indicates this is the 9th release cycle--I (for
Infernalis) is the 9th letter. The first "0" indicates this is a
development release ("1"
On 05/05/2015 18:17, Lincoln Bryant wrote:
Hello all,
I'm seeing some warnings regarding trimming and cache pressure. We're running
0.94.1 on our cluster, with erasure coding + cache tiering backing our CephFS.
health HEALTH_WARN
mds0: Behind on trimming (250/30)
> On 05/05/2015, at 18.52, Sage Weil wrote:
>
> On Tue, 5 May 2015, Tony Harris wrote:
>> So with this, will even numbers then be LTS? Since 9.0.0 is following
>> 0.94.x/Hammer, and every other release is normally LTS, I'm guessing 10.x.x,
>> 12.x.x, etc. will be LTS...
>
> It looks that way n
Robert,
I did try that without success.
The error was:
Invalid command: missing required parameter srcpool()
Upon "debian112"'s recommendation on IRC channel and looking at this
post:
http://cephnotes.ksperis.com/blog/2014/10/29/remove-pool-without-name
I 've used the command:
rados rmpoo
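For reference, if I recall correctly the approach that post describes is essentially the following (destructive, so use with care; it addresses the nameless pool by passing an empty name twice):

rados rmpool "" "" --yes-i-really-really-mean-it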
Can you try
ceph osd pool rename " " new-name
On Tue, May 5, 2015 at 12:43 PM, Georgios Dimitrakakis
wrote:
>
> Hi all!
>
> Somehow I have a pool without a name...
>
> $ ceph osd lspools
> 3 data,4 metadata,5 rbd,6 .rgw,7 .rgw.control,8 .rgw.gc,9 .log,10
> .intent-log,11 .usage,12 .users,13 .u
Hello Yehuda and the rest of the mailing list.
My main question currently is why are the bucket index and the object
manifest ever different? Based on how we are uploading data I do not
think that the rados gateway should ever know the full file size without
having all of the objects withi
Hi all!
Somehow I have a pool without a name...
$ ceph osd lspools
3 data,4 metadata,5 rbd,6 .rgw,7 .rgw.control,8 .rgw.gc,9 .log,10
.intent-log,11 .usage,12 .users,13 .users.email,14 .users.swift,15
.users.uid,16 .rgw.root,17 .rgw.buckets.index,18 .rgw.buckets,19
.rgw.buckets.extra,20 volum
Yes, so it seems. The librados::nobjects_begin() call expects at least a Hammer
(0.94) backend. Probably need to add a try/catch there to catch this issue, and
maybe see if using a different API would be more compatible with older
backends.
Yehuda
- Original Message -
> From: "Anthon
Hello all,
I'm seeing some warnings regarding trimming and cache pressure. We're running
0.94.1 on our cluster, with erasure coding + cache tiering backing our CephFS.
health HEALTH_WARN
mds0: Behind on trimming (250/30)
mds0: Client 74135 failing to respond to cache
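In case it is useful, a hedged way to inspect the journal state and loosen the trimming limit on a Hammer MDS; the daemon id "0" and the value 200 are assumptions:

# look at the mds_log counters (segments, expired/trimmed)
ceph daemon mds.0 perf dump
# list sessions to identify the client that is not releasing caps
ceph daemon mds.0 session ls
# temporarily raise the segment limit the warning is compared against
ceph tell mds.0 injectargs '--mds_log_max_segments=200'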
Unfortunately it immediately aborted (running against a 0.80.9 Ceph).
Does the Ceph backend also have to be at the 0.94 level?
The last error was:
-3> 2015-05-06 01:11:11.710947 7f311dd15880 0 run(): building
index of all objects in pool
-2> 2015-05-06 01:11:11.710995 7f311dd15880 1 --
10.200.3.92:0/1001510 --
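A quick hedged way to confirm what each side is running (osd.0 is just an example id):

ceph tell osd.0 version   # version the cluster's OSDs are running
ceph -v                   # version of the locally installed ceph client packages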
On Tue, 5 May 2015, Tony Harris wrote:
> So with this, will even numbers then be LTS? Since 9.0.0 is following
> 0.94.x/Hammer, and every other release is normally LTS, I'm guessing 10.x.x,
> 12.x.x, etc. will be LTS...
It looks that way now, although I can't promise the pattern will hold!
sage
So with this, will even numbers then be LTS? Since 9.0.0 is following
0.94.x/Hammer, and every other release is normally LTS, I'm guessing
10.x.x, 12.x.x, etc. will be LTS...
On Tue, May 5, 2015 at 11:45 AM, Sage Weil wrote:
> On Tue, 5 May 2015, Joao Eduardo Luis wrote:
> > On 05/04/2015 05:09
On Tue, 5 May 2015, Joao Eduardo Luis wrote:
> On 05/04/2015 05:09 PM, Sage Weil wrote:
> > The first Ceph release back in Jan of 2008 was 0.1. That made sense at
> > the time. We haven't revised the versioning scheme since then, however,
> > and are now at 0.94.1 (first Hammer point release).
Just another quick question,
Do you know if your RAID controller is disabling the local disk write caches?
I'm wondering how this corruption occurred, and whether this is a problem that is
specific to your hardware/software config or a general Ceph issue that makes
it vulnerable to sudden power lo
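For a plain SATA/SAS disk visible to the OS, the write cache setting can be checked along these lines (the device name is an example; disks hidden behind a RAID controller need the controller's own CLI instead):

hdparm -W /dev/sdb            # shows "write-caching = 0/1 (off/on)"
smartctl -g wcache /dev/sdb   # same information via smartmontools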
Hello everyone,
I recently had to install Ceph Giant on Ubuntu 15.04 and had to solve
some problems, so here is the best way to do it.
1) Replace systemd with upstart in your fresh Ubuntu 15.04 install:
apt-get update
apt-get install upstart
apt-get install upstart-sysv (remove systemd and repla
Hi,
small update:
in 3 months - we lost 5 out of 6 Samsung 128GB 850 PROs (just a few days in
between each SSD death) - can't believe it - NOT due to wearing out... I
really hope we got a defective series from the supplier...
Regards
On 18 April 2015 at 14:24, Andrija Panic wrote:
> yes I know, but t
Gregory Farnum <g...@gregs42.com> wrote:
Oh. That's strange; they are all mapped to two OSDs but are placed on
two different ones. I'm...not sure why that would happen. Are these
PGs active? What's the full output of "ceph -s"?
Those 4 PG’s went inactive at some point, and we had the luxu
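For anyone following along, a few commands that usually help when PGs go inactive (the PG id 2.3f is a made-up example):

ceph -s                       # overall cluster state
ceph health detail            # lists the affected PGs
ceph pg dump_stuck inactive   # which PGs are stuck and on which OSDs
ceph pg 2.3f query            # per-PG detail: acting set, recovery state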
Can you try creating the .log pool?
Yehuda
- Original Message -
> From: "Anthony Alba"
> To: "Yehuda Sadeh-Weinraub"
> Cc: "Ben" , "ceph-users"
> Sent: Tuesday, May 5, 2015 3:37:15 AM
> Subject: Re: [ceph-users] Shadow Files
>
> ...sorry, clicked send too quickly
>
> /opt/ceph/bin/rados
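A minimal sketch of that, assuming a small PG count is acceptable for a log-only pool:

ceph osd pool create .log 8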
On 05/04/2015 05:09 PM, Sage Weil wrote:
> The first Ceph release back in Jan of 2008 was 0.1. That made sense at
> the time. We haven't revised the versioning scheme since then, however,
> and are now at 0.94.1 (first Hammer point release). To avoid reaching
> 0.99 (and 0.100 or 1.00?) we ha
Hi!
>Which ceph.conf do you talk about ?
>The one on host server (on which vm is running) ?
Yes, the ceph.conf on the client host, which is not part of a ceph cluster
(no OSD, no MON) and is used solely to run VMs with an RBD backend.
>Interesting, can you explain this please?
I think that libvir
Hi!
Sorry, I've found the reason for these strange results - rbd cache was enabled
in the local ceph.conf on the client node I used for testing. I removed it from
the config and got more sane results.
On all tests direct=1 iodepth=32 ioengine=aio fio=seqwr bs=4k sync=0
cache=wb -> iops=31700,bw=126Mb/s, 75%
I’ve live-migrated RBD images of our VMs (with ext4 FS) through our Proxmox PVE
cluster from one pool to another, and now it seems those devices are no longer as
sparse as before, i.e. pool usage has grown to almost the sum of the full image
sizes; wondering if there’s a way to untrim RBD images to become m
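If the goal is just to get the space back, a hedged sketch: run a discard from inside each guest, assuming the virtual disks are attached with discard support (e.g. virtio-scsi with discard=on in Proxmox):

# inside the VM, release unused ext4 blocks back to the RBD image
fstrim -v /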
Hi
Previously I had to delete one pool because of a mishap on my part. Now I need to
create the pool again and give it the same ID. How would one do that?
I assume my root problem is that, since I had to delete the images pool, the
base images the VMs use are missing. I have the images available i
I test performance from inside VM using fio and a 64G test file,
located on the same volume with VM's rootfs.
fio 2.0.8 from Debian Wheezy repos was running with cmdline:
#fio --filename=/test/file --direct=1 --sync=0 --rw=write --bs=4k --runtime=60 \
--ioengine=aio --iodepth=32 --time_based --siz
Hi,
rbd_cache is a client-side config option only,
so there is no need to restart the OSDs.
If you set cache=writeback in libvirt, it'll enable it,
so you don't need to set rbd_cache=true in ceph.conf
(it should override it).
You can verify it is enabled by doing a sequential write benchmark with a 4k
block size; you should have a
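One hedged way to verify what the client actually ended up with is to ask its admin socket; the socket path below is an assumption and requires an "admin socket" entry in the [client] section of ceph.conf on the hypervisor:

ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok config show | grep rbd_cache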
Hi!
After examining our running OSD configuration through an admin socket,
we suddenly noticed that the "rbd_cache" parameter is set to "false". Until
that moment I had supposed that the rbd cache is an entirely client-side feature,
and that it is enabled with the "cache=writeback" parameter in the libvirt VM XML
definiti
...sorry, clicked send too quickly
/opt/ceph/bin/radosgw-admin orphans find --pool=.rgw.buckets --job-id=abcd
ERROR: failed to open log pool ret=-2
job not found
On Tue, May 5, 2015 at 6:36 PM, Anthony Alba wrote:
> Hi Yehuda,
>
> First run:
>
> /opt/ceph/bin/radosgw-admin --pool=.rgw.buckets --j
Hi Yehuda,
First run:
/opt/ceph/bin/radosgw-admin --pool=.rgw.buckets --job-id=testing
ERROR: failed to open log pool ret=-2
job not found
Do I have to precreate some pool?
On Tue, May 5, 2015 at 8:17 AM, Yehuda Sadeh-Weinraub wrote:
>
> I've been working on a new tool that would detect leak
When I visit ceph.com, it returns an error, like this (screenshot attached).
Is this a problem on my end? How can I resolve it? Thanks.
Hi,
The cache doesn't give you any additional storage capacity, as the cache
can never store data that's not on the tier below it (or store more
writes than the underlying storage has room for).
As for how much you should go for... that's very much up to your use
case. Try to come up with an e
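For what it's worth, these are the knobs that usually bound how much of the cache pool actually gets used; the pool names "cold"/"hot" and the numbers are assumptions for illustration:

ceph osd tier add cold hot
ceph osd tier cache-mode hot writeback
ceph osd tier set-overlay cold hot
ceph osd pool set hot target_max_bytes 1099511627776    # ~1 TiB cap on cached data
ceph osd pool set hot cache_target_dirty_ratio 0.4      # start flushing at 40% of the cap
ceph osd pool set hot cache_target_full_ratio 0.8       # start evicting at 80% of the cap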
Hi folks,
one more question:
after some more internal discussions, I'm faced with the question of how an
SSD cache tier pool is counted towards the "overall" usable storage space.
And how "big" should I make an SSD cache pool?
From my understanding, the cache pool is not counted towards the over
Hi,
I want to sign up for an account on the Ceph wiki system, but I cannot find
the entry point; I can only find the "sign in" link on the page.
Can someone tell me why? Has the system been rejecting registrations recently?
Thanks
Wenjun Huang
On 05/05/15 06:30, Timofey Titovets wrote:
> Hi list,
> Excuse me, this is a bit off topic.
>
> @Lionel, if you use btrfs, did you already try to use btrfs compression for
> OSD?
> If yes, can you share your experience?
Btrfs compresses by default using zlib. We force lzo compression inst
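For reference, forcing lzo on a btrfs OSD mount is just a mount option; the device and mount point below are assumptions:

# /etc/fstab entry for the OSD data partition
/dev/sdb1  /var/lib/ceph/osd/ceph-0  btrfs  noatime,compress-force=lzo  0 0
# or apply to an already-mounted OSD (affects newly written data)
mount -o remount,compress-force=lzo /var/lib/ceph/osd/ceph-0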