Hi,
I'm looking for some pros and cons related to the RBD stripe/chunk size indicated
by the image order number. The default is 4MB (order 22), but OpenStack uses 8MB
(order 23) as its default. If we use a smaller size (lower order number),
isn't there a better chance that image objects are spread across OSDs and cached
Hi,
larger stripe size (to an extent) will generally improve large
sequential read and write performance. There's overhead, though. It
means more objects, which can slow things down at the filestore level
when PG splits occur, and also potentially means more inodes fall out of
cache, longer sy
On 06/16/2016 03:54 AM, Mark Nelson wrote:
Hi,
larger stripe size (to an extent) will generally improve large
sequential read and write performance.
Oops, I should have had my coffee. I missed a sentence here. Larger
stripe size will generally improve large sequential read and write
perfor
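For anyone who wants to experiment with the order, a minimal sketch (pool and
image names here are just placeholders, and exact flags may differ slightly by
release): the object size is fixed at image creation time, e.g.

  rbd create rbd/stripetest --size 10240 --order 23   # 8MB objects
  rbd info rbd/stripetest                             # should report order 23 (8192 kB objects)

Newer rbd versions also accept --object-size (e.g. 8M) instead of the raw order number.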
Hi,
it seems that using resize2fs on an rbd image generates lots of garbage
objects in ceph.
The experiment is:
1. Use resize2fs to extend a 50G rbd image A to a 400G image with ext4 format
in the VM.
2. Calculate the total object size in the rbd pool: 35GB (already divided by
the number of replicas).
3. Clone ImageB based on 4
resize2fs is somewhat incremental, I guess.
You may notice that, on a slow system, if you add a lot of space to a
large partition,
running resize2fs in one screen and watching df -h in another will show
you an incremental increase of disk space.
Maybe the discard option can help you with that.
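If the VM's disk is attached with discard support (e.g. virtio-scsi with
discard=unmap), a rough sketch of reclaiming the space from inside the guest
(the mount point is a placeholder):

  sudo fstrim -v /mnt/data                      # one-off: discard unused blocks back to RBD
  sudo mount -o discard /dev/vdb1 /mnt/data     # or continuous discard, at some overhead

Afterwards ceph df / rados df should show the pool usage shrinking as the freed
RBD objects are removed.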
Answering myself, and whoever may be interested. After some strace
and a closer look at the logs, I realized that the cluster knew different
fsids for my redeployed OSDs, and that I had not 'rm'-ed the
OSDs before re-adding them to the cluster.
So the fact is that Ceph does not update the OSD fsid
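For reference, the usual sequence for fully removing an OSD before redeploying
it (a sketch; <N> is the OSD id, and the stop command depends on the init system):

  ceph osd out <N>
  systemctl stop ceph-osd@<N>
  ceph osd crush remove osd.<N>
  ceph auth del osd.<N>
  ceph osd rm <N>

Only after the 'auth del' / 'osd rm' steps does a redeployed OSD get a fresh
entry (and fsid) in the cluster.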
2016-06-16 3:53 GMT+02:00 Christian Balzer :
> Gandalf, first read:
> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg29546.html
>
> And this thread by Nick:
> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg29708.html
Interesting reading. Thanks.
> Overly optimistic.
> In an
Hi,
aside from the question of the coolness factor of InfiniBand,
you should always also consider the question of replacing parts and
extending the cluster.
A 10G network environment is up to date currently, and will be for some
more years. You can easily get equipment for it, and the pricing gets
Hi,
thank you for the release!
http://docs.ceph.com/docs/master/_downloads/v10.2.2.txt
-> 404 Not Found (nginx/1.4.6, Ubuntu)
and
http://docs.ceph.com/docs/master/release-notes/#v10-2-2-jewel
links to
http://docs.ceph.com/docs/master/_downloads/v10.2.1.txt
For me the more detailed changelog would be
Hi All,
I have a repeatable condition: when the object count in a pool gets to
320-330 million, the object write time dramatically and almost instantly
increases by as much as 10X, exhibited by fs_apply_latency going from 10ms to
100s of ms.
Can someone point me in a direction / have an explanation?
Hi,
When an rgw service is started, the pools below are created by default.
.rgw.root
default.rgw.control
default.rgw.data.root
default.rgw.gc
default.rgw.log
When a swift user is created, some default pools are created. But I would
like to use "Pool_A" for the swift user.
From the client, when I run Cos
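One possible way to point the default placement at an existing pool, sketched
from the Jewel-era zone configuration (the zone name "default" and the JSON keys
are assumptions to verify against your setup):

  radosgw-admin zone get --rgw-zone=default > zone.json
  # edit zone.json: under placement_pools / default-placement, set "data_pool": "Pool_A"
  radosgw-admin zone set --rgw-zone=default --infile zone.json
  radosgw-admin period update --commit

New buckets created by the swift user should then store their data in Pool_A.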
Hi Wade,
What IO are you seeing on the OSD devices when this happens (see e.g.
iostat)? Are there short periods of high read IOPS where (almost) no
writes occur? What does your memory usage look like (including slab)?
Cheers,
On 16 June 2016 at 22:14, Wade Holler wrote:
> Hi All,
>
> I have a r
On Wed, Jun 15, 2016 at 5:19 AM, siva kumar <85s...@gmail.com> wrote:
> Yes, we need something similar to inotify/fanotify.
>
> came through link
> http://docs.ceph.com/docs/master/dev/osd_internals/watch_notify/?highlight=notify#watch-notify
>
> Just want to know if i can use this ?
>
> If yes means ho
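For a quick feel of watch/notify without writing librados code, the rados CLI
can exercise it (pool and object names are placeholders):

  # terminal 1: register a watch on an object and block, printing incoming notifies
  rados -p testpool watch myobject

  # terminal 2: send a notification to every watcher of that object
  rados -p testpool notify myobject "hello-watchers"

Note that this is per-object notification between RADOS clients, not a
filesystem-level inotify replacement.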
On Wed, Jun 15, 2016 at 10:21 AM, Kostis Fardelas wrote:
> Hello Jacob, Gregory,
>
> did you manage to start up those OSDs at last? I came across a very
> similar incident [1] (no flags preventing the OSDs from getting UP
> in the cluster though, no hardware problems reported) and I wonder if
>
On Wed, Jun 15, 2016 at 11:30 AM, Dan van der Ster wrote:
> Dear Ceph Community,
>
> Yesterday we had the pleasure of hosting Ceph Day Switzerland, and we
> wanted to let you know that the slides and videos of most talks have
> been posted online:
>
> https://indico.cern.ch/event/542464/timetabl
Hello,
On Thu, 16 Jun 2016 12:44:51 +0200 Gandalf Corvotempesta wrote:
> 2016-06-16 3:53 GMT+02:00 Christian Balzer :
> > Gandalf, first read:
> > https://www.mail-archive.com/ceph-users@lists.ceph.com/msg29546.html
> >
> > And this thread by Nick:
> > https://www.mail-archive.com/ceph-users@lis
Hi,
How can I take down an OSD and bring it back in RHEL 7.2 with ceph version 10.2.2?
sudo start ceph-osd id=1 fails with “sudo: start: command not found”.
I have 5 OSDs in each node and I want to take down one particular OSD (sudo stop
ceph-osd id=1 also fails) and see whether replicas are written to other
I see the same behavior with the threshold of around 20M objects for 4 nodes,
16 OSDs, 32TB, hdd-based cluster. The issue dates back to hammer.
Blairo,
That's right, I do see "lots" of READ IO! If I compare the "bad (330Mil)"
pool, with the new test (good) pool:
iostat while running to the "good" pool shows almost all writes.
iostat while running to the "bad" pool has VERY large read spikes, with
almost no writes.
Sounds like you have a
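If it does turn out to be filestore subdirectory splitting, the relevant knobs
live in ceph.conf under [osd] (a sketch; the values below are only an
illustration, not a recommendation):

  [osd]
  # a subdirectory splits at roughly abs(merge threshold) * 16 * split multiple files
  filestore merge threshold = 40
  filestore split multiple = 8

They are typically applied via an OSD restart, and larger values only postpone
the split work rather than eliminate it.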
RHEL 7.2 and Jewel should be using the systemd unit files by default, so you'd
do something like:
> sudo systemctl stop ceph-osd@
and then
> sudo systemctl start ceph-osd@
when you're done.
--
Joshua M. Boniface
Linux System Ærchitect
Sigmentation fault. Core dumped.
On 16/06/16 09:44 AM, Ka
Hi,
the Ceph documentation currently does not keep up with the development
of the software.
I advise you to always check the init/runlevel files responsible for your OS.
In the case of Red Hat 7, that's systemd.
So in that case a
systemctl -a | grep ceph
will show you all available ceph units.
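Putting the two replies together, for the osd id=1 from the original question
(a sketch; the same pattern works for any id):

  sudo systemctl stop ceph-osd@1
  ceph osd tree     # osd.1 should show as down shortly afterwards
  ceph -s           # watch the degraded/recovering PGs
  sudo systemctl start ceph-osd@1

The available unit instances can be listed with: systemctl list-units 'ceph-osd@*'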
A few questions.
First, is there a good step-by-step guide to setting up a caching tier with NVMe
SSDs that are on separate hosts? Is that even possible?
Second, what sort of performance are people seeing from caching
tiers/journaling on SSDs in Jewel?
Right now I am working on trying to find best
Hi,
I have the same problem with OSD disks not being mounted at boot on jessie with
ceph jewel.
The workaround is to re-add the 60-ceph-partuuid-workaround.rules file to udev:
http://tracker.ceph.com/issues/16351
Hi,
Same issue with CentOS 7; I also put this file back in /etc/udev/rules.d.
2016-06-16 12:54 GMT+02:00 Oliver Dzombic :
> aside from the question of the coolness factor of InfiniBand,
> you should always also consider the question of replacing parts and
> extending the cluster.
>
> A 10G network environment is up to date currently, and will be for some
> more years. You can
Hi all,
I am new to the rbd engine for fio, and ran into the following problems
when I try to run a 4k write with my rbd image:
rbd_iodepth32: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd,
iodepth=32
fio-2.11-17-ga275
Starting 1 process
rbd engine: RBD version: 0.1.8
rados_connect
Moving this over to ceph-user where it’ll get the eyeballs you need.
On Mon, Jun 13, 2016 at 2:58 AM, Marcus Strasser
wrote:
> Hello!
>
>
>
> I have a little test cluster with 2 servers. Each server has an OSD with 800
> GB; there is a 10 Gbps link between the servers.
>
> On a ceph-client I have
What is your fio script?
Make sure you do this:
1. Run, say, ‘ceph -s’ from the server you are trying to connect from and see if it is
connecting properly or not. If so, you don’t have any keyring issues.
2. Now, make sure you have given the following param values properly based on
your setup.
pool
This is the latest default kernel with CentOS 7. We also tried a newer
kernel (from elrepo), a 4.4, that has the same problem, so I don't think
that is it. Thank you for the suggestion though.
We upgraded our cluster to the 10.2.2 release today, and it didn't resolve
all of the issues. It's possi
This sounds an awful lot like a bug I've run into a few times (not
often enough to get a good backtrace out of the kernel or mds)
involving vim on a symlink to a file in another directory. It will
occasionally corrupt the symlink in such a way that the symlink is
unreadable, filling dmesg with:
On Fri, Jun 10, 2016 at 3:01 AM, Venkata Manojawa Paritala
wrote:
> Hello Friends,
>
> I am Manoj Paritala, working in Vedams Software Solutions India Pvt Ltd,
> Hyderabad, India. We are developing a POC with the below specification. I
> would like to know if it is technically possible to configur
I wanted to report back what the solution was to this problem. It appears
that I was running into this bug:
http://tracker.ceph.com/issues/16113
After running 'ceph osd unset sortbitwise' all the unfound objects were
found! Which makes me happy again. :)
Bryan
On 5/24/16, 4:27 PM, "Stillwel
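For anyone checking their own cluster, the flag is visible in the osdmap (a
quick sketch):

  ceph osd dump | grep flags     # shows e.g. "flags sortbitwise" when the flag is set
  ceph osd unset sortbitwise     # the workaround used above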
I'm probably misunderstanding the question but if you're getting 3GB/s from
your dd, you're already caching. Can you provide some more detail on what
you're trying to achieve?
On 16 Jun 2016 21:53, "Patrick McGarry" wrote:
> Moving this over to ceph-user where it’ll get the eyeballs you need.
>
>
Agree with David.
It's being cached; you can try:
- oflag options for dd
- monitor system cache during dd
- Karan -
On Fri, Jun 17, 2016 at 1:58 AM, David wrote:
> I'm probably misunderstanding the question but if you're getting 3GB/s
> from your dd, you're already caching. Can you provide some
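A sketch of what that looks like in practice (the file path and sizes are
placeholders); bypassing the page cache on the write side usually brings the
number down to what the cluster can actually sustain:

  # buffered write: mostly measures the client page cache
  dd if=/dev/zero of=/mnt/rbd/testfile bs=4M count=1024

  # direct I/O plus a final flush: much closer to real RBD write throughput
  dd if=/dev/zero of=/mnt/rbd/testfile bs=4M count=1024 oflag=direct conv=fdatasync

  # watch the cache grow and shrink while the test runs
  watch -n1 'free -m'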
Hi Somnath,
Thank you for your reply!!
The fio script is:
[global]
#logging
#write_iops_log=write_iops_log
#write_bw_log=write_bw_log
#write_lat_log=write_lat_log
ioengine=rbd
clientname=client.admin
pool=ecssdcache
rbdname=imagecacherbd
invalidate=0
rw=randwrite
bs=4k
[rbd_iodepth32]
On Thu, Jun 16, 2016 at 8:14 PM, Mavis Xiang wrote:
> clientname=client.admin
Try "clientname=admin" -- I think it's treating the client "name" as
the "id", so specifying "client.admin" is really treated as
"client.client.admin".
--
Jason
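For completeness, a minimal version of the job file with that one change
(everything else restated from Mavis' original; iodepth taken from the fio
output above):

  [global]
  ioengine=rbd
  # "admin", not "client.admin" -- fio prepends the "client." part itself
  clientname=admin
  pool=ecssdcache
  rbdname=imagecacherbd
  invalidate=0
  rw=randwrite
  bs=4k
  iodepth=32
  [rbd_iodepth32]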
Hello,
as others already stated, your numbers don't add up or make sense.
More below.
On Thu, 16 Jun 2016 16:53:10 -0400 Patrick McGarry wrote:
> Moving this over to ceph-user where it’ll get the eyeballs you need.
>
> On Mon, Jun 13, 2016 at 2:58 AM, Marcus Strasser
> wrote:
> > Hello!
> >
this really did the trick! Thank you guys!
Best,
Mavis
On Thu, Jun 16, 2016 at 8:18 PM, Jason Dillaman wrote:
> On Thu, Jun 16, 2016 at 8:14 PM, Mavis Xiang wrote:
> > clientname=client.admin
>
> Try "clientname=admin" -- I think it's treating the client "name" as
> the "id", so specifying "cl
Hello,
On Thu, 16 Jun 2016 15:31:13 + Tim Gipson wrote:
> A few questions.
>
> First, is there a good step-by-step guide to setting up a caching tier with
> NVMe SSDs that are on separate hosts? Is that even possible?
>
Yes. And with a cluster of your size that's the way I'd do it.
Larger clust
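The basic wiring, assuming the base pool is called "rbd" and an NVMe-backed pool
"nvme-cache" has already been created on the NVMe hosts via its own CRUSH rule
(a sketch of the standard tiering commands; the sizing values are placeholders
to tune):

  ceph osd tier add rbd nvme-cache
  ceph osd tier cache-mode nvme-cache writeback
  ceph osd tier set-overlay rbd nvme-cache
  ceph osd pool set nvme-cache hit_set_type bloom
  ceph osd pool set nvme-cache target_max_bytes 500000000000
  ceph osd pool set nvme-cache cache_target_dirty_ratio 0.4
  ceph osd pool set nvme-cache cache_target_full_ratio 0.8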
On Fri, Jun 17, 2016 at 5:18 AM, Adam Tygart wrote:
> This sounds an awful lot like a bug I've run into a few times (not
> often enough to get a good backtrace out of the kernel or mds)
> involving vim on a symlink to a file in another directory. It will
> occasionally corrupt the symlink in suc
Hello devs and other sage(sic) people,
Ceph 0.94.5, cache tier in writeback mode.
As mentioned before, I'm running a cron job every day at 23:40 dropping
the flush dirty target by 4% (0.60 to 0.56) and then re-setting it to the
previous value 10 minutes later.
The idea is to have all the flushin
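In case anyone wants to copy the trick, a sketch of the two cron entries (the
pool name "cache" and the file path are placeholders, ratios as described above):

  # /etc/cron.d/ceph-cache-flush
  40 23 * * * root ceph osd pool set cache cache_target_dirty_ratio 0.56
  50 23 * * * root ceph osd pool set cache cache_target_dirty_ratio 0.60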
Hello,
I don't have anything running Jewel yet, so this is for devs and people
who have played with bluestore or read the code.
With filestore, Ceph benefits from ample RAM, both in terms of pagecache
for reads of hot objects and SLAB to keep all the dir-entries and inodes
in memory.
With blues
On Fri, Jun 17, 2016 at 5:03 AM, Jason Gress wrote:
> This is the latest default kernel with CentOS7. We also tried a newer
> kernel (from elrepo), a 4.4 that has the same problem, so I don't think
> that is it. Thank you for the suggestion though.
>
> We upgraded our cluster to the 10.2.2 relea
According to Sage [1], Bluestore makes use of the pagecache. I don't
believe read-ahead is a filesystem tunable in Linux; it is set on the
block device itself, and therefore read-ahead shouldn't be an issue.
I'm not familiar enough with Bluestore to comment on the rest.
[1] http://www.spinics.net/lists
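For reference, read-ahead can be checked and raised per device; a quick sketch
for an OSD data disk (the device name is a placeholder):

  cat /sys/block/sdb/queue/read_ahead_kb          # typically 128 by default
  echo 4096 > /sys/block/sdb/queue/read_ahead_kb
  # or equivalently: blockdev --setra 8192 /dev/sdb   (value in 512-byte sectors)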
Hello Adam,
On Thu, 16 Jun 2016 23:40:26 -0500 Adam Tygart wrote:
> According to Sage [1], Bluestore makes use of the pagecache. I don't
> believe read-ahead is a filesystem tunable in Linux; it is set on the
> block device itself, and therefore read-ahead shouldn't be an issue.
>
Thanks for that li
Hi,
just to verify this:
no symlink usage == no problem/bug
right?
--
Best regards,
Oliver Dzombic