I am pretty sure it's impossible, but I would like to make it 1000% sure.
Is it possible to recover a deleted rbd image from a ceph cluster?
Nasos Pan
Hi,
I performed an upgrade of our ceph cluster from Giant to the latest
Hammer (hammer branch in git). And although it seemed to work fine at
first, looking at the graphs this morning, I've noticed much higher write
activity on the drives, mostly the ones storing RGW
buckets (although that b
Hi Gregory, Matteo, list users
I have the exact same problem.
While the mds is in "rejoin" state, memory usage of the mds grows
slowly. It can take up to 20 min to reach 'active' state.
Is there something I can do to help ?
--
Thomas Lemarchand
Cloud Solutions SAS - Responsable des systèmes d'
Hi David,
Yes, you are right; I should have worded it as "no performance benefit".
There are certainly benefits with regard to density.
We could have gone for 1x P3700 per 12 spinners, but went with 1x P3700 per 6
spinners for fear of losing 12 OSDs at once.
Losing 12 OSDs at once may or may not be
Hi!
Just a quick question regarding mixed versions. So far a cluster is
running on 0.94.1-1trusty without Rados Gateway. Since the packages have
been updated in the meantime, installing radosgw now would entail
bringing a few updated dependencies along. OSDs and MONs on the nodes
that are to b
On 08-07-15 12:20, Daniel Schneller wrote:
> Hi!
>
> Just a quick question regarding mixed versions. So far a cluster is
> running on 0.94.1-1trusty without Rados Gateway. Since the packages have
> been updated in the meantime, installing radosgw now would entail
> bringing a few updated dependen
Thanks for the tip! It's weird -- on my first OSD, I get:
$ sudo ceph-osd -i 0 -f
...
2015-07-08 09:29:35.207874 7fbd3507d800 -1 ** ERROR: unable to open OSD
superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory
Indeed, /var/lib/ceph/osd/ceph-0/ is empty! Same problem on a
On 2015-07-08 10:34:14 +, Wido den Hollander said:
On 08-07-15 12:20, Daniel Schneller wrote:
Hi!
Just a quick question regarding mixed versions. So far a cluster is
running on 0.94.1-1trusty without Rados Gateway. Since the packages have
been updated in the meantime, installing radosgw n
On 08/07/15 10:30, Thomas Lemarchand wrote:
Hi Gregory, Matteo, list users
I have the exact same problem.
While the mds is in "rejoin" state, memory usage of the mds grows
slowly. It can take up to 20 min to reach 'active' state.
Is there something I can do to help ?
A region of MDS logs f
Ceph newbie here; ceph 0.94.2, CentOS 6.6 x86_64. Kernel 2.6.32.
Initial test cluster of five OSD nodes, 3 MON, 1 MDS. Working well. I was
testing the removal of two MONs, just to see how it works. The second MON
was stopped and removed: no problems. The third MON was stopped and
removed: appa
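For reference, removing a monitor boils down to something like the following;
"c" is just an example monitor name and the service command depends on the
init system:

service ceph stop mon.c        # stop the daemon first (sysvinit syntax)
ceph mon remove c              # drop it from the monmap
ceph quorum_status             # confirm the remaining monitors still form a quorum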
Hi All,
I’m perf testing a cluster again.
This time I have re-built the cluster and am filling it for testing.
On a 10 min run I get the following results from 5 load generators, each
writing through 7 IO contexts, with a queue depth of 50 async writes.
Gen1
Percentile 100 = 0.729775905609
Max
Hello,
I just wondered what unit the latency is given in by rados bench.
Total time run: 200.954688
Total writes made: 4989
Write size: 4194304
Bandwidth (MB/sec): 99.306
Stddev Bandwidth: 131.109
Max bandwidth (MB/sec): 464
Min bandwidth (MB/sec): 0
Average Laten
I think you're probably running into the internal PG/collection
splitting here; try searching for those terms and seeing what your OSD
folder structures look like. You could test by creating a new pool and
seeing if it's faster or slower than the one you've already filled up.
-Greg
On Wed, Jul 8,
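A quick way to run that comparison (pool name and PG counts below are only
examples):

ceph osd pool create perftest 256 256      # fresh pool, empty PG directories
rados bench -p perftest 60 write -t 16     # compare against the same run on the filled pool
ceph osd pool delete perftest perftest --yes-i-really-really-mean-it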
Hello,
On Wed, 08 Jul 2015 12:31:20 + Steffen Tilsch wrote:
> Hello,
>
> I just wondered what unit the latency is given in by rados bench.
>
You should be able to tell that by just observing it (and monitoring
things via ceph -w and atop or such). ^o^
> Total time run: 200.954688
>
If I create a new pool it is generally fast for a short amount of time.
Not as fast as if I had a blank cluster, but close to it.
Bryn
> On 8 Jul 2015, at 13:55, Gregory Farnum wrote:
>
> I think you're probably running into the internal PG/collection
> splitting here; try searching for those terms
On Tue, 23 Jun 2015, Samuel Just wrote:
> ObjectWriteOperations currently allow you to perform a clone_range from
> another object with the same object locator. Years ago, rgw used this
> as part of multipart upload. Today, the implementation complicates the
> OSD considerably, and it doesn't
On 8 July 2015 at 13:57, Christian Balzer wrote:
> On Wed, 08 Jul 2015 12:31:20 + Steffen Tilsch wrote:
>> I just wondered what unit the latency is given in by rados bench.
>>
> You should be able to tell that by just observing it (and monitoring
> things via ceph -w and atop or such). ^o^
Rat
Basically for each PG, there's a directory tree where only a certain
number of objects are allowed in a given directory before it splits into
new branches/leaves. The problem is that this has a fair amount of
overhead, and also there are extra associated dentry lookups to get at any
given object.
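The knobs involved are the filestore split/merge settings; roughly, a
subdirectory is split once it holds more than
filestore_split_multiple * abs(filestore_merge_threshold) * 16 objects.
They are normally set in ceph.conf under [osd] and picked up on OSD restart,
for example:

[osd]
filestore merge threshold = 40     # example values only
filestore split multiple = 8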
Holy mackerel :-) The culprit is OSD automount, which is not working!
Finally I came across
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg16597.html
After mounting and starting the 3 OSDs manually (here shown for the first
OSD)
# mount /dev/sdb2 /var/lib/ceph/osd/ceph-0
# start c
On Sun, Jul 5, 2015 at 5:37 AM, Michael Metz-Martini | SpeedPartner
GmbH wrote:
> Hi,
>
> after larger moves of several placement groups we tried to empty 3 of
> our 66 OSDs by slowly setting their weight to 0 within the crushmap.
>
> After move completed we're still experiencing a large amount
On Wed, Jul 1, 2015 at 5:47 PM, Burkhard Linke
wrote:
> Hi,
>
>
> On 07/01/2015 06:09 PM, Gregory Farnum wrote:
>>
>> On Mon, Jun 29, 2015 at 1:44 PM, Burkhard Linke
>> wrote:
>>>
>>> Hi,
>>>
>>> I've noticed that a number of placement groups in our setup contain
>>> objects,
>>> but no actual da
Hi
We are faced with uneven distribution of data on our production cluster
running Hammer (0.94.2).
We have 1056 OSDs running on 352 hosts in 10 racks. The failure domain is
set to rack. All hosts (except several in the ssd_root branch, which is not
used for RGW data placement) are in the same configurat
Regarding using spinning disks for journals, before I was able to put SSDs
in my deployment I came up with a somewhat novel journal setup that gave my
cluster way more life than having all the journals on a single disk, or
having the journal on the disk with the OSD. I called it "interleaved
journa
The biggest thing to be careful of with this kind of deployment is that
now a single drive failure will take out 2 OSDs instead of 1 which means
OSD failure rates and associated recovery traffic go up. I'm not sure
that's worth the trade-off...
Mark
On 07/08/2015 11:01 AM, Quentin Hartman wr
Hi,
I have created a ceph file system on a cluster running ceph v9.0.1 and have
enabled snapshots with the command:
ceph mds set allow_new_snaps true --yes-i-really-mean-it
On the top level of the ceph file system, I can cd into the hidden .snap
directory and I can create new directories with the
Have you tried “rmdir” instead of “rm -rf”?
Jan
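i.e., assuming the file system is mounted at /mnt/cephfs and the snapshot is
called "mysnap":

mkdir /mnt/cephfs/.snap/mysnap     # take a snapshot
rmdir /mnt/cephfs/.snap/mysnap     # remove it again; plain "rm -rf" is refused on snapshot dirs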
> On 08 Jul 2015, at 19:17, Eric Eastman wrote:
>
> Hi,
>
> I have created a ceph file system on a cluster running ceph v9.0.1 and have
> enabled snapshots with the command:
>
> ceph mds set allow_new_snaps true --yes-i-really-mean-it
>
> On th
Thank you!
That was the solution.
Eric
On Wed, Jul 8, 2015 at 12:02 PM, Jan Schermer wrote:
> Have you tried "rmdir" instead of "rm -rf"?
>
> Jan
>
> > On 08 Jul 2015, at 19:17, Eric Eastman
> wrote:
> >
> > Hi,
> >
> > I have created a ceph file system on a cluster running ceph v9.0.1 and
>
Is there a "classic" ceph cluster test matrix? I'm wondering what's done for
releases, i.e. sector sizes 4k, 128k, 1M, 4M? Sequential, random, 80/20 mix?
Number of concurrent IOs? I've seen some spreadsheets in the past, but can't find them.
Thanks,
Bruce
Hi Cephers,
Here is the problem I am facing.
I would like to use the striping feature with rbd.
When I create an image with striping arguments defined, I cannot map it:
[cephuser@node01 ~]$ rbd create ceph-client0-rbd0 --image-format 2 --size
10240 --stripe-unit 65536 --stripe-count 4
[cephuser@node0
Hi,
Is there any way to replace an OSD disk without removing the OSD from
crush, auth, ...
Just recreate the same OSD?
Stefan
Striping is not supported with kernel rbd yet.
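For comparison, an image created with the default striping layout should still
map (given a kernel recent enough for format 2 images), and "rbd info" shows
the striping parameters of the existing image. The second image name below is
just an example:

rbd create ceph-client0-rbd1 --image-format 2 --size 10240   # no --stripe-unit/--stripe-count
sudo rbd map ceph-client0-rbd1
rbd info ceph-client0-rbd0                                   # shows stripe unit / stripe count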
Thanks & Regards
Somnath
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Hadi
Montakhabi
Sent: Wednesday, July 08, 2015 12:56 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Cannot map rbd image with striping!
Hi Ce
Hi Bruce,
There's a Google doc that previously was public, but when it got moved to
RH's Google drive from Inktank's it got made private instead. It doesn't
appear that I can make it public now.
You can see the configuration in the CBT yaml files though up on github:
https://github.com/ceph/c
Thank you!
Is striping supported while using CephFS?
Peace,
Hadi
On Wed, Jul 8, 2015 at 2:59 PM, Somnath Roy wrote:
> Striping is not supported with kernel rbd yet..
>
>
>
> Thanks & Regards
>
> Somnath
>
>
>
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of *Hadi
Run 'ceph osd set noout' before replacing
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Stefan
Priebe
Sent: Wednesday, July 08, 2015 12:58 PM
To: ceph-users
Subject: [ceph-users] replace OSD disk without removing the osd from crush
Hi,
i
Hi,
Am 08.07.2015 um 22:03 schrieb Somnath Roy:
Run 'ceph osd set noout' before replacing
Sure, but that hasn't worked for me since firefly.
I did:
# set noout
# ceph stop osd.5
# removed disk
# inserted new disk
# format disk and mount disk
# start mkjournal mkkey mkfs
# remove old osd au
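Spelled out, the sequence looks roughly like this (osd.5 and /dev/sdb1 are
examples, the service commands depend on the init system, and whether the OSD
keeps its place in the crush map this way is exactly what seems to have
changed since firefly):

ceph osd set noout
service ceph stop osd.5                      # init-system dependent
# swap the disk, then rebuild the same OSD id:
mkfs.xfs -f /dev/sdb1
mount /dev/sdb1 /var/lib/ceph/osd/ceph-5
ceph-osd -i 5 --mkfs --mkkey --mkjournal
ceph auth del osd.5
ceph auth add osd.5 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-5/keyring
service ceph start osd.5
ceph osd unset noout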
I don't see it as being any worse than having multiple journals on a single
drive. If your journal drive tanks, you're out X OSDs as well. It's
arguably better, since the number of affected OSDs per drive failure is
lower. Admittedly, neither deployment is ideal, but it is an effective way to
get from
Yes, I am able to reproduce that too. Not sure if this is a bug or a change.
Thanks & Regards
Somnath
-Original Message-
From: Stefan Priebe [mailto:s.pri...@profihost.ag]
Sent: Wednesday, July 08, 2015 1:09 PM
To: Somnath Roy; ceph-users
Subject: Re: [ceph-users] replace OSD disk without
Mark,
Thank you very much. We're focusing on block performance currently. All of my
object-based testing has been done with rados bench, so I've yet to do anything
through RGW, but will need to be doing that soon. I also want to revisit
COSBench. I exercised it ~ a year ago and then decided to fo
On 09/07/15 00:03, Steve Thompson wrote:
Ceph newbie here; ceph 0.94.2, CentOS 6.6 x86_64. Kernel 2.6.32.
Initial test cluster of five OSD nodes, 3 MON, 1 MDS. Working well. I
was testing the removal of two MONs, just to see how it works. The
second MON was stopped and removed: no problems. The
Hi,
We are planning to build a Ceph cluster with RGW/S3 as the interface for user
access. We have petabytes of data in an NFS share which need to be moved to the
Ceph cluster, and that's why I need your valuable input on how to do that
efficiently. I am sure this is a common problem that RGW users i
It's really about 10 minutes of work to write a Python client to post
files into RGW/S3 (we use boto). Or you could use an S3 GUI client
such as Cyberduck.
The problem I am having, and which you should look out for, is that many
millions of objects in a single RGW bucket cause problems with
content
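If writing a client is overkill for a first test, a command-line tool like
s3cmd pointed at the RGW endpoint does the same kind of bulk upload (bucket
name and paths below are placeholders):

s3cmd --configure                               # enter access key, secret and the RGW host
s3cmd mb s3://nfs-migration
s3cmd sync /mnt/nfs/share/ s3://nfs-migration/  # recursive upload of a directory tree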
Thanks !
Yeah, I know the bucket index was a problem for scaling, but I thought it was
resolved with the sharded bucket index. We have yet to evaluate the performance
with sharded bucket indexes.
Regards
Somnath
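For reference, the Hammer-era setting only takes effect for buckets created
after it is in place; something like the following in ceph.conf on the gateway
(the section name depends on your rgw instance, and the shard count is just an
example):

[client.radosgw.gateway]
rgw override bucket index max shards = 8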
-Original Message-
From: Ben Hines [mailto:bhi...@gmail.com]
Sent: Wednesday, July 08, 201
Also recently updated to 94.2. I am also seeing a large difference
between my 'ceph df' and 'size_kb_actual' in the bucket stats. I would
assume the difference is objects awaiting gc, but 'gc list' prints
very little.
ceph df:
NAME    ID    USED    %USED    MAX AVAIL
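Something that may be worth checking is whether those objects are still sitting
in gc entries that are not yet due:

radosgw-admin gc list --include-all     # also show entries whose grace period has not expired
radosgw-admin gc process                # force a gc pass by hand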
There is an initial prototype of an NFS layer on top of RGW using Ganesha. Yehuda
can probably give an update on its status. The use case for it is exactly
as you describe: to allow you to migrate data off NFS shares to the
S3 object store. It's not going to be high-performance or feature-rich,
but hope
Thanks Neil. Yes, that will be a great addition to Ceph’s feature list.
For now, I should probably use the tool that Ben pointed out.
Regards
Somnath
From: Neil Levine [mailto:nlev...@redhat.com]
Sent: Wednesday, July 08, 2015 9:04 PM
To: Somnath Roy
Cc: ceph-users@lists.ceph.com; Yehuda Sadeh
Subjec
Hi again,
time is passing, and so is my budget :-/ so I have to recheck the options
for a "starter" cluster. An expansion next year, maybe for an OpenStack
installation or for more performance if demands rise, is possible. The
"starter" could always be used as a test cluster or a slow dark archive.
At the beginn