Hello,
I have imported a RAW image into the rbd 'images' pool and created a snapshot
of that image... when I tried to set this snapshot to protected, it failed
with the error below.
==
rbd --pool images snap protect --snap snap
a88c6600-5781-475c-8806-9723a976425c rbd: protecting snap failed: (38)
> On 13 April 2016 at 9:05, M Ranga Swami Reddy wrote:
>
>
> Hello,
>
> I have imported a RAW image into the rbd 'images' pool and created a snapshot
> of that image... when I tried to set this snapshot to protected, it failed
> with the error below.
>
> ==
> rbd --pool images snap protect --s
Hi,
As I am using the RAW format, it is format 1:
rbd info images/a88c6600-5781-475c-8806-9723a976425c
rbd image 'a88c6600-5781-475c-8806-9723a976425c':
size 31744 MB in 7936 objects
order 22 (4096 kB objects)
block_name_prefix: rb.0.26f756.238e1f29
format: 1
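That "format: 1" line is the likely cause: snapshot protection (needed for
cloning) is only supported on format 2 images, and protecting a snapshot of a
format 1 image fails with exactly this kind of "(38)" error. Below is a
minimal sketch of the format 2 workflow using the rbd Python bindings; the
pool, image and snapshot names are made up for illustration:

    import rados
    import rbd

    # Connect with the default config; 'images'/'demo-image'/'demo-snap'
    # are illustrative names only.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('images')
    try:
        # old_format=False creates a format 2 image, which supports
        # snapshot protection and cloning; format 1 does not.
        rbd.RBD().create(ioctx, 'demo-image', 1 * 1024 ** 3, old_format=False)
        image = rbd.Image(ioctx, 'demo-image')
        try:
            image.create_snap('demo-snap')
            image.protect_snap('demo-snap')              # fails on format 1
            print(image.is_protected_snap('demo-snap'))  # True
        finally:
            image.close()
    finally:
        ioctx.close()
        cluster.shutdown()

On the command line the equivalent is creating the image with
"rbd create --image-format 2" (or re-importing the RAW data into a format 2
image) before snapshotting and protecting.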
On We
Hello
On Mon, 11 Apr 2016 at 22:08 hp cre wrote:
> Hey James,
> Did you check my steps? What did you do differently that worked for you?
>
Your steps look OK to me; I did pretty much the same, but with three nodes
instead of a single node - but I'm scratching my head as to why I don't see
the s
Hi,
Maybe you should clarify what a "standard" install means for both of you.
Is it minimal? Or some other selectable package set?
Did you install via Netinstall or via DVD?
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
IP-Interactive
mailto:i...@ip-interactive.de
Address:
IP
Hi All,
We would like to deploy Ceph and we would need to use CephFS internally,
but of course without any compromise on data durability.
The setup would include five nodes, two monitors and three OSDs, so data
would be redundant (we would add the MDS for CephFS, of course).
I would like to unde
> On 13 April 2016 at 9:25, M Ranga Swami Reddy wrote:
>
>
> Hi,
> As I am using the RAW format, it is format 1:
>
> rbd info images/a88c6600-5781-475c-8806-9723a976425c
> rbd image 'a88c6600-5781-475c-8806-9723a976425c':
> size 31744 MB in 7936 objects
> order 22 (4096 kB obje
I'll give it one more try.
I tried to inspect and follow the code for ceph-deploy, though I'm no
Python developer.
It seems that in hosts/debian/__init__.py, line number 20, if
distro.lower == ubuntu then it will return upstart.
Maybe that's the problem?
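For context, here is a rough sketch of the kind of init-selection logic being
described, plus the release check the later fix adds. This is illustrative
only, not the actual ceph-deploy source; the function and parameter names are
assumed:

    # Illustrative sketch only -- not the real ceph-deploy code.
    def choose_init(distro_name, release):
        """Pick the init system to drive on a Debian-family host."""
        if distro_name.lower() != 'ubuntu':
            return 'sysvinit'
        major, minor = (int(x) for x in release.split('.')[:2])
        # The behaviour described above always returned 'upstart' for Ubuntu;
        # Ubuntu 15.04 and later (including 16.04 "xenial") boot with systemd,
        # so a release check along these lines is what the fix introduces.
        if (major, minor) >= (15, 4):
            return 'systemd'
        return 'upstart'

    # e.g. choose_init('Ubuntu', '16.04') -> 'systemd'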
On 13 April 2016 at 10:19, James Pag
13.04.2016 11:31, Vincenzo Pii wrote:
The setup would include five nodes, two monitors and three OSDs, so
data would be redundant (we would add the MDS for CephFS, of course).
You need an odd number of mons. In your case I would set up mons on all 5
nodes, or at least on 3 of them.
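The underlying reason is quorum arithmetic: monitors need a strict majority to
agree, so an even count adds a failure point without adding failure tolerance.
A small illustration in plain Python, with the monitor counts chosen just for
the example:

    # Monitors form a quorum with a strict majority; count how many
    # mon failures each cluster size survives. Illustrative arithmetic only.
    def failures_tolerated(num_mons):
        majority = num_mons // 2 + 1
        return num_mons - majority

    for n in (2, 3, 4, 5):
        print(n, "mons: quorum", n // 2 + 1, "-> tolerates",
              failures_tolerated(n), "failure(s)")
    # 2 mons tolerate 0 failures, 3 tolerate 1, 4 still only 1, 5 tolerate 2.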
To answer Oliver's question, a standard Ubuntu server install is just the
server ISO installed in automated mode. No software-specific options are
chosen, except in tasksel, where I choose OpenSSH server only.
On 13 April 2016 at 10:49, hp cre wrote:
> I'll give it one more try.
>
> I trie
On Wed, 13 Apr 2016 11:51:08 +0300 Oleksandr Natalenko wrote:
> 13.04.2016 11:31, Vincenzo Pii wrote:
> > The setup would include five nodes, two monitors and three OSDs, so
> > data would be redundant (we would add the MDS for CephFS, of course).
>
> You need an odd number of mons. In your case
OK - so my test was with the master branch of ceph-deploy which does work
correctly - specifically:
https://github.com/ceph/ceph-deploy/commit/b796a45b119afb9186301a868c25a9ba70642891
https://github.com/ceph/ceph-deploy/commit/374088665e43654e97512ab40f2cc141e238f21c
are post .31 so only the basi
> On 13 Apr 2016, at 10:55, Christian Balzer wrote:
>
> On Wed, 13 Apr 2016 11:51:08 +0300 Oleksandr Natalenko wrote:
>
>> 13.04.2016 11:31, Vincenzo Pii wrote:
>>> The setup would include five nodes, two monitors and three OSDs, so
>>> data would be redundant (we would add the MDS for CephFS,
I am using version .31 from either the ceph repo or the one you updated in
the ubuntu repo.
It seems that these commits were not included in version .31?
On 13 April 2016 at 10:58, James Page wrote:
> OK - so my test was with the master branch of ceph-deploy which does work
> correctly - spec
Any direct experience with CephFS?
I haven't tried anything newer than Hammer, but in Hammer CephFS is unable
to apply back-pressure to very active clients. For example, rsyncing lots of
files to a Ceph mount could result in MDS log overflow and OSD slow requests,
especially if the MDS log is located on SSD and
On Wed, 13 Apr 2016 at 10:09 hp cre wrote:
> I am using version .31 from either the ceph repo or the one you updated in
> the ubuntu repo.
> It seems that these commits were not included in version .31?
>
Yes - they post-date the .31 release; a patched version of .31 should be in
Xenial shortly.
Great! Please let me know when the patched version is up.
Thanks for your help, much appreciated.
On 13 April 2016 at 11:19, James Page wrote:
> On Wed, 13 Apr 2016 at 10:09 hp cre wrote:
>
>> I am using version .31 from either the ceph repo or the one you updated
>> in the ubuntu repo.
>> It s
>>Based on discussion with them at Ceph Day in Tokyo, JP, they have their own
>>frozen Ceph repository.
>>And they've been optimizing the code with their own team to meet their
>>requirements.
>>AFAICT they have not made any PRs.
Thanks for the info
@cc bspark8.sk.com: maybe you can give us m
FYI, I tested the ceph-deploy master branch and it worked fine.
On 13 April 2016 at 11:28, hp cre wrote:
> Great! Please let me know when the patched version is up.
>
> Thanks for your help, much appreciated.
>
> On 13 April 2016 at 11:19, James Page wrote:
>
>> On Wed, 13 Apr 2016 at 10:09 hp cre
On Wed, 13 Apr 2016, Christian Balzer wrote:
> > > Recently we discovered an issue with the long object name handling
> > > that is not fixable without rewriting a significant chunk of
> > > FileStore's filename handling. (There is a limit in the amount of
> > > xattr data ext4 can store in the ino
Hi,
On 13.04.2016 at 04:29, Christian Balzer wrote:
> On Tue, 12 Apr 2016 09:00:19 +0200 Michael Metz-Martini | SpeedPartner
> GmbH wrote:
>> On 11.04.2016 at 23:39, Sage Weil wrote:
>>> ext4 has never been recommended, but we did test it. After Jewel is
>>> out, we would like to explicitly recomm
On Tue, 12 Apr 2016, Jan Schermer wrote:
> Who needs to have exactly the same data in two separate objects
> (replicas)? Ceph needs it because of "consistency", but the app (VM
> filesystem) is fine with whatever version, because the flush didn't
> happen (if it had, the contents would be the same).
The patched version is now in Xenial - Alfredo is looking to produce a new
release soon...
On Wed, 13 Apr 2016 at 13:00 hp cre wrote:
> fyi, i tested ceph-deploy master branch and it worked fine.
>
> On 13 April 2016 at 11:28, hp cre wrote:
>
>> Great! Please let me know when the patched version
On Wed, 13 Apr 2016, Jan Schermer wrote:
> I apologise, I probably should have dialed it down a bit.
> I'd like to personally apologise to Sage, who has been so patient with my
> ranting.
No worries :)
> I just hope you don't forget about the measly RBD users like me (I'd
> guesstimate a silent 90%+
Hello,
On 11/04/2016 23:39, Sage Weil wrote:
> [...] Is this reasonable? [...]
Warning: I'm just a Ceph user and definitely a non-expert one.
1. Personally, if you look at the documentation and read the mailing list
and/or IRC a little, it seems to me _clear_ that ext4 is not recommended
even if the
I have implemented a 171TB CephFS using Infernalis recently (it is set up so I
can grow it to just under 2PB).
I tried using Jewel, but it gave me grief, so I will wait on that.
I am migrating data from a Lustre filesystem and so far it seems OK. I have not
put it into production yet, but will be tes
Hi everyone,
The third (and likely final) Jewel release candidate is out. We have a
very small number of remaining blocker issues and a bit of final polish
before we publish Jewel 10.2.0, probably next week.
There are no known issues with this release that are serious enough to
warn about here
On Wed, Apr 13, 2016 at 3:02 PM, Sage Weil wrote:
> Hi everyone,
>
> The third (and likely final) Jewel release candidate is out. We have a
> very small number of remaining blocker issues and a bit of final polish
> before we publish Jewel 10.2.0, probably next week.
>
> There are no known issues
On Wed, 13 Apr 2016 08:30:52 -0400 (EDT) Sage Weil wrote:
> On Wed, 13 Apr 2016, Christian Balzer wrote:
> > > > Recently we discovered an issue with the long object name handling
> > > > that is not fixable without rewriting a significant chunk of
> > > > FileStore's filename handling. (There is
Hello,
[reducing MLs to ceph-user]
On Wed, 13 Apr 2016 14:51:58 +0200 Michael Metz-Martini | SpeedPartner
GmbH wrote:
> Hi,
>
> On 13.04.2016 at 04:29, Christian Balzer wrote:
> > On Tue, 12 Apr 2016 09:00:19 +0200 Michael Metz-Martini | SpeedPartner
> > GmbH wrote:
> >> Am 11.04.2016 um 23:3
Hi
I am thinking of using Ceph to replicate some non-critical stats data from
our system for redundancy purposes. I have the following questions:
- We do not want to write data through Ceph, but just use it as a hook into
our current DB to make it replicate data asynchronously at periodic
intervals on a few machin