Hi,
We had a problem starting the OSD because the startup script doesn't
know where to find the key.
The default directory is /etc/ceph/dmcrypt-keys.
We left it at the default and it worked.
I haven't tried it, but maybe it can be solved using /etc/crypttab.
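A crypttab entry for that would look roughly like the following (untested; the mapping name, device path and key file are placeholders, and the options depend on whether the volume is plain dm-crypt or LUKS):

    # hypothetical /etc/crypttab entry -- adjust name, device, key file and options
    osd-sdb1  /dev/sdb1  /etc/ceph/dmcrypt-keys/<key-file>  luks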
Regards
On 22/01/2016 21:35, Reno Rainz wrote:
For object storage I'm not sure there is a best choice; each has its own
advantages and disadvantages.
If you need the space efficiency, then erasure is probably the best
candidate.
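As a rough illustration (profile name, k/m values, PG counts and pool name below are only examples), an erasure-coded pool is created along these lines:

    ceph osd erasure-code-profile set myprofile k=4 m=2 ruleset-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure myprofile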
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> M Ranga Swami
On 25.01.2016 at 08:54, Somnath Roy wrote:
> The ms_nocrc option has been changed to the following in Hammer:
>
> ms_crc_data = false
> ms_crc_header = false
If I add those, the OSDs / clients can't communicate any longer.
> Rest looks good, you need to tweak the shard/thread based on yo
If you are using kernel rbd clients, crc is mandatory. We have to keep the crc
on.
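Concretely, that just means leaving the messenger CRC options at their default values, e.g. in ceph.conf:

    [global]
    # defaults in Hammer -- keep these on whenever kernel clients are involved
    ms_crc_data = true
    ms_crc_header = true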
Varada
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Stefan Priebe - Profihost AG
> Sent: Monday, January 25, 2016 1:39 PM
> To: Somnath Roy ; ceph-user
Hi Cephers,
When a Ceph write is made, does it write to the FileStores of both the primary OSD
and the secondary OSDs before sending an ACK to the client, or does it write to the
journal of each OSD and send the ACK without writing to the FileStore?
I think it writes to the journal of every OSD, so using an SSD journal will
increase write performance.
On 25/01/2016 11:04, Sam Huracan wrote:
> Hi Cephers,
>
> When a Ceph write is made, does it write to the FileStores of both the primary OSD
> and the secondary OSDs before sending an ACK to the client, or does it write to the
> journal of each OSD and send the ACK without writing to the FileStore?
>
> I think it would write to jou
On Mon, Jan 25, 2016 at 3:43 PM, Burkhard Linke wrote:
> Hi,
>
> there's a rogue file in our CephFS that we are unable to remove. Access to
> the file (removal, move, copy, open etc.) results in the MDS starting to
> spill out the following message to its log file:
>
> 2016-01-25 08:39:09.623398 7
Thank you.
Yes, selection of an optimal pool type is not easy for object storage.
Thanks
Swami
On Mon, Jan 25, 2016 at 1:32 PM, Nick Fisk wrote:
> For object storage I'm not sure there is a best choice; each has its own
> advantages and disadvantages.
>
> If you need the space efficiency, then erasur
Hello,
If a journal disk fails (crash, power failure, etc.), what
happens to OSD operations?
PS: Assume that the journal and the OSD data are on separate drives.
Thanks
Swami
OSD stops.
And you pretty much lose all data on the OSD if you lose the journal.
Jan
> On 25 Jan 2016, at 14:04, M Ranga Swami Reddy wrote:
>
> Hello,
>
> If a journal disk fails (crash, power failure, etc.), what
> happens to OSD operations?
>
> PS: Assume that journal and OSD is on a
Hi,
On 01/25/2016 01:05 PM, Yan, Zheng wrote:
On Mon, Jan 25, 2016 at 3:43 PM, Burkhard Linke wrote:
Hi,
there's a rogue file in our CephFS that we are unable to remove. Access to
the file (removal, move, copy, open etc.) results in the MDS starting to
spill out the following message to its l
Jan - Thanks for the reply.
OK... the OSD stops. Any reason why the OSD stops? (I assume that if the journal
disk fails, the OSD should keep working without a journal. Isn't that so?)
I don't understand why the OSD data would be lost. Do you mean data lost during
the transaction time, or total OSD data loss?
Thanks
Swami
On Mon, Jan 25, 2016 at 7:06 PM, J
On Mon, Jan 25, 2016 at 9:43 PM, Burkhard Linke wrote:
> Hi,
>
> On 01/25/2016 01:05 PM, Yan, Zheng wrote:
>>
>> On Mon, Jan 25, 2016 at 3:43 PM, Burkhard Linke wrote:
>>>
>>> Hi,
>>>
>>> there's a rogue file in our CephFS that we are unable to remove. Access
>>> to
>>> the file (removal, move
As far as I know you will not lose data, but it will be inaccessible until
you bring the journal back online.
2016-01-25 16:23 GMT+02:00 Daniel Schwager :
> Hi,
> > OK... the OSD stops. Any reason why the OSD stops? (I assume that if the journal
> > disk fails, the OSD should keep working without a journal. Isn't that so?)
>
> No. In m
Hi,
> OK... the OSD stops. Any reason why the OSD stops? (I assume that if the journal
> disk fails, the OSD should keep working without a journal. Isn't that so?)
No. In my understanding, if a journal fails, all the OSDs attached to that journal
disk fail as well.
E.g. if you have 4 OSDs with their 4 journals located on one SSD-hard
Hi,
On 01/25/2016 03:27 PM, Yan, Zheng wrote:
On Mon, Jan 25, 2016 at 9:43 PM, Burkhard Linke wrote:
Hi,
On 01/25/2016 01:05 PM, Yan, Zheng wrote:
On Mon, Jan 25, 2016 at 3:43 PM, Burkhard Linke wrote:
Hi,
there's a rogue file in our CephFS that we are unable to remove. Access
to
the file
On 25/01/2016 15:28, Mihai Gheorghe wrote:
As far as I know you will not lose data, but it will be inaccessible
until you bring the journal back online.
http://www.sebastien-han.fr/blog/2014/11/27/ceph-recover-osds-after-ssd-journal-failure/
After this, we should be able to restart the OSD
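The core of that procedure is roughly the following (a sketch only, assuming the OSD is stopped and a new journal partition has already been prepared; see the blog post above for the full steps, including flushing a still-readable journal first):

    # with the OSD stopped, point /var/lib/ceph/osd/ceph-<id>/journal at the
    # new partition, then recreate the journal
    ceph-osd -i <id> --mkjournal
    # if the old journal is still readable, flush it beforehand instead:
    #   ceph-osd -i <id> --flush-journal
    # then start the OSD again (service ceph start osd.<id>, or the
    # upstart/systemd equivalent)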
Hi,
I tried this a month ago.
Unfortunately the script did not work out, but if you do the described steps
manually it works.
The important thing is that /var/lib/ceph/osd.x/journal (not sure about the
path) should point to the right place where your journal should be.
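For example (the usual location is /var/lib/ceph/osd/ceph-<id>/journal; check on your own setup):

    # check where the journal symlink of an OSD points
    ls -l /var/lib/ceph/osd/ceph-<id>/journal
    # re-point it if needed (with the OSD stopped); by-partuuid is one option
    ln -sf /dev/disk/by-partuuid/<journal-partuuid> /var/lib/ceph/osd/ceph-<id>/journal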
On 25 January 2016 16:48:32
Yes, I think you should try with crc enabled as it is recommended for network
level corruption detection.
It will definitely add some CPU cost, but it is ~5x lower with Intel's new CPU
instruction set.
-Original Message-
From: Stefan Priebe - Profihost AG [mailto:s.pri...@profihost.ag]
Se
If we change the current (failed) journal path to point to one of the existing
journals and restart all the failed/stopped OSDs,
will that work? (Not tested, assuming only.)
Thanks
Swami
On Mon, Jan 25, 2016 at 9:18 PM, Loris Cuoghi wrote:
> On 25/01/2016 15:28, Mihai Gheorghe wrote:
>>
>> As far as i
I guess an erasure pool type with a cache tier may give better performance
numbers for read and write IOs. (Not tested with cache tiering.)
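For reference, the rough sequence for putting a cache tier in front of an EC pool looks like this (pool names are placeholders; the cache pool must be a replicated pool created beforehand, and this is untested here as well):

    ceph osd tier add ecpool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay ecpool cachepool
    # give the tiering agent something to work with
    ceph osd pool set cachepool hit_set_type bloom
    ceph osd pool set cachepool target_max_bytes 100000000000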
Thanks
Swami
On Mon, Jan 25, 2016 at 1:32 PM, Nick Fisk wrote:
> For object storage I'm not sure there is a best choice; each has its own
> advantages and disadvantag
Hi,
We've run into a weird issue on our current test setup. We're currently
testing a small low-cost Ceph setup, with SATA drives, 1 Gbps Ethernet
and an Intel SSD for journaling per host. We've linked this to an
OpenStack setup. Ceph is the latest Hammer release.
We notice that when we do rbd ben
What release are you testing? You might be hitting this issue [1] where 'rbd
bench-write' would issue the same IO request repeatedly. With writeback cache
enabled, this would result in virtually no ops issued to the backend.
[1] http://tracker.ceph.com/issues/14283
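A quick way to cross-check the cluster itself, bypassing the librbd cache entirely, is a plain rados bench against the pool (pool name is a placeholder):

    rados bench -p <pool> 60 write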
--
Jason Dillaman
+ ceph-devel
On Mon, Jan 25, 2016 at 6:34 PM, M Ranga Swami Reddy wrote:
> Hello,
>
> If a journal disk fails (crash, power failure, etc.), what
> happens to OSD operations?
>
> PS: Assume that the journal and the OSD data are on separate drives.
>
> Thanks
> Swami
Ceph package is 0.94.5, which is Hammer. So yes, it could very well be
this bug. Must I assume then that it only affects rbd bench and not the
general functionality of the client?
On 2016-01-25 1:59 PM, Jason Dillaman wrote:
> What release are you testing? You might be hitting this issue [1] whe
Correct -- it was a bug in 'rbd bench-write' only.
--
Jason Dillaman
- Original Message -
> From: "J-P Methot"
> To: "Jason Dillaman"
> Cc: ceph-users@lists.ceph.com
> Sent: Monday, January 25, 2016 2:10:19 PM
> Subject: Re: [ceph-users] Ceph RBD bench has a strange behaviour when
Greetings,
When I submit a request with "Transfer-Encoding: chunked", I get a 411
Length Required error back. It's very similar to
http://tracker.ceph.com/issues/3297, except I am running the Ceph version of
fastcgi. Ceph does not appear to produce Apache 2.4 builds, so I am running
upstream Apache
Hi,
I have now switched debugging to debug ms = 10.
When starting the dd, I can see the following in the OSD logs:
2016-01-26 00:47:16.530046 7f086f404700 1 -- 10.0.0.1:6806/49658 >> :/0
pipe(0x1f83 sd=292 :6806 s=0 pgs=0 cs=0 l=0 c=0x1dc2e9e0).accept
sd=292 10.0.0.91:56814/0
2016-01-26 00:47:16.530591 7f086f40470
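For reference, that debug level can be set at runtime without restarting the daemon (osd.0 here is just an example):

    ceph tell osd.0 injectargs '--debug-ms 10'
    # and back to the default afterwards
    ceph tell osd.0 injectargs '--debug-ms 0/5'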
On Mon, Jan 25, 2016 at 3:58 PM, Oliver Dzombic wrote:
> Hi,
>
> I have now switched debugging to debug ms = 10.
>
> When starting the dd, I can see the following in the OSD logs:
>
> 2016-01-26 00:47:16.530046 7f086f404700 1 -- 10.0.0.1:6806/49658 >> :/0
> pipe(0x1f83 sd=292 :6806 s=0 pgs=0 cs=0 l=0 c=0x1dc2e9e0).a
On Saturday, January 23, 2016, 名花 wrote:
> Hi, I have a 4-port 10 Gb Ethernet card in my OSD storage node. I want to use
> 2 ports for cluster and the other 2 ports for private. But from my
> understanding, the Ceph OSD just picks up the first IP address in the
> cluster/private network on the OSD side. So looks to
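If the goal is to pin a specific address per daemon rather than letting the OSD pick the first match in the subnet, the usual knobs are the per-daemon addr options in ceph.conf (addresses below are placeholders):

    [global]
    public network  = 192.168.1.0/24
    cluster network = 192.168.2.0/24

    [osd.0]
    public addr  = 192.168.1.11
    cluster addr = 192.168.2.11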
On Mon, Jan 25, 2016 at 10:40 PM, Burkhard Linke wrote:
> Hi,
>
>
> On 01/25/2016 03:27 PM, Yan, Zheng wrote:
>>
>> On Mon, Jan 25, 2016 at 9:43 PM, Burkhard Linke wrote:
>>>
>>> Hi,
>>>
>>> On 01/25/2016 01:05 PM, Yan, Zheng wrote:
On Mon, Jan 25, 2016 at 3:43 PM, Burkhard Linke
>>>
Hi,
On 01/26/2016 07:58 AM, Yan, Zheng wrote:
*snipsnap*
I have a few questions
Which version of ceph are you using? When was the filesystem created?
Did you manually delete 10002af7f78. from pool 8?
Ceph version is 0.94.5. The filesystem was created about 1.5 years ago
using a Fir
Hello guys, I found data loss when flattening a cloned image on
Giant (0.87.2). The problem can be easily reproduced by running the following
script:

    ceph osd pool create wuxingyi 1 1
    rbd create --image-format 2 wuxingyi/disk1.img --size 8
    # writing "FOOBAR" at offset 0
    python writetooffset.py dis
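For context, flattening a clone generally involves a sequence like the following (snapshot and clone names here are just placeholders):

    rbd snap create wuxingyi/disk1.img@snap1
    rbd snap protect wuxingyi/disk1.img@snap1
    rbd clone wuxingyi/disk1.img@snap1 wuxingyi/disk2.img
    rbd flatten wuxingyi/disk2.img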