Satish,
Yes, that card supports both. You have to flash the IR firmware (IT
firmware = JBOD only); then you can create RAID1 sets in the card's BIOS,
and any unused disks will be seen by the OS as JBOD.
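For reference, a rough sketch of how that flash usually goes on LSI SAS2-based
HBAs, assuming the sas2flash utility applies to this card; the firmware/BIOS
file names below are placeholders from the vendor's IR release package:
# sas2flash -listall
# sas2flash -o -f <ir_firmware>.bin -b mptsas2.rom
The first command identifies the controller and its current firmware, the
second flashes the IR image and option ROM.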
Kind regards,
Caspar Smit
2018-07-23 20:43 GMT+02:00 Satish Patel :
> I
Hi Dan,
Just checked again: arggghhh...
# grep AUTO_RESTART /etc/sysconfig/ceph
CEPH_AUTO_RESTART_ON_UPGRADE=no
So no :'(
RPMs were upgraded, but the OSDs were not restarted as I thought. Or at least
not restarted with the new 12.2.7 binaries (but since the skip digest option
was present in the runnin
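One way to confirm which binaries the daemons are actually running, assuming a
Luminous-or-later cluster:
# ceph versions
# ceph tell osd.0 version
The first summarises the version reported by every running daemon type; the
second asks an individual OSD directly.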
Hi again,
Now with all OSDs restarted, I'm getting
health: HEALTH_ERR
777 scrub errors
Possible data damage: 36 pgs inconsistent
(...)
pgs: 4764 active+clean
36 active+clean+inconsistent
But from what I've read so far, this is what's expec
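For reference, the usual commands for digging into inconsistent PGs; whether an
immediate repair is safe here depends on the 12.2.6/12.2.7 digest situation, so
treat this only as a sketch:
# ceph health detail
# rados list-inconsistent-obj <pgid>
# ceph pg repair <pgid>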
> On 24 Jul 2018, at 13.22, Lothar Gesslein wrote:
>
>> On 07/24/2018 12:58 PM, Martin Overgaard Hansen wrote:
>> Creating a compat weight set manually with 'ceph osd crush weight-set
>> create-compat' gives me: Error EPERM: crush map contains one or more
>> bucket(s) that are not straw2
>>
>>
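For what it's worth, that error usually means legacy straw buckets are still
present in the CRUSH map; recent releases have a one-shot conversion (only do
this if all clients are new enough to understand straw2):
# ceph osd crush set-all-straw-buckets-to-straw2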
My cache pool seems to be affected by an old/closed bug... but I don't think
this is (directly?) related to the current issue - it won't help anyway :-/
http://tracker.ceph.com/issues/12659
Since I was getting promote issues, I tried to flush only the affected rbd
image: I got 6 unflushable objects.
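A sketch of the rados cache-tier commands involved, with the pool and object
names as placeholders:
# rados -p <cache-pool> cache-flush <object>
# rados -p <cache-pool> cache-evict <object>
# rados -p <cache-pool> cache-flush-evict-all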
The time got reduced when an MDS from the same region became active.
We have an MDS in each region. The OSD nodes are in one region and the active
MDS is in another region, hence the delay.
On Tue, Jul 17, 2018 at 6:23 PM, John Spray wrote:
> On Tue, Jul 17, 2018 at 8:26 AM Surya Bala
> wrote:
> >
> > Hi folk
On Wed, Jul 25, 2018 at 5:04 PM Daniel Carrasco wrote:
>
> Hello,
>
> I've attached the PDF.
>
> I don't know if it's important, but I made changes to the configuration and
> I've restarted the servers after dumping that heap file. I've changed the
> memory_limit to 25 MB to test if it's still within acceptable va
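For reference, a sketch of the commands this kind of test usually involves,
with the MDS id as a placeholder and the limit value in bytes (the heap
commands need the tcmalloc allocator):
# ceph daemon mds.<id> config set mds_cache_memory_limit <bytes>
# ceph tell mds.<id> heap stats
# ceph tell mds.<id> heap dump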
Hi
I'm wondering why LZ4 isn't built by default for newer Linux distros like
Ubuntu Xenial?
I understand that it wasn't built for Trusty because its lz4 libraries were
too old. But why isn't it built for the newer distros?
Thanks,
Elias
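For context, a sketch of where LZ4 support shows up: it is a cmake option when
building Ceph yourself, and a per-pool BlueStore setting once the OSDs support
it (the pool name is a placeholder):
#   cmake ... -DWITH_LZ4=ON
# ceph osd pool set <pool> compression_algorithm lz4
# ceph osd pool set <pool> compression_mode aggressive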
On Wed, Jul 25, 2018 at 8:12 PM Yan, Zheng wrote:
>
> On Wed, Jul 25, 2018 at 5:04 PM Daniel Carrasco wrote:
> >
> > Hello,
> >
> > I've attached the PDF.
> >
> > I don't know if it's important, but I made changes to the configuration and
> > I've restarted the servers after dumping that heap file. I've
From this thread, I got how to move the metadata pool from the HDDs to
the SSDs.
https://www.spinics.net/lists/ceph-users/msg39498.html
ceph osd pool get fs_meta crush_rule
ceph osd pool set fs_meta crush_rule replicated_ruleset_ssd
I guess this can be done on a live system?
What would b
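For reference, the SSD rule has to exist before the pool can be switched to
it; a sketch using device classes, assuming Luminous or later:
# ceph osd crush rule create-replicated replicated_ruleset_ssd default host ssd
and then set the pool's crush_rule as above.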
On Tue, 24 Jul 2018, Alfredo Deza wrote:
> Hi all,
>
> After the 12.2.6 release went out, we've been thinking about better ways
> to remove a version from our repositories to prevent users from
> upgrading/installing a known bad release.
>
> The way our repos are structured today means every single
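As a user-side complement, a known-bad build can also be fenced off locally; a
rough sketch, with version strings and paths as assumptions:
  yum: add  exclude=ceph-*12.2.6*  to the ceph stanza in /etc/yum.repos.d/ceph.repo
  apt: in /etc/apt/preferences.d/ceph:
       Package: ceph*
       Pin: version 12.2.5*
       Pin-Priority: 1001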
Hello,
Thanks for all your help.
Is dd an option of some command? Because at least on Debian/Ubuntu it is an
application to copy blocks, and then it fails.
For now I cannot change the configuration, but later I'll try.
About the logs, I haven't seen anything about "warning", "error", "failed",
"messag
On 07/25/2018 08:39 AM, Elias Abacioglu wrote:
Hi
I'm wondering why LZ4 isn't built by default for newer Linux distros
like Ubuntu Xenial?
I understand that it wasn't built for Trusty because its lz4 libraries were
too old. But why isn't it built for the newer distros?
Thanks,
Elias
Thanks. Yes, it turns out this was not an issue with Ceph, but rather an
issue with XenServer. Starting in version 7, XenServer changed how it manages
LVM by adding a VHD layer on top of it. They did this to handle live
migrations, but ironically it broke live migrations when using any iSCSI,
including i
What are you talking about when you say you have an MDS in a region? AFAIK
only radosgw supports multisite and regions.
It sounds like you have a cluster spread out over a geographical area,
and this will have a massive impact on latency.
What is the latency between all the servers in the cluster?
ki
I've changed the configuration, adding your line and changing the MDS memory
limit to 512 MB, and for now it looks stable (it's at about 3-6% and sometimes
even below 3%). I got very high usage on boot:
1264 ceph 20 0 12,543g 6,251g 16184 S 2,0 41,1% 0:19.34 ceph-mds
but now it looks accep
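For reference, a guess at what that ceph.conf change looks like; the value is
in bytes (536870912 = 512 MB):
[mds]
    mds_cache_memory_limit = 536870912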
Hi,
We're testing a full Intel SSD Ceph cluster on Mimic with BlueStore, and I'm
currently trying to squeeze some better performance out of it. We know that on
older storage solutions, increasing the queue_depth for the HBA can sometimes
speed up the IO. Is this also the case for Ceph? Is queue d
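A sketch of how that is usually inspected and exercised; the device and pool
names are placeholders:
# cat /sys/block/sdX/device/queue_depth
# rados bench -p <pool> 30 write -t 32
The first shows the per-device queue depth on an OSD host; with rados bench,
varying -t (concurrent ops) gives a rough feel for whether deeper queues help.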
I am not sure this is related to RBD, but in case it is, this would be an
important bug to fix.
Running LVM on top of RBD, XFS filesystem on top of that, consumed in RHEL 7.4.
When running a large read operation and doing LVM snapshots during
that operation, the block being read winds up all zeroes
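Roughly, the scenario looks like this; names and sizes are placeholders:
# dd if=/dev/vg0/lv0 of=/dev/null bs=1M &
# lvcreate -s -n lv0-snap -L 10G /dev/vg0/lv0
i.e. a large sequential read is in flight while an LVM snapshot of the origin
LV is taken.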
On Wed, Jul 25, 2018 at 5:41 PM Alex Gorbachev
wrote:
> I am not sure this is related to RBD, but in case it is, this would be an
> important bug to fix.
>
> Running LVM on top of RBD, XFS filesystem on top of that, consumed in RHEL
> 7.4.
>
> When running a large read operation and doing LVM snapsh
On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman wrote:
>
>
> On Wed, Jul 25, 2018 at 5:41 PM Alex Gorbachev
> wrote:
>>
>> I am not sure this is related to RBD, but in case it is, this would be an
>> important bug to fix.
>>
>> Running LVM on top of RBD, XFS filesystem on top of that, consumed in RH
On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
wrote:
> On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman wrote:
>>
>>
>> On Wed, Jul 25, 2018 at 5:41 PM Alex Gorbachev
>> wrote:
>>>
>>> I am not sure this is related to RBD, but in case it is, this would be an
>>> important bug to fix.
>>>
>>> Runn
On Wed, Jul 25, 2018 at 7:07 PM, Alex Gorbachev
wrote:
> On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
> wrote:
>> On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman wrote:
>>>
>>>
>>> On Wed, Jul 25, 2018 at 5:41 PM Alex Gorbachev
>>> wrote:
I am not sure this is related to RBD, but in
Dear Ceph community,
I am quite new to Ceph but trying to learn as quickly as I can. We are
deploying our first Ceph production cluster in the next few weeks; we chose
Luminous and our goal is to have CephFS. One of the questions I have been asked
by other members of our team is if there is
You can do it by exporting CephFS via Samba. I don't think any other
way exists for CephFS.
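A minimal smb.conf sketch using Samba's vfs_ceph module; the share name, cephx
user and paths are assumptions:
[cephfs]
    path = /
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba
    read only = no
Alternatively, a kernel or FUSE CephFS mount can simply be re-exported as a
plain Samba share.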
On Thu, Jul 26, 2018 at 9:12 AM, Manuel Sopena Ballesteros
wrote:
> Dear Ceph community,
>
>
>
> I am quite new to Ceph but trying to learn as quickly as I can. We are
> deploying our first Ceph producti