Hi Nick and Udo,
Thanks, very helpful. I tweaked some of the config parameters along the
lines Udo suggested, but still only get some 80 MB/s or so.
Kernel 4.3.4 is running on the client machine with a comfortable
readahead configured:
$ sudo blockdev --getra /dev/rbd0
262144
Still not more than about
That was a poor example, because it was an older version of ceph and the
clock was not set correctly. But I don't think either of those things
causes the problem because I see it on multiple nodes:
root@node8:/var/log/ceph# grep hit_set_trim ceph-osd.2.log | wc -l
2524
root@node8:/var/log/ceph#
Hi cephers,
I had the same issue too, but the command "rbd feature disable" is not
working for me.
Any comments will be appreciated.
$sudo rbd feature disable timg1 deep-flatten fast-diff object-map
exclusive-lock
rbd: failed to update image features: (22) Invalid argument
2016-04-21 15:53:10.260671
Hi Mike,
On 21.04.2016 at 09:07, Mike Miller wrote:
Hi Nick and Udo,
thanks, very helpful, I tweaked some of the config parameters along
the line Udo suggests, but still only some 80 MB/s or so.
this means you have reached a factor of 3 (this is roughly the value I
see with a single thread on R
2016-04-07 0:18 GMT+08:00 Patrick McGarry :
> Hey cephers,
>
> I have all but one of the presentations from Ceph Day Sunnyvale, so
> rather than wait for a full hand I went ahead and posted the link to
> the slides on the event page:
>
> http://ceph.com/cephdays/ceph-day-sunnyvale/
thanks for sh
That's true for me too.
You can disable them by setting a default in the conf file:
# ceph.conf
rbd_default_features = 3
# means only layering and striping are enabled
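For reference, if I recall the bit values correctly (please double-check
against the docs for your release), the value is just a sum of feature bits:

# layering = 1, striping = 2, exclusive-lock = 4, object-map = 8,
# fast-diff = 16, deep-flatten = 32, journaling = 64
# so 3 = layering (1) + striping (2)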
2016-04-21 16:00 GMT+08:00 Mika c :
> Hi cephers,
> Had the same issue too. But the command "rbd feature disable" not
> working to me.
> Any com
Hi xizhiyong,
Thanks for your information. I am using Jewel (10.1.2) right now, and the
setting "rbd_default_features = 3" is not working for me; the
"exclusive-lock, object-map, fast-diff, deep-flatten" features still get
enabled.
Best wishes,
Mika
2016-04-21 16:56 GMT+08:00 席智勇 :
> That'
Hi cephalapods,
In our couple years of operating a large Ceph cluster, every single
inconsistency I can recall was caused by a failed read during
deep-scrub. In other words, deep scrub reads an object, the read fails
with dmesg reporting "Sense Key : Medium Error [current]", "Add.
Sense: Unrecover
Hi, my ceph cluster has two pools, an SSD cache tier pool and a SATA backend
pool. For this configuration, do I need to use an SSD as the journal device?
I do not know whether the cache tier takes over the journal role. Thanks
On Thu, Apr 21, 2016 at 1:23 PM, Dan van der Ster wrote:
> Hi cephalapods,
>
> In our couple years of operating a large Ceph cluster, every single
> inconsistency I can recall was caused by a failed read during
> deep-scrub. In other words, deep scrub reads an object, the read fails
> with dmesg r
Hi,
afaik the cache does not have anything to do with journals.
So your OSDs need journals, and for performance you will want SSDs.
The cache should be something faster than your OSDs, usually SSD or NVMe.
The cache is an extra space in front of your OSDs which is supposed to
speed up things b
On Thu, Apr 21, 2016 at 10:00 AM, Mika c wrote:
> Hi cephers,
> Had the same issue too. But the command "rbd feature disable" not
> working to me.
> Any comment will be appreciated.
>
> $sudo rbd feature disable timg1 deep-flatten fast-diff object-map
> exclusive-lock
> rbd: failed to update i
Hi,
Has anyone used ceph with mainframes?
If it is possible, could you please point me to example solutions.
regards
Thanks Oliver. Does the journal need to be committed twice, once for write
IO into the cache tier and once more for write IO destaged to the SATA
backend pool?
2016-04-21 19:38 GMT+08:00 Oliver Dzombic :
> Hi,
>
> afaik cache does not have to do anything with journals.
>
> So your OSD's need journals, a
On Thu, Apr 21, 2016 at 11:41 AM, Mika c wrote:
> Hi xizhiyong,
> Thanks for your infomation. I am using Jewel right now(10.1.2), the
> setting "rbd_default_features = 3" not working for me.
> And this setting will enable "exclusive-lock, object-map, fast-diff,
> deep-flatten" features.
Setti
Hello everyone!
I'm trying to test the bleeding edge Ceph configuration with ceph-10.1.2 on
Debian Stretch.
I've built ceph from a git clone with dpkg-buildpackage and managed to start
it, but ran into some issues:
- I've had to install ceph from debian packages, as ceph-deploy could not
install it p
Ok, weird problem(s), if you want to call it that...
So I run a 10-OSD Ceph cluster on 4 hosts with SSDs (Intel DC3700) as journals.
I have a lot of mixed workloads running, and the Linux machines seem to get
corrupted in some weird way, and the performance kind of sucks.
First off:
All hosts
Hi Udo,
Thanks; just to make sure, I further increased the readahead:
$ sudo blockdev --getra /dev/rbd0
1048576
$ cat /sys/block/rbd0/queue/read_ahead_kb
524288
No difference here. (The first value is in 512-byte sectors, the second one in KB.)
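As a sanity check on the units (my arithmetic, assuming 512-byte sectors):

1048576 sectors * 512 bytes/sector = 536870912 bytes = 524288 KB

so both values above describe the same 512 MiB readahead window.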
The second read (after drop cache) is somewhat faster (10%-20%) b
Hi min,
just like Paul already explained:
The cache is made out of OSDs (which, just like any other OSDs, have
their own journal).
So it depends on you what structure you build. You can place all the
journals of the hot and cold storage (hot = cache, cold = regular storage)
together on the same SSD
Hello,
we want to disable a readproxy cache tier, but before doing so we would like
to make sure we won't lose data.
Is there a way to confirm that a flush actually writes objects to disk?
We're using ceph version 0.94.6.
I tried that, with cephfs_data_ro_cache being the hot storage pool and
cephf
Hi,
I would like to install and test the ceph jewel release.
My servers are RHEL 7.2 but my clients are RHEL 6.7.
Is it possible to install the jewel release on the servers and use hammer
ceph-fuse rpms on the clients?
Thanks,
Serkan
Hi,
yes, it should be.
If you want to do something good, try to use a recent kernel on the
CentOS 6.7 machines. Then you could also compile something yourself so that
you don't need fuse.
The speed will likely be awfully bad if you use the CentOS 6.7 standard
kernel with fuse.
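For comparison, the two client paths look roughly like this (mon address
and mount points are placeholders):

# FUSE client - works on the stock EL6 kernel, but slower:
$ sudo ceph-fuse -m mon1:6789 /mnt/cephfs

# kernel client - needs a recent kernel, avoids the FUSE overhead:
$ sudo mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret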
--
Mit freundlichen Gruessen / Best regards
I cannot install a kernel that is not supported by Red Hat on the clients.
Is there any other way to increase fuse performance with the default 6.7 kernel?
Maybe I can compile jewel ceph-fuse packages for RHEL 6; would this make a
difference?
On Thu, Apr 21, 2016 at 5:24 PM, Oliver Dzombic wrote:
> Hi,
>
> y
On Thu, Apr 21, 2016 at 8:04 PM, John Depp wrote:
> Hello everyone!
> I'm trying to test the bleeding edge Ceph configuration with ceph-10.1.2 on
> Debian Stretch.
> I've built ceph from git clone with dpkg-buildpackage and managed to start
> it, but run into some issues:
> - i've had to install c
Running this command:
ceph-deploy install --stable jewel ceph00
using version 1.5.32 of ceph-deploy on a Red Hat 7.2 system is failing
today (it worked yesterday).
[ceph00][DEBUG ]
[ceph00][DEBUG ] Packag
Sorry about the mangled URLs in there; these are all from download.ceph.com
rpm-jewel el7 x86_64
Steve
> On Apr 21, 2016, at 1:17 PM, Stephen Lord wrote:
>
>
>
> Running this command
>
> ceph-deploy install --stable jewel ceph00
>
> And using the 1.5.32 version of ceph-deploy onto a re
This major release of Ceph will be the foundation for the next
long-term stable release. There have been many major changes since
the Infernalis (9.2.x) and Hammer (0.94.x) releases, and the upgrade
process is non-trivial. Please read these release notes carefully.
For the complete release notes,
Hi,
I'm sure I'm doing something wrong, I hope someone can enlighten me...
I'm encountering many issues when I restart a ceph server (any ceph server).
This is on CentOS 7.2, ceph-0.94.6-0.el7.x86_64.
First: I have disabled abrt. I don't need abrt.
But when I restart, I see these logs in the sys
Hello,
On Thu, 21 Apr 2016 15:35:52 +0300 Florian Rommel wrote:
> Ok, weird problem,(s) if you want to call it that..
>
> So i run a 10 OSD Ceph cluster on 4 hosts with SSDs (Intel DC3700) as
> journals.
>
A small number of OSDs (at replication 3, at best the sustained performance of
3 HDDs) in to
Hi,
I am using the same version as you.
This setting only affects newly created volumes.
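(Existing images keep whatever feature set they were created with; you can
check with something like

$ rbd info timg1 | grep features

using the image name from earlier in the thread.)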
2016-04-21 17:41 GMT+08:00 Mika c :
> Hi xizhiyong,
> Thanks for your infomation. I am using Jewel right now(10.1.2), the
> setting "rbd_default_features = 3" not working for me.
> And this setting will enable
Is it possible? Can I use fibre channel to interconnect my ceph OSDs?
Intuition tells me it should be possible, yet experience (mostly with
fibre channel) tells me no. I don't know enough about how ceph works
to know for sure. All my googling returns results about using ceph as
a BACKEND for ex
Slight clarification: to disable these features on existing images, you
should run the following:
rbd feature disable deep-flatten,fast-diff,object-map,exclusive-lock
(note the commas instead of the spaces when disabling multiple features at
once).
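For example, applied to the image mentioned earlier in this thread, the
invocation would look something like:

$ sudo rbd feature disable timg1 deep-flatten,fast-diff,object-map,exclusive-lock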
--
Jason
On Thu, Apr 21, 2016 at 4:48 AM, I
I have a ceph cluster and I will change my journal devices to new SSDs.
Some instructions for doing this refer to a journal file (a link to the
UUID of the journal).
In my OSD folder this journal link doesn't exist.
These instructions rename the UUID of the new device to the old UUID so as
not to break anythin
Here is a previous thread about journal disk replacement:
http://lists.opennebula.org/pipermail/ceph-users-ceph.com/2014-May/039434.html
I hope it is helpful for you.
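For reference, the usual sequence described there is roughly as follows
(a sketch from memory, so please verify against your version's docs
before relying on it):

ceph osd set noout
stop ceph-osd id=N                 # or: service ceph stop osd.N
ceph-osd -i N --flush-journal
# replace the SSD / recreate the partition, then fix the journal symlink
# (/var/lib/ceph/osd/ceph-N/journal) or the journal_uuid file if present
ceph-osd -i N --mkjournal
start ceph-osd id=N
ceph osd unset noout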
Cheers,
S
- Original Message -
From: "Martin Wilderoth"
To: ceph-us...@ceph.com
Sent: Friday, April 22, 2016 1:
I could only see it being done using FCIP as the OSD processes use IP to
communicate.
I guess it would depend on why you are looking to use something like FC instead
of Ethernet or IB.
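(For what it's worth, the daemons only speak TCP/IP; the relevant ceph.conf
settings just take IP networks, e.g.:

[global]
public network = 192.168.1.0/24
cluster network = 10.0.0.0/24

so FC would have to present itself as an IP network, which is why FCIP is
the only option I can see.)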
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Sc
My primary motivations are:
Most of my systems that I want to use with ceph already have Fibre
Channel cards and infrastructure, and more infrastructure is
incredibly cheap compared to InfiniBand or {1,4}0GbE cards and
infrastructure.
Most of my systems are expansion slot constrained, and I'd be for
So it looks like, because replies go to the user instead of the list by
default (seriously, somebody needs to fix the list headers), the thread got
kind of messed up, so I apologize if you're using a threaded reader. That
said, here goes.
From the responses I've gotten, it looks like there's
On Apr 21, 2016, at 11:10 PM, Schlacta, Christ <aarc...@aarcane.org> wrote:
Would it be a worthwhile development effort to establish a block
protocol between the nodes so that something like fibre channel could
be used to communicate internally?
With 25/100 Ethernet & IB becoming available
> from the responses I've gotten, it looks like there's no viable option to use
> fibre channel as an interconnect between the nodes of the cluster.
> Would it be worth while development effort to establish a block protocol
> between the nodes so that something like fibre channel could be used to
>