Hello, Ceph users,
having upgraded my Ceph cluster to Luminous, I plan to add new OSD hosts,
and I am looking for setup recommendations.
Intended usage:
- small-ish pool (tens of TB) for RBD volumes used by QEMU
- large pool for object-based cold (or not-so-hot :-) data,
write-on
Do you then get these types of error messages?
packet_write_wait: Connection to 192.168.10.43 port 22: Broken pipe
rsync: connection unexpectedly closed (2345281724 bytes received so far)
[receiver]
rsync error: error in rsync protocol data stream (code 12) at io.c(226)
[receiver=3.1.2]
rsync:
Is the cephfs mount on the same machine that runs the OSDs?
On Wed, Dec 5, 2018 at 2:33 PM NingLi wrote:
>
> Hi all,
>
> We found that some processes writing to cephfs will hang for a long time (> 120s)
> when uploading (scp/rsync) large files (50G ~ 300G in total) to the app node's
> cephfs mountpoint.
>
>
Dear all,
I am running a cephfs cluster (Jewel 10.2.10) with an EC + cache pool. I plan on
upgrading to Luminous around the end of December and wanted to know whether this is
fine with regard to the issues around 12.2.9. It should be fine since 12.2.10 is
released, I guess?
Cheers,
Markus
On Tue, Dec 4, 2018 at 6:44 PM Matthew Pounsett wrote:
>
>
>
> On Tue, 4 Dec 2018 at 18:31, Vasu Kulkarni wrote:
>>>
>>>
>>> Is there a way we can easily set that up without trying to use outdated
>>> tools? Presumably if ceph still supports this as the docs claim, there's a
>>> way to get it
* Bluestore. It's so much better than Filestore. The latest versions
add some more control over memory usage with the cache autotuning;
check out the latest Luminous release notes.
* ~15 HDDs per SSD is usually too much. Note that you will lose all
the HDDs if the SSD dies; an OSD without its b
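For what it's worth, a minimal sketch of the memory-related settings involved
(option names as in recent Luminous releases; the value is only illustrative and
should be sized to the RAM available per OSD):

  [osd]
  # keep the BlueStore cache autotuner enabled (the default on recent releases)
  bluestore_cache_autotune = true
  # target resident memory per OSD daemon, in bytes (4 GiB here)
  osd_memory_target = 4294967296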
Hi Zheng,
Thanks for replying.
Not the same machine. Our problem is that other processes writing to cephfs will
hang for a long time.
The application node and storage nodes are independent. The cephfs is mounted on
the application node.
We copy large files to the application node’s cephfs mount point from
Hi Rishabh,
You might want to check out these examples for python boto3 which include SSE-C:
https://github.com/boto/boto3/blob/develop/boto3/examples/s3.rst
As already noted, use 'radosgw-admin' to retrieve the access key and secret
key to plug into your client. If you are not an administrator on yo
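A minimal boto3 sketch along the lines of those examples, using SSE-C against RGW
(the endpoint, bucket name, credentials and customer key below are placeholders,
not values from this thread):

  import boto3

  # access/secret key as reported by e.g. `radosgw-admin user info --uid=<user>`
  s3 = boto3.client(
      's3',
      endpoint_url='http://rgw.example.com:8080',
      aws_access_key_id='ACCESS_KEY',
      aws_secret_access_key='SECRET_KEY',
  )

  sse_key = b'0' * 32  # 256-bit customer-provided key (example only)

  # upload with SSE-C; boto3 adds the base64 key and key-MD5 headers itself
  s3.put_object(Bucket='mybucket', Key='myobject', Body=b'hello',
                SSECustomerAlgorithm='AES256', SSECustomerKey=sse_key)

  # the same key must be presented again to read the object back
  obj = s3.get_object(Bucket='mybucket', Key='myobject',
                      SSECustomerAlgorithm='AES256', SSECustomerKey=sse_key)
  print(obj['Body'].read())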
I have had some EC-backed Mimic RBDs mounted via the kernel module on an
Ubuntu 14.04 VM; these have been running with no issues after updating the
kernel to 4.12 to support the EC features.
Today I ran an apt dist-upgrade which upgraded from 12.2.9 to 12.2.10, and
since then I have been getting the following
Hi,
I have a strange issue.
I configured 2 identical iSCSI gateways, but one of them is complaining
about negotiations, although gwcli reports the correct auth and status
(logged-in).
Any help will be truly appreciated
Here are some details
ceph-iscsi-config-2.6-42.gccca57d.el7.noarch
ceph-iscsi-cl
Hi Mark,
On 04/12/2018 04:41, Mark Kirkwood wrote:
> Hi,
>
> I've set up a Luminous RGW with Keystone integration, and subsequently set
>
> rgw keystone implicit tenants = true
>
> So now all newly created users/tenants (or old ones that never accessed
> RGW) get their own namespaces. However t
This capability is stable and should merge to master shortly.
Matt
On Wed, Dec 5, 2018 at 11:24 AM Florian Haas wrote:
>
> Hi Mark,
>
> On 04/12/2018 04:41, Mark Kirkwood wrote:
> > Hi,
> >
> > I've set up a Luminous RGW with Keystone integration, and subsequently set
> >
> > rgw keystone implici
Hi Florian,
Thanks for the help. I did further testing and narrowed it down to objects
that have been uploaded when the bucket has versioning enabled.
Objects created before that are not affected: all metadata operations are
still possible.
Here is a simple way to reproduce this:
http://paste.ope
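A hedged boto3 sketch of that kind of reproduction (my own reconstruction with
placeholder endpoint, credentials and names, not the content of the paste):
enable versioning, upload an object, then attempt a metadata update by copying
the object onto itself.

  import boto3

  s3 = boto3.client('s3', endpoint_url='http://rgw.example.com:8080',
                    aws_access_key_id='ACCESS_KEY',
                    aws_secret_access_key='SECRET_KEY')

  bucket = 'versioned-bucket'
  s3.create_bucket(Bucket=bucket)
  s3.put_bucket_versioning(Bucket=bucket,
                           VersioningConfiguration={'Status': 'Enabled'})

  # object uploaded after versioning was enabled
  s3.put_object(Bucket=bucket, Key='obj0', Body=b'data')

  # metadata update = copy the object onto itself with MetadataDirective=REPLACE
  s3.copy_object(Bucket=bucket, Key='obj0',
                 CopySource={'Bucket': bucket, 'Key': 'obj0'},
                 Metadata={'test': 'updated'},
                 MetadataDirective='REPLACE')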
On 05/12/2018 17:35, Maxime Guyot wrote:
> Hi Florian,
>
> Thanks for the help. I did further testing and narrowed it down to
> objects that have been uploaded when the bucket has versioning enabled.
> Objects created before that are not affected: all metadata operations
> are still possible.
>
>
Hi
I had the same issue a few months ago. One of my clients hung waiting
on a file write.
Other clients did not seem to be affected by it. However, if another client
accessed the same
hung directory, it would hang there as well.
My cluster is 12.2.8 and I use the kernel client on other serve
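In case it helps, a hedged sketch of how a stuck CephFS client session can be
inspected and, if needed, evicted on Luminous (MDS rank and client id are
placeholders; eviction is disruptive for that client, so use with care):

  # list client sessions known to the active MDS (rank 0 here)
  ceph tell mds.0 client ls
  # evict the session that is holding things up
  ceph tell mds.0 client evict id=<client-id>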
On 12/05/2018 09:43 AM, Steven Vacaroaia wrote:
> Hi,
> I have a strange issue.
> I configured 2 identical iSCSI gateways, but one of them is complaining
> about negotiations, although gwcli reports the correct auth and status
> (logged-in).
>
> Any help will be truly appreciated
>
> Here are some
Thanks for taking the trouble to respond.
I noticed some XFS errors on the /var partition, so I rebooted the
server in order to force xfs_repair to run.
It is now working.
Steven
On Wed, 5 Dec 2018 at 11:47, Mike Christie wrote:
> On 12/05/2018 09:43 AM, Steven Vacaroaia wrote:
> > Hi,
> > I
Agreed. Please file a tracker issue with the info; we'll prioritize
reproducing it.
Cheers,
Matt
On Wed, Dec 5, 2018 at 11:42 AM Florian Haas wrote:
>
> On 05/12/2018 17:35, Maxime Guyot wrote:
> > Hi Florian,
> >
> > Thanks for the help. I did further testing and narrowed it down to
> > objects
On Wed, Dec 5, 2018 at 3:48 PM Ashley Merrick wrote:
>
> I have had some ec backed Mimic RBD's mounted via the kernel module on a
> Ubuntu 14.04 VM, these have been running no issues after updating the kernel
> to 4.12 to support EC features.
>
> Today I run an apt dist-upgrade which upgraded fr
Hello Everyone,
I have a newly re-created Ceph cluster and cannot create a new pool. I'm
using the following syntax, which has previously worked without issue:
ceph osd pool create rbd 1024 1024
The resulting error is:
"Error ERANGE: For better initial performance on pools expec
I think it's new in 12.2.10, but it should only show up when using
Filestore OSDs. Since you mention that the cluster is new: are you not
using Bluestore?
That being said: the default crush rule name is "replicated_rule", so
"ceph osd pool createreplicated_rule
" is the right way to create a p
Hi, another question relating to multi-tenanted RGW.
Let's do the working case first. For a user that still uses the global
namespace, if I set a bucket as world-readable (header
"X-Container-Read: .r:*"), then I can fetch objects from the bucket via a
URL like (e.g. bucket0, object0):
http://host/sw
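For the record, a hedged sketch of that working case with the swift client
(the host is a placeholder; container/object names follow the example above):

  # make the container world-readable, i.e. X-Container-Read: .r:*
  swift post -r '.r:*' bucket0
  # anonymous fetch through the global namespace
  curl http://host/swift/v1/bucket0/object0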
On 6/12/18 5:24 AM, Florian Haas wrote:
> Hi Mark,
>
> On 04/12/2018 04:41, Mark Kirkwood wrote:
>> Hi,
>>
>> I've set up a Luminous RGW with Keystone integration, and subsequently set
>>
>> rgw keystone implicit tenants = true
>>
>> So now all newly created users/tenants (or old ones that never ac
Re-adding the mailing list.
I've had a quick look at the code, and the logic for
expected_num_objects seems broken; it uses the wrong way to detect
Filestore OSDs.
I've opened an issue: http://tracker.ceph.com/issues/37532
The new error is just that you probably didn't restart your mons after
set
Hi guys,
I faced strange behavior on a crushmap change. When I change an OSD's
crush weight I sometimes get an incremental osdmap (1.2MB) whose size is
significantly bigger than the size of the full osdmap (0.4MB).
I use Luminous 12.2.8. The cluster was installed long ago; I suppose
that initially it was Firefly.
How can I view
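For reference, a hedged example of dumping the current maps for offline
inspection (the output paths are placeholders):

  # dump the full osdmap and print it in readable form
  ceph osd getmap -o /tmp/osdmap
  osdmaptool --print /tmp/osdmap
  # dump and decompile the crushmap
  ceph osd getcrushmap -o /tmp/crushmap
  crushtool -d /tmp/crushmap -o /tmp/crushmap.txt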
I have Mimic Ceph clusters that are hundreds of miles apart. I want to use
them in a multisite configuration. Will the latency between them cause any
problems?
Regards
R
On Wed, Dec 5, 2018 at 3:32 PM Sergey Dolgov wrote:
> Hi guys
>
> I faced strange behavior on a crushmap change. When I change an OSD's
> crush weight I sometimes get an incremental osdmap (1.2MB) whose size is
> significantly bigger than the size of the full osdmap (0.4MB)
>
This is probably because when CRUSH changes,
Hello,
As mentioned earlier, the cluster is separately running on the latest Mimic.
Since 14.04 only supports up to Luminous, I was running the 12.2.9
version of ceph-common for the rbd binary.
This is what was upgraded when I did the dist-upgrade on the VM mounting
the RBD.
The cluster itself
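For context, a hedged sketch of how an EC-backed RBD image like this is
typically created and mapped (pool and image names are placeholders, not the
ones in use here; the kernel client must support the required image features):

  # image metadata lives in a replicated pool, data goes to the EC pool
  rbd create --size 100G --data-pool ecpool rbd/myimage
  rbd map rbd/myimage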
Hi Benjeman,
Thank you for the much-needed help.
Best Regards,
Rishabh
> On 05-Dec-2018, at 7:50 PM, Benjeman Meekhof wrote:
>
> Hi Rishabh,
>
> You might want to check out these examples for python boto3 which include
> SSE-C:
> https://github.com/boto/boto3/blob/develop/boto3/examples/s3.rst
On Wed, Dec 5, 2018 at 2:33 PM NingLi wrote:
>
> Hi all,
>
> We found that some processes writing to cephfs will hang for a long time (> 120s)
> when uploading (scp/rsync) large files (50G ~ 300G in total) to the app node's
> cephfs mountpoint.
>
> This problem is not always reproducible. But when thi
Thanks for sharing your expertise.
We have actually already been using BlueStore and SSDs for the journal.
I will give mds_cache_memory_limit a try.
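For reference, a hedged example of raising it (the value is only illustrative,
in bytes; 8 GiB here):

  # in ceph.conf on the MDS host
  [mds]
  mds_cache_memory_limit = 8589934592

  # or injected at runtime
  ceph tell mds.* injectargs '--mds_cache_memory_limit=8589934592'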
—
Best Regards
Li, Ning
> On Dec 6, 2018, at 00:44, Zhenshi Zhou wrote:
>
> Hi
>
> I have the same issue a few months ago. One of my client
Hi,
The load on the storage nodes that run OSDs is normal during the whole copy.
The memory pressure is on the client side. I think it may not help to set
dirty_ratio and dirty_background_ratio to small values on the ceph storage server side.
Anyway, I will give it a try.
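A hedged example of what lowering those thresholds on the client would look
like (the values are only illustrative):

  # flush dirty pages earlier and cap the amount of dirty page cache
  sysctl -w vm.dirty_background_ratio=5
  sysctl -w vm.dirty_ratio=10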
—
Best Regards
Li, Ning
On Thu, Dec 6, 2018 at 2:24 PM Li,Ning wrote:
>
> Hi,
>
> The load on the storage nodes that run OSDs is normal during the whole copy.
>
> The memory pressure is on the client side. I think it may not help to set
> dirty_ratio and dirty_background_ratio to small values on the ceph storage
> serv
Hi all cephers,
I don't know if this is the right place to ask this kind of question, but
I'll give it a try.
I'm getting interested in Ceph and have dived deep into its technical details,
but I'm struggling to understand a few things.
When I execute a ceph osd map on a hypothetical object that do
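For example, a hedged illustration (pool and object names are placeholders; the
object does not need to exist for the mapping to be computed):

  ceph osd map rbd myobject
  # illustrative output:
  # osdmap e1234 pool 'rbd' (1) object 'myobject' -> pg 1.5e12afb2 (1.32)
  #   -> up ([3,1,7], p3) acting ([3,1,7], p3)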