Hi All
Just wondering if anyone can help me out here. Small home cluster with 1
mon, the next phase of the plan called for more but I hadn't got there yet.
I was trying to set up CephFS and I ran "ceph fs new" without having an MDS
as I was having issues with rank 0 immediately being degraded. My
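(A rough sketch of the order that avoids the degraded rank - purely
illustrative, assuming ceph-deploy and an MDS node named mds1 - would be to
bring up an MDS before, or right after, creating the filesystem:

  ceph-deploy mds create mds1
  ceph osd pool create cephfs_data 64
  ceph osd pool create cephfs_metadata 64
  ceph fs new cephfs cephfs_metadata cephfs_data
  ceph mds stat

With an MDS running, rank 0 should be picked up and the degraded state should
clear.)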
Thanks a lot.
On Fri, Oct 7, 2016 at 2:19 PM Daleep Singh Bais
wrote:
> Hi,
>
> Ceph also uses other ports for other daemons, like the OSDs and MDS. Please
> refer to
>
> http://docs.ceph.com/docs/jewel/rados/configuration/network-config-ref/
>
> This might help you to resolve the issue.
>
> Thanks.
>
Hi,
Ceph also uses other ports for other daemons, like the OSDs and MDS. Please
refer to
http://docs.ceph.com/docs/jewel/rados/configuration/network-config-ref/
This might help you to resolve the issue.
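As a rough sketch (assuming a firewalld-based host and the default port ranges
from that page - the monitor on 6789/tcp and the OSD/MDS daemons on
6800-7300/tcp), opening them would look something like:

  firewall-cmd --zone=public --add-port=6789/tcp --permanent
  firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
  firewall-cmd --reload

Adjust accordingly if you are using ufw or plain iptables.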
Thanks.
Daleep Singh Bais
On 10/07/2016 10:14 AM, Jaemyoun Lee wrote:
> Dear Laizer,
>
> Oh, I g
Hello all,
We have a small 160 TB Ceph cluster used only as a test S3 storage repository
for media content.
Problem
Since upgrading from Firefly to Hammer we are experiencing very high OSD memory
use of 2-3 GB per TB of OSD storage - typical OSD memory is 6-10 GB.
We have had to increase swap space
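One thing that may be worth checking (a sketch, assuming the OSDs are built
against tcmalloc, which is the usual default) is whether the heap profiler
reports memory that can be handed back to the OS:

  ceph tell osd.* heap stats
  ceph tell osd.* heap release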
Dear Laizer,
Oh, I got it. The RBD image was created successfully after I disabled the
firewall.
However, when the firewall is enabled and port 6789 is allowed, the
authentication error still occurs.
Is there any other port?
On Fri, Oct 7, 2016 at 1:23 PM Lomayani S. Laizer
wrote:
> Hello Lee,
>
Hello Lee,
Yes, the problem can be the firewall. Make sure the ports used by Ceph are open.
--
Lomayani
On Fri, Oct 7, 2016 at 6:44 AM, Jaemyoun Lee wrote:
> Dear Laizer,
>
> I did deploy the configuration and the key by '$ ceph-deploy admin
> client-node' on admin-node.
> jae@client-node$ ls /etc/ceph
>
Dear Laizer,
I did deploy the configuration and the key by '$ ceph-deploy admin
client-node' on admin-node.
jae@client-node$ ls /etc/ceph
ceph.client.admin.keyring ceph.conf rbdmap tmpoWLFTb
On Fri, Oct 7, 2016 at 12:33 PM Lomayani S. Laizer
wrote:
Hello Lee,
Make sure you have co
Hello Lee,
Make sure you have copied the configuration and the authentication key
(ceph.client.admin.keyring) to the client node. It looks like an
authentication issue due to a missing client key.
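A minimal sketch of that (assuming ceph-deploy is in use and the client host
is literally named client-node; run from the admin node):

  ceph-deploy admin client-node
  ssh client-node sudo chmod +r /etc/ceph/ceph.client.admin.keyring
  ssh client-node ceph -s

If the keyring is readable and the monitor is reachable, ceph -s should come
back without an authentication error.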
--
Lomayani
On Fri, Oct 7, 2016 at 5:25 AM, Jaemyoun Lee wrote:
> Hi,
> I would like to create a rbd on client
Hi,
I would like to create an RBD image on client-node.
I created a cluster on admin-node successfully.
jae@admin-node$ ceph health
HEALTH_OK
An RBD image was created on admin-node successfully.
jae@admin-node$ rbd create foo --size 1024
However, when I created an RBD image on client-node, the error wa
And - I just saw another recent thread - http://tracker.ceph.com/issues/17177
- could that be an explanation of most/all of the above?
Next question(s) would then be:
- How would one deal with duplicate stray(s)
- How would one deal with a mismatch between head items and
fnode.fragstat, ceph da
Hi,
context (i.e. what we're doing): We're migrating (or trying to migrate) off
of an NFS server onto CephFS, for a workload that's best described as "big
piles" of hardlinks. Essentially, we have a set of "sources":
foo/01/
foo/0b/<0b>
.. and so on
bar/02/..
bar/0c/..
.. and so on
foo/bar/friend
Hi Graham,
Yeah, I am not sure why no one else is having the same issues. Anyway, I had a
chat on IRC and got a link that helped me:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg31764.html
I've followed what it said, and even though the errors I got were different, it
helped me to s
Thanks, that's just what I needed.
For others: I found some information in the Red Hat Ceph Storage 2
documentation. This includes the command "radosgw-admin rename"
which I was unaware of for single-site to multi-site.
(It doesn't seem very encouraging about Hammer to Jewel multisite
transitions
That's interesting, as I am getting the exact same errors after
upgrading from Hammer 0.94.9 to Jewel 10.2.3 (on Ubuntu 14.04).
I wondered if it was the issue referred to a few months ago here, but
I'm not so sure, since the error returned from radosgw-admin commands is
different:
http://list
Thanks Kefu!
Downgrading the mons to 0.94.6 got us out of this situation. I appreciate
you tracking this down!
Bryan
On 10/4/16, 1:18 AM, "ceph-users on behalf of kefu chai"
wrote:
>hi ceph users,
>
>If user upgrades the cluster from a prior release to v0.94.7 or up by
>following the steps:
>
On Thu, Oct 6, 2016 at 8:28 AM, Dennis Kramer (DBS) wrote:
> Hi all,
>
> I have an issue that when I copy a specific file with ceph-fuse on
> cephfs (within the same directory) it stalls after a couple of GB of
> data. Nothing happens. No error, it just "hangs".
>
> When I copy the same file with
On Thu, Oct 6, 2016 at 4:08 AM, James Norman wrote:
> Hi there,
>
> I am developing a web application that supports browsing, uploading,
> downloading, moving files in Ceph Rados pool. Internally to write objects we
> use rados_append, as it's often too memory intensive for us to have the full
> f
Generally you need to create a new realm, and add the 'default'
zonegroup into it. I think you can achieve this via the 'radosgw-admin
zonegroup modify' command.
The zonegroup and zone can be renamed (their id will still be
'default', but you can change their names).
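Roughly, and only as a sketch (I have not verified this end to end on Jewel;
the realm name is just an example):

  radosgw-admin realm create --rgw-realm=myrealm --default
  radosgw-admin zonegroup modify --rgw-zonegroup=default --rgw-realm=myrealm
  radosgw-admin period update --commit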
Yehuda
On Thu, Oct 6, 2016 at
Thanks for your response. But yes, the network is OK.
But I will double check to be sure.
Then again, if I copy other (big) files from the same client, everything
works without any issues. The problem is isolated to a specific file.
With a misconfigured network I would see this kind of issue cons
Hi,
are you sure that the network is OK?
Stuff like this can happen if the MTU size is different somewhere in between.
So make sure that all public Ceph NICs have the same MTU as the
client, and also that every switch port in between has the same MTU.
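A quick way to test that end-to-end (a sketch, assuming a 9000-byte MTU and an
OSD host reachable as osd-host) is a non-fragmenting ping at close to full
frame size:

  ip link show | grep mtu
  ping -M do -s 8972 osd-host

If that ping fails while a normal ping works, something along the path is
dropping the larger frames.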
--
Mit freundlichen Gruessen / Best regards
Hi all,
I have an issue that when I copy a specific file with ceph-fuse on
cephfs (within the same directory) it stalls after a couple of GB of
data. Nothing happens. No error, it just "hangs".
When I copy the same file with the cephfs kernel client it works without
issues.
I'm running Jewel 10.
On Wed, Oct 5, 2016 at 2:32 PM, Patrick McGarry wrote:
> Hey guys,
>
> Starting to buckle down a bit in looking at how we can better set up
> Ceph for VMWare integration, but I need a little info/help from you
> folks.
>
> If you currently are using Ceph+VMWare, or are exploring the option,
> I'd
Hi,
I faced similar problems on CentOS 7 - looks like a race condition with
parted.
Updating to 3.2 solved my problem (coming from 3.1 in the CentOS 7 base):
rpm -Uhv
ftp://195.220.108.108/linux/fedora/linux/updates/22/x86_64/p/parted-3.2-16.fc22.x86_64.rpm
Stas
On Mon, Oct 3, 2016 at 6:39 PM,
How did you set the parameter?
Editing ceph.conf only takes effect when you restart the OSD nodes.
But running something like
ceph tell osd.* injectargs '--osd-max-backfills 6'
would set every OSD's max backfills dynamically without restarting the OSDs,
and you should fairly quickly afterwards see more
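To confirm the injected value actually took effect, you can query one of the
OSDs over its admin socket (a sketch, run on the node hosting osd.0 and
assuming the default socket path):

  ceph daemon osd.0 config get osd_max_backfills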
Good morning,
>> * 2 x SN2100 100Gb/s Switch 16 ports
> Which incidentally is a half sized (identical HW really) Arctica 3200C.
I really never heard of them :-) (and didn't find any price in the EUR/$
region)
>> * 10 x ConnectX 4LX-EN 25Gb card for hypervisor and OSD nodes
[...]
> You haven't commen
Hello,
I have a few OSDs in my cluster that are regularly crashing.
In their logs I can see:
osd.7
-1> 2016-10-06 08:09:18.869687 7ffaa037f700 -1 osd.7 pg_epoch:
128840 pg[5.3as0( v 84797'30080 (67219'27080,84797'30080]
local-les=128834 n=13146 ec=61149 les/c 128834/127358
128829/128
Hello,
I'm new to Ceph, and currently evaluating a deployment strategy.
I'm planning to create a sort of home-hosting (web and compute hosting,
database, etc.), distributed across various locations (cities), extending
the "commodity hardware" concept to "commodity data-center" and
"commodity connecti
Hi there,
I am developing a web application that supports browsing, uploading,
downloading, and moving files in a Ceph RADOS pool. Internally, to write
objects we use rados_append, as it's often too memory-intensive for us to
hold the full file in memory to do a rados_write_full.
We do not control ou
Upgraded from Hammer 0.94.9 to Jewel 10.2.3, and all RGW data survived in
a realm-less setup (no realm, "default" zonegroup, "default" zone).
Is it possible to "move" this into a single realm/zonegroup/zone in
preparation for multisite (i.e. before adding the 2nd zone)?
I don't need more than one r
Hi,
Maybe, in fact, a clean iSCSI implementation would be better, because it is
more usable in general.
So the MS Hyper-V people could use it too.
For me, when it comes to iSCSI (we have tested the tgtd module so far), the
problem is mostly on the reliability side when it comes to resilience in
ca
Is there any way to repair pgs/cephfs gracefully?
-Mykola
From: Yan, Zheng
Sent: Thursday, 6 October 2016 04:48
To: Mykola Dvornik
Cc: John Spray; ceph-users
Subject: Re: [ceph-users] CephFS: No space left on device
On Wed, Oct 5, 2016 at 2:27 PM, Mykola Dvornik wrote:
> Hi Zheng,
>
> Many than
SOLVED!
Thanks to a very kind person from this list who helped me debug, we found that
when I created the VLAN on the switch I didn't set it to allow jumbo packets. This
was preventing the OSDs from activating because some traffic was being blocked.
Once I fixed that everything started working. Somet