[ceph_deploy][ERROR ]     self.gateway._send(Message.CHANNEL_DATA, self.id, dumps_internal(item))
[ceph_deploy][ERROR ]   File "/usr/lib/python2.6/site-packages/ceph_deploy/lib/vendor/remoto/lib/vendor/execnet/gateway_base.py", line 953, in _send
[ceph_deploy][ERROR ]     raise IOError("cannot send (a
Hi Alfredo,
Now all works fine. Thank you!
Hi Roman,
This was a recent change in ceph-deploy to enable Ceph services on
CentOS/RHEL/Fedora distros after deploying a daemon (an OSD in your
case).
There was an issue where the remote connection was closed before it could enable the service.
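If that remote call fails again, the service can also be enabled by hand on the OSD host; a minimal sketch, assuming a systemd-based distro and OSD id 0 (adjust the id for your daemon):

sudo systemctl enable ceph-osd@0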
--
Thanks,
Roman.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
6802/2284 192.168.33.143:6803/2284 exists,up dccd6b99-1885-4c62-864b-107bd9ba0d84
osd.1 up in weight 1 up_from 8 up_thru 0 down_at 0 last_clean_interval [0,0) 192.168.33.142:6800/2399 192.168.33.142:6801/2399 192.168.33.142:6802/2399 192.168.33.142:6803/2399 exists,up 4d4adf4b-ae8e-4e26-866
Yes, of course...
iptables -F (no rules) = the same as disabled
SELINUX=disabled
As a testing ground I use VBox, but I don't think that should be a problem.
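For what it's worth, instead of flushing all rules it should be enough to open the standard Ceph ports; a minimal sketch, assuming the default ports (6789 for the monitors, 6800-7300 for the OSDs):

iptables -A INPUT -p tcp --dport 6789 -j ACCEPT
iptables -A INPUT -p tcp --dport 6800:7300 -j ACCEPT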
Firewall? Disable iptables, set SELinux to Permissive.
On 15 Oct, 2014 5:49 pm, "Roman" <intra...@gmail.com> wrote:
Hi all,
We would like to implement the following setup.
Our cloud nodes (CNs) for virtual machines have two 10 Gbps NICs:
10.x.y.z/22 (routed through the backbone) and 172.x.y.z/24 (available only on servers within a single rack). CNs and Ceph nodes are in the same rack. Ceph nodes have two 10
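With a split like that, the usual approach is to put client traffic on one subnet and replication traffic on the rack-local one; a minimal ceph.conf sketch, assuming the 10.x/22 network is the public network and the 172.x/24 network carries OSD replication (the prefixes below are placeholders):

[global]
public_network = 10.0.0.0/22
cluster_network = 172.16.0.0/24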
Hi,
Are there any interesting papers about running Ceph in AWS, in terms of what to expect for performance, instance sizing, recommended architecture, etc.?
We're planning to use it for shared storage on web servers.
--Roman
SDs in the cluster?
And another question about Ceph and AWS.
Has anybody built anything decent on AWS using Ceph, I mean the storage itself? Or is it not worth the effort, since Amazon might already be using a Ceph-like system on the backend?
Thanks,
--Roman Naumenko
Juicemobile
Thank you, Paulo.
Metadata = MDS, so the metadata server should have CPU power.
--Roman
On 14-11-28 05:34 PM, Paulo Almeida wrote:
On Fri, 2014-11-28 at 16:37 -0500, Roman Naumenko wrote:
And if I understand correctly, monitors are the access points to the
cluster, so they should provide enough
r the osd and
1 for the journal.
Greetings,
Roman
Am 24.05.2013 18:56, schrieb Abel Lopez:
> Hello all,
> New to the list.
>
> I'm trying to form a repeatable deployment method for our environments.
>
> Using Ubuntu 12.04, and trying to install Cuttlefish.
> We
for the suggestion. I tried adding shortnames to /etc/hosts (I
> already have DNS setup), but gatherkeys still fails.
>
> I really want to use ceph for my cinder backend, and also for my
> glance, but if I can't deploy it, I won't use it.
>
>
> On Fri, May 24, 2013
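For reference, ceph-deploy needs the short hostnames to resolve on every node (including the admin node); a sketch of the kind of /etc/hosts entries meant above, with placeholder names and addresses:

192.168.1.10 ceph-node1
192.168.1.11 ceph-node2
192.168.1.12 ceph-node3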
t tested so far.
This took most of the debugging time, because this error doesn't show up clearly in the logs.
Regards,
Roman
On Fri, 24 May 2013 15:43:15 -0700, John Wilkins
wrote:
> I ran into a few issues like this when writing the procedure up. One
> problem with gatherkeys had to do with
Hi,
perhaps a silly question, but am I right that the OSDs have to be mounted via fstab?
Today I started my test cluster and it worked after mounting the OSD partitions manually.
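In case it matters, a sketch of the manual fstab entry I would add for one OSD (device, filesystem and OSD id are just my test values):

/dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  noatime  0  0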
Greetings,
Roman
Hi all.
I'm trying to deploy OpenStack with Ceph Kraken BlueStore OSDs.
The deploy went well, but when I execute ceph osd tree I can see wrong weights on the BlueStore disks.
ceph osd tree | tail
-3 0.91849 host krk-str02
23 0.00980 osd.23 up 1.0 1.0
24 0.90869
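If that disk is really ~1 TB, the CRUSH weight can be corrected manually; a sketch, assuming osd.24's weight is roughly what osd.23 should have (the value is a placeholder):

ceph osd crush reweight osd.23 0.90869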
Hi all,
I'm trying to set up Ceph logging to Graylog.
For that I've set the following options in ceph.conf:
log_to_graylog = true
err_to_graylog = true
log_to_graylog_host = graylog.service.consul
log_to_graylog_port = 12201
mon_cluster_log_to_graylog = true
mon_cluster_log_to_graylog_host = gra
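One way to check whether the daemons actually picked these options up is the admin socket; a sketch, assuming a monitor named after the host's short name:

ceph daemon mon.$(hostname -s) config show | grep graylog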
Hi,
thanks for your reply.
May I ask which type of input you use in Graylog?
"GELF UDP" or another one?
And which version of Graylog/Ceph do you use?
Thanks,
Roman
On Aug 9 2018, at 7:47 pm, Rudenko Aleksandr wrote:
>
> Hi,
>
> All our settings for this:
>
>
a problem for us. Despite this message, Ceph runs without further problems.
It's just a bit annoying that every time the error occurs our monitoring
triggers a big alarm because Ceph is in ERROR status. :)
Thanks in advance,
Roman
Hi everyone!
As I noticed, ceph-volume lacks Ubuntu Trusty compatibility:
https://tracker.ceph.com/issues/23496
So I can't follow this instruction:
http://docs.ceph.com/docs/mimic/rados/operations/bluestore-migration/
Do I have any other option to migrate my Filestore OSDs (Luminous 12.2.9) t
Ok, thx, I'll try ceph-disk.
From: Alfredo Deza
Sent: 15 November 2018 20:16
To: Klimenko, Roman
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Migration osds to Bluestore on Ubuntu 14.04 Trusty
On Thu, Nov 15, 2018 at 8:57 AM Kli
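For the record, a rough sketch of the ceph-disk route, one OSD at a time (OSD id and device are placeholders; the cluster should be healthy and rebalanced between steps):

ceph osd out 12
# wait until data has migrated off the OSD, then:
stop ceph-osd id=12            # upstart on Trusty
ceph osd purge 12 --yes-i-really-mean-it
ceph-disk zap /dev/sdX
ceph-disk prepare --bluestore /dev/sdX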
Hi everyone!
On the old prod cluster
- baremetal, 5 nodes (24 cpu, 256G RAM)
- ceph 0.80.9 filestore
- 105 osd, size 114TB (each osd 1.1T, SAS Seagate ST1200MM0018), raw used 60%
- 15 journals (each journal 0.4TB, Toshiba PX04SMB040)
- net 20Gbps
- 5 pools, size 2, min_size 1
we have dis
Hi everyone. Yesterday I found that on our overcrowded Hammer ceph cluster (83% used in the HDD pool) several OSDs were in the danger zone, near 95%.
I reweighted them, and after a while I got PGs stuck in backfill_toofull.
After that, I reapplied the reweight to the OSDs - no luck.
Currently, all re
Ok, I'll try these params. thx!
From: Maged Mokhtar
Sent: 12 December 2018 10:51
To: Klimenko, Roman; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph pg backfill_toofull
There are 2 relevant params
mon_osd_full_ratio
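For anyone hitting the same thing on a Hammer cluster: a temporary workaround that is often suggested is to nudge the per-OSD backfill threshold up slightly with injectargs; a sketch only, the value is an example and it is no substitute for freeing up capacity:

ceph tell osd.* injectargs '--osd-backfill-full-ratio 0.92'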