es.
Would be nice if you could enlighten me :)
- Mehmet
Hi Dimitri,
what is the output of
- ceph osd tree?
Perhaps you have an initial crush weight of 0; in that case there
wouldn't be any change in the PGs until you change the weight.
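If that is the case, you could raise the weight manually, e.g. (osd id and weight are only placeholders):
#> ceph osd crush reweight osd.<id> 0.05000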
- Mehmet
On 2018-07-10 11:58, Dimitri Roschkowski wrote:
Hi,
is it possible to use just a partition
Hello guys,
in my production cluster I have many objects like this:
"#> rados -p rbd ls | grep 'benchmark'"
... .. .
benchmark_data_inkscope.example.net_32654_object1918
benchmark_data_server_26414_object1990
... .. .
Is it safe to run "rados -p rbd cleanup" or is there any risk for my
images?
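I was thinking of limiting it to the bench objects via their prefix, something like this (not sure if that is the right way, prefix taken from the object names above):
#> rados -p rbd cleanup --prefix benchmark_data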
Hey Ceph people,
need advice on how to move a Ceph cluster from one datacenter to another
without any downtime :)
DC 1:
3 dedicated MON-Server (also MGR on this Servers)
4 dedicated OSD-Server (3x12 OSD, 1x 23 OSDs)
3 Proxmox Nodes with connection to our Ceph-Storage (not managed from
proxmo
Sage wrote (Tue, 2 Jan 2018 17:57:32 +0000 (UTC)):
Hi Stefan, Mehmet,
Hi Sage,
Sorry for the *extremely late* response!
Are these clusters that were upgraded from prior versions, or fresh
luminous installs?
My cluster was initially installed with Jewel (10.2.1) and has seen some
minor updates
Hello John,
On 22 October 2017 13:58:34 CEST, John Spray wrote:
>On Fri, Oct 20, 2017 at 10:10 AM, Mehmet wrote:
>> Hello,
>>
>> yesterday I upgraded my "Jewel" cluster (10.2.10) to "Luminous"
>(12.2.1).
>> This went really smooth
On 2017-10-20 13:00, Mehmet wrote:
On 2017-10-20 11:10, Mehmet wrote:
Hello,
yesterday I upgraded my "Jewel" cluster (10.2.10) to "Luminous"
(12.2.1). This went really smooth - Thanks! :)
Today I wanted to enable the built-in dashboard via
#> vi ceph.
On 2017-10-20 11:10, Mehmet wrote:
Hello,
yesterday I upgraded my "Jewel" cluster (10.2.10) to "Luminous"
(12.2.1). This went really smooth - Thanks! :)
Today I wanted to enable the built-in dashboard via
#> vi ceph.conf
[...]
[mgr]
mgr_modules = dashboard
[...]
Hello,
yesterday I upgraded my "Jewel" cluster (10.2.10) to "Luminous"
(12.2.1). This went really smooth - Thanks! :)
Today I wanted to enable the built-in dashboard via
#> vi ceph.conf
[...]
[mgr]
mgr_modules = dashboard
[...]
#> ceph-deploy --overwrite-conf config push monserver1 monserver
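Alternatively it should also be possible to enable the module at runtime on Luminous (untested on my side):
#> ceph mgr module enable dashboard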
Hey guys,
Does this mean we have to do anything additional when upgrading from Jewel 10.2.10 to
Luminous 12.2.1?
- Mehmet
On 9 October 2017 04:02:14 CEST, kefu chai wrote:
>On Mon, Oct 9, 2017 at 8:07 AM, Joao Eduardo Luis wrote:
>> This looks a lot like a bug I fixed a week or so ago
.. use the HDD for the raw data and the NVMe for WAL and DB?
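With a newer ceph-deploy I would imagine something like this (only a sketch, device names and hostname are placeholders):
#> ceph-deploy osd create --data /dev/sdb --block-db /dev/nvme0n1p1 --block-wal /dev/nvme0n1p2 node1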
Hope you (and others) understand what I mean :)
- Mehmet
On 16 August 2017 19:01:30 CEST, David Turner wrote:
>Honestly there isn't enough information about your use case. RBD usage
>with small IO vs ObjectStore with large file
Which SSDs are used? Are they in production? If so, how is your PG count?
On 17 August 2017 20:04:25 CEST, M Ranga Swami Reddy wrote:
>Hello,
>I am using the Ceph cluster with HDDs and SSDs. Created separate pool
>for each.
>Now, when I ran the "ceph osd bench", HDD's OSDs show around 500 MB/s
Hey Mark :)
On 16 August 2017 21:43:34 CEST, Mark Nelson wrote:
>Hi Mehmet!
>
>On 08/16/2017 11:12 AM, Mehmet wrote:
>> :( no suggestions or recommendations on this?
>>
>> On 14 August 2017 16:50:15 CEST, Mehmet wrote:
>>
>> Hi friends,
>>
:( no suggestions or recommendations on this?
On 14 August 2017 16:50:15 CEST, Mehmet wrote:
>Hi friends,
>
>my actual hardware setup per OSD-node is as follow:
>
># 3 OSD-Nodes with
>- 2x Intel(R) Xeon(R) CPU E5-2603 v3 @ 1.60GHz ==> 12 Cores, no
>Hyper-Threading
>
I am not sure, but perhaps nodown/noout could help it finish?
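For example:
#> ceph osd set nodown
#> ceph osd set noout
and once the deep-scrub has finished:
#> ceph osd unset nodown
#> ceph osd unset noout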
- Mehmet
On 15 August 2017 16:01:57 CEST, Andreas Calminder wrote:
>Hi,
>I got hit with osd suicide timeouts while deep-scrub runs on a
>specific pg, there's a RH article
>(https://access.redhat.com/solutions/21
?
I would set up the disks via "ceph-deploy".
Thanks in advance for your suggestions!
- Mehmet
I guess this is related to
"debug_mgr": "1/5"
but I'm not sure... Give it a try.
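If you want to change it, I would try it in ceph.conf and restart the mgr (the value is only an example):
[mgr]
debug_mgr = 0/5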
Hth
Mehmet
On 8 August 2017 16:28:21 CEST, Konrad Riedel wrote:
>Hi Ceph users,
>
>my luminous (ceph version 12.1.1) testcluster is doing fine, except
>that
>one Monitor is
ceph-objectstore-tool before use!
- Mehmet
On 1 July 2017 01:53:49 CEST, Mark Guz wrote:
>Hi all
>
>I have two osds that are asserting , see
>https://pastebin.com/raw/xmDPg84a
>
>I am running kraken 11.2.0 and am kinda blocked by this. Anything i
>try
>to do with these
Hi,
We are actually using 3x Intel servers with 12 OSDs and one Supermicro with 24 OSDs
in one Ceph cluster, with journals on NVMe per server. We have not seen any issues yet.
Best
Mehmet
On 9 June 2017 19:24:40 CEST, Deepak Naidu wrote:
>Thanks David for sharing your experience, appreciate
Perhaps openATTIC is also an alternative for administering Ceph. Actually I prefer
ceph-deploy.
On 18 May 2017 15:33:52 CEST, Shambhu Rajak wrote:
>Let me explore the code to my needs. Thanks Chris
>Regards,
>Shambhu
>
>From: Bitskrieg [mailto:bitskr...@bitskrieg.net]
>Sent: Thursday, May 18, 2017 6:40
Hi,
I thought that clients also read from Ceph replicas. Sometimes I read on
the web that this only happens from the primary PG, like how Ceph handles
writes... so what is true?
Greetz
Mehmet
Also I would set
osd_crush_initial_weight = 0
in ceph.conf and then adjust the crush weight step by step via
#> ceph osd crush reweight osd.36 0.05000
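For example, continuing step by step (weights are only placeholders, wait for the rebalance in between):
#> ceph osd crush reweight osd.36 0.10000
#> ceph osd crush reweight osd.36 0.20000
and so on until the target weight of the disk is reached.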
On 25 April 2017 23:19:08 CEST, Reed Dier wrote:
>Others will likely be able to provide some better responses, but I’ll
>take a shot to see if anyt
).
Hope you understand what I mean :)
HTH
Mehmet
On 20 April 2017 09:19:32 CEST, "Stolte, Felix" wrote:
>Hello cephers,
>
>is anyone using Fujitsu Hardware for Ceph OSDs with the PRAID EP400i
>Raidcontroller in JBOD Mode? We are having three identical servers with
>
Perhaps ceph-deploy can work when you disable the "epel" repo?
Purge all and try it again.
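Something like this should do it (hostname is a placeholder):
#> yum-config-manager --disable epel
#> ceph-deploy purge node1
#> ceph-deploy purgedata node1
and then run the install again.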
On 7 April 2017 04:27:59 CEST, Travis Eddy wrote:
>Here is what I tried: (several times)
>Nothing works
>The best I got was following the Ceph guide and adding
>sudo yum install centos-release-ceph-jew
long /dev/sdX
If not, repeat the above with the newly found defect LBA.
I've done this three times successfully - but not with an error on a
primary PG.
After that you can start the osd with
# systemctl start ceph-osd@32
# ceph osd in osd.32
HTH
- Mehmet
On 2017-03-17 20:08, Shain Miley wrote:
auth_client_required = cephx
osd_crush_initial_weight = 0
mon_osd_full_ratio = 0.90
mon_osd_nearfull_ratio = 0.80
[mon]
mon_allow_pool_delete = false
[osd]
#osd_journal_size = 20480
osd_journal_size = 15360
Please ask if you need more information.
Thanks so far.
- Mehmet
Perhaps a deep scrub will cause a scrub error, which you can then try to fix with ceph pg
repair?
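For example (the pg id is only a placeholder):
#> ceph pg deep-scrub <pgid>
#> ceph pg repair <pgid>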
Btw. it seems that you use 2 replicas, which is not recommended except for dev
environments.
On 24 January 2017 22:58:14 CET, Richard Bade wrote:
>Hi Everyone,
>I've got a strange one. After doing a reweight of
I guess this is because you are always using the same root tree.
On 23 January 2017 10:50:16 CET, Sascha Spreitzer wrote:
>Hi all
>
>I recognized ceph is rebalancing the whole crush map when i add osd's
>that should not affect any of my crush rulesets.
>
>Is there a way to add osd's to the crush
Hi Andras,
I am not the most experienced user, but I guess you could have a look at this object
on each related OSD for the PG, compare them and delete the differing object. I
assume you have size = 3.
Then run pg repair again.
But be careful: IIRC the replica will be recovered from the primary PG.
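A rough sketch of what I mean (assuming FileStore; pg id, osd ids and object name are placeholders):
on each OSD in the acting set run
#> find /var/lib/ceph/osd/ceph-<id>/current/<pgid>_head/ -name '*<object>*' -exec md5sum {} \;
then stop the OSD with the differing copy, move that file away, start the OSD again and run
#> ceph pg repair <pgid>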
Hth
I would try to set pgp_num for your pool equal to 300:
#> ceph osd pool set yourpool pgp_num 300
If that does not help, try to restart OSDs 7 and 15.
Hth
- Mehmet
On 2 November 2016 14:15:09 CET, Vlad Blando wrote:
>I have a 3 Cluster Giant setup with 8 OSD e
c...@elchaka.de wrote:
>Hello Ronny,
>
>if it is possible for you, try to reboot all OSD nodes.
>
>I had this issue on my test cluster and it became healthy after
>rebooting.
>
>Hth
>- Mehmet
>
>On 1 November 2016 19:55:07 CET, Ronny
Hey Alexey,
sorry - it seems that the log file does not contain the debug message
which I got at the command line.
Here it is:
- http://slexy.org/view/s20A6m2Tfr
Mehmet
On 2016-09-12 15:48, Alexey Sheplyakov wrote:
Hi,
This is the actual logfile for osd.10
> - http://slexy.org/v
://slexy.org/view/s2vrUnNBEW
Thank you for your investigations :)
HTH
kind regards,
- Mehmet
On 2016-09-12 15:48, Alexey Sheplyakov wrote:
Hi,
This is the actual logfile for osd.10
> - http://slexy.org/view/s21lhpkLGQ [5]
Unfortunately this log does not contain any new data -- for some
reason the
:
I have done "ceph osd set noout" before stopping and flushing.
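In short the sequence was roughly (osd.12 as above):
#> ceph osd set noout
#> systemctl stop ceph-osd@12
#> ceph-osd -i 12 --flush-journal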
Hope this is useful for you!
- Mehmet
Best regards,
Alexey
On Wed, Sep 7, 2016 at 6:48 PM, Mehmet wrote:
Hey again,
now I have stopped my osd.12 via
root@:~# systemctl stop ceph-osd@12
and when I flush the journa
of the executable, or `objdump -rdS ` is
needed to interpret this.
Segmentation fault
The logfile with further information
- http://slexy.org/view/s2T8AohMfU
I guess I will get the same message when I flush the other journals.
- Mehmet
On 2016-09-07 13:23, Mehmet wrote:
Hello ceph peo
this will be changed to 2x 10GB Fibre perhaps with LACP when
possible.
- We do not use Jumbo Frames yet..
- Public and cluster-network related Ceph traffic is actually going
through this one active 1GB Interface on each Server.
hf
- Mehmet
ery much for your patience and great help!
Now let's play a bit with Ceph ^^
Best regards,
- Mehmet
On 2016-08-30 00:02, Jean-Charles Lopez wrote:
How Mehmet
OK so it does come from a rados put.
As you were able to check, the VM device object size is 4 MB.
So we'll see after you have r
it see :) - I hope that my next email will close this issue.
Thank you very much for your help!
Best regards,
- Mehmet
Hello JC,
in short for the records:
What you can try doing is to change the following settings on all the
OSDs that host this particular PG and see if it makes things better
[osd]
[...]
osd_scrub_chunk_max = 5  # maximum number of chunks the scrub will
Hey JC,
thank you very much! - My answers inline :)
On 2016-08-26 19:26, LOPEZ Jean-Charles wrote:
Hi Mehmet,
what is interesting in the PG stats is that the PG contains around
700+ objects and you said that you are using RBD only in your cluster
if IIRC. With the default RBD order (4MB
the acting set
for pg 0.223.
Thank you, your help is very appreciated!
- Mehmet
On 2016-08-25 13:58, c...@elchaka.de wrote:
Hey JC,
Thank you very much for your mail!
I will provide the information tomorrow when I am at work again.
Hope that we will find a solution :)
- Mehmet
On 2
ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)
Kernel: 4.4.0-31-generic #50-Ubuntu
Any ideas?
- Mehmet
On 2016-08-02 17:57, c wrote:
On 2016-08-02 13:30, c wrote:
Hello Guys,
this time without the original acting-set osd.4, 16 and 28. The issue
still exists...
[...]
For the record, th