Hi,
I am designing an infrastructure using Ceph.
The client will fetch data through HTTP.
I looked at radosgw, which is made for that; it has, however, a weakness
for me: as far as I understand, when a client wants to fetch a file, it
connects to the radosgw, which then connects to the right OSD
Hi,
I have a Ceph cluster, used through radosgw.
In that cluster, I write files every second: the input files are known,
predictable and stable; there is always the same number of new
fixed-size files each second.
These files are kept for a few days, then removed after a fixed duration.
And thus, I
u are trying to say.
>>
>> Wheezy was released with kernel 3.2 and bugfixes are applied to 3.2 by
>> Debian throughout Wheezy's support cycle.
>>
>> But by using the Wheezy backports repository one can use kernel 3.16,
>> including the ceph code which is incl
23:32, Chris Armstrong wrote:
> Hi folks,
>
> Calling on the collective Ceph knowledge here. Since upgrading to
> Hammer, we're now seeing:
>
> health HEALTH_WARN
> too many PGs per OSD (1536 > max 300)
>
> We have 3 OSDs, so we have used the pg_n
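(A rough sanity check, assuming all pools are replicated with size 3: PGs per OSD is roughly the sum of pg_num over all pools, times the replica count, divided by the number of OSDs. With only 3 OSDs and size 3, every PG places a copy on every OSD, so 1536 PGs per OSD simply means the pools add up to about 1536 PGs in total - far more than 3 OSDs are meant to carry.)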
ned etc), instead of using the whole raw disk
I'm using these steps:
- create a partition
- mkfs.xfs
- mkdir & mount
- ceph-deploy osd prepare host:/path/to/mounted-fs
Dunno if it's the right way, seems to work so far
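For reference, roughly the same steps as commands (device, mountpoint and host names are only examples):

  parted -s /dev/sdb mklabel gpt mkpart primary xfs 1MiB 100%
  mkfs.xfs /dev/sdb1
  mkdir -p /var/lib/ceph/osd-sdb1
  mount /dev/sdb1 /var/lib/ceph/osd-sdb1
  ceph-deploy osd prepare host1:/var/lib/ceph/osd-sdb1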
On 21/05/2014 16:05, 10 minus wrote:
Hi,
I have just start
IO cache may be handled by the kernel, not userspace
Are you sure it is not already in use ? Do not look for userspace memory
On 07/09/2015 23:19, Vickey Singh wrote:
> Hello Experts ,
>
> I want to increase my Ceph cluster's read performance.
>
> I have several OSD nodes ha
>
>Thanks and regards.
>
>
>
>
Hi,
>
> I saw this tip in the troubleshooting section:
>
> DO NOT mount kernel clients directly on the same node as your Ceph Storage
> Cluster,
> because kernel conflicts can arise. However, you can mount kernel clients
> within
> virtual machines (VMs) on a sing
che because of
>> HTTPS config.
>
> Civetweb should be able to handle ssl just fine:
>
> rgw_frontends = civetweb port=7480s ssl_certificate=/path/to/some_cert.pem
>
>
>
obably forgo the separate
>> cluster network and just run them over the same IP, as after running the
>> cluster, I don't see any benefit from separate networks when taking into
>> account the extra complexity. Something to consider.
>>
>>> -Original Messag
and sparse/thinly-provisioned LUNs, but it is
> off by default until sufficient testing has been done.
On 16/06/2016 12:24, Zhongyan Gu wrote:
> Hi,
> it seems using resize2fs on rbd image would generate lots of garbage
> objects in ceph.
> The experiment is:
> 1. use resize2fs to e
/crush-map/#crush-map-bucket-types
and
http://docs.ceph.com/docs/hammer/rados/configuration/pool-pg-config-ref/)
On 01/07/2016 13:49, Ashley Merrick wrote:
> Hello,
>
> Looking at setting up a new CEPH Cluster, starting with the following.
>
> 3 x CEPH OSD Servers
>
> Eac
40Gbps can be used as 4*10Gbps
I guess feedback should not be limited to "usage of a 40Gbps
port", but extended to "usage of more than a single 10Gbps port, e.g.
20Gbps etc. too"
Are there people here using more than 10G on a Ceph server ?
On 13/07/2016 14:27
per per
> port, and won't be so hard to drive. 40GbE is very hard to fill. I
> personally probably would not do 40 again.
>
> Warren Wang
>
>
>
> On 7/13/16, 9:10 AM, "ceph-users on behalf of Götz Reinicke - IT
> Koordinator" goetz.reini...@filmakad
e (near-full cluster that cannot handle an
OSD failure)
- too many OSDs died at the same time, making auto-healing ineffective:
you will have "some" objects missing (if all copies were on missing
OSDs, there is no way to recreate them)
On 15/08/2016 11:18, kp...@freenet.de wrote:
> hell
Simple solution that always works : purge systemd
Tested and approved on all my ceph nodes, and all my servers :)
On 20/08/2016 19:35, Marcus wrote:
> Blablabla systemd blablabla
that device
On 28/06/2017 13:42, Wido den Hollander wrote:
> Honestly I think there aren't that many IPv6 deployments with Ceph out there.
> I for sure am a big fan and deployer of Ceph+IPv6, but I don't know many around
> me.
I got that !
Because IPv6 is so much better than IPv4 :dance:
g in to
> luminous+btrfs territory.
> Is that good enough?
>
> sage
This seems sane to me
; * 6x NVMe U2 for OSD
>> * 2x 100Gib ethernet cards
>>
>> We are not yet sure about which Intel CPU and how much RAM we should put in
>> it to avoid a CPU bottleneck.
>> Can you help me to choose the right couple of CPU?
>> Did you see any issue on the configur
my guess would be the 8 NVME drives +
> 2x100Gbit would be too much for
> the current Xeon generation (40 PCIE lanes) to fully utilize.
>
> Cheers,
> Robert van Leeuwen
matters for latency so you
>> probably want to up that.
>>
>> You can also look at the DIMM configuration.
>> TBH I am not sure how much it impacts Ceph performance but having just 2
>> DIMMS slots populated will not give you max memory bandwidth.
>> Having some
>
>> ,Ashley
>>
>>
>> From: Henrik Korkuc
>> Sent: 06 September 2017 06:58:52
>> To: Ashley Merrick; ceph-us...@ceph.com
>> Subject: Re: [ceph-users] Luminous Upgrade KRBD
>>
>> On 17-09-06 07:33, Ashley Merr
data from journal to definitive storage
Bluestore:
- write data to definitive storage (free space, not overwriting anything)
- write metadata
>
> I found a topic on reddit saying that, because of the journal, Ceph avoids the buffer cache;
> is it true? Is it a drawback of POSIX?
> https://ww
r answer.
>
>Regards,
>Erik
>
>
>
>
Hi Stefan,
On 14 December 2017 09:48:36 CET, Stefan Kooman wrote:
>Hi,
>
>We see the following in the logs after we start a scrub for some osds:
>
>ceph-osd.2.log:2017-12-14 06:50:47.180344 7f0f47db2700 0
>log_channel(cluster) log [DBG] : 1.2d8 scrub starts
>ceph-osd.2
I assume you have a size of 3; then divide your expected 400 by 3 and you are
not far away from what you get...
In addition, you should never use consumer-grade SSDs for Ceph, as they will
reach their DWPD very soon...
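For example (assuming replicated size 3 and that the 400 refers to MB/s of raw write throughput): every client write is written three times, so the usable figure is roughly 400 / 3 ≈ 133 MB/s, before any other overhead.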
On 4 January 2018 09:54:55 CET, "Rafał Wądołowski" wrote:
s is absolutely in a production environment :)
On the client side we are using Proxmox qemu/kvm to access the rbds (librbd).
Currently 12.2.2
- Mehmet
I don't understand how all of this is related to Ceph
Ceph runs on dedicated hardware, there is nothing there except Ceph,
and the ceph daemons already have full power over Ceph's data.
And no arbitrary code execution is allowed on this node.
Thus, spectre & meltdown are me
Well, if a stranger has access to my whole Ceph data (that is, all my VMs'
& rgw's data), I don't mind if he gets root access too :)
On 01/12/2018 10:18 AM, Van Leeuwen, Robert wrote:
Ceph runs on a dedicated hardware, there is nothing there except Ceph,
and the ceph daemons ha
)
Please welcome the cli interfaces.
- USER tasks: create new images, increase image size, shrink image size,
check daily status and change broken disks whenever needed.
Who does that ?
For instance, Ceph can be used for VMs. Your VM management system creates images,
resizes images, whate
I think I was not clear.
There are VM management systems; look at
https://fr.wikipedia.org/wiki/Proxmox_VE,
https://en.wikipedia.org/wiki/Ganeti, probably
https://en.wikipedia.org/wiki/OpenStack too.
These systems interact with Ceph.
When you create a VM, an rbd volume is created
When you
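Roughly what such a management layer ends up running against Ceph, as a sketch (pool and image names are made up):

  rbd create vms/vm-100-disk-0 --size 32G
  rbd resize vms/vm-100-disk-0 --size 64G
  rbd rm vms/vm-100-disk-0

So the user tasks mentioned earlier (create, grow, shrink, replace) are driven by Proxmox/Ganeti/OpenStack through librbd or the rbd CLI rather than typed by hand.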
On 01/23/2018 04:33 PM, Massimiliano Cuttini wrote:
With Ceph you have to install a 3rd-party orchestrator in order to have
a clear picture of what is going on.
Which can be ok, but is not always feasible.
Just as with everything
As Wikipedia says, for instance, "Proxmox VE supports
On 9 February 2018 11:51:08 CET, Lenz Grimmer wrote:
>Hi all,
>
>On 02/08/2018 11:23 AM, Martin Emrich wrote:
>
>> I just want to thank all organizers and speakers for the awesome Ceph
>> Day at Darmstadt, Germany yesterday.
>>
>> I learned of some cool stu
ars ago (2/3) I had a look at Calamari and there was
a flag which could be set to stop client IO only...
Is someone out there using Calamari today who could give a short response?
- Mehmet
>On Mon, Feb 12, 2018 at 1:56 PM Reed Dier
>wrote:
>
>> I do know that there is a
ched txt files with
>output of your pg query of the pg 0.223?
>output of ceph -s
>output of ceph df
>output of ceph osd df
>output of ceph osd dump | grep pool
>output of ceph osd crush rule dump
>
>Thank you and I’ll see if I can get something to ease your pain.
>
>As
degraded, with 25
>unfound objects.
>
># ceph health detail
>HEALTH_WARN 2 pgs degraded; 2 pgs recovering; 2 pgs stuck degraded; 2
>pgs stuck unclean; 2 pgs stuck undersized; 2 pgs undersized; recovery
>294599/149522370 objects degraded (0.197%); recovery 640073/149522370
>obje
>
> Thanks!
> Chad.
As always: ceph status
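A few plain ceph CLI commands to follow it (nothing exotic, no extra tooling assumed):

  ceph -s        # health summary, including the 'objects degraded (x%)' line
  ceph -w        # the same, but streaming updates as recovery progresses
  ceph pg stat   # one-line PG and object counters

The 'objects degraded (x%)' figure is the percentage progression asked about below.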
On 22/12/2016 11:53, Stéphane Klein wrote:
> Hi,
>
> When I shutdown one osd node, where can I see the block movement?
> Where can I see percentage progression?
>
> Best regards,
> Stéphane
>
>
>
That's correct :)
On 22/12/2016 12:12, Stéphane Klein wrote:
> HEALTH_WARN 43 pgs degraded; 43 pgs stuck unclean; 43 pgs undersized;
> recovery 24/70 objects degraded (34.286%); too few PGs per OSD (28 < min
> 30); 1/3 in osds are down;
>
> Here Ceph say there
group number from 3072 to 8192 or better
> to 165336 and I think doing it without client operations will be much faster.
>
> Thanks
> Regards
> Matteo
>
>
>
>
d-only?
>>>
>>> I need to quickly upgrade placement group number from 3072 to 8192 or
>>> better to 165336 and I think doing it without client operations will be
>>> much faster.
>>>
>>> Thanks
>>> Regards
>>> Matteo
>
Sounds like cephfs to me
On 08/01/2018 09:33 AM, Will Zhao wrote:
> Hi:
> I want to use ceph rbd, because it shows better performance. But I don't
> like the kernel module and iSCSI target process. So here are my requirements:
> I don't want to map it and mount it, but I still want
y predictions when the 12.2.8 release will be available?
>
>
>Micha Krause
On 1 August 2018 10:33:26 CEST, Jake Grimmett wrote:
>Dear All,
Hello Jake,
>
>Not sure if this is a bug, but when I add Intel Optane 900P drives,
>their device class is automatically set to SSD rather than NVME.
>
AFAIK ceph currently only differentiates between hdd and ssd
", "file_number": 4350}
> -1> 2018-08-03 12:12:53.146753 7f12c38d0a80 0 osd.154 89917 load_pgs
> 0> 2018-08-03 12:12:57.526910 7f12c38d0a80 -1 *** Caught signal
>(Segmentation fault) **
> in thread 7f12c38d0a80 thread_name:ceph-osd
> ceph version 10.2.11 (e4b
uld be increased first -
not sure which one, but the docs and mailing list history should be helpful.
Hope I could give some useful hints.
- Mehmet
>Thanks,
>
>John
On 20 August 2018 17:22:35 CEST, Mehmet wrote:
>Hello,
Hello me,
>
>AFAIK removing big RBD images would lead ceph to produce blocked
>requests - I don't mean caused by poor disks.
>
>Is this still the case with "Luminous (12.2.4)"?
>
To answer my qu
age Weil" wrote:
>> >Hi everyone,
>> >
>> >Please help me welcome Mike Perez, the new Ceph community manager!
>> >
>> >Mike has a long history with Ceph: he started at DreamHost working
>on
>> >OpenStack and Ceph back in the early da
"event": "done"
>}
>]
>}
>},
>
>Seems like I have an operation that was delayed over 2 seconds in
>queued_for_pg state.
>What does that mean? What was it waiting for?
>
>Regards,
>*Ronnie Lazar*
>*R&D*
>
>T: +972 77 556-1727
>E: ron...@stratoscale.com
Hi karri,
On 4 September 2018 23:30:01 CEST, Pardhiv Karri wrote:
>Hi,
>
>I created a ceph cluster manually (not using ceph-deploy). When I reboot
>the node, the OSDs don't come back up because the OS doesn't know that it
>needs to bring up the OSDs.
Hi,
I assume that you are speaking of rbd only
Taking snapshots of rbd volumes and keeping all of them on the cluster is
fine.
However, this is not a backup.
A snapshot is only a backup if it is exported off-site
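A minimal off-site export cycle, as a sketch (pool, image, snapshot and host names are placeholders):

  rbd snap create rbd/vm1@backup-2018-09-18
  rbd export-diff --from-snap backup-2018-09-17 rbd/vm1@backup-2018-09-18 - \
    | ssh backup-host 'rbd import-diff - backup/vm1'

Only once the diff is stored on an independent cluster or medium would I call the snapshot a backup.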
On 09/18/2018 11:54 AM, ST Wong (ITSC) wrote:
> Hi,
>
> We're newbie to
For cephfs & rgw, it all depends on your needs, as with rbd
You may want to blindly trust Ceph,
or you may back up all your data, just in case (better safe than sorry,
as he said).
To my knowledge, there is little or no impact from keeping a large number
of snapshots on a cluster.
With rbd, you
As of today, there is no such feature in Ceph
Best regards,
On 09/27/2018 04:34 PM, Gaël THEROND wrote:
> Hi folks!
>
> As I’ll soon start to work on a new really large and distributed CEPH
> project for cold data storage, I’m checking out a few features availability
> and status
Hello Vikas,
Could you please tell us which commands you used to set up rbd-mirror?
It would be great if you could provide a short howto :)
Thanks in advance
- Mehmet
On 2 October 2018 22:47:08 CEST, Vikas Rana wrote:
>Hi,
>
>We have a CEPH 3 node cluster at primary site. We
2018 at 4:25 AM Massimo Sgaravatto <
>massimo.sgarava...@gmail.com> wrote:
>
>> Hi
>>
>> I have a ceph cluster, running luminous, composed of 5 OSD nodes,
>which is
>> using filestore.
>> Each OSD node has 2 E5-2620 v4 processors, 64 GB of RAM, 10x6TB SATA
>
Hello Roman,
I am not sure if I can be of help, but perhaps these commands can help to find
the objects in question...
ceph health detail
rados list-inconsistent-pg rbd
rados list-inconsistent-obj 2.10d
I guess it is also interesting to know whether you use bluestore or filestore...
Hth
- Mehmet
Am
does it do if this feature is disabled ?
Why is `whole-object` an option, and not the default behavior ?
Regards,
s it do if this feature is disabled ?
>
> It behaves the same way -- it will export the full object if at least
> one byte has changed.
>
>> Why is `whole-object` an option, and not the default behavior ?
>
> ... because it can result in larger export-diffs.
>
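A minimal illustration of the two modes (image and snapshot names are made up):

  rbd export-diff --from-snap s1 rbd/img@s2 img.diff                 # byte-granular dirty extents
  rbd export-diff --whole-object --from-snap s1 rbd/img@s2 img.diff  # emits whole objects

With --whole-object the diff can be computed from coarse, object-level dirty information (e.g. fast-diff) instead of exact extents, at the price of the larger diffs mentioned above.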
w one. I'm running
>Luminous 12.2.2 on Ubuntu 16.04 and everything was created with
>ceph-deploy.
>
>What is the best course of action for moving these drives? I have read
>some
>posts that suggest I can simply move the drive and once the new OSD
>node
>sees the drive it will
The stock kernel from Debian is perfect
Spectre / meltdown mitigations are worthless from a Ceph point of view,
and should be disabled (again, strictly from a Ceph point of view)
If you need the luminous features, using the userspace implementations
is required (librbd via rbd-nbd or qemu
e uses unsupported features: 0x38
il".
>> rbd: map failed: (6) No such device or address
>> # dmesg | tail -1
>> [1108045.667333] rbd: image truc: image uses unsupported features: 0x38
>
> Those are rbd image features. Your email also mentioned "libcephfs via
> fuse", so I assumed you
I would also check the utilization of your disks with tools like atop. Perhaps
there is something related in dmesg or thereabouts?
- Mehmet
On 24 March 2018 08:17:44 CET, Sam Huracan wrote:
>Hi guys,
>We are running a production OpenStack backend by Ceph.
>
>At present, we are meet
On 24 March 2018 00:05:12 CET, Thiago Gonzaga wrote:
>Hi All,
>
>I'm starting with ceph and faced a problem while using object-map
>
>root@ceph-mon-1:/home/tgonzaga# rbd create test -s 1024 --image-format
>2
>--image-feature exclusive-lock
>root@ceph-mon-1:/home
SQL is a no brainer really)
>-- Original message -- From: Sam Huracan Date: Sat 24 March
>2018 19:20 To: c...@elchaka.de; Cc:
>ceph-users@lists.ceph.com; Subject: Re: [ceph-users] Fwd: High IOWait Issue
>This is from iostat:
>I'm using Ceph jewel, has no HW error.Cep
This is an extra package: rbd-nbd
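A quick sketch of getting it going (the package name is the Debian/Ubuntu one, pool/image are examples):

  apt-get install rbd-nbd
  rbd-nbd map rbd/test        # or: rbd nbd map rbd/test
  mkfs.xfs /dev/nbd0 && mount /dev/nbd0 /mnt

rbd-nbd runs librbd in userspace, so it supports image features the kernel client does not.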
On 03/26/2018 04:41 PM, Thiago Gonzaga wrote:
> It seems the bin is not present, is it part of ceph packages?
>
> tgonzaga@ceph-mon-3:~$ sudo rbd nbd map test
> /usr/bin/rbd-nbd: exec failed: (2) No such file or directory
> rbd: rbd-nbd failed w
On 4 April 2018 20:58:19 CEST, Robert Stanford wrote:
>I read a couple of versions ago that ceph-deploy was not recommended
>for
>production clusters. Why was that? Is this still the case? We have a
I cannot imagine that. I have used it for a few versions now, before 2.0, and it works
Hi Marc,
On 7 April 2018 18:32:40 CEST, Marc Roos wrote:
>
>How do you resolve these issues?
>
In my case I could get rid of this by deleting the existing snapshots.
- Mehmet
>
>Apr 7 22:39:21 c03 ceph-osd: 2018-04-07 22:39:21.928484 7f0826524700
>-1
>osd.13 pg_epo
o:c...@elchaka.de]
>Sent: zondag 8 april 2018 10:44
>To: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for
>$object?
>
>Hi Marc,
>
>On 7 April 2018 18:32:40 CEST, Marc Roos wrote:
>>
>>How do you resolve these i
der, the snapshot id is now gone.
>
Hmm... that makes me curious...
So when I have a VM image (rbd) on Ceph and take one or more snapshots
of this image, do I *have to* delete the snapshot(s) completely first
before I delete the original image?
How can we then get rid of this orph
join us for the next one on May 28:
>
>https://www.meetup.com/Ceph-Berlin/events/qbpxrhyxhblc/
>
>The presented topic is "High available (active/active) NFS and CIFS
>exports upon CephFS".
>
>Kindest Regards
>--
>Robert Sander
>Heinlein Support GmbH
>S
FYI
De: "Abhishek" À: "ceph-devel"
, "ceph-users" ,
ceph-maintain...@ceph.com, ceph-annou...@ceph.com Envoyé: Vendredi 1
Juin 2018 14:11:00 Objet: v13.2.0 Mimic is out
We're glad to announce the first stable release of Mimic, the next long
term release se
Just a bet: do you have inconsistent MTUs across your network?
I already had your issue when the OSDs and clients were using jumbo frames, but
the MONs did not (or something like that)
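A quick way to check, as a sketch (interface name and MON address are placeholders):

  ip link show eth0 | grep mtu       # compare on clients, OSD nodes and MON nodes
  ping -c 3 -M do -s 8972 <mon-ip>   # 8972 = 9000 minus 28 bytes of IP/ICMP headers

If the jumbo-sized ping fails between two hosts while a normal ping works, something on that path is still at MTU 1500.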
On 06/07/2018 05:12 AM, Tracy Reed wrote:
>
> Hello all! I'm running luminous with old style non-bluestore
overhead,
> - everywhere 10Gb is recommended because of better latency. (I even
> posted here something to make ceph better performing with 1Gb eth,
> disregarded because it would add complexity, fine, I can understand)
>
> And then because of some start-up/automation issues (because th
Great,
On 5 June 2018 17:13:12 CEST, Robert Sander wrote:
>Hi,
>
>On 27.05.2018 01:48, c...@elchaka.de wrote:
>>
>> Very interested to the Slides/vids.
>
>Slides are now available:
>https://www.meetup.com/Ceph-Berlin/events/qbpxrhyxhblc/
Thank you very much
Hello Paul,
On 5 June 2018 22:17:15 CEST, Paul Emmerich wrote:
>Hi,
>
>If anyone wants to play around with Ceph on Debian: I just made our
>mirror
>for our
>dev/test image builds public:
>
>wget -q -O- 'https://static.croit.io/keys/release.asc'
Hi yao,
IIRC there is a *sleep* option which is useful when a delete operation is being
done by ceph, sleep_trim or something like that.
- Mehmet
On 7 June 2018 04:11:11 CEST, Yao Guotao wrote:
>Hi Jason,
>
>
>Thank you very much for your reply.
>I think the RBD tras
Hi Paul,
On 14 June 2018 00:33:09 CEST, Paul Emmerich wrote:
>2018-06-13 23:53 GMT+02:00 :
>
>> Hi yao,
>>
>> IIRC there is a *sleep* Option which is usefull when delete Operation
>is
>> being done from ceph sleep_trim or something like that.
>>
>
ions that compression is available in Kraken for
> bluestore OSDs, however, I can find almost nothing in the documentation
> that indicates how to use it.
>
> I've found:
> - http://docs.ceph.com/docs/master/radosgw/compression/
> - http://ceph.com/releases/v11-2-0-kraken-released/
>
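The knobs I know of are per-pool (pool name and values below are only an example), plus the matching bluestore_compression_* defaults in ceph.conf:

  ceph osd pool set mypool compression_algorithm snappy
  ceph osd pool set mypool compression_mode aggressive
  ceph osd pool set mypool compression_required_ratio 0.875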
upkeep than because of real failures) you're suddenly drastically
> increasing the risk of data-loss. So I find myself wondering if there is a
> way to tell Ceph I want an extra replica created for a particular PG or set
> thereof, e.g., something that would enable the functional equivale
et awareness, i.e., secondary
>> disk
>>> failure based purely on chance of any 1 of the remaining 99 OSDs failing
>>> within recovery time). 5 nines is just fine for our purposes, but of
>> course
>>> multiple disk failures are only part of the story.
>>
Hi,
I tried this a month ago.
Unfortunately the script did not work out, but if you do the described steps
manually, it works.
The important thing is that /var/lib/ceph/osd.x/journal (not sure about the
path) should point to the right place where your journal should be.
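Roughly what the manual steps boil down to (OSD id and partition UUID are placeholders; adapt to sysvinit if needed):

  ceph osd set noout
  systemctl stop ceph-osd@3
  ceph-osd -i 3 --flush-journal
  ln -sf /dev/disk/by-partuuid/<journal-part-uuid> /var/lib/ceph/osd/ceph-3/journal
  ceph-osd -i 3 --mkjournal
  systemctl start ceph-osd@3
  ceph osd unset noout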
On 25 January 2016 16:48
Hi,
I'm wondering about the leveldb-based KVStore.
Is it a full drop-in replacement for filestore?
I mean, can I store multiple TB in a levelDB OSD ?
Is there any restriction for leveldb ? (object size etc)
Is leveldb-based KVstore considered by the ceph community as "stable" ?
Can
mance of
> cluster. Something like Linux page cache for OSD write operations.
>
> I assume that by default Linux page cache can use free memory to improve
> OSD read performance ( please correct me if i am wrong). But how about OSD
> write improvement , How to improve that with free
; On a few servers, updated from Hammer to Infernalis, and from Debian
>>> Wheezy to Jessie, I can see that it seems to have some mixes between old
>>> sysvinit "ceph" script and the new ones on systemd.
>>>
>>> I always have an /etc/init.d/ceph old s
Hi,
As the docs say: mon, then osd, then rgw.
Restart each daemon after upgrading the code.
Works fine
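On a systemd-based release that boils down to something like this, run node by node and in this order across the cluster:

  systemctl restart ceph-mon.target      # on each monitor node first
  systemctl restart ceph-osd.target      # then on each OSD node
  systemctl restart ceph-radosgw.target  # finally on the gateway

checking that 'ceph -s' is healthy again before moving on to the next node.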
On 03/03/2016 22:11, Andrea Annoè wrote:
> Hi to all,
> An architecture of Ceph have:
> 1 RGW
> 3 MON
> 4 OSD
>
> Someone have tested procedure for upgrade Ceph architec
Without knowing proxmox specific stuff ..
#1: just create an OSD the regular way
#2: it is safe; however, you may either create a spoof
(osd_crush_chooseleaf_type = 0) or underuse your cluster
(osd_crush_chooseleaf_type = 1)
On 09/04/2016 14:39, Mad Th wrote:
> We have a 3 node proxmox/c
On Tue, 12 Apr 2016, Jan Schermer wrote:
> I'd like to raise these points, then
>
> 1) some people (like me) will never ever use XFS if they have a choice
> given no choice, we will not use something that depends on XFS
Huh ?
> 3) doesn't majority of Ceph users only c
ey have a choice
>>> given no choice, we will not use something that depends on XFS
>>>
>>> 2) choice is always good
>>
>> Okay!
>>
>>> 3) doesn't majority of Ceph users only care about RBD?
>>
>> Probably that's true now. We
On 12/04/2016 22:33, Jan Schermer wrote:
> I don't think it's apples and oranges.
> If I export two files via losetup over iSCSI and make a raid1 swraid out of
> them in guest VM, I bet it will still be faster than ceph with bluestore.
> And yet it will provide the same guar
> Regards
> Dominik
IIRC there is a command like
ceph osd metadata
where you should be able to find information like this.
- Mehmet
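For example (field names from memory, they vary a bit between releases):

  ceph osd metadata 12 | grep -E '"devices"|rotational|osd_objectstore'

shows which physical devices back osd.12, whether they are rotational, and whether it runs filestore or bluestore.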
On 21 October 2018 19:39:58 CEST, Robert Stanford wrote:
> I did exactly this when creating my osds, and found that my total
>utilization is about the same as the sum
Isn't this a mgr variable ?
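To see what a running daemon actually picked up (daemon name is an example; check whichever daemon you suspect reads it, mon or mgr):

  ceph daemon mon.ceph-node1 config show | grep mon_max_pg_per_osd

On Mimic you can also set it centrally with 'ceph config set' instead of editing ceph.conf, which sidesteps the question of which section it has to live in.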
On 10/31/2018 02:49 PM, Steven Vacaroaia wrote:
> Hi,
>
> Any idea why different value for mon_max_pg_per_osd is not "recognized" ?
> I am using mimic 13.2.2
>
> Here is what I have in /etc/ceph/ceph.conf
>
>
Hi,
I have some wild freezes using cephfs with the kernel driver
For instance:
[Tue Dec 4 10:57:48 2018] libceph: mon1 10.5.0.88:6789 session lost,
hunting for new mon
[Tue Dec 4 10:57:48 2018] libceph: mon2 10.5.0.89:6789 session established
[Tue Dec 4 10:58:20 2018] ceph: mds0 caps stale
haw), but
> preceeded by an fstrim. With virtio-scsi, using fstrim propagates the
> discards from within the VM to Ceph RBD (if qemu is configured
> accordingly),
> and a lot of space is saved.
>
> We have yet to observe these hangs, we are running this with ~5 VMs with
> ~10 d
in Nautilus, we'll be doing it for Octopus.
>
> Are there major python-{rbd,cephfs,rgw,rados} users that are still Python
> 2 that we need to be worried about? (OpenStack?)
>
> sage
ct, I believe you have to implement that
on top of Ceph
For instance, let's say you simply create a pool, with an rbd volume in it.
You then create a filesystem on that, and map it on some server.
Finally, you can push your files onto that mountpoint, using the various
Linux users, ACLs or whatever: beyond
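As a concrete sketch of the above (pool, image and mountpoint names are made up, sizes arbitrary):

  ceph osd pool create filepool 128
  rbd create filepool/shared --size 500G
  rbd map filepool/shared          # gives e.g. /dev/rbd0
  mkfs.xfs /dev/rbd0
  mount /dev/rbd0 /srv/files

From there, per-user rights are ordinary Linux ownership and ACLs on the mountpoint; Ceph itself only sees a single block-device client.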
>on
>the rbd image it is using for the vm?
>
>I have already a vm running connected to the rbd pool via
>protocol='rbd', and rbd snap ls is showing for snapshots.
>
>
>
>
>