Hi,
I have recently started working with the Ceph Nautilus release and I have
realized that you have to work with LVM to create OSDs instead of
the "old fashioned" ceph-disk.
In terms of performance and best practices, since I must use LVM, can I
create volume groups that join or extend
t the behavior of the system when some pieces could
fail.
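For reference, ceph-volume can either take a whole raw device and build the
volume group / logical volume itself, or consume a logical volume you create
yourself; a minimal sketch, where the device and VG/LV names are just
placeholders:
ceph-volume lvm create --data /dev/sdb
# or, managing LVM yourself and pointing ceph-volume at an existing LV:
vgcreate ceph-block-0 /dev/sdb
lvcreate -n osd-block-0 -l 100%FREE ceph-block-0
ceph-volume lvm create --data ceph-block-0/osd-block-0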
Thanks a lot!
Óscar
On Tue, 16 Jul 2019 at 18:23, Janne Johansson wrote:
> On Tue, 16 Jul 2019 at 18:15, Oscar Segarra wrote:
>
>> Hi Paul,
>> That is the initial question: is it possible to recover my Ceph cluster
a key store
2.- There is an electrical blackout and all nodes of my cluster go down and
all data in my etcd is lost (but my OSD disks still have useful data)
Thanks a lot
Óscar
On Tue, 16 Jul 2019 at 17:58, Oscar Segarra wrote:
> Thanks a lot Janne,
>
> Well, maybe I'm misunderstand
--privileged=true \
--pid=host \
-v /dev/:/dev/ \
-e OSD_DEVICE=/dev/vdd \
-e KV_TYPE=etcd \
-e KV_IP=192.168.0.20 \
ceph/daemon osd
Thanks a lot for your help,
Óscar
On Tue, 16 Jul 2019 at 17:34, Janne Johansson wrote:
> On Mon, 15 Jul 2019 at 23:05, Oscar Segarra wrote:
admin keyring (eg. a mon node) against a running cluster with mons in
> quorum.
>
> Best regards,
>
> =
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
>
> From: ceph-users on behalf of Oscar
>
Hi,
I'm planning to deploy a Ceph cluster using etcd as the KV store.
I'm planning to deploy a stateless etcd Docker container to store the data.
I'd like to know if the Ceph cluster will be able to boot when the etcd
container restarts (and loses all data written to it).
If the etcd container restarts when the ceph
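One way to avoid that key/value data being lost on restart would be to give
the etcd container a persistent host volume; a rough sketch, where the image
tag, host path and ports are assumptions on my part:
docker run -d \
  -v /var/lib/etcd:/etcd-data \
  -p 2379:2379 -p 2380:2380 \
  quay.io/coreos/etcd:v3.3.13 \
  etcd --data-dir /etcd-data \
  --listen-client-urls http://0.0.0.0:2379 \
  --advertise-client-urls http://192.168.0.20:2379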
I have executed:
yum upgrade -y ceph
On each node and everything has worked fine...
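For what it's worth, a rough sketch of the restart order usually recommended
once the packages are upgraded (monitors first, then OSDs, node by node),
assuming systemd-managed daemons:
ceph osd set noout                   # avoid rebalancing while daemons restart
systemctl restart ceph-mon.target    # on each monitor node, one at a time
systemctl restart ceph-osd.target    # then on each OSD node, one at a time
ceph osd unset noout
ceph versions                        # confirm all daemons report the new version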
2017-12-05 16:19 GMT+01:00 Florent B :
> Upgrade procedure is OSD or MON first ?
>
> There was a change on Luminous upgrade about it.
>
>
> On 01/12/2017 18:34, Abhishek Lekshmanan wrote:
> > We're glad to annou
lustered file system (or
> similar) on top of the block device. For the vast majority of cases, you
> shouldn't enable this in libvirt.
>
> [1] https://libvirt.org/formatdomain.html#elementsDisks
>
> On Tue, Nov 14, 2017 at 10:49 AM, Oscar Segarra
> wrote:
>
>> Hi J
an see in the KVM. I'd like to know the suggested
configuration for rbd images and live migration
[image: Inline image 1]
Thanks a lot.
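For reference, a quick way to check which features a given image actually has
before testing live migration ("vmpool/win10-clone" is just a placeholder
name):
rbd info vmpool/win10-clone
# the "features:" line should list exclusive-lock (and optionally
# object-map, fast-diff); QEMU/librbd hands the exclusive lock over to the
# destination host during live migration, so the libvirt disk does not need
# to be marked shareable for this to work.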
2017-11-14 16:36 GMT+01:00 Jason Dillaman :
> On Tue, Nov 14, 2017 at 10:25 AM, Oscar Segarra
> wrote:
> > In my environment, I
ration. I
> have
> > a small virtualization cluster backed by ceph/rbd and I can migrate all
> the
> > VMs whose RBD images have exclusive-lock enabled without any issue.
> >
> >
> >
> > On 11/14/2017 9:47 AM, Oscar Segarra wrote:
> >
Hi,
I am including Jason Dillaman, the author of this tracker issue
http://tracker.ceph.com/issues/15000, in this thread.
Thanks a lot
2017-11-14 12:47 GMT+01:00 Oscar Segarra :
> Hi Konstantin,
>
> Thanks a lot for your advice...
>
> I'm especially interested in the feature "Exclusiv
he same rbd image at the same time
It looks like enabling exclusive locking lets you enable some other
interesting features like "Object map" and/or "Fast diff" for backups.
Thanks a lot!
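For an existing image those features can be switched on afterwards; a small
sketch with a placeholder pool/image name:
rbd feature enable vmpool/win10-clone exclusive-lock
rbd feature enable vmpool/win10-clone object-map fast-diff
rbd object-map rebuild vmpool/win10-clone   # populate the object map for an image that already has data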
2017-11-14 12:26 GMT+01:00 Konstantin Shalygin :
> On 11/14/2017 06:19 PM, Oscar Segarra
>
> On 11/14/2017 05:39 PM, Oscar Segarra wrote:
>
>> At the moment I'm deploying, and therefore I can upgrade every
>> component... I have recently executed "yum upgrade -y" in order to update
>> all operating system components.
>>
>> And pleas
possible to upgrade librbd on host (or any other component), the features
> can be customized for the present librbd.
>
> On 11/14/2017 04:41 PM, Oscar Segarra wrote:
>
> Yes, but it looks like lots of features such as snapshot and fast-diff require some
> other features... If I enable exclusive-
Hi Anthony,
o I think you might have some misunderstandings about how Ceph works. Ceph
is best deployed as a single cluster spanning multiple servers, generally
at least 3. Is that your plan?
I want to deploy servers for 100 Windows 10 VDIs each (at least 3 servers). I
plan to sell servers depend
Hi,
Yes, but it looks like lots of features such as snapshot and fast-diff require some other
features... If I enable exclusive-lock or journaling, will live migration
still be possible?
Is it recommended to set the KVM disk to "shareable" depending on the activated
features?
Thanks a lot!
2017-11-14 4:52 GMT+01:
2017 18:40, "Brady Deetz" wrote:
On Nov 13, 2017 11:17 AM, "Oscar Segarra" wrote:
Hi Brady,
Thanks a lot again for your comments and experience.
This is a departure from what I've seen people do here. I agree that 100
VMs on 24 cores would be potentially over con
RAID5 + 1 Ceph daemon as 8 CEPH
daemons.
I really appreciate your comments!
Oscar Segarra
2017-11-13 15:37 GMT+01:00 Marc Roos :
>
> Also keep in mind whether you want failover in the future. We were
> running a 2nd server and were replicating the RAID arrays via DRBD.
> Ex
> tempted to do that probably.
>
> But for some workloads, like RBD, ceph doesn't balance out the workload
> very evenly for a specific client, only many clients at once... raid might
> help solve that, but I don't see it as worth it.
>
> I would just software RAID1 the OS and m
Hi,
I'm designing my infrastructure. I want to provide 8TB (8 disks x 1TB
each) of data per host just for Microsoft Windows 10 VDI. In each host I
will have storage (ceph osd) and compute (on kvm).
I'd like to hear your opinion about these two configurations:
1.- RAID5 with 8 disks (I will hav
Hi,
Does anybody have experience with the live migration feature?
Thanks a lot in advance.
Óscar Segarra
On 7 Nov 2017 at 14:02, "Oscar Segarra" wrote:
> Hi,
>
> In my environment I'm working with a 3 node ceph cluster based on Centos 7
> and KVM. My VM is a clone of
Hi,
In my environment I'm working with a 3-node Ceph cluster based on CentOS 7
and KVM. My VM is a clone of a protected snapshot, as suggested in the
following document:
http://docs.ceph.com/docs/luminous/rbd/rbd-snapshot/#getting-started-with-layering
I'd like to use the live migration featur
:00 Richard Hesketh :
> On 16/10/17 03:40, Alex Gorbachev wrote:
> > On Sat, Oct 14, 2017 at 12:25 PM, Oscar Segarra
> wrote:
> >> Hi,
> >>
> >> In my VDI environment I have configured the suggested ceph
>> design/architecture:
at you
> might be using the documentation for an older version of Ceph:
>
> On 10/14/2017 12:25 PM, Oscar Segarra wrote:
> >
> > http://docs.ceph.com/docs/giant/rbd/rbd-snapshot/
> >
>
> If you're not using the 'giant' version of Ceph (which has rea
Hi,
In my VDI environment I have configured the suggested ceph
design/architecture:
http://docs.ceph.com/docs/giant/rbd/rbd-snapshot/
Where I have a Base Image + Protected Snapshot + 100 clones (one for each
persistent VDI).
Now, I'd like to configure a backup script/mechanism to perform backup
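One possible approach for this layout is to snapshot each clone periodically
and ship only the deltas with export-diff; a rough sketch with placeholder
pool, image and snapshot names:
# base image + protected snapshot + clone (as in the layering doc)
rbd snap create vmpool/win10-base@gold
rbd snap protect vmpool/win10-base@gold
rbd clone vmpool/win10-base@gold vmpool/vdi-user001
# incremental backup of one clone: snapshot it and export the diff
rbd snap create vmpool/vdi-user001@backup-$(date +%F)
rbd export-diff vmpool/vdi-user001@backup-$(date +%F) /backup/vdi-user001-$(date +%F).diff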
Hi,
For the VDI (Windows 10) use case... is there any document about the
recommended configuration with RBD?
Thanks a lot!
2017-08-18 15:40 GMT+02:00 Oscar Segarra :
> Hi,
>
> Yes, you are right, the idea is cloning a snapshot taken from the base
> image...
>
> And yes, I&
.11,10.1.40.12,10.1.40.13:/cephfs1
>
> *From: *ceph-users on behalf of LOPEZ
> Jean-Charles
> *Date: *Monday, August 28, 2017 at 3:40 PM
> *To: *Oscar Segarra
> *Cc: *"ceph-users@lists.ceph.com"
> *Subject: *Re: [ceph-users] CephFS
Hi,
In Ceph, by design there is no single point of failure in terms of server
roles; nevertheless, from the client's point of view, one might exist.
In my environment:
Mon1: 192.168.100.101:6789
Mon2: 192.168.100.102:6789
Mon3: 192.168.100.103:6789
Client: 192.168.100.104
I have created a line in
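Presumably the line in question is the client-side monitor list; listing all
three monitors there means no single mon is a point of failure for the client.
A minimal ceph.conf sketch using the addresses above:
[global]
mon host = 192.168.100.101:6789,192.168.100.102:6789,192.168.100.103:6789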
that you clone each time you need to let someone log in.
Is that what you're planning?
On Thu, Aug 17, 2017, 9:51 PM Christian Balzer wrote:
>
> Hello,
>
> On Fri, 18 Aug 2017 03:31:56 +0200 Oscar Segarra wrote:
>
> > Hi Christian,
> >
> > Thanks a lot for
> On Thu, Aug 17, 2017, 7:41 PM Oscar Segarra
> wrote:
>
>> Thanks a lot David!!!
>>
>> Let's wait the opinion of Christian about the suggested configuration for
>> VDI...
>>
>> Óscar Segarra
>>
>> 2017-08-18 1:03 GMT+02:00 David Turner
Hi,
Sorry guys, these days I'm asking a lot about how to distribute my
data.
I have two kinds of VM:
1.- Management VMs (linux) --> Full SSD dedicated disks
2.- Windows VMs --> SSD + HDD (with tiering).
I'm working on installing two clusters on the same host but I'm
encountering lots of
Hi,
As the ceph-deploy utility does not work properly with named clusters (other
than the default "ceph"), I have created the monitor for a named cluster
using the manual procedure:
http://docs.ceph.com/docs/master/install/manual-deployment/#monitor-bootstrapping
In the end, it starts up p
Hi,
After adding a new monitor to the cluster I'm getting a strange error:
vdicnode02/store.db/MANIFEST-86 succeeded,manifest_file_number is 86,
next_file_number is 88, last_sequence is 8, log_number is 0,prev_log_number
is 0,max_column_family is 0
2017-08-15 22:00:58.832599 7f6791187e40 4 rocks
> third and nothing funky happened.
>>
>> Most ways to deploy a cluster allow you to create the cluster with 3+
>> mons at the same time (initial_mons). What are you doing that only allows
>> you to add one at a time?
>>
>> On Tue, Aug 15, 2017 at 12:22 PM Osc
hat only allows you to
> add one at a time?
>
> On Tue, Aug 15, 2017 at 12:22 PM Oscar Segarra
> wrote:
>
>> Hi,
>>
>> I'd like to test and script the adding monitors process adding one by one
>> monitors to the ceph infrastructure.
>>
>>
Hi,
I'd like to test and script the process of adding monitors to the Ceph
infrastructure one by one.
Is it possible to have two mons running on two servers (one mon each)?
I assume that mon quorum won't be reached until both servers are up.
Is this right?
I have not been a
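That assumption sounds right: with two monitors, quorum needs both of them up
(the majority of 2 is 2), which is why 3 is the usual minimum. A rough sketch
of adding them one at a time with ceph-deploy and checking quorum after each
step (hostnames taken from this thread):
ceph-deploy mon add vdicnode02            # extend the existing cluster by one monitor
ceph quorum_status --format json-pretty   # verify the new mon joined the quorum
ceph-deploy mon add vdicnode03
ceph quorum_status --format json-pretty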
n Dillaman :
> Personally, I didn't quite understand your use-case. You only have a
> single host and two drives (one for live data and the other for DR)?
>
> On Mon, Aug 14, 2017 at 4:09 PM, Oscar Segarra
> wrote:
> > Hi,
> >
> > Anybody has been able to wor
Hi,
Has anybody been able to work with mirroring?
Does the scenario I'm proposing make any sense?
Thanks a lot.
2017-08-08 20:05 GMT+02:00 Oscar Segarra :
> Hi,
>
> I'd like to use the mirroring feature
>
> http://docs.ceph.com/docs/master/rbd/rbd-mirroring/
>
>
Hi,
I'd like to use the mirroring feature
http://docs.ceph.com/docs/master/rbd/rbd-mirroring/
In my environment I have just one host (at the moment for testing purposes
before production deployment).
I want to use:
/dev/sdb for standard operation
/dev/sdc for the mirror
Of course, I'd like to
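Note that rbd-mirror replicates between two clusters/pools rather than between
two raw devices, so /dev/sdc would have to back a pool in a second (possibly
co-located) cluster. A rough sketch of the per-image commands, with
placeholder pool/image names:
rbd mirror pool enable rbd image           # per-image mirroring mode on the pool
rbd feature enable rbd/myimage journaling  # journaling is required for mirroring
rbd mirror image enable rbd/myimage
# an rbd-mirror daemon must run against the backup cluster to replay the journal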
r /etc/hosts in this state. I don't know if it
would work though. It's really not intended for any communication to happen
on this subnet other than inter-OSD traffic.
On Thu, Jul 27, 2017 at 6:31 PM Oscar Segarra
wrote:
> Sorry! I'd like to add that I want to use the cluster netw
Sorry! I'd like to add that I want to use the cluster network for both
purposes:
ceph-deploy --username vdicceph new vdicnode01 --cluster-network
192.168.100.0/24 --public-network 192.168.100.0/24
Thanks a lot
2017-07-28 0:29 GMT+02:00 Oscar Segarra :
> Hi,
>
> Do you mean tha
et for added
> security. Therefore it will not work with ceph-deploy actions.
> Source: http://docs.ceph.com/docs/master/rados/configuration/network-config-ref/
>
>
> On Thu, Jul 27, 2017 at 3:53 PM Oscar Segarra
> wrote:
>
>> Hi,
>>
>> In my environme
isk.
> The osds can start without it.
>
> On Thu, Jul 27, 2017 at 3:36 PM Oscar Segarra
> wrote:
>
>> Hi,
>>
>> First of all, my version:
>>
>> [root@vdicnode01 ~]# ceph -v
>> ceph version 12.1.1 (f3e663a190bf2ed12c7e3cda288b9a159572c800) luminous
Hi,
In my environment I have 3 hosts; each host has 2 network interfaces:
public: 192.168.2.0/24
cluster: 192.168.100.0/24
The hostname "vdicnode01", "vdicnode02" and "vdicnode03" are resolved by
public DNS through the public interface, that means the "ping vdicnode01"
will resolve 192.168.2.1.
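For what it's worth, that split is normally expressed with the public/cluster
network settings in ceph.conf (the --public-network/--cluster-network flags of
ceph-deploy should end up writing the same thing); a minimal sketch using the
subnets above:
[global]
public network = 192.168.2.0/24
cluster network = 192.168.100.0/24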
Hi,
First of all, my version:
[root@vdicnode01 ~]# ceph -v
ceph version 12.1.1 (f3e663a190bf2ed12c7e3cda288b9a159572c800) luminous (rc)
When I boot my Ceph node (I have an all-in-one) I get the following message
in boot.log:
*[FAILED] Failed to start Ceph disk activation: /dev/sdb2.*
*See 'syst
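A couple of commands that might help narrow that failure down, assuming the
ceph-disk systemd units shipped with this release (the device name is taken
from the message above):
systemctl status ceph-disk@dev-sdb2.service   # see why the activation unit failed
journalctl -u ceph-disk@dev-sdb2.service      # full log of the failed activation
ceph-disk activate /dev/sdb2                  # try activating the partition by hand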
ootstrap-mgr/ceph.keyring
> auth get-or-create mgr.nuc1 mon allow profile mgr osd allow * mds allow *
> -o /var/lib/ceph/mgr/ceph-nuc1/keyring
> [nuc1][ERROR ] 2017-07-23 14:51:13.413218 7f62943cc700 0 librados:
> client.bootstrap-mgr authentication error (22) Invalid argument
> [nuc1]
Hi,
I have upgraded from the Kraken version with a simple "yum upgrade" command.
After the upgrade, I'd like to deploy the mgr daemon on one node of my Ceph
infrastructure.
But, for some reason, it gets stuck!
Let's see the complete set of commands:
[root@vdicnode01 ~]# ceph -s
cluster:
id:
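The "client.bootstrap-mgr authentication error (22)" seen elsewhere in this
thread is typical of clusters upgraded from pre-Luminous releases, where the
bootstrap-mgr key does not exist yet; one possible fix, run from a node
holding the admin keyring, before retrying the mgr deployment:
ceph auth get-or-create client.bootstrap-mgr mon 'allow profile bootstrap-mgr' \
    -o /var/lib/ceph/bootstrap-mgr/ceph.keyring
ceph-deploy gatherkeys vdicnode01   # refresh the bootstrap keys ceph-deploy caches
ceph-deploy mgr create vdicnode01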
at the
> docs, make sure you are looking at the proper version of the docs for your
> version. Replace master, jewel, luminous, etc with kraken in the URL.
>
> On Thu, Jul 20, 2017, 5:33 AM Oscar Segarra
> wrote:
>
>> Hi,
>>
>> Thanks a lot for your answers.
that pretty much Calamari is dead.
>>
>> On Thu, Jul 20, 2017 at 4:28 AM, Oscar Segarra
>> wrote:
>>
>>> Hi,
>>>
>>> Anybody has been able to setup Calamari on Centos7??
>>>
>>> I've done a lot of Google
Hi,
Has anybody been able to set up Calamari on CentOS 7?
I've done a lot of Googling but I haven't found any good documentation...
The command "ceph-deploy calamari connect" does not work!
Thanks a lot for your help!
Hi,
I have created a VM called vdiccalamari where I'm trying to install the
Calamari server in order to view the Ceph status from a GUI:
[vdicceph@vdicnode01 ceph]$ sudo ceph status
cluster 656e84b2-9192-40fe-9b81-39bd0c7a3196
health HEALTH_OK
monmap e2: 1 mons at {vdicnode01=192.168.10
ow profile mgr osd allow * mds allow *
>> -o /var/lib/ceph/mgr/ceph-nuc2/keyring
>> [nuc2][ERROR ] 2017-07-14 17:17:21.800166 7fe344f32700 0 librados:
>> client.bootstrap-mgr authentication error (22) Invalid argument
>> [nuc2][ERROR ] (22, 'error connecting to the c
Hi,
I'm following the instructions on the web
(http://docs.ceph.com/docs/master/start/quick-ceph-deploy/) and I'm trying
to create a manager on my first node.
In my environment I have 2 nodes:
- vdicnode01 (mon, mgr and osd)
- vdicnode02 (osd)
Each server has two NICs, the public and the private
Hi,
My lab environment has just one node for testing purposes.
As user ceph (with sudo privileges granted) I have executed the following
commands in my environment:
ceph-deploy install vdicnode01
ceph-deploy --cluster vdiccephmgmtcluster new vdicnode01 --cluster-network
192.168.100.0/24 --public