What should be the filesystem for Ceph?
3. Basically, this is my first time, so I want detailed
information/guidance to build confidence.
Regards
Gaurav Goyal
+1647-685-3000
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Dear All,
I need your kind help, please. I am new to Ceph and want to understand the
installation concepts for my lab setup.
Regards
Gaurav Goyal
On 02-Jul-2016 7:27 pm, "Gaurav Goyal" wrote:
> Dear Ceph Users,
>
> I am very new to the Ceph product and want to gain some knowledge
Hi,
I am installing Ceph Hammer and integrating it with OpenStack Liberty for
the first time.
My local disk has only 500 GB but I need to create a 600 GB VM, so I have
created a soft link to the Ceph filesystem:
lrwxrwxrwx 1 root root 34 Jul 6 13:02 instances ->
/var/lib/ceph/osd/ceph-0/instances
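For reference, rather than symlinking Nova's instances directory into an OSD's data directory, Nova can store ephemeral disks directly in RBD. A minimal nova.conf sketch, assuming a pool named vms and a client.cinder user (both names are illustrative):

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <uuid of the libvirt secret>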
pool=backups
client.glance
        key: AQCVAHxXupPdLBAA7hh1TJZnvSmFSDWbQiaiEQ==
        caps: [mon] allow r
        caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=images
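Caps like these are typically created with a command along the following lines (a sketch based on the standard Ceph/OpenStack integration guide; the pool name images is assumed):

ceph auth get-or-create client.glance mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'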
Regards
Gaurav Goyal
On Thu, Jul 7, 2016 at 2:54 AM, Kees Meijs wrote:
> Hi Gaurav,
>
> Unfortunately I'm not completely sure abo
Hello,
>
> Have you configured these two parameters in cinder.conf?
>
> rbd_user
> rbd_secret_uuid
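> For example, in the RBD backend section of cinder.conf (a minimal sketch;
> the user name and UUID are placeholders):
>
> rbd_user = cinder
> rbd_secret_uuid = <uuid reported by virsh secret-define>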
>
> Regards.
>
> 2016-07-07 15:39 GMT+02:00 Gaurav Goyal :
>
>> Hello Mr. Kees,
>>
>> Thanks for your response!
>>
>> My setup is
>>
[req-05a653d9-d629-497c-a4cd-d240c3e6c225 - - - - -] Periodic task is
updating the host stat, it is trying to get disk instance-0006, but
disk file was removed by concurrent operations such as resize.
2016-07-07 16:22:54.236 31909 INFO nova.compute.resource_tracker
[req-05a653d9-d629-497c-a4cd-d240
Should I execute the following commands?
"pvcreate /dev/rbd1" and
"vgcreate cinder-volumes /dev/rbd1"
Regards
Gaurav Goyal
On Thu, Jul 7, 2016 at 10:02 PM, Jason Dillaman wrote:
> These lines from your log output indicate that you are configured to use LVM
> as a cinder backend.
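> To use Ceph instead, the usual approach is to enable an RBD backend in
> cinder.conf rather than running pvcreate/vgcreate on top of an RBD device.
> A rough sketch (backend and pool names are placeholders; the rbd_user and
> rbd_secret_uuid settings mentioned earlier go in the same section):
>
> [DEFAULT]
> enabled_backends = ceph
>
> [ceph]
> volume_driver = cinder.volume.drivers.rbd.RBDDriver
> rbd_pool = volumes
> rbd_ceph_conf = /etc/ceph/ceph.conf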
| protected  | False                |
| size       | 7181697024           |
| status     | active               |
| tags       | []                   |
| updated_at | 2016-07-06T17:44:13Z |
c-9a8c-9293d545c337 --base64 $(cat client.cinder.key)
&& rm client.cinder.key secret.xml
Moreover, I do not find the libvirtd group.
[root@OSKVM1 ceph]# chown qemu:libvirtd /var/run/ceph/guests/
chown: invalid group: ‘qemu:libvirtd’
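For what it's worth, the libvirt secret is usually created along these lines (a sketch; file names are placeholders, and virsh secret-define prints the UUID to use):

cat > secret.xml <<'EOF'
<secret ephemeral='no' private='no'>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
virsh secret-set-value --secret <uuid-from-define> --base64 $(cat client.cinder.key)

As for the group: on CentOS 7 there is typically no libvirtd group; the group is usually qemu (or libvirt), so something like chown qemu:qemu /var/run/ceph/guests/ may be what is needed here. Check with getent group.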
Regards
Gaurav Goyal
On Fri, Jul 8, 2016 at 9:40 AM, Kees Meijs wrote:
usable vcpus:
40, total allocated vcpus: 0
2016-07-08 11:25:26.478 86007 INFO nova.compute.resource_tracker
[req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Final resource view:
name=controller phys_ram=193168MB used_ram=1024MB phys_disk=8168GB
used_disk=1GB total_vcpus=40 used_vcpus=0 pci_stat
(truncated cinder volume listing: volume cinder-ceph-vol1, size 10, status available)
On Fri, Jul 8, 2016 at 11:33 AM, Gaurav Goyal
wrote:
> Hi Kees,
>
>
|            | []                   |
| updated_at | 2016-07-08T20:27:10Z |
On Fri, Jul 8, 2016 at 12:25 PM, Gaurav Goyal
wrote:
> [root@OSKVM1 ~]# grep -v "^#" /etc/nova/nova.conf|grep -v ^$
>
> [DEFAULT]
>
> instance_usage_audit = True
>
> insta
in active-active mode. I selected the first 2 LUNs as OSDs on node 1 and the
last 2 as OSDs on node 2.
Is it OK to have this configuration, especially when a node goes down, or
considering live migration?
Regards
Gaurav Goyal
On 10-Jul-2016 9:02 pm, "Christian Balzer" wrote:
>
> Hello,
>
1.0 1.0
-3 3.97998     host host2
 2 1.98999         osd.2    up  1.0  1.0
 3 1.98999         osd.3    up  1.0  1.0
Is this OK, or must I change my Ceph design?
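A quick way to check the layout against the pool's replication settings (a sketch; rbd is just an example pool name):

ceph osd tree
ceph osd pool get rbd size
ceph osd pool get rbd min_size

With only two hosts and the default host-level failure domain, a replicated pool with size 2 is the practical minimum; losing one host leaves a single copy until it recovers.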
Regards
Gaurav Goyal
if this parameter has been
removed in Liberty. If that is the case, please update the documentation.
KILO
Enable discard support for virtual machine ephemeral root disk:
[libvirt]
...
hw_disk_discard = unmap # enable discard support (be careful of performance)
Regards
Gaurav Goyal
On Mon, Jul 11
Thanks!
I need to create a VM whose qcow2 image file is 6.7 GB but whose raw image is
600 GB, which is too big.
Is there a way to avoid converting the qcow2 file to raw while still working
well with RBD?
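If the conversion does have to happen, it is typically done along these lines (file and image names are placeholders); the resulting raw file is sparse, so it does not consume the full 600 GB on the local disk:

qemu-img convert -f qcow2 -O raw image.qcow2 image.raw
qemu-img info image.raw    # shows virtual size vs. actual disk usage
glance image-create --name myimage --disk-format raw --container-format bare --file image.raw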
Regards
Gaurav Goyal
On Mon, Jul 11, 2016 at 11:46 AM, Kees Meijs wrote:
> Glad to hear it wo
Especially when organizations know Ceph functionality, why don't they provide
raw images along with qcow2?
Regards
Gaurav Goyal
It will be a smooth installation. I have recently installed Hammer on CentOS
7.
Regards
Gaurav Goyal
On Fri, Jul 22, 2016 at 7:22 AM, Ruben Kerkhof
wrote:
> On Thu, Jul 21, 2016 at 7:26 PM, Manuel Lausch
> wrote:
> > Hi,
>
> Hi,
> >
> > I try to install ceph h
Is something wrong with my Ceph configuration?
Can I take snapshots with Ceph storage?
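For context, snapshots of RBD-backed instances and volumes are usually taken with commands along these lines (a sketch; names and IDs are placeholders):

nova image-create <instance> <snapshot-name>
cinder snapshot-create --name <snap-name> <volume-id>
rbd snap create volumes/<volume-image>@<snap-name>    # directly at the Ceph level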
Regards
Gaurav Goyal
Is something wrong with my Ceph configuration?
Can I take snapshots with Ceph storage? How?
Regards
Gaurav Goyal
On Wed, Jul 13, 2016 at 9:44 AM, Jason Dillaman wrote:
> The RAW file will appear to be the exact image size, but the filesystem
> will know about the holes in the image and it will be sparse.
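> A quick way to see this (the path is a placeholder):
>
> ls -lh /path/to/image.raw    # apparent size, e.g. 600G
> du -h /path/to/image.raw     # blocks actually allocated on disk
> qemu-img info /path/to/image.raw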
Dear Ceph Team,
I need your guidance on this.
Regards
Gaurav Goyal
On Wed, Jul 27, 2016 at 4:03 PM, Gaurav Goyal
wrote:
> Dear Team,
>
> I have ceph storage installed on SAN storage which is connected to
> Openstack Hosts via iSCSI LUNs.
> Now we want to get rid of the SAN storage.
Hi David,
Thanks for your comments!
Can you please share the procedure/document, if available?
Regards
Gaurav Goyal
On Tue, Aug 2, 2016 at 11:24 AM, David Turner wrote:
> Just add the new storage and weight the old storage to 0.0 so all data
> will move off of the old storage
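> A rough sketch of that, per SAN-backed OSD (IDs are placeholders); let the
> cluster finish rebalancing before touching the next one:
>
> ceph osd crush reweight osd.<id> 0.0
> ceph -w    # watch until the data has migrated off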
Hello David,
Thanks a lot for detailed information!
This is going to help me.
Regards
Gaurav Goyal
On Tue, Aug 2, 2016 at 11:46 AM, David Turner wrote:
> I'm going to assume you know how to add and remove storage
> http://docs.ceph.com/docs/hammer/rados/operations/add-or-rm-
Hello David,
Can you help me with the steps/procedure to uninstall Ceph storage from the
OpenStack environment?
Regards
Gaurav Goyal
On Tue, Aug 2, 2016 at 11:57 AM, Gaurav Goyal
wrote:
> Hello David,
>
> Thanks a lot for detailed information!
>
> This is going to help me.
>
openstack.
Regards
Gaurav Goyal
On 03-Aug-2016 4:59 pm, "David Turner"
wrote:
> If I'm understanding your question correctly that you're asking how to
> actually remove the SAN osds from ceph, then it doesn't matter what is
> using the storage (i.e. OpenStack, CephFS, ...).
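> Once the reweighted OSDs hold no more data, a rough removal sketch for each
> one (IDs are placeholders; the service command depends on the init system):
>
> ceph osd out <id>
> service ceph stop osd.<id>    # on the OSD's host
> ceph osd crush remove osd.<id>
> ceph auth del osd.<id>
> ceph osd rm <id>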
Please suggest a procedure for this uninstallation process?
Regards
Gaurav Goyal
On Wed, Aug 3, 2016 at 5:58 PM, Gaurav Goyal
wrote:
> Thanks for your prompt
> response!
>
> The situation is a bit different now. The customer wants us to remove the
> Ceph storage configuration from
2T X 2 on Host 2, and 2T X 2 on Host 3:
12 TB in total. Replication factor 2 should make it 6 TB?
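As a rough check, assuming 2 TB x 2 drives on each of the three hosts: raw capacity = 2 TB x 2 x 3 = 12 TB; with a replicated pool of size 2 the usable capacity is about 12 / 2 = 6 TB, and in practice somewhat less once the default near-full/full ratios (85%/95%) and filesystem overhead are taken into account.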
Regards
Gaurav Goyal
On Thu, Aug 4, 2016 at 1:52 AM, Bharath Krishna
wrote:
> Hi Gaurav,
>
> There are several ways to do it depending on how you deployed your ceph
> cluster. Easiest way to
Dear Ceph Users,
Can you please address my scenario and suggest a solution?
Regards
Gaurav Goyal
On Tue, Aug 16, 2016 at 11:13 AM, Gaurav Goyal
wrote:
> Hello
>
>
> I need your help to redesign my ceph storage network.
>
> As suggested in earlier discussions, I must not use SAN storage.
Host 3:
12 TB in total. Replication factor 2 should make it 6 TB?
Regards
On Tue, Aug 2, 2016 at 11:16 AM, Gaurav Goyal
wrote:
>
> Hello Jason/Kees,
>
> I am trying to take a snapshot of my instance.
>
> The image was stuck in the Queued state and the instance is stuck in the
> Image Pending state.
Dear Ceph Users,
I need your help to redesign my ceph storage network.
As suggested in earlier discussions, I must not use SAN storage, so we have
decided to remove it.
Now we are ordering local HDDs.
My network would be:
Host 1 --> Controller + Compute 1, Host 2 --> Compute 2, Host 3 --> Compute 3
> either use all-SSD OSDs or consider
> having more spinning OSDs per node backed by NVMe or SSD journals.
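> For example, with ceph-deploy a journal can be put on a shared SSD/NVMe
> device at OSD creation time (hostnames and device names are placeholders):
>
> ceph-deploy osd create host1:sdb:/dev/nvme0n1
> ceph-deploy osd create host1:sdc:/dev/nvme0n1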
>
>
>
> On Wed, Aug 17, 2016 at 1:14 PM, Gaurav Goyal
> wrote:
> > Dear Ceph Users,
> >
> > Can you please address my scenario and suggest me a solution.
> >
Can we split the rbd kernel
driver and the OSD processes? Should it be like the rbd kernel driver on the
controller and OSD processes on the compute hosts?
Since my Host 1 is Controller + Compute 1, can you please share the steps to
split it up using VMs, as suggested by you?
Regards
Gaurav Goyal
On Wed, Aug 17, 201
Dear Ceph Users,
Awaiting your suggestions, please!
On Wed, Aug 17, 2016 at 11:15 AM, Gaurav Goyal
wrote:
> Hello Mart,
>
> Thanks a lot for the detailed information!
> Please find my responses inline, and help me to learn more about this.
>
>
> Ceph works best with mor
Hello,
Awaiting any suggestions, please!
Regards
On Wed, Aug 17, 2016 at 9:59 AM, Gaurav Goyal
wrote:
> Hello Brian,
>
> Thanks for your response!
>
> Can you please elaborate on this.
>
> Do you mean I must use
>
> 4 x 1TB HDDs on each node rather than 2 x 2TB?
As it is a lab environment, can I install the setup in a way that achieves
less redundancy (a lower replication factor) and more capacity?
How can I achieve that?
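For a lab, the replication level is set per pool, for example (the pool name is a placeholder; size 2 with min_size 1 trades safety for capacity and is not recommended for anything important):

ceph osd pool set volumes size 2
ceph osd pool set volumes min_size 1
ceph osd pool get volumes size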
On Wed, Aug 17, 2016 at 7:47 PM, Gaurav Goyal
wrote:
> Hello,
>
> Awaiting any suggestion please!
>
>
>
>
> Re
> the differences. Flooding the
> mailing list won't help.
>
> see below,
>
>
>
> On 08/18/2016 01:39 AM, Gaurav Goyal wrote:
>
> Dear Ceph Users,
>
> Awaiting some suggestion please!
>
>
>
> On Wed, Aug 17, 2016 at 11:15 AM, Gaurav Goyal
> wrote:
>
>
10.org.openstack:volume-e13d0ffc-3ed4-4a22-b270-987e81b1ca8f,t,0x1
/dev/sde
On Thu, Aug 18, 2016 at 12:39 PM, Vasu Kulkarni wrote:
> Also, most of the terminology looks like it comes from OpenStack and SAN. Here
> is the correct terminology that should be used for Ceph:
> http://docs.ceph.com/docs
Dear Ceph users,
Any suggestions on this, please.
Regards
Gaurav Goyal
On Wed, Sep 14, 2016 at 2:50 PM, Gaurav Goyal
wrote:
> Dear Ceph Users,
>
> I need your help to sort out the following issue with my cinder volume.
>
> I have created Ceph as the backend for Cinder. Since I was us