Hi,
I'm building a storage structure for an OpenStack cloud system. Input:
- 700 VMs
- 150 IOPS per VM
- 20 GB storage per VM (boot volume)
- Some VMs run databases (SQL or MySQL)
I'd like to ask for a Ceph sizing plan that satisfies the IOPS requirement. I
list some factors to be considered:
- Number of OSDs (SAS disk
> ...get the required IOPS; based on this we can calculate the bandwidth and design the
> solution.
>
> Thanks
> Srinivas
>
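For reference, a rough back-of-the-envelope version of that calculation in Python. The read/write mix, per-disk IOPS and journal penalty below are assumptions for illustration only, not measured values:

# Rough Ceph OSD-count estimate for an IOPS target.
# All inputs are illustrative assumptions; adjust to your hardware and workload.
def required_osds(vm_count, iops_per_vm, read_ratio,
                  replica_count, osd_iops, journal_penalty=2.0):
    """Estimate how many OSDs are needed for the client IOPS target.

    Reads are served by one OSD; each client write becomes
    replica_count backend writes, each roughly doubled by the
    FileStore journal (journal_penalty).
    """
    client_iops = vm_count * iops_per_vm
    read_iops = client_iops * read_ratio
    write_iops = client_iops * (1 - read_ratio)
    backend_iops = read_iops + write_iops * replica_count * journal_penalty
    return backend_iops / osd_iops

# Example: 700 VMs x 150 IOPS, assumed 70/30 read/write mix,
# replica 3, ~150 IOPS per SAS OSD.
print(required_osds(700, 150, read_ratio=0.7, replica_count=3, osd_iops=150))

With those assumptions the result is on the order of 1,700 SAS OSDs before any headroom, which is usually the point where SSDs (at least for journals) enter the discussion.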
Hi,
My MON quorum includes 3 nodes. If 2 of them fail accidentally, how could I
recover the system from the 1 node left?
Thanks and regards.
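For what it's worth, a sketch of the monmap-surgery procedure documented for removing monitors from an unhealthy cluster, wrapped in Python only for readability. The mon IDs ("a" surviving, "b"/"c" dead) and the systemd unit names are placeholders, the service manager differs between releases, and you should back up the surviving mon's store before touching anything:

# Outline of restoring quorum when only one of three mons survives.
import subprocess

SURVIVOR = "a"          # placeholder ID of the surviving monitor
DEAD = ["b", "c"]       # placeholder IDs of the failed monitors
MONMAP = "/tmp/monmap"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Stop the surviving monitor so its store is quiescent.
run(["systemctl", "stop", "ceph-mon@" + SURVIVOR])
# 2. Extract the current monmap from its store.
run(["ceph-mon", "-i", SURVIVOR, "--extract-monmap", MONMAP])
# 3. Remove the dead monitors from the map.
for mon in DEAD:
    run(["monmaptool", MONMAP, "--rm", mon])
# 4. Inject the trimmed map and restart; the single mon can then form quorum alone.
run(["ceph-mon", "-i", SURVIVOR, "--inject-monmap", MONMAP])
run(["systemctl", "start", "ceph-mon@" + SURVIVOR])

Once the cluster is healthy again you would add two new monitors back to return to an odd-sized quorum.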
ed by 3? *
2015-12-03 7:09 GMT+07:00 Sam Huracan :
> IO size is 4 KB, and I need a minimum, cost-optimized sizing.
> I intend to use SuperMicro devices:
> http://www.supermicro.com/solutions/storage_Ceph.cfm
>
> What do you think?
>
> 2015-12-02 23:17 GMT+07:00 Srinivasul
luster :)
> Best of luck. I think you're still starting with better, and more info
> than some of us did years ago.
>
> Warren Wang
>
>
>
>
> From: Sam Huracan
> Date: Thursday, December 3, 2015 at 4:01 AM
> To: Srinivasula Maram
> Cc: Nick Fisk , "ceph
Hi,
I have a question about Ceph's performance.
I've built a Ceph cluster with 3 OSD hosts; each host's configuration:
- CPU: 1 x Intel Xeon E5-2620 v4 2.1GHz
- Memory: 2 x 16GB RDIMM
- Disk: 2 x 300GB 15K RPM SAS 12Gbps (RAID 1 for OS)
4 x 800GB Solid State Drive SATA (non-RAID for
Hi Cephers,
I've read about the new BlueStore and have 2 questions:
1. The purpose of BlueStore is to eliminate the drawbacks of POSIX when
using FileStore. These drawbacks also require a journal, resulting in a
double-write penalty. Could you explain in more detail how POSIX falls
short when used in FileStore?
So why doesn't the journal write only metadata?
As I've read, it is to ensure data consistency, but I don't know how that
works in detail. And why can BlueStore still ensure consistency without a
journal?
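Not an authoritative answer, just a toy model of the two write paths as I understand them, to show where the double write comes from. The sizes and the 4 KB metadata placeholder are made up; real behaviour (deferred small writes, compaction, etc.) is more subtle:

# Toy model: bytes written on one OSD for a single full-object client write.
CLIENT_WRITE = 4 * 1024 * 1024  # 4 MB client write

def filestore_bytes(data):
    # POSIX gives FileStore no atomic multi-step transactions, so it first
    # writes the *full data* plus metadata to its journal, then applies the
    # same data to the backing file system: every byte is written twice.
    journal = data
    backing_fs = data
    return journal + backing_fs

def bluestore_bytes(data):
    # BlueStore writes the data once to the raw block device and commits only
    # a small metadata record through the RocksDB WAL.
    block_device = data
    rocksdb_wal = 4096  # made-up size of the metadata record
    return block_device + rocksdb_wal

print("FileStore:", filestore_bytes(CLIENT_WRITE), "bytes (~2x)")
print("BlueStore:", bluestore_bytes(CLIENT_WRITE), "bytes (~1x)")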
2017-09-20 16:03 GMT+07:00 :
> On 20/09/2017 10:59, Sam Huracan wrote:
Hi,
I'm reading this document:
http://storageconference.us/2017/Presentations/CephObjectStore-slides.pdf
I have 3 questions:
1. Does BlueStore write data (to the raw block device) and metadata (to
RocksDB) simultaneously, or sequentially?
2. In my opinion, the performance of BlueStore cannot compar
Can anyone help me?
On Oct 2, 2017 17:56, "Sam Huracan" wrote:
> Hi,
>
> I'm reading this document:
> http://storageconference.us/2017/Presentations/CephObjectStore-slides.pdf
>
> I have 3 questions:
>
> 1. BlueStore writes both data (to raw
Hi Cephers,
I'm testing RadosGW on the Luminous version. I've already installed it on a
separate host and the service is running, but RadosGW does not pick up any
of my configuration in ceph.conf.
My Config:
[client.radosgw.gateway]
host = radosgw
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path =
> ...you give it corresponds to the name you give in the ceph.conf. Also, do not forget
> to push the ceph.conf to the RGW machine.
>
> On Wed, Nov 8, 2017 at 11:44 PM, Sam Huracan
> wrote:
> >
> >
> > Hi Cephers,
> >
> > I'm testing RadosGW in Lumi
I checked the Ceph pools; the cluster has these pools:
[ceph-deploy@ceph1 cluster-ceph]$ ceph osd lspools
2 rbd,3 .rgw.root,4 default.rgw.control,5 default.rgw.meta,6
default.rgw.log,
2017-11-09 11:25 GMT+07:00 Sam Huracan :
> @Hans: Yes, I tried to redeploy RGW, and ensure client.radosgw.gateway
Thanks Hans, I've fixed it.
Ceph Luminous automatically creates a user client.rgw; I didn't know that and
had made a new user client.radosgw.
On Nov 9, 2017 17:03, "Hans van den Bogert" wrote:
> On Nov 9, 2017, at 5:25 AM, Sam Huracan wrote:
>
> root@radosgw system]# ceph --admin
Hi,
We intend to build a new Ceph cluster with 6 Ceph OSD hosts, 10 SAS disks
per host, using a 10Gbps NIC for the client network; objects are replicated 3 times.
So, how should I size the cluster network for best performance?
As I have read, 3x replication means 3x the client network bandwidth = 30 Gbps;
is that true?
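Not exactly, as far as I understand it: the client sends each write once over the public network, and the primary OSD then forwards (replicas - 1) copies to the other OSDs over the cluster network, which also carries recovery and backfill traffic. A small sketch of that rule of thumb (the headroom factor is an assumption):

# Rule-of-thumb cluster (replication) network sizing for replicated pools.
def cluster_net_gbps(public_write_gbps, replicas, recovery_headroom=1.5):
    # The primary OSD pushes (replicas - 1) copies over the cluster network;
    # leave headroom for recovery/backfill, which also uses it.
    return public_write_gbps * (replicas - 1) * recovery_headroom

# Example: 10 Gbps public network saturated with writes, replica 3.
print(cluster_net_gbps(10, 3))   # ~20 Gbps of replica traffic, ~30 Gbps with headroom

So 2x the client write bandwidth is the baseline for size=3, and anything beyond that is headroom rather than a hard 3x.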
Today, one of our Ceph OSDs went down. I checked syslog and saw that the OSD
process was killed by the OOM killer:
Nov 17 10:01:06 ceph1 kernel: [2807926.762304] Out of memory: Kill process
3330 (ceph-osd) score 7 or sacrifice child
Nov 17 10:01:06 ceph1 kernel: [2807926.763745] Killed process 3330
(ceph-osd) to
HighMem/MovableOnly
Nov 17 10:47:17 ceph1 kernel: [2810698.553790] 158182 pages reserved
Nov 17 10:47:17 ceph1 kernel: [2810698.553791] 0 pages cma reserved
Is it related to page caches?
2017-11-18 7:22 GMT+07:00 Sam Huracan :
> Today, one of our Ceph OSDs was down, I've check syslog and
> end
>
> sysctl_param 'vm.dirty_background_ratio' do
>
> value 2
>
> end
>
> sysctl_param 'vm.min_free_kbytes' do
>
> value 4194304
>
> end
>
>
>
> On Fri, Nov 17, 2017 at 4:24 PM, Sam Huracan
> wrote:
>
>>
Hi Mike,
Could you show the system log from the moment the OSD went down and came back up?
On Jan 10, 2018 12:52, "Mike O'Connor" wrote:
> On 10/01/2018 3:52 PM, Linh Vu wrote:
> >
> > Have you checked your firewall?
> >
> There are no iptables rules at this time, but connection tracking is
> enabled. I would expect errors
Hi all,
I'm trying to use librbd (Python):
http://docs.ceph.com/docs/jewel/rbd/librbdpy/
Is there a way to find the real size of an RBD image through librbd?
I saw I can get it from the command line:
http://ceph.com/planet/real-size-of-a-ceph-rbd-image/
Thanks
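In case it helps, a sketch of the approach I've seen with the Python bindings: walk the image with Image.diff_iterate() and sum the extents that actually exist. Pool and image names below are placeholders:

# Estimate the allocated ("real") size of an RBD image via librbd.
import rados
import rbd

POOL = "rbd"         # placeholder pool name
IMAGE = "myimage"    # placeholder image name

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx(POOL)
try:
    image = rbd.Image(ioctx, IMAGE, read_only=True)
    try:
        used = [0]

        def collect(offset, length, exists):
            # Called for each extent; count only extents that are allocated.
            if exists:
                used[0] += length

        # Walk the whole image from offset 0 with no snapshot baseline.
        image.diff_iterate(0, image.size(), None, collect)
        print("provisioned:", image.size(), "bytes")
        print("allocated  :", used[0], "bytes")
    finally:
        image.close()
finally:
    ioctx.close()
    cluster.shutdown()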
Hi Cephers,
We intend to upgrade our cluster from Jewel to Luminous (or Mimic?).
Our current model uses FileStore OSDs with an SSD journal (1 SSD for 7
SATA 7.2K disks).
My questions are:
1. Should we change to BlueStore with the DB/WAL placed on SSD and data on HDD? (we
want to keep the model using journal
Hi,
Can anyone help us answer these questions?
2018-08-03 8:36 GMT+07:00 Sam Huracan :
> Hi Cephers,
>
> We intend to upgrade our Cluster from Jewel to Luminous (or Mimic?)
>
> Our model is currently using OSD File Store with SSD Journal (1 SSD for 7
> SATA 7.2K)
>
>
> ...avoid the double write
> penalty of FileStore.
>
>
>
> Cheers!
>
>
>
> Kind regards,
>
> Xavier Trilla P.
>
> Clouding.io <https://clouding.io/>
>
>
>
>
>
Hi,
Has anyone had real experience with this case? Could you give me more
information and an estimate?
Thanks.
2018-08-05 15:00 GMT+07:00 Sam Huracan :
> Thanks Saludos!
>
> As far as I know, we should keep the FileStore SSD Journal after
> upgrading, because BlueStore will affe
Hi guys,
We are running a production OpenStack backed by Ceph.
At present, we are facing an issue with high iowait in VMs: in some
MySQL VMs, we sometimes see iowait reach abnormally high peaks which lead to
an increase in slow queries, even though the load is stable (we test with a script simulating
real lo
>
> On 24 March 2018 at 08:17:44 CET, Sam Huracan <
> nowitzki.sa...@gmail.com> wrote:
>>
>>
>> Hi guys,
>> We are running a production OpenStack backend by Ceph.
>>
>> At present, we are meeting an issue relating to high iowait in VM, in
>> s
> ...what parameters were you using when you ran the
> iostat command.
>
> Unfortunately it's difficult to help you without knowing more about your
> system.
>
> Kind regards,
> Laszlo
>
> On 24.03.2018 20:19, Sam Huracan wrote:
> > This is from iostat:
> >
> > I
8:133 0 16.6G 0 part
> > ├─sdi6 8:134 0 16.6G 0 part
> > └─sdi7 8:135 0 16.6G 0 part
> >
> > Could you give me some ideas to continue checking?
> >
> >
> > 2018-03-25 1
Hi,
We are using RAID cache mode write-back for the SSD journal; I suspect this is
the reason the utilization of the SSD journal is so low.
Is that true? Has anybody had experience with this matter? Please confirm.
Thanks
2018-03-26 23:00 GMT+07:00 Sam Huracan :
> Thanks for your information.
> Here is resul
Hi Khang,
What file system do you use on the OSD nodes?
XFS always uses memory for caching data before writing to disk.
So don't worry; it will always hold as much memory in your system as possible.
2017-02-15 10:35 GMT+07:00 Khang Nguyễn Nhật
:
> Hi all,
> My ceph OSDs is running on Fedora-server24
Hi everybody,
My OpenStack system uses Ceph as the backend for Glance, Cinder, and Nova. In the
future, we intend to build a new Ceph cluster.
I can re-connect the current OpenStack with the new Ceph system.
After that, I tried exporting RBD images and importing them into the new Ceph, but the VMs
and volumes were clones of Glance
-- Forwarded message --
From: Sam Huracan
Date: 2015-12-18 1:03 GMT+07:00
Subject: Enable RBD Cache
To: ceph-us...@ceph.com
Hi,
I'm testing OpenStack Kilo with Ceph 0.94.5, installed on Ubuntu 14.04.
To enable RBD cache, I followed this tutorial:
http://docs.ceph.com/docs/master/rbd/rbd-openstack/#configuring-nova
But when I check /var/run/ceph/guests on the compute nodes, there aren't
any asok files.
How can I enable RBD
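In case it is useful once the sockets do show up, a small sketch of how to confirm the cache setting by querying each guest's admin socket. The path follows the rbd-openstack guide; if I remember the guide correctly, the asok files only appear when the [client] section sets 'admin socket' under /var/run/ceph/guests and the libvirt/qemu user can write to that directory:

# Query each QEMU client admin socket and print its rbd_cache settings.
import glob
import json
import subprocess

for asok in glob.glob("/var/run/ceph/guests/*.asok"):
    out = subprocess.check_output(
        ["ceph", "--admin-daemon", asok, "config", "show"])
    conf = json.loads(out.decode())
    print(asok)
    for key in ("rbd_cache", "rbd_cache_size", "rbd_cache_max_dirty"):
        print("  ", key, "=", conf.get(key))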
Hi,
I think the ratio is based on SSD max throughput / HDD max throughput.
For example, one 400 MB/s SSD could be the journal for four 100 MB/s SAS disks.
This is my idea; I'm also building Ceph storage for OpenStack.
Could you guys share some experience?
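A tiny sketch of that arithmetic, using the example numbers above (sequential write throughput only; IOPS under sync writes and SSD endurance matter just as much):

# Journal ratio rule of thumb: the journal absorbs every byte written, so one
# SSD can front roughly (SSD write throughput / HDD write throughput) disks.
ssd_write_mb_s = 400   # example SSD sequential write throughput
hdd_write_mb_s = 100   # example SAS disk write throughput
print(ssd_write_mb_s // hdd_write_mb_s, "HDDs per SSD journal")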
On Dec 23, 2015 03:04, "Pshem Kowalczyk" wrote:
> Hi,
Hi Ceph-users,
I have an issue with my new Ceph storage, which is the backend for OpenStack
(Cinder, Glance, Nova).
When I test random writes in VMs with fio, there is a long delay (60s)
before fio begins running.
Here is my test script:
fio --directory=/root/ --direct=1 --rw=randwrite --bs=4k --size=1
>
> On 31 Dec 2015, at 04:49, Sam Huracan wrote:
>
> Hi Ceph-users.
>
> I have an issue with my new Ceph Storage, which is backend for OpenStack
> (Cinder, Glance, Nova).
> When I test random write in VMs with fio, there is a long delay (60s)
> before fio begin running
Hi,
I intend to add some config options, but how do I apply them on a production system?
[osd]
osd journal size = 0
osd mount options xfs = "rw,noatime,inode64,logbufs=8,logbsize=256k"
filestore min sync interval = 5
filestore max sync interval = 15
filestore queue max ops = 2048
filestore queue max bytes =
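For the "how to apply it on a production system" part, a sketch of the runtime route I would try first: inject the filestore tunables into the running OSDs with 'ceph tell', and keep the same values in ceph.conf so they survive restarts. The values are just the ones listed above; options like the journal size and the XFS mount options still need a restart/remount:

# Push filestore tunables to all running OSDs at runtime.
import subprocess

options = {
    "filestore_min_sync_interval": "5",
    "filestore_max_sync_interval": "15",
    "filestore_queue_max_ops": "2048",
}

args = " ".join("--{} {}".format(k, v) for k, v in options.items())
# 'ceph tell osd.*' fans the change out to every OSD in the cluster.
subprocess.run(["ceph", "tell", "osd.*", "injectargs", args], check=True)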
Hi,
How could I use Ceph as a backend for Swift?
I followed these git repos:
https://github.com/stackforge/swift-ceph-backend
https://github.com/enovance/swiftceph-ansible
I tried to install manually, but I am stuck configuring the entry for the ring.
What device do I use in 'swift-ring-builder account.builder ad
Hi Cephers,
When a Ceph write is made, does it write to all FileStores of the primary OSD
and the secondary OSDs before sending an ACK to the client, or does it write to the
journals of the OSDs and send the ACK without writing to the FileStore?
I think it writes to the journals of all OSDs, so using an SSD journal will
increase write
Thanks Loris,
So after the client receives the ACK, if it makes a read request to this
object immediately, does it have to wait for the object to be written to the file store,
or is it read directly from the journal?
2016-01-25 17:12 GMT+07:00 Loris Cuoghi :
>
> On 25/01/2016 11:04, Sam Huracan wrote:
> >
Hi everybody,
We've been running a 50TB cluster with 3 MON services on the same nodes as the OSDs.
We are planning to upgrade to 200TB, and I have 2 questions:
1. Should we move the MON services to dedicated hosts?
2. From your experience, at what cluster size should we consider putting the
MONs on dedicated hos
> On Mon, 17 Dec 2018 at 10:10, Sam Huracan
> wrote:
> >
> > Hi everybody,
> >
> > We've runned a 50TB Cluster with 3 MON services on t