Thanks Christian, and all ceph users.
Your guidance was very helpful; much appreciated!
Regards
Jack Makenz
On Mon, May 30, 2016 at 11:08 AM, Christian Balzer wrote:
>
> Hello,
>
> you may want to read up on the various high-density node threads and
> conversations here.
>
> You most certainly do
Hi Yehuda,
What is the difference between the two? Aren't static websites the same
as S3, since S3 hosts static websites anyway?
Hi Robin,
I am using master only.
The document would be great. I think it is just a config issue;
with your document, it should get cleared up.
How does the resolv
The other option is to scale out rather than scale up. I'm currently building
nodes based on a fast Xeon E3 with 12 Drives in 1U. The MB/CPU is very
attractively priced and the higher clock gives you much lower write latency if
that is important. The density is slightly lower, but I guess you ga
Hi all,
I created an image on my CentOS 7 client,
then mapped the device, formatted it to ext4, and mounted it at /mnt/ceph-hd,
and I have added many files to /mnt/ceph-hd.
I did not set rbd to start on boot.
After the server rebooted, I can't find the image:
there are no rbd devices in /dev/.
modprobe rbd
ra
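For reference, a minimal sketch of how the image can be mapped again and made
persistent across reboots, assuming an image named "myimage" in the default
"rbd" pool and the client.admin keyring (the names are placeholders, not taken
from the mail above):

modprobe rbd
rbd map rbd/myimage --id admin            # device shows up as /dev/rbd0 and /dev/rbd/rbd/myimage
mount /dev/rbd/rbd/myimage /mnt/ceph-hd
# to map automatically at boot, add an entry to /etc/ceph/rbdmap and enable the service:
echo "rbd/myimage id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" >> /etc/ceph/rbdmap
systemctl enable rbdmap

The image and its data are still in the cluster after a reboot; it just has to
be mapped again before anything appears under /dev/.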
Hi Jack,
any RAID controller supports JBOD mode,
so you won't build a RAID, even though you could.
Instead, you leave it to Ceph to build the redundancy in software.
Or, if you have high availability needs, you can let the RAID
controller build RAIDs of RAID levels where the raw loss of capacity
Hello,
in my OpenStack Mitaka, I have installed the additional service "Manila" with a
CephFS backend. Everything is working. All shares are created successfully:
manila show 9dd24065-97fb-4bcd-9ad1-ca63d40bf3a8
+-+--
Hi,
After Jewel was released with a production-ready CephFS, I upgraded the old
Hammer cluster, but iops dropped a lot.
I made a test with 3 nodes, each one has 8 cores, 16 GB RAM and 1 OSD; the OSD
device gets 15000 iops.
I found the ceph-fuse client has better performance on Hammer than on Jewel.
fio randwrit
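For what it's worth, a typical 4k random-write fio run of that kind would look
roughly like this (mount point, size and iodepth are placeholders, not the
exact command used above):

fio --name=randwrite --directory=/mnt/cephfs --rw=randwrite --bs=4k \
    --ioengine=libaio --iodepth=32 --size=1G --runtime=60 --time_based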
Hi All
I'm having an issue with slow writes over NFS (v3) when cephfs is mounted
with the kernel driver. Writing a single 4K file from the NFS client is
taking 3-4 seconds; however, a 4K write (with sync) into the same folder
on the server is as fast as you would expect. When mounted with ceph-fuse,
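A quick way to show the difference is a single synchronous 4K write on both
paths (file names and mount points are placeholders):

time dd if=/dev/zero of=/mnt/nfs/testfile bs=4k count=1 oflag=sync      # via the NFS client: 3-4 seconds
time dd if=/dev/zero of=/mnt/cephfs/testfile bs=4k count=1 oflag=sync   # directly on the cephfs mount: fast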
Hello,
On Mon, 30 May 2016 09:40:11 +0100 Nick Fisk wrote:
> The other option is to scale out rather than scale up. I'm currently
> building nodes based on a fast Xeon E3 with 12 Drives in 1U. The MB/CPU
> is very attractively priced and the higher clock gives you much lower
> write latency if t
Hi,
E3 CPUs have 4 cores with HT, so 8 logical cores, and they are not
multi-socket.
That means you will naturally (quickly) be limited in the number of
OSDs you can run with that.
Because no matter how many GHz it has, each OSD process occupies a CPU core
forever. Not at 100%, but still eno
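As a rough sanity check (not from the original mail): with the often-quoted
guideline of about one logical core per OSD, a 12-drive node on an 8-thread E3
works out to roughly 8 / 12 ≈ 0.67 logical cores per OSD, i.e. already below
that guideline.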
On Mon, May 30, 2016 at 4:12 PM, Jens Offenbach wrote:
> Hello,
> in my OpenStack Mitaka, I have installed the additional service "Manila" with
> a CephFS backend. Everything is working. All shares are created successfully:
>
> manila show 9dd24065-97fb-4bcd-9ad1-ca63d40bf3a8
> +-
On Mon, May 30, 2016 at 10:54 AM, dbgong wrote:
> Hi all,
>
> I created an image on my CentOS 7 client,
> then mapped the device, formatted it to ext4, and mounted it at /mnt/ceph-hd,
> and I have added many files to /mnt/ceph-hd.
>
> I did not set rbd to start on boot.
> After the server rebooted, I can
Hi,
I'm having problems with Ceph v10.2.1 Jewel when creating a user. My cluster
runs Ceph Jewel and includes 3 OSDs, 2 monitors and 1 RGW.
- Here is the list of *cluster pools*:
.rgw.root
ap-southeast.rgw.control
ap-southeast.rgw.data.root
ap-southeast.rgw.gc
ap-southeast.rgw.users.uid
ap-southeast.r
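For context, the user creation in question would typically be a call like this
(uid and display name are placeholders):

radosgw-admin user create --uid="testuser" --display-name="Test User"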