Hello,
I tried to install Ceph 12.2.2 (Luminous) on Ubuntu 16.04.3 LTS (kernel
4.4.0-104-generic), but I am having trouble starting the radosgw service:
# systemctl status ceph-rado...@rgw.ceph-rgw1
● ceph-rado...@rgw.ceph-rgw1.service - Ceph rados gateway
Loaded: loaded (/lib/systemd/system/ceph-
I followed the exact steps of the following page:
http://ceph.com/rgw/new-luminous-rgw-metadata-search/
"us-east-1" zone is serviced by host "ceph-rgw1" on port 8000, no issue,
the service runs successfully.
"us-east-es" zone is serviced by host "ceph-rgw2" on port 8002, the service
was unable t
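A rough sketch of the kind of ceph.conf section involved for the second zone
(the section name, log path and options below are assumptions mirroring the
setup described above, not copied from the actual config):

[client.rgw.ceph-rgw2]
host = ceph-rgw2
rgw zone = us-east-es
rgw frontends = civetweb port=8002
log file = /var/log/ceph/ceph-rgw-ceph-rgw2.log

The actual startup error should show up in that log file or in
journalctl -u ceph-radosgw@rgw.ceph-rgw2.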
Hi Yehuda,
Thanks for replying.
>radosgw failed to connect to your ceph cluster. Does the rados command
>with the same connection params work?
I am not quite sure how to run the rados command with the same connection
params to test this.
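Presumably something along these lines, using the same cephx user and keyring
that the gateway uses? (the client name and keyring path below are just my
guesses, not taken from the actual setup)

# rados --id rgw.ceph-rgw2 --keyring /etc/ceph/ceph.client.rgw.ceph-rgw2.keyring lspools
# rados --id rgw.ceph-rgw2 --keyring /etc/ceph/ceph.client.rgw.ceph-rgw2.keyring df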
So I tried again, could you please take a look and check what could have
gone wrong?
So I did the exact same thing using Kraken and the same set of VMs, no
issue. What is the magic to make it work in Luminous? Anyone lucky enough
to have this RGW ElasticSearch working using Luminous?
On Mon, Jan 8, 2018 at 10:26 AM, Youzhong Yang wrote:
> Hi Yehuda,
>
> Thanks for
gw related error that says that it
> failed to reach the rados (ceph) backend. You can try bumping up the
> messenger log (debug ms =1) and see if there's any hint in there.
>
> Yehuda
>
> On Fri, Jan 12, 2018 at 12:54 PM, Youzhong Yang
> wrote:
> > So I did the
're seeing there don't look like related to
> elasticsearch. It's a generic radosgw related error that says that it
> failed to reach the rados (ceph) backend. You can try bumping up the
> messenger log (debug ms =1) and see if there's any hint in there.
>
> Yehuda
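My understanding of bumping up the messenger log is to add something like this
to ceph.conf on the gateway host and restart the service (the section name is
a guess matching the setup above; debug rgw is optional but usually helps too):

[client.rgw.ceph-rgw2]
debug ms = 1
debug rgw = 20

and then watch the gateway log under /var/log/ceph/ while it tries to start.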
ssue? I
understand it may sound obvious to experienced users ...
http://ceph.com/rgw/new-luminous-rgw-metadata-search/
Thanks a lot.
On Tue, Jan 16, 2018 at 3:59 PM, Yehuda Sadeh-Weinraub
wrote:
> On Tue, Jan 16, 2018 at 12:20 PM, Youzhong Yang
> wrote:
> > Hi Yehuda,
> >
>
One month ago when I first started evaluating Ceph, I chose Debian 9.3 as
the operating system. I saw random OS hangs, so I gave up and switched to
Ubuntu 16.04. Everything works well using Ubuntu 16.04.
Yesterday I tried Ubuntu 17.10, and again I saw random OS hangs, no matter
whether the host runs mon, mgr, osd, or rgw.
I don't think it's a hardware issue. All the hosts are VMs. By the way, using
the same set of VMware hypervisors, I switched back to Ubuntu 16.04 last
night; so far so good, no freezes.
On Fri, Jan 19, 2018 at 8:50 AM, Daniel Baumann
wrote:
> Hi,
>
> On 01/19/18 14:46,
I enabled compression with a command like this:
radosgw-admin zone placement modify --rgw-zone=coredumps
--placement-id=default-placement --compression=zlib
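As far as I understand, a zone placement change like this only takes effect
after the gateways are restarted (and, on a multisite setup, after the period
is committed first), roughly:

# radosgw-admin period update --commit
# systemctl restart ceph-radosgw@rgw.<hostname>

Whether new objects are actually stored compressed can then be checked with
'radosgw-admin bucket stats' (size_kb_utilized vs. size_kb_actual).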
Then once the object was uploaded, elasticsearch kept dumping the following
messages:
[2018-01-20T23:13:43,587][DEBUG][o.e.a.b.TransportShard
On Sat, Jan 20, 2018 at 7:03 PM, Brad Hubbard wrote:
> On Fri, Jan 19, 2018 at 11:54 PM, Youzhong Yang
> wrote:
> > I don't think it's hardware issue. All the hosts are VMs. By the way,
> using
> > the same set of VMWare hypervisors, I switched back to Ubuntu
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of* Youzhong Yang
> *Sent:* 21 January 2018 19:50
> *To:* Brad Hubbard
> *Cc:* ceph-users
> *Subject:* Re: [ceph-users] Ubuntu 17.10 or Debian 9.3 + Luminous =
> random OS hang ?
>
>
>
> As someone sugge
This is what I did:
# rbd import /var/tmp/debian93-raw.img images/debian93
# rbd info images/debian93
rbd image 'debian93':
size 81920 MB in 20480 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.384b74b0dc51
format: 2
features: layering, exclusive-lock, object-map, fast-diff, d
Thanks.
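To rule out a problem with the import itself, I understand the image can be
exported back and compared against the source file, e.g.:

# md5sum /var/tmp/debian93-raw.img
# rbd export images/debian93 - | md5sum

Both checksums should match if the import is intact.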
On Wed, May 9, 2018 at 11:52 AM, Jason Dillaman wrote:
> On Wed, May 9, 2018 at 11:39 AM, Youzhong Yang wrote:
> > This is what I did:
> >
> > # rbd import /var/tmp/debian93-raw.img images/debian93
> > # rbd info images/debian93
> > rbd image 'debian93
NFS v4 works like a charm, no issue for Linux clients, but when trying to
mount on a Mac OS X client it doesn't work - likely because 'mountd' is not
registered with rpcbind by ganesha when it comes to v4.
So I tried to set up v3, no luck:
# mount -t nfs -o rw,noatime,vers=3 ceph-dev:/ceph /mnt/ceph
mount.n
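For what it's worth, my understanding is that a v3 mount needs rpcbind running
on the ganesha host and an EXPORT block that allows protocol 3, along these
lines (a minimal sketch - the export id, paths and squash setting are made up,
not copied from the real config):

EXPORT {
    Export_Id = 1;
    Path = "/";
    Pseudo = "/ceph";
    Protocols = 3, 4;
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {
        Name = CEPH;
    }
}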
, but
> only subdirectories due to limitations in the inode size)
>
>
> Paul
>
> 2018-06-26 18:13 GMT+02:00 Youzhong Yang :
>
>> NFS v4 works like a charm, no issue for Linux clients, but when trying to
>> mount on MAC OS X client, it doesn't work - likely due to
For RGW, compression works very well. We use rgw to store crash dumps; in
most cases the compression ratio is about 2.0 ~ 4.0.
I tried to enable compression for cephfs data pool:
# ceph osd pool get cephfs_data all | grep ^compression
compression_mode: force
compression_algorithm: lz4
compressio
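For reference, per-pool compression is controlled with 'ceph osd pool set',
e.g.:

# ceph osd pool set cephfs_data compression_mode force
# ceph osd pool set cephfs_data compression_algorithm lz4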
Thanks Richard. Yes, it seems to be working, according to perf dump:
osd.6
"bluestore_compressed": 62622444,
"bluestore_compressed_allocated": 186777600,
"bluestore_compressed_original": 373555200,
It's very interesting that bluestore_compressed_allocated is approxima
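The counters above come from the OSD admin socket, i.e. something like this
run on the host where osd.6 lives:

# ceph daemon osd.6 perf dump | grep bluestore_compressed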