Issue solved.
I deleted the problematic pool (rbd) and the problem is now gone. The rbd pool
was empty, by the way.
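For anyone hitting the same thing later, the usual commands for this kind of
cleanup look roughly like the following; the pool name is the one from this
thread, everything else is generic (and note that deleting a pool destroys
whatever is in it):

# Check whether any PGs are still stuck in "creating"
ceph status
ceph pg dump_stuck inactive

# Remove the problematic pool; the name has to be given twice as a safety check
ceph osd pool delete rbd rbd --yes-i-really-really-mean-it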
On Wed, Nov 9, 2016 at 9:31 PM, Vlad Blando wrote:
> Hi Mehmet,
>
> It won't let me adjust the PGs because there are "creating" tasks not done
> yet.
>
> ---
> [root@avatar0-ceph0 ~]
I've been struggling with a broken Ceph node, and I have very limited Ceph
knowledge. With only 3-4 days of actually using it, I was tasked with upgrading
it. Everything seemed to go fine at first, but it didn't last.
The next day I was informed people were unable to create volumes (we
successfully cre
Hi Orit,
Many thanks. I will try that over the weekend and let you know.
Are you sure removing the pool will not destroy my data, user info and buckets?
Thanks
- Original Message -
> From: "Orit Wasserman"
> To: "andrei"
> Cc: "Yoann Moulin" , "ceph-users"
>
> Sent: Friday, 11 Novem
Hi,
Yes, I specifically wanted to make sure the disk part of the infrastructure
didn't affect the results; the main aim was to reduce the end-to-end latency
in the journals and Ceph code by utilising fast CPUs and NVMe journals. SQL
transaction logs are a good example where this low latency,
Nice article on write latency. If I understand correctly, this latency is
measured while there is no overflow of the journal caused by long sustained
writes; otherwise you will start hitting the HDD latency. Also, is the queue
depth you use 1?
Will be interested to see your article on hardware.
/Maged
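(For reference, a QD=1 write-latency run of the kind discussed above can be
reproduced with fio's rbd engine along these lines; the pool and image names
are placeholders, not taken from the article:)

# 4k writes, single job, queue depth 1, against an RBD image
fio --name=qd1-write-latency --ioengine=rbd --clientname=admin \
    --pool=rbd --rbdname=testimg --rw=randwrite --bs=4k \
    --iodepth=1 --numjobs=1 --direct=1 --runtime=60 --time_based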
Here's another: http://termbin.com/smnm
On Fri, Nov 11, 2016 at 1:28 PM, Sage Weil wrote:
> On Fri, 11 Nov 2016, bobobo1...@gmail.com wrote:
>> Any more data needed?
>>
>> On Wed, Nov 9, 2016 at 9:29 AM, bobobo1...@gmail.com
>> wrote:
>> > Here it is after running overnight (~9h): http://ix.io/1DNi
On Fri, 11 Nov 2016, bobobo1...@gmail.com wrote:
> Any more data needed?
>
> On Wed, Nov 9, 2016 at 9:29 AM, bobobo1...@gmail.com
> wrote:
> > Here it is after running overnight (~9h): http://ix.io/1DNi
I'm getting a 500 on that URL...
sage
> >
> > On Tue, Nov 8, 2016 at 11:00 PM, bobobo1...@
Worth considering OpenStack and Ubuntu cloudarchive release cycles
here. Mitaka is the release where all Ubuntu OpenStack users need to
upgrade from Trusty to Xenial - so far Mitaka and now Newton
deployments are still in the minority (see the OpenStack
user/deployment survey for the data) and I ex
Hi,
I would prefer option 1, please. It wouldn't be the end of the world if
14.04 support went away, but it would definitely be inconvenient. EOL for Ubuntu 14.04
is April 2019 - I would expect to see many people still running it for
quite some time.
Thanks,
Randy
On Fri, Nov 11, 2016 at 12:43 PM, Sage Weil
Hi All,
I've recently put together some articles around some of the performance testing
I have been doing.
The first explores the high level theory behind latency in a Ceph
infrastructure and what we have managed to achieve.
http://www.sys-pro.co.uk/ceph-write-latency/
The second explores som
Hi Matteo,
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Matteo Dacrema
> Sent: 11 November 2016 10:57
> To: Christian Balzer
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] 6 Node cluster with 24 SSD per node:
> Hardwarepl
Any more data needed?
On Wed, Nov 9, 2016 at 9:29 AM, bobobo1...@gmail.com
wrote:
> Here it is after running overnight (~9h): http://ix.io/1DNi
>
> On Tue, Nov 8, 2016 at 11:00 PM, bobobo1...@gmail.com
> wrote:
>> Ah, I was actually mistaken. After running without Valgrind, it seems
>> I just es
Currently the distros we use for upstream testing are
centos 7.x
ubuntu 16.04 (xenial)
ubuntu 14.04 (trusty)
We also do some basic testing for Debian 8 and Fedora (some old version).
Jewel was the first release that had native systemd and full xenial
support, so it's helpful to have both 14.
I'm curious what the relationship is between python-ceph-cfg [0] and DeepSea,
which have some overlap in contributors and functionality (and supporting
organizations?).
[0] https://github.com/oms4suse/python-ceph-cfg
Bill Sanders
On Wed, Nov 2, 2016 at 10:52 PM, Tim Serong wrote:
> Hi All,
>
> I t
> On 11 November 2016 at 14:23, Trygve Vea
> wrote:
>
>
> Hi,
>
> We recently experienced a problem with a single OSD. This occurred twice.
>
> The problem manifested itself thus:
>
> - 8 placement groups stuck peering, all of which had the problematic OSD as
> one of the acting OSDs in
Hello,
I have a 1GB file and 2 pools, one replicated and one EC 8+2, and I want to
make a copy of this file through the radosgw with s3.
I'd like to know how this file will be split into PGs in both pools.
Some details for my use case :
12 hosts
10 OSDs per Host
failure domain set to Host
PG=10
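(Background that may help frame the question: radosgw stripes an uploaded S3
object into RADOS objects of rgw_obj_stripe_size, 4MB by default, and each of
those is hashed to a PG on its own; with EC 8+2 every such RADOS object is
further split into 8 data plus 2 coding chunks spread across 10 hosts. The
actual layout of a given upload can be inspected like this, with placeholder
bucket, object and pool names:)

# Show the RADOS objects (manifest) behind an uploaded S3 object
radosgw-admin object stat --bucket=mybucket --object=myfile.bin

# Map one of those RADOS objects to its PG and acting OSD set
ceph osd map default.rgw.buckets.data <rados-object-name>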
Hi,
We recently experienced a problem with a single OSD. This occurred twice.
The problem manifested itself thus:
- 8 placement groups stuck peering, all of which had the problematic OSD as one
of the acting OSDs in the set.
- The OSD had a lot of active placement groups
- The OSD were blockin
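(Generic first steps for narrowing down a situation like this, with placeholder
PG and OSD ids:)

# See which PGs are stuck and which OSDs they blame
ceph health detail
ceph pg dump_stuck inactive
ceph pg 3.1f query

# Mark the suspect OSD down so the affected PGs re-peer
ceph osd down 12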
Hello,
I am using Ceph volumes with a VM. Details are below:
VM:
OS: Ubuntu 14.04
CPU: 12 Cores
RAM: 40 GB
Volumes:
Size: 1 TB
No: 6 Volumes
With the above setup, the VM hung without any read/write operation.
Any suggestions?
Thanks
Swami
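(A first thing to check with a hang like this is whether the cluster reports
blocked requests, and on which OSDs; the OSD id below is a placeholder:)

# Look for slow/blocked request warnings and the OSDs involved
ceph health detail
ceph -w

# On the host of a suspect OSD, inspect its slowest recent operations
ceph daemon osd.3 dump_historic_ops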
On Fri, Nov 11, 2016 at 12:24 PM, Orit Wasserman wrote:
> I have a workaround:
>
> 1. Use zonegroup and zone jsons you have from before (default-zg.json
> and default-zone.json)
> 2. Make sure the realm id in the jsons is ""
> 3. Stop the gateways
> 4. Remove the .rgw.root pool (you can back it up if
I have a workaround:
1. Use zonegroup and zone jsons you have from before (default-zg.json
and default-zone.json)
2. Make sure the realm id in the jsons is ""
3. Stop the gateways
4. Remove the .rgw.root pool (you can back it up if you want to by using the
mkpool and cppool commands):
rados rm .rgw.r
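(Spelled out a little further, under the assumption that the re-import is done
with "zonegroup set" / "zone set" from the saved jsons; this is only a sketch,
so keep the backup pool around until everything checks out:)

# With the gateways stopped: back up .rgw.root, then remove it
rados mkpool .rgw.root.backup
rados cppool .rgw.root .rgw.root.backup
rados rmpool .rgw.root .rgw.root --yes-i-really-really-mean-it

# Re-import the old definitions (realm id set to "" in both jsons)
radosgw-admin zonegroup set --rgw-zonegroup=default --infile=default-zg.json
radosgw-admin zone set --rgw-zone=default --infile=default-zone.json
radosgw-admin zonegroup default --rgw-zonegroup=default

# Start the gateways again and verify with "radosgw-admin zonegroup get"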
Hi,
After your tips and considerations, I've planned to use this hardware
configuration:
- 4x OSD nodes (for starting the project):
1x Intel E5-1630v4 @ 4.00 GHz (turbo), 4 cores, 8 threads, 10MB cache
128GB RAM (does frequency matter in terms of performance?)
4x Intel P3700 2TB NVMe
2x Mellanox C
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On
> Behalf Of Mio Vlahović
> Hello,
>
[CUT]
One more thing I have tried, linking the bucket to the new user:
radosgw-admin bucket link --bucket test --bucket-id
--uid yyy
Now
The problem is fixed by commit 51c926a74e5ef478c11ccbcf11c351aa520dde2a.
The commit message has a detailed explanation.
Thanks
On Fri, Nov 11, 2016 at 3:21 PM Yutian Li wrote:
> I found there is an option `mds_health_summarize_threshold`, so it can
> show the clients that are lagging.
>
> I incre
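(That option can also be bumped at runtime rather than via ceph.conf; the MDS
name and the value here are just examples:)

# Raise the threshold above which per-client health warnings are collapsed
# into a single summary line
ceph tell mds.0 injectargs '--mds_health_summarize_threshold 100'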
Hi Goncalo,
Thank you for those links. It appears that that fix was already in the
10.2.3 mds, which we are running. I've just upgraded the mds's to the
current jewel gitbuilder (10.2.3-358.g427f357.x86_64) and the problem
is still there.
(BTW, in testing this I've been toggling the mds caps betw
Hello,
We are using ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9) and
radosgw for our object storage.
Everything is in production and running fine, but now I got a request from a
customer: they need a new S3 user, but with full_control access to some of
the existing buckets.
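(One way to do that without changing bucket ownership, assuming the grants are
acceptable as plain S3 ACLs: create the user as usual and have the bucket
owner grant it full_control per bucket; uid and bucket names below are
placeholders:)

# Create the new S3 user
radosgw-admin user create --uid=newuser --display-name="New User"

# As the owner of the bucket, grant the new user full control on it
s3cmd setacl --acl-grant=full_control:newuser s3://existing-bucket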