Hi Orit,
Executing a period update resolved the issue. Thanks for the help.
Kind regards,
Marko
On 1/15/17 08:53, Orit Wasserman wrote:
On Wed, Jan 11, 2017 at 2:53 PM, Marko Stojanovic wrote:
Hello all,
I have an issue with radosgw-admin regionmap update. It doesn't update the map.
With zone configur
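For anyone hitting the same regionmap problem on a Jewel-style multisite setup, the "period update" Marko mentions is roughly the following (a sketch, not a full multisite walkthrough):
```
# Commit the staged zone/zonegroup changes into a new period; periods replace
# the old regionmap update workflow in Jewel multisite:
radosgw-admin period update --commit
```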
Hi Maxime,
Given your remark below, what kind of SATA SSD do you recommend for OSD
usage?
Thanks!
Regards,
Kees
On 15-01-17 21:33, Maxime Guyot wrote:
> I don’t have firsthand experience with the S3520; as Christian pointed out,
> their endurance doesn’t make them suitable for OSDs in most cases
Hi,
I have two nodes, each running an OSD and a Mon.
I'm going to add a third OSD and Mon to this cluster, but before that I want to
fix this error:
```
# ceph -s
    cluster 8461e3b5-abda-4471-98c0-913e56aec890
     health HEALTH_WARN
            64 pgs degraded
            64 pgs stuck unclean
            64 pgs undersized
```
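To see exactly which PGs are affected and why, the standard commands below can help (shown only as a pointer; the point about the third host comes from the replies later in this thread):
```
# List the cluster's problems and the stuck placement groups:
ceph health detail
ceph pg dump_stuck unclean
# With only two OSD hosts and the default 3-replica rule, every PG stays
# "undersized" until a third OSD host is added or the pool size is lowered.
```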
In the documentation I read here:
http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/?highlight=stuck%20inactive#fewer-osds-than-replicas
« You can make the changes at runtime. If you make the changes in your Ceph
configuration file, you may need to restart your cluster. »
but
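For reference, the configuration-file side of that passage would look something like this; the values are purely illustrative for a small test cluster, and they only affect pools created afterwards:
```
# ceph.conf -- defaults applied to newly created pools:
[global]
osd_pool_default_size = 2
osd_pool_default_min_size = 1
```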
Hello,
On 16/01/2017 at 11:50, Stéphane Klein wrote:
Hi,
I have two OSD and Mon nodes.
I'm going to add third osd and mon on this cluster but before I want to
fix this error:
>
> [SNIP SNAP]
You've just created your cluster.
With the standard CRUSH rules you need one OSD on three different hosts.
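To verify what the default rule actually requires, a quick check (the rule name "replicated_ruleset" is the Jewel default and may differ on your cluster):
```
# Dump the default CRUSH rule and look at its failure domain:
ceph osd crush rule dump replicated_ruleset
# A "chooseleaf" step with "type": "host" means each replica must land on a
# different host -- so size=3 needs at least three OSD hosts.
```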
Hello Stephane,
Try this:
$ ceph osd pool get <pool-name> size      -->> shows the pool's "size"
(its default comes from osd_pool_default_size)
$ ceph osd pool get <pool-name> min_size  -->> shows the pool's "min_size"
(its default comes from osd_pool_default_min_size)
If you want to change them at runtime, trigger the commands below:
$ ceph osd pool set <pool-name> size <value>
$ ceph osd pool set <pool-name> min_size <value>
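A concrete sketch, assuming a pool named "rbd" on a two-node test cluster where 2 replicas are acceptable (the pool name and values are illustrative only):
```
# Inspect the pool's current replication settings:
ceph osd pool get rbd size
ceph osd pool get rbd min_size

# Change them at runtime -- no daemon restart required:
ceph osd pool set rbd size 2
ceph osd pool set rbd min_size 1
```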
2017-01-16 12:47 GMT+01:00 Jay Linux :
> Hello Stephane,
>
> Try this .
>
> $ceph osd pool get size -->> it will prompt the "
> osd_pool_default_size "
> $ceph osd pool get min_size-->> it will prompt the "
> osd_pool_default_min_size "
>
> if you want to change in runtime, trigger below
Hi Kees,
Assuming 3 replicas and collocated journals, each RBD write will trigger 6 SSD
writes (excluding FS overhead and occasional re-balancing); the arithmetic is
sketched below.
Intel has 4 tiers of data center SATA SSD (other manufacturers may have fewer):
- S31xx: ~0.1 DWPD (counted over 3 years): very read intensive
- S35xx: ~1 DWPD
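The 6x figure is just write amplification multiplied out; a quick sketch of the reasoning (the DWPD figures come from the vendor tiers above, not from measurements):
```
# Write amplification with 3x replication and collocated FileStore journals:
#   1 client write -> 3 replica writes -> each replica hits the journal and
#   then the data partition on the same SSD:
#   3 * 2 = 6 SSD writes per client write (before FS overhead / recovery)
#
# DWPD (drive writes per day) rates endurance: a 1 DWPD, 1.6 TB drive is
# specified to absorb roughly 1.6 TB of writes per day over its rating period.
```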
2017-01-16 12:24 GMT+01:00 Loris Cuoghi :
> Hello,
>
> On 16/01/2017 at 11:50, Stéphane Klein wrote:
>
>> Hi,
>>
>> I have two OSD and Mon nodes.
>>
>> I'm going to add third osd and mon on this cluster but before I want to
>> fix this error:
>>
> >
> > [SNIP SNAP]
>
> You've just created your cluster.
Hello Marius Vaitiekunas, Chris Jones,
Thank you for your contributions.
I was looking for this information.
I'm starting to use Ceph, and my concern is about monitoring.
Do you have any scripts for this monitoring?
If you can help me, I will be very grateful to you.
(Excuse me if there is misi
Hey cephers,
Please bear with us as we migrate ceph.com as there may be some
outages. They should be quick and over soon. Thanks!
--
Best Regards,
Patrick McGarry
Director Ceph Community || Red Hat
http://ceph.com || http://community.redhat.com
@scuttlemonkey || @ceph
I see my mistake:
```
osdmap e57: 2 osds: 1 up, 1 in; 64 remapped pgs
flags sortbitwise,require_jewel_osds
```
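To see which OSD is down and on which host, the usual first steps are (the OSD id 0 below is only an example):
```
# Identify the down OSD and the host it lives on:
ceph osd tree
# Then check and restart the daemon on that host, e.g. for osd.0 on systemd:
systemctl status ceph-osd@0
```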
Give this a try:
ceph osd set noout
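For context, noout only keeps the down OSD from being marked out and its data from being rebalanced away; a minimal sketch of the usual sequence, assuming the OSD is only temporarily down:
```
# Stop the cluster from marking down OSDs "out" while you work on them:
ceph osd set noout
# ...restart or repair the down OSD and confirm it reports "up" again...
# Then clear the flag so normal recovery behaviour resumes:
ceph osd unset noout
```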
On Jan 16, 2017 9:08 AM, "Stéphane Klein"
wrote:
> I see my mistake:
>
> ```
> osdmap e57: 2 osds: 1 up, 1 in; 64 remapped pgs
> flags sortbitwise,require_jewel_osds
> ```
>
On Mon, Jan 16, 2017 at 3:54 PM, Andre Forigato
wrote:
> Hello Marius Vaitiekunas, Chris Jones,
>
> Thank you for your contributions.
> I was looking for this information.
>
> I'm starting to use Ceph, and my concern is about monitoring.
>
> Do you have any scripts for this monitoring?
> If you c
Ok, the new website should be up and functional. Shout if you see
anything that is still broken.
As for the site itself, I'd like to highlight a few things worth checking out:
* Ceph Days -- The first two Ceph Days have been posted, as well as
the historical events for all of last year.
http://ce
FYI, our IPv6 is lagging a bit behind IPv4 (and the Red Hat
nameservers may take a bit to catch up), so you may see the old site
for just a little bit longer.
On Mon, Jan 16, 2017 at 10:03 AM, Patrick McGarry wrote:
> Ok, the new website should be up and functional. Shout if you see
> anything
Hello,
On 16/01/2017 at 16:03, Patrick McGarry wrote:
> Ok, the new website should be up and functional. Shout if you see
> anything that is still broken.
Minor typos:
"It replicates and re-balance data within the cluster
dynamically—elminating this tedious task"
-> re-balances
-> eliminating
On Sun, Jan 15, 2017 at 2:56 PM, Shawn Edwards wrote:
> If I, say, have 10 rbd attached to the same box using librbd, all 10 of the
> rbd are clones of the same snapshot, and I have caching turned on, will each
> rbd be caching blocks from the parent snapshot individually, or will the 10
> rbd pro
On Mon, Jan 16, 2017 at 10:11 AM Jason Dillaman wrote:
> On Sun, Jan 15, 2017 at 2:56 PM, Shawn Edwards
> wrote:
> > If I, say, have 10 rbd attached to the same box using librbd, all 10 of
> the
> > rbd are clones of the same snapshot, and I have caching turned on, will
> each
> > rbd be caching
The site looks great! Good job!
In fact, we can reproduce the problem from a VM with CentOS 6.7, 7.2 or 7.3.
We can reproduce it every time with this config: one VM (here CentOS 6.7)
with 16 RBD volumes of 100 GB attached. When we serially run mkfs.ext4 on
each of these volumes, we always encounter the problem on one of them
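A sketch of that reproduction loop; the virtio device names /dev/vdb through /dev/vdq are an assumption and should be adjusted to however the 16 volumes appear in the guest:
```
# Serially format each of the 16 attached RBD-backed virtio disks:
for dev in /dev/vd{b..q}; do
    echo "formatting $dev"
    mkfs.ext4 -q "$dev"
done
```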
Are you using krbd directly within the VM or librbd via
virtio-blk/scsi? Ticket #9071 is against krbd.
On Mon, Jan 16, 2017 at 11:34 AM, Vincent Godin wrote:
> In fact, we can reproduce the problem from VM with CentOS 6.7, 7.2 or 7.3.
> We can reproduce it each time with this config : one VM (her
Patrick,
I’m probably overlooking something, but when I follow the Ceph Days link
there are no 2017 events, only past ones. The Cephalocon link goes to a
404 Page Not Found.
Bruce
On 1/16/17, 7:03 AM, "ceph-devel-ow...@vger.kernel.org on behalf of
Patrick McGarry" wrote:
>Ok, the new website should b
Ignore that last post. After another try or two I got to the new site with
the updates as described. Looks great!
On 1/16/17, 9:12 AM, "ceph-devel-ow...@vger.kernel.org on behalf of
McFarland, Bruce" wrote:
>Patrick,
>I’m probably overlooking something, but when I follow the ceph days link
>there
We are using librbd on a host with CentOS 7.2 via virtio-blk. This server
hosts the VMs on which we are doing our tests, but we see exactly the same
behaviour as #9071. We tried to follow the thread back to bug 8818, but we
couldn't reproduce the issue with lots of dd runs. Each time we try with
mkfs.ext4
Can you ensure that you have the "admin socket" configured for your
librbd-backed VM so that you can do the following when you hit that
condition:
ceph --admin-daemon <path-to-admin-socket> objecter_requests
That will dump out any hung IO requests between librbd and the OSDs. I
would also check your librbd logs to see
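A sketch of what that setup can look like on the hypervisor; the socket and log paths follow the pattern from the Ceph QEMU/libvirt docs, and the exact .asok filename (including the 12345 pid) is only an example:
```
# In ceph.conf on the hypervisor, [client] section:
#   admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
#   log file = /var/log/qemu/qemu-guest-$pid.log
#
# Once the VM is restarted with the socket enabled, dump in-flight requests:
ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok objecter_requests
```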
So what's the consensus on CephFS?
Is it ready for prime time or not?
//Tu
On Sat, Jan 14, 2017 at 7:54 PM, 许雪寒 wrote:
> Thanks for your help:-)
>
> I checked the source code again, and in read_message, it does hold the
> Connection::lock:
You're correct of course; I wasn't looking and forgot about this bit.
This was added to deal with client-allocated buffers and/or o
What's your use case? Do you plan on using kernel or fuse clients?
On 16 Jan 2017 23:03, "Tu Holmes" wrote:
> So what's the consensus on CephFS?
>
> Is it ready for prime time or not?
>
> //Tu
>
I could use either one. I'm just trying to get a feel for how stable the
technology is in general.
On Mon, Jan 16, 2017 at 3:19 PM Sean Redmond
wrote:
> What's your use case? Do you plan on using kernel or fuse clients?
>
> On 16 Jan 2017 23:03, "Tu Holmes" wrote:
>
> So what's the consensus on
> On 17 Jan 2017, at 03:47, Tu Holmes wrote:
>
> I could use either one. I'm just trying to get a feel for how stable the
> technology is in general.
Stable. Multiple customers of mine run it in production with the kernel client
and serious load on it. No major problems
> On 17 Jan 2017, at 05:31, Hauke Homburg wrote:
>
> On 16.01.2017 at 12:24, Wido den Hollander wrote:
>>> On 14 January 2017 at 14:58, Hauke Homburg wrote:
>>>
>>>
>>> On 14.01.2017 at 12:59, Wido den Hollander wrote:
> On 14 January 2017 at 11:05, Hauke Homburg wrote: