16.12.2014 10:53, Daniel Schwager wrote:
> Hello Mike,
>
>> There is also another way:
>> * for CONF 2 and 3, replace the 200 GB SSD with an 800 GB one and add
>> another 1-2 SSDs to each node.
>> * make a tier-1 read-write cache on the SSDs (see the sketch below)
>> * you can also add a journal partition on them if you wish - then data
>> will
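For reference, the "tier-1 read-write cache on the SSDs" suggestion maps roughly onto the cache-tiering commands below; the pool names are made up, and placing the cache pool on the SSD OSDs still needs a matching CRUSH rule (not shown):
# create the SSD-backed cache pool and put it in front of the backing data pool
# (pool names "ssd-cache" and "data" are only examples)
ceph osd pool create ssd-cache 128 128
ceph osd tier add data ssd-cache
ceph osd tier cache-mode ssd-cache writeback
ceph osd tier set-overlay data ssd-cache
ceph osd pool set ssd-cache hit_set_type bloom            # track object hits
ceph osd pool set ssd-cache target_max_bytes 200000000000 # size target so flushing/eviction kicks in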
Hi,
On Tue, Dec 16, 2014 at 12:54 PM, pushpesh sharma
wrote:
>
> Vivek,
>
> The problem is that the swift client is only downloading a chunk of the object, not
> the whole object, hence the etag mismatch. Could you paste the value of
> 'rgw_max_chunk_size'? Please be sure you set this to a sane
> value (<4MB, at lea
Hi,
root@ppm-c240-ceph3:/var/run/ceph# ceph --admin-daemon
/var/run/ceph/ceph-osd.11.asok config show | less | grep rgw_max_chunk_size
"rgw_max_chunk_size": "524288",
root@ppm-c240-ceph3:/var/run/ceph#
And the value, 524288 bytes (512 KB), is below 4 MB.
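Note that the output above comes from an OSD's admin socket, which only reflects that OSD's own configuration; rgw settings are usually checked on the radosgw daemon's own socket instead. A minimal sketch, assuming the gateway runs as client.radosgw.gateway (the socket name depends on your setup):
# query the running radosgw daemon itself, not an OSD
ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok config show | grep rgw_max_chunk_size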
Regards,
--
Vivek Varghese Cherian
Thanks Craig.
I will try that!
I thought it was more complicated than that because of the entries for
the "public_network" and "rgw dns name" in the config file...
I will give it a try.
Best,
George
That shouldn't be a problem. Just have Apache bind to all interfaces
instead of the exte
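Roughly, in the Apache configuration serving radosgw that just means a bare Listen directive instead of one tied to the external address; port and address below are only examples:
# bind to all interfaces
Listen 80
# rather than only the external address, e.g.
# Listen 203.0.113.10:80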
I have a 3-node Ceph 0.87 cluster. After a while I see an error in radosgw
and I can't find any references to it in the list archives:
heartbeat_map is_healthy 'RGWProcess::m_tp thread 0x7fc4eac2d700' had timed
out after 600
The only solution is to restart radosgw, and then for a while it works just fine.
Any ideas?
Hi there,
today I had an OSD crash with Ceph 0.87/Giant which made my whole cluster
unusable for 45 minutes.
First it began with a disk error:
sd 0:1:2:0: [sdc] CDB: Read(10): 28 00 0d fe fd e8 00 00 b0 00
sd 0:1:2:0: [sdc] CDB: Read(10): 28 00 15 d0 7b f8 00 00 08 00
XFS (sdc1): xfs_imap_to_bp: xfs_trans_read_buf()
On Tue, 16 Dec 2014 11:26:35 AM you wrote:
> Is this normal? Is Ceph just really slow at restoring rbd snapshots,
> or have I really borked my setup?
I'm not looking for a fix or tuning suggestions, just feedback on whether
this is normal.
--
Lindsay
Hi
Your logs do not provide much information. If you are following any other
documentation for Ceph, I would recommend that you follow the official Ceph docs.
http://ceph.com/docs/master/start/quick-start-preflight/
Karan Singh
S
Hi Gregory,
Sorry for the delay getting back.
There was no activity at all on those 3 pools. Activity on the fourth
pool was under 1 Mbps of writes.
I think I waited several hours, but I can't recall exactly. At least one
hour, for sure.
Thanks
Eneko
On 11/12/14 19:32, Gregory Farnum wr
On Tue, 16 Dec 2014 12:10:42 +0300 Mike wrote:
> 16.12.2014 10:53, Daniel Schwager wrote:
> > Hello Mike,
> >
> >> There is also another way:
> >> * for CONF 2 and 3, replace the 200 GB SSD with an 800 GB one and add
> >> another 1-2 SSDs to each node.
> >> * make a tier-1 read-write cache on the SSDs
> >> * also you ca
On 2014-12-16 14:53, Lindsay Mathieson wrote:
Is this normal? Is Ceph just really slow at restoring rbd snapshots,
or have I really borked my setup?
I'm not looking for a fix or tuning suggestions, just feedback on whether
this is normal
That is my experience as well. I rolled back a 1.5 T
On 12/16/2014 04:14 PM, Carl-Johan Schenström wrote:
> On 2014-12-16 14:53, Lindsay Mathieson wrote:
>
>>> Is this normal? Is Ceph just really slow at restoring rbd snapshots,
>>> or have I really borked my setup?
>>
>> I'm not looking for a fix or tuning suggestions, just feedback on
>> whether
Alexandre Derumier
Systems and Storage Engineer
Phone: 03 20 68 90 88
Fax: 03 20 68 90 81
45 Bvd du Général Leclerc 59100 Roubaix
12 rue Marivaux 75002 Paris
MonSiteEstLent.com - A blog dedicated to web performance and handling traffic
spikes
From: "Wido den Hollander"
Hi,
>> That is normal behavior. Snapshotting itself is a fast process, but
>> restoring means merging and rolling back.
Are there any future plans to add something similar to ZFS or NetApp,
where you can instantly roll back a snapshot?
(Not sure it's technically possible to implement such snapshots with distri
Hello,
Read speed inside our VMs (most of them Windows) is only ¼ of the write speed.
Write speed is about 450-500 MB/s and
read is only about 100 MB/s.
Our network is 10 Gbit for OSDs and 10 Gbit for MONs. We have 3 servers with 15
OSDs each.
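To see whether the slow reads come from inside the VMs or from the cluster itself, it can help to benchmark a pool directly with rados bench; a rough sketch, where the pool name and runtime are arbitrary:
rados bench -p rbd 60 write --no-cleanup   # write test, keep the objects around
rados bench -p rbd 60 seq                  # sequential read test against those objects
rados -p rbd cleanup                       # remove the benchmark objects afterwards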
There are really only two ways to do snapshots that I know of, and they have
trade-offs:
COW into the snapshot (like VMware, Ceph, etc.):
When a write is committed, the changes are committed to a diff file and the
base file is left untouched. This only has a single write penalty; if you
want to dis
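For reference, the RBD operations being discussed look roughly like this; the pool, image and snapshot names are made up:
rbd snap create rbd/vm-disk@before-change     # taking the snapshot is quick
rbd snap rollback rbd/vm-disk@before-change   # rollback rewrites the image from the snapshot, hence the long wait
Protecting the snapshot and cloning it into a new image (rbd snap protect, then rbd clone) is often a quicker way to get back to the snapshotted state than a full rollback.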
You may need split horizon DNS. The internal machines' DNS should resolve
to the internal IP, and the external machines' DNS should resolve to the
external IP.
There are various ways to do that. The RadosGW config has an example of
setting up Dnsmasq:
http://ceph.com/docs/master/radosgw/config/#
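A minimal dnsmasq sketch of the split-horizon idea, with made-up names and addresses:
# /etc/dnsmasq.conf on the internal resolver
# internal clients resolve the gateway name to the internal address
address=/rgw.example.com/10.0.0.5
# external clients keep using the public DNS record, which points at the external IP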
On 17/12/14 05:26, VELARTIS Philipp Dürhammer wrote:
> Hello,
>
> Read speed inside our VMs (most of them Windows) is only ¼ of the
> write speed.
>
> Write speed is about 450-500 MB/s and
>
> read is only about 100 MB/s.
>
> Our
Hello,
I'm trying to create an erasure pool following
http://docs.ceph.com/docs/master/rados/operations/erasure-code/, but when I try to
create a pool with a specific erasure-code-profile ("myprofile") the PGs end up
in an incomplete state.
Can anyone help me?
Below the profile I created:
root@ceph
Hi,
The 2147483647 means that CRUSH did not find enough OSDs for a given PG. If you
check the crush rule associated with the erasure-coded pool, you will most
probably find out why.
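A hedged sketch of the usual checks, reusing the profile name from the original mail:
ceph osd erasure-code-profile get myprofile   # shows k, m and the failure domain
ceph osd crush rule dump                      # the rule generated for the EC pool
ceph osd tree                                 # are there enough hosts/OSDs to satisfy k+m?
If the failure domain is host and k+m is larger than the number of hosts, CRUSH cannot place all the chunks and reports 2147483647 (i.e. -1, "none") for the missing OSDs.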
Cheers
On 16/12/2014 23:32, Italo Santos wrote:
> Hello,
>
> I'm trying to create an erasure pool following
> http
On Tue, 16 Dec 2014 07:57:19 AM Leen de Braal wrote:
> If you are trying to see if your mails come through, don't check on the
> list. You have a gmail account, gmail removes mails that you have sent
> yourself.
Not the case, I am on a dozen other mailman lists via gmail, all of them show
my post
On 17 December 2014 at 04:50, Robert LeBlanc wrote:
> There are really only two ways to do snapshots that I know of and they have
> trade-offs:
>
> COW into the snapshot (like VMware, Ceph, etc):
>
> When a write is committed, the changes are committed to a diff file and the
> base file is left un
On Tue, 16 Dec 2014 16:26:17 + VELARTIS Philipp Dürhammer wrote:
> Hello,
>
> Read speed inside our VMs (most of them Windows) is only ¼ of the write
> speed. Write speed is about 450-500 MB/s and
> read is only about 100 MB/s.
>
> Our network is 10 Gbit for OSDs and 10 Gbit for MONs. We hav
On 12/16/2014 07:08 PM, Christian Balzer wrote:
On Tue, 16 Dec 2014 16:26:17 + VELARTIS Philipp Dürhammer wrote:
Hello,
Read speed inside our VMs (most of them Windows) is only ¼ of the write
speed. Write speed is about 450-500 MB/s and
read is only about 100 MB/s.
Our network is 10
Following the official Ceph docs, I still get the same error:
[root@node3 ceph-cluster]# ceph-deploy osd activate node2:/dev/sdb1
[ceph_deploy.conf][DEBUG ] found configuration file at:
/root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.21): /usr/bin/ceph-deploy osd
activate node2:/dev/sdb1
[ceph
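The actual error message is cut off above; for reference only, the usual ceph-deploy sequence of that vintage for a raw disk is to zap and prepare it before activating, roughly:
ceph-deploy disk zap node2:sdb            # destroys everything on the disk
ceph-deploy osd prepare node2:sdb         # partitions and formats, creating sdb1
ceph-deploy osd activate node2:/dev/sdb1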
Hello,
I am trying to set an extended attribute on a newly created
directory (call it "dir" here) using setfattr. I run the following command:
setfattr -n ceph.dir.layout.stripe_count -v 2 dir
And it returns:
setfattr: dir: Operation not supported
I am wondering if the underlying file syst
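For what it's worth, the ceph.dir.layout attributes only work on a directory inside a CephFS mount (kernel client or ceph-fuse), not on an OSD's local filesystem, and older kernel clients also reject them; a quick sketch, with the mount point as an example:
# "dir" must live on the CephFS mount itself
setfattr -n ceph.dir.layout.stripe_count -v 2 /mnt/cephfs/dir
# once a layout has been set it can be read back
getfattr -n ceph.dir.layout /mnt/cephfs/dir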
On Tue, Dec 16, 2014 at 5:37 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
>
> On 17 December 2014 at 04:50, Robert LeBlanc wrote:
> > There are really only two ways to do snapshots that I know of and they
> have
> > trade-offs:
> >
> > COW into the snapshot (like VMware, Ceph, etc):
I always wondered why my posts didn't show up until somebody replied to
them. I thought it was my filters.
Thanks!
On Mon, Dec 15, 2014 at 10:57 PM, Leen de Braal wrote:
>
> If you are trying to see if your mails come through, don't check on the
> list. You have a gmail account, gmail removes m
So the problem started once remapping+backfilling started, and lasted until
the cluster was healthy again? Have you adjusted any of the recovery
tunables? Are you using SSD journals?
I had a similar experience the first time my OSDs started backfilling. The
average RadosGW operation latency wen
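The recovery tunables in question can be turned down at runtime; a rough sketch with deliberately conservative values:
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'
# persist the same values in ceph.conf under [osd] so they survive restarts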
On 17 December 2014 at 11:50, Robert LeBlanc wrote:
>
>
> On Tue, Dec 16, 2014 at 5:37 PM, Lindsay Mathieson
> wrote:
>>
>> On 17 December 2014 at 04:50, Robert LeBlanc wrote:
>> > There are really only two ways to do snapshots that I know of and they
>> > have
>> > trade-offs:
>> >
>> > COW int
I've found the problem.
The command "ceph osd crush rule create-simple ssd_ruleset ssd root" should
be "ceph osd crush rule create-simple ssd_ruleset ssd host"
Hi All,
I am new to Ceph. Due to a shortage of physical machines I have installed a Ceph
cluster with a single OSD and MON in a single virtual machine.
I have a few queries, as below:
1. Is having the Ceph setup on a VM fine, or does it need to be on a
physical server?
2. Since Amazon S3, Azure Blob St