I'm looking at setting up a multi-site radosgw configuration where
data is sharded over multiple clusters in a single physical location,
and would like to understand how Ceph handles requests in this
configuration.
Looking through the radosgw source[1], it looks like radosgw will
return a 301 redirect
Marc Roos wrote on Mon, Mar 18, 2019 at 5:46 AM:
>
>
>
>
> 2019-03-17 21:59:58.296394 7f97cbbe6700 0 --
> 192.168.10.203:6800/1614422834 >> 192.168.10.43:0/1827964483
> conn(0x55ba9614d000 :6800 s=STATE_OPEN pgs=8 cs=1 l=0).fault server,
> going to standby
>
> What does this mean?
That means the connection is i
On Mon, Mar 18, 2019 at 7:28 PM Yan, Zheng wrote:
>
> On Mon, Mar 18, 2019 at 9:50 PM Dylan McCulloch wrote:
> >
> >
> > >please run the following command. It will show where 4. is.
> > >
> > >rados -p hpcfs_metadata getxattr 4. parent >/tmp/parent
> > >ceph-dencoder import /tmp/par
Hi Stefan,
I think I may have missed your reply. I'm interested to know how you manage
performance when running Ceph with a host-based VXLAN overlay. Maybe you can
share a comparison, for a better understanding of the possible performance
impact.
Best regards,
> Date: Sun, 25 Nov 2018 21:17:34 +0100
>
Casey,
I am not sure if this is related, but I cannot seem to retrieve files that
are 524288001 bytes (500MB + 1 byte) or 629145601 bytes (600MB + 1 byte) when
using server side encryption. Without encryption, these files store and
retrieve without issue. I'm sure there are various other permutation
Hi Casey,
Thanks for the quick response. I have just confirmed that if I set the
PartSize to 500MB the file uploads correctly. I am hesitant to do this in
production but I think we are on the right track. Interestingly enough,
when I set the PartSize to 5242880 the file did not store correctly (it
Hi Dan,
We just got a similar report about SSE-C in
http://tracker.ceph.com/issues/38700 that seems to be related to
multipart uploads. Could you please add some details there about your s3
client, its multipart chunk size, and your ceph version?
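A minimal reproduction along these lines would also help (awscli syntax;
the endpoint, bucket and key file are just placeholders):
openssl rand 32 > sse-c.key
aws --endpoint-url http://<rgw-host> s3 cp ./testfile s3://<bucket>/testfile --sse-c AES256 --sse-c-key fileb://sse-c.key
aws --endpoint-url http://<rgw-host> s3 cp s3://<bucket>/testfile ./testfile.out --sse-c AES256 --sse-c-key fileb://sse-c.key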
On 3/18/19 2:38 PM, Dan Smith wrote:
Hello,
Hello,
I have stored more than 167 million files in ceph using the S3 api. Out of
those 167 million+ files, one file is not storing correctly.
The file is 92MB in size. I have stored files much larger and much smaller.
If I store the file WITHOUT using the Customer Provided 256-bit AES key
using
This worked perfectly, thanks.
From: Jason Dillaman
Sent: Monday, March 18, 2019 9:19 AM
To: Wesley Dillingham
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] rbd-target-api service fails to start with address
family not supported
Looks like you have the
Thanks for taking the time to answer! Its a really old cluster, so that
does make sense, thanks for confirming!
-Brent
-----Original Message-----
From: Hector Martin
Sent: Monday, March 18, 2019 1:07 AM
To: Brent Kennedy; 'Ceph Users'
Subject: Re: [ceph-users] Rebuild after upgrade
On 18/0
Hello,
thank you for the fix, I'll be on the lookout for the next ceph/daemon
container image.
Regards,
Daniele
On 18/03/19 12:59, Volker Theile wrote:
Hello Daniele,
your problem is tracked by https://tracker.ceph.com/issues/38528 and
fixed in the latest Ceph 14 builds. To work around the pr
We'll provide Buster packages:
curl https://mirror.croit.io/keys/release.asc | apt-key add -
echo 'deb https://mirror.croit.io/debian-nautilus/ stretch main' >>
/etc/apt/sources.list.d/croit-ceph.list
The mirror currently contains the latest 14.1.1 release candidate.
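After adding the key and the source entry, the usual apt flow should be
enough (exact package selection is up to you):
apt-get update
apt-get install ceph ceph-common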
Paul
--
Paul Emmerich
Thankyou Marc.
I cloned the GitHub repo and am building the packages. No biggie really and
hey, I do like living on the edge.
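For anyone else doing the same, the rough flow I'm following (assuming the
in-tree helper scripts are still install-deps.sh and make-debs.sh):
git clone https://github.com/ceph/ceph.git
cd ceph
./install-deps.sh
./make-debs.sh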
On Mon, 18 Mar 2019 at 16:04, Marc Roos wrote:
>
>
> If you want the excitement, can I then wish you my possible future ceph
> cluster problems, so I won't have them ;)
>
If you want the excitement, can I then wish you my possible future ceph
cluster problems, so I won't have them ;)
-----Original Message-----
From: John Hearns
Sent: 18 March 2019 17:00
To: ceph-users
Subject: [ceph-users] Ceph Nautilus for Ubuntu Cosmic?
May I ask if there is a repository
May I ask if there is a repository for the latest Ceph Nautilus for Ubuntu?
Specifically Ubuntu 18.10 Cosmic Cuttlefish.
Perhaps I am paying a penalty for living on the bleeding edge. But one does
have to have some excitement in life.
Thanks
The balancer optimizes # PGs / crush weight. That host looks already
quite balanced for that metric.
If the balancing is not optimal for a specific pool that has most of
the data, then you can pass that pool name via the `optimize myplan <pool>` form.
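For example, something like (the pool name is a placeholder):
ceph balancer eval <pool>
ceph balancer optimize myplan <pool>
ceph balancer eval myplan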
-- dan
On Mon, Mar 18, 2019 at 4:39 PM Kári Bertilsson wrote:
>
> B
Because I have tested failing the mgr and rebooting all the servers in random
order multiple times. The upmap optimizer never found more optimizations
to do after the initial optimizations. I tried leaving the balancer ON for
days and also OFF and running manually several times.
I did manually mo
Hi there!
I just started to install a ceph cluster.
I'd like to take the nautilus release.
Because of hardware restrictions (network driver modules) I had to take the
Buster release of Debian.
Will there be Buster packages of Nautilus available after the release?
Thanks for this great storage!
Looks like you have the IPv6 stack disabled. You will need to override
the bind address from "[::]" to "0.0.0.0" via the "api_host" setting
[1] in "/etc/ceph/iscsi-gateway.cfg"
[1]
https://github.com/ceph/ceph-iscsi/blob/master/ceph_iscsi_config/settings.py#L100
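For example, in /etc/ceph/iscsi-gateway.cfg (section name as commonly used;
adjust to your existing file), then restart the rbd-target-api service:
[config]
api_host = 0.0.0.0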
On Mon, Mar 18, 2019 at 11:09 AM
I am having some difficulties getting the iSCSI gateway and API set up. Working
with a 12.2.8 cluster. And the gateways are Centos 7.6.1810 kernel
3.10.0-957.5.1.el7.x86_64
Using a previous version of ceph iscsi packages:
ceph-iscsi-config-2.6-2.6.el7.noarch
ceph-iscsi-tools-2.1-2.1.el7.noarch
ce
>> >please run the following command. It will show where 4. is.
>> >
>> >rados -p hpcfs_metadata getxattr 4. parent >/tmp/parent
>> >ceph-dencoder import /tmp/parent type inode_backtrace_t decode dump_json
>> >
>>
>> $ ceph-dencoder import /tmp/parent type inode_backtrace_t decode dum
Hi there,
We have some small SSDs we use just to store radosgw metadata. I'm in the
process of replacing some of them, but when I took them out of the ruleset I
use for the radosgw pools I've seen something weird:
21 ssd 0.11099 1.0 111GiB 26.4GiB 84.8GiB 23.75 0.34 0
osd.2
On Mon, Mar 18, 2019 at 9:50 PM Dylan McCulloch wrote:
>
>
> >please run the following command. It will show where 4. is.
> >
> >rados -p hpcfs_metadata getxattr 4. parent >/tmp/parent
> >ceph-dencoder import /tmp/parent type inode_backtrace_t decode dump_json
> >
>
> $ ceph-dencoder
>please run the following command. It will show where 4. is.
>
>rados -p hpcfs_metadata getxattr 4. parent >/tmp/parent
>ceph-dencoder import /tmp/parent type inode_backtrace_t decode dump_json
>
$ ceph-dencoder import /tmp/parent type inode_backtrace_t decode dump_json
{
"ino":
Turns out this was due to a switch misconfiguration on the cluster
network. I use jumbo frames and essentially the new server's
connections were not configured with the correct MTU on the switch. So
this caused some traffic to flow, but eventually the servers wanted to
send larger frame sizes than
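(For anyone debugging something similar: a quick way to verify jumbo frames
end-to-end on Linux is a non-fragmenting ping with a large payload, e.g.
ping -M do -s 8972 <peer-ip>
for a 9000-byte MTU; if the switch path is misconfigured, these fail while
normal-sized pings still work.)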
Hi,
For one of our Ceph clusters, I'm trying to modify the balancer configuration,
so it will keep working until it achieves a better distribution.
After checking the mailing list, it looks like the key controlling this for
upmap is mgr/balancer/upmap_max_deviation, but it does not seem to make any difference.
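For reference, I'm setting it like this (the first form assumes the
Mimic/Nautilus-style config store, the second the older config-key store):
ceph config set mgr mgr/balancer/upmap_max_deviation 1
ceph config-key set mgr/balancer/upmap_max_deviation 1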
please run the following command. It will show where 4. is.
rados -p hpcfs_metadata getxattr 4. parent >/tmp/parent
ceph-dencoder import /tmp/parent type inode_backtrace_t decode dump_json
On Mon, Mar 18, 2019 at 8:15 PM Dylan McCulloch wrote:
>
> >> >> >cephfs does not create/u
>> >> >cephfs does not create/use object "4.". Please show us some
>> >> >of its keys.
>> >> >
>> >>
>> >> https://pastebin.com/WLfLTgni
>> >> Thanks
>> >>
>> > Is the object recently modified?
>> >
>> >rados -p hpcfs_metadata stat 4.
>> >
>>
>> $ rados -p hpcfs_metadata stat 4.000
please check if 4. has omap header and xattrs
rados -p hpcfs_data listxattr 4.
rados -p hpcfs_data getomapheader 4.
On Mon, Mar 18, 2019 at 7:37 PM Dylan McCulloch wrote:
>
> >> >
> >> >cephfs does not create/use object "4.". Please show us some
> >> >of its ke
Hello Daniele,
your problem is tracked by https://tracker.ceph.com/issues/38528 and
fixed in the latest Ceph 14 builds. To work around the problem, simply
disable SSL for your specific manager:
$ ceph config set mgr mgr/dashboard/<name>/ssl false
See https://tracker.ceph.com/issues/38528#note-1.
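If the change does not seem to take effect right away, restarting the
dashboard module should pick it up (a generic step, not specific to this
issue):
$ ceph mgr module disable dashboard
$ ceph mgr module enable dashboard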
Regar
>> >
>> >cephfs does not create/use object "4.". Please show us some
>> >of its keys.
>> >
>>
>> https://pastebin.com/WLfLTgni
>> Thanks
>>
> Is the object recently modified?
>
>rados -p hpcfs_metadata stat 4.
>
$ rados -p hpcfs_metadata stat 4.
hpcfs_metadata/4. m
On Mon, Mar 18, 2019 at 6:05 PM Dylan McCulloch wrote:
>
>
> >
> >cephfs does not create/use object "4.". Please show us some
> >of its keys.
> >
>
> https://pastebin.com/WLfLTgni
> Thanks
>
Is the object recently modified?
rados -p hpcfs_metadata stat 4.
> >On Mon, Mar 18, 2
From what I see, the message is generated by a mon container on each node.
Does the mon issue a manual compaction of RocksDB at some point (the debug
message is a RocksDB one)?
On 3/18/2019 12:33 AM, Konstantin Shalygin wrote:
I am getting a huge number of messages on one out of three nodes showing Manual
c
Konstantin,
I am not sure I understand. You mean something in the container does a manual
compaction job sporadically? What would be doing that? I am confused.
On 3/18/2019 12:33 AM, Konstantin Shalygin wrote:
I am getting a huge number of messages on one out of three nodes showing Manual
co
>
>cephfs does not create/use object "4.". Please show us some
>of its keys.
>
https://pastebin.com/WLfLTgni
Thanks
>On Mon, Mar 18, 2019 at 4:16 PM Dylan McCulloch wrote:
>>
>> Hi all,
>>
>> We have a large omap object warning on one of our Ceph clusters.
>> The only reports I've seen
cephfs does not create/use object "4.". Please show us some
of its keys.
On Mon, Mar 18, 2019 at 4:16 PM Dylan McCulloch wrote:
>
> Hi all,
>
> We have a large omap object warning on one of our Ceph clusters.
> The only reports I've seen regarding the "large omap objects" warning from
>
Hi all,
We have a large omap object warning on one of our Ceph clusters.
The only reports I've seen regarding the "large omap objects" warning from
other users were related to RGW bucket sharding, however we do not have RGW
configured on this cluster.
The large omap object ~10GB resides in a Cep
Hi,
If possible, please also add my account.
Regards
Mateusz
> Message written by Trilok Agarwal on 15.03.2019 at 18:40:
>
> Hi
> Can somebody over here invite me to join the ceph slack channel
>
> Thanks
> TRILOK